Helen Orr, professor of religious studies, wasn’t surprised when the boom of generative artificial intelligence hit, nor by its sweeping influence on education and technology use. Her partner, a computer scientist, focuses on AI and human-computer interaction, so she said she could see the shift coming. She wanted to get started right away, and last year she used a course she can develop around a religion topic of her choice as an opportunity to create an AI-focused class. She now teaches REL1702: Religion and AI every semester to students of all class years, majors and levels of experience with AI.

This course is one of many indicators that the rise of AI tools and software has changed the way Elon University approaches AI inside and outside the traditional classroom. ElonGPT is an AI chatbot that provides information and answers questions about the university. The Elon AI Hub provides links to resources for learning more about AI and how to use it, as well as AI tools that are free for students to use. The Center for Design Thinking launched a student-led workshop this semester entitled ‘How might we use AI for more effective problem-solving?’

In response to the questions raised today surrounding AI’s role in higher education, Elon University created six principles of its own in October 2024: human-centered work, overcoming the digital divide, information literacy, responsibility, learning evolving technology and using AI as a tool rather than a replacement.

Nearly a year and a half later, these principles have led some faculty to incorporate AI tools and experimentation into their courses, from the brainstorming stage to final edits.


Religion and AI

When Orr first pitched the idea for her religion and AI course, she said AI was treated like a specialty or novelty topic.

“I would say even when generative AI first came out, I did not get the impression that people realized what a big deal it was,” Orr said. “Now we’re at the point where it’s so clear to everyone that it’s going to be integrated into everything.”

Orr’s course takes critical thinking skills foundational to religious studies and applies them to how AI gathers and presents its information. An understanding of intersectionality, historical context and the diversity of thought and experience within groups is used to scrutinize generative AI outputs.

“When so much of its training data is based upon a white male Protestant population, or often white and Asian men, algorithmic bias is such a big concern,” Orr said.

Orr said her biggest successes within the course have been a prompt engineering workshop and an algorithmic bias workshop.

“I think my students have been responding well, and I think that it allows creativity to flourish,” Orr said.

Orr also integrates AI into the introductory course for religious studies by having students use it to check sources or edit their writing.

“Students quickly learn that AI hallucinates,” Orr said. “AI makes mistakes. So it can also be an empowering experience in the sense that you realize, ‘Oh, I have to check this tool or keep it in line a little bit to do good work.’”


Student-to-student and professor-to-student

Professor of strategic communications Michele Lashley regularly has students brainstorm and write with ChatGPT in her courses. The exercises involve forming an idea, asking ChatGPT to refine it and prompting the chatbot with further edits and specific requests until satisfied with the product. Lashley said it is important for students to learn, through experimentation, which prompts generative AI responds to best.

“I take the responsibility of preparing my students for life after Elon really seriously,” Lashley said. “Because of the nature of the field that we’re in in terms of communications, there’s no ignoring AI. It’s here. We don’t know what that’s going to look like because it’s changing so quickly, but I don’t want my students to be afraid of it.”

Lashley, who teaches all class years in courses ranging from strategic communications to COM4000: Media Law and Ethics, has had the Center for Design Thinking bring workshops to her classes multiple times. She said she was impressed with the ‘How might we use AI for more effective problem-solving?’ workshop, which is new this semester.

“It’s all about preparing students to be empowered when they leave Elon to start working,” Lashley said. “I don’t want my students to feel like AI is off limits to them, but I do want them to learn how to use it effectively and ethically, and the only way I can teach them how to do that is to actually use it in class.”


Student responses

Sophomore computer science and communication design student Sophie Shartzer said she feels AI is being incorporated too deeply into education at Elon.

“I think AI is really bad for the environment, for our creativity, for humankind, for jobs,” Shartzer said. “Ethically, I just think it’s a bad idea.”

Software heavily used in Elon courses, such as the Microsoft suite and Canva, has added AI-powered design features, from layout suggestions to custom templates based on prompts to fully AI-generated art. Shartzer said a main concern for her and for fellow graphic designers is that generative AI creating art will eliminate jobs and diminish art and design.

“Unfortunately, I think a lot of times with Elon, a lot of people aren’t really thinking about the people’s jobs that are lost,” Shartzer said. “Also about how bad it is for the environment, I feel like people aren’t really thinking about that.”

Shartzer said that while AI can make workloads faster and less difficult to complete, working hard and thinking independently and creatively are part of what makes us human.

“The majority of opinions I’ve seen at Elon, if not all, are really positive toward AI,” Shartzer said. “Everyone’s saying, ‘It’s so unique. It’s just like a fantastical innovation from the future.’ But at what cost?”

Lashley said she understands the perspectives of those who are hesitant to use AI because of the association with using it as a replacement for critical thinking and as a crutch instead of a tool. She said she tries to give students clear guidelines about when and how AI should be used, but that students’ use can’t be constantly monitored. Still, she said she is passionate about incorporating AI into all of her classes.

“AI is here and it’s not going anywhere,” Lashley said. “We can’t just say, ‘Well, it’s going to decrease their critical thinking skills.’ Maybe in some cases, if we’re not teaching students how to use it correctly, but not if we teach them how to use it in a way that supplements and complements their creative thinking.”

One strategy Lashley uses to implement this is asking students to submit the prompt they put into an AI chatbot, the AI output and how they modified it.

“It’s not about seeing whether they’ve cheated or not,” Lashley said. “It’s just so I can see how they process things. I’m giving them permission to use it, but I want to see their work. It’s like a math problem.”

Lashley said she’s gotten helpful feedback from her students, and that they have been accepting of and excited about the AI use in her classes.

“I find that students like working with AI,” Lashley said. “They’re just not quite sure about how to do it yet.”