Students taking a class taught or organized by Robert Ghrist, a professor of Mathematics and of Electrical and Systems Engineering who goes by prof–g, often encounter an unusual type of tutor. It is available 24/7, not bound by office hours or a reluctance to answer emails late at night. It never runs out of practice questions or examples. And it always gets back to you in moments, even when dozens of students are asking questions simultaneously. 

This tutor isn’t a TA, or even an entire team of them connected through a listserv. In fact, it isn’t human at all. It’s an artificial intelligence designed by prof–g to help students learn key mathematical concepts and offer opportunities for self-assessment. The tutor runs on OpenAI’s ChatGPT platform and is based on a 400-line prompt instructing the chatbot to follow the syllabus and syntax prof–g uses in his courses. 

Beyond providing example problems and visual representations of complicated topics, the chatbot possesses a slew of interactive features. For example, when a student types "/intro" into CALCBLUEBOT—trained to correspond to course content in MATH 1410—they are first prompted to respond to a series of personal introductory questions. The chatbot then uses this information to generate various examples and explanations that make convoluted course material easier to comprehend.
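The slash-command pattern described above can be approximated outside ChatGPT as well. Below is a minimal, hypothetical sketch in Python of how a command like "/intro" might be mapped onto a system prompt and an instruction before reaching a language model; the command names and prompt text are illustrative, not the actual CALCBLUEBOT instructions, which are not public.

```python
# Hypothetical sketch of a slash-command tutor; the prompt text and
# command names are placeholders, not prof-g's actual 400-line prompt.

SYSTEM_PROMPT = (
    "You are a calculus tutor. Follow the MATH 1410 syllabus and notation. "
    "Tailor examples to the student's stated interests."
)

COMMANDS = {
    "/intro": "Ask the student a short series of introductory questions "
              "(name, major, interests) before teaching.",
    "/practice": "Generate a practice problem on the current topic.",
}

def build_messages(user_input, history=None):
    """Translate a slash command into the instruction the model sees."""
    instruction = COMMANDS.get(user_input, user_input)
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": instruction})
    return messages

# Typing "/intro" expands into the introductory-questions instruction.
msgs = build_messages("/intro")
```

Ordinary text that isn't a recognized command passes through unchanged, which is roughly how such bots stay usable for free-form questions between commands.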

The custom chatbots prof–g creates—which can be found in multiple courses—are just one example of how professors are integrating AI into their classes and research at Penn. In the last few years, advanced computer chatbots have made their way into classroom instruction, assessment, and student learning, prompting questions and creating new opportunities. 




AI, broadly defined as a set of technologies that ask computers to learn and extrapolate rather than simply analyze inputs, is nothing new. For years, it has been evolving and finding its way into many sectors, including education. Even before ChatGPT, many teachers had grown reliant on AI tools and platforms to take on tedious tasks such as grading student essays. Instructors discovered that, given a detailed rubric, these platforms could grade essays with a reasonable degree of accuracy. 

A massive step forward began in earnest in November 2022, when OpenAI released an early demo of ChatGPT to the public. The technology quickly went viral and gained widespread attention, with people flocking to social media to share their personal experiences asking the advanced chatbot a variety of questions and receiving detailed information in return. 

Even though ChatGPT may seem like a revolutionary leap forward, it builds on decades of work in information science. It is based on natural language processing, a discipline dealing with computer analysis of text and speech in human or “natural” languages, which can be traced back to the work of Russian mathematician Andrei Markov, according to Computer and Information Science professor Chris Callison–Burch. He says that ChatGPT operates much like autocomplete, the common smartphone feature that predicts the next word given the text typed so far. 
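Callison–Burch's autocomplete analogy can be made concrete with a toy model: count which word tends to follow which in some text, then predict the most frequent successor. The sketch below is a drastic simplification of the idea (modern chatbots use neural networks over far longer contexts), offered only to illustrate next-word prediction.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most common successor of `word`, like autocomplete."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
predict_next(model, "the")  # -> "cat" ("cat" follows "the" most often)
```

Scaled up from word counts to billions of learned parameters and whole-conversation context, this same predict-the-next-token loop is what generative chatbots run.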

What separates generative AI from its predecessors is an ability to use more context to inform answers and the capacity to respond to user prompts rather than simply list similar statements or questions. These innovations have given ChatGPT and similar platforms incredible flexibility. They can analyze documents, offer synopses, and answer user questions with remarkable comprehension. Some can even generate audio clips, photos, and videos in response to text prompts. 

Generative AI has been widely adopted among college students, who rely on it to summarize readings, find sources for outlines, and solve math problems. According to Ryan Baker, a professor of Learning, Teaching, and Literacies at Penn’s Graduate School of Education, students prefer adaptive learning methods to more traditional methods of instruction, such as lectures. “When we use [ChatGPT] in our classes, we get a lot of praise from students for it and a lot of positive feedback anonymously about our uses,” he says. 

One thing that makes the current generation of generative AI systems different from prior products is that people now interact with artificial intelligence directly. In the past, this technology would exist in the background of other products, such as social media apps, but now users can ask an AI model questions with minimal intermediaries. 

Given its popularity and support (as of December 2024, ChatGPT had over 120 million daily active users), generative AI appears to be here to stay in higher education. A major priority for Penn, therefore, has been ensuring that professors develop clear policies about the acceptable use of AI in class. 

As the Executive Director of Penn’s Center for Excellence in Teaching, Learning, and Innovation, Bruce Lenthall spends part of his job helping professors navigate the transition to a generative AI world. The center’s overarching goal is to ensure that instructors have the tools they need to teach successfully. Recently, many of its conversations have centered on the responsible use of artificial intelligence, and the center hosts a faculty group that meets monthly to discuss these issues. 

Lenthall says that he has encountered a variety of attitudes from professors about ChatGPT. “There are some people who really don't yet have a great handle on what generative AI is or know what it is but see it as something to be alarmed by and concerned about. [However], there are some people who are really enthusiastic about the ways it might open up different ways of teaching, and there are some people in a range of positions in between.”

Instead of trying to dictate a one–size–fits–all approach to artificial intelligence, Lenthall encourages professors to communicate their policies around its use to students. Penn’s official policy surrounding generative AI allows professors to design their own regulations. 




Many professors are embracing generative AI for its ability to streamline their work. Lenthall cites writing computer code as one area where ChatGPT has improved efficiency. 

“Sometimes [in fields like statistics or data science], the intellectual work that you want people to do is not necessarily learning how to code,” says Lenthall. “You [can] use AI to help people learn coding so that they don't spend all their time there but spend more time thinking about how they interpret the results they're getting.”

One professor who believes that generative AI has led to efficiency gains for both students and instructors is prof–g, who says that the GPT-based tutor helps students master course material. It is available around the clock, while instructors and TAs hold office hours only in short time slots and can’t respond to students’ emails instantaneously. prof–g also values its ability to explain why a given topic matters, especially when it knows a student’s particular background and interests. 

In this regard, prof–g is spot-on. I recently tried CALCBLUEBOT, and it works just as advertised. Even though I haven’t taken multivariable calculus in several semesters, I could access the material as if I were a current student with just a few clicks, and the bot remained on call for follow-up questions. When asked to introduce myself, I told the AI that I study history, economics, and statistics. It then gave me examples tailored to my fields of interest. 

Generative AI platforms also assist prof–g’s research and writing. In December 2024, Claude, a chatbot developed by Anthropic, helped prof–g write a book. In this project, prof–g engaged collaboratively with Claude, allowing the AI to draft passages in a tone similar to his past work. With this collaboration, prof–g claims to work five times faster than he could without the help of AI. He adds that generative AI platforms have now replaced most of the tasks he would traditionally have handed to a search engine. 

prof–g hopes generative AI will eventually take on an even greater role in education, especially around assessment. Currently, professors are not allowed to upload student work to cloud-based platforms like ChatGPT, limiting their ability to use it for grading and administering tests. But prof–g imagines a future where AI chatbots can probe student knowledge by simulating an oral exam or performing grunt-work calculations. 

Beyond simply being a tool to boost productivity for students and professors alike, ChatGPT has become an integral part of certain courses at Penn. Kevin Werbach, the chair of Wharton’s Legal Studies and Business Ethics Department, teaches a class on the benefits and drawbacks of artificial intelligence. For him, not allowing students to use generative AI would seem illogical. 

“I can't, in good conscience in a class about AI, tell students [that] all use of AI is banned because the whole class is about investigating what it is,” Werbach says. “I want students to actually have hands–on familiarity with the technology … It's important to not try to hold it at arm's length and pretend that we can make it go away.”

Werbach incorporates ChatGPT into assignments and has students learn about its capabilities and drawbacks through hands-on use. For a final project in his artificial intelligence governance class, students must use generative AI and reflect on the output they receive. One assignment, designed to study hallucination, a phenomenon in which a generative AI model invents sources or facts that don’t exist but seem plausible, asks students to intentionally make ChatGPT "hallucinate" and then explain what happened and why the model gave a wrong answer. 

For certain fields, generative AI has forced a fundamental rethinking of what’s important to teach and what’s potentially outdated. Michael Chiappini, a lecturer in the Critical Writing department and part of a pilot program integrating AI into the writing seminar sequence, wonders how much writing instructors should still expect students to complete on their own and which tasks can be delegated to AI. While he acknowledges that significant work remains in evaluating the role of generative AI, he accepts that it is here to stay and may have a role to play in assisting students with certain writing and research tasks. 

Generative AI also offers individuals a new medium for creative expression. Stuart Weitzman School of Design lecturer Lisa Park had been using technology to create interactive art installations for years before ChatGPT was released. One project incorporated brain wave and heart rate monitors to create bespoke audiovisual pieces. 

As both an instructor and an artist, she has worked with generative AI tools—such as image generator Midjourney—to create unique visuals based on text prompts. Some of the assignments in her courses ask students to use generative AI to both seek inspiration for pieces and generate images. She believes that it can help artists discover which subjects or styles might work best for a specific piece. Students can now focus less on the technical aspects of creation and more on empowering their own creativity in ways that wouldn’t have been possible just a few years ago. 




For all of its efficiency-optimizing, clutter-reducing wonder, generative AI has significant drawbacks. Baker says that using AI chatbots properly is difficult because the platforms are constantly evolving: a prompt that produced one answer a few months ago can return a completely different output on the latest version. 

“I think that it changes over time, which also makes it hard,” Baker says. “Prompt engineering isn't the same art it was 18 months ago … The functionality of GPT is different which actually makes it a continually moving target that people have to continually be relearning as they're using it.”

One issue Werbach sees with generative AI platforms is their use of copyrighted material as training data. Just as Google scrapes the entire internet to power an accurate, all-encompassing search engine, generative AI models ingest copyrighted material in an attempt to give the most informed responses to user queries. This has led to conflict between content publishers and AI platforms, with OpenAI agreeing to pay News Corporation an undisclosed amount to license the company’s content catalog as training data. 

Werbach also notes that these systems can be prone to hallucination, particularly when asked about complex technical topics that lack a large base of data to draw from. There is also evidence that the platforms can be biased when answering questions on sensitive subjects. This predisposition to favor certain kinds of language and information means that generative AI output should be engaged with critically rather than accepted at face value. Dana Walker, a lecturer in Critical Writing, for instance, urges her students to push back against what a chatbot says and not to treat its output as infallible. 

Yet another problem with generative AI is the potential for students to exploit the tool to cheat, whether by generating essays or producing answers to problem sets, when an instructor’s policy does not permit it. Werbach and other professors acknowledge this possibility and agree that the responsibility for preventing it lies with both students and instructors. 

“If instead of learning the material, you're just typing the prompt for the assignment into ChatGPT and pasting in the result, that to me is a problem, because if all you wanted was just to get a Penn degree for doing no work, then I shouldn't be doing this at all,” Werbach said. “We should just have you pay your tuition and we give you a degree.” 

Werbach’s sentiment is similar to that of prof–g, who says that professors should design courses in ways that push students to actually learn the material rather than cheat. However, prof–g also believes that students have a moral obligation to themselves not to misrepresent generative AI outputs as their own work. 

Baker says he has heard from instructors that ChatGPT can generate essays good enough to earn around a B-minus. That worries professors who are “concerned about students doing that rather than learning the material,” Baker says. But research by Callison–Burch suggests that people can improve their ability to detect AI-generated text. He found that readers can learn to identify writing that students generated with AI and passed off as their own, though he acknowledges that more advanced models make the task harder. 

In spite of these shortcomings, generative AI has so much momentum in higher education that it is not going anywhere. None of the professors interviewed wanted to make a definitive prediction about what the next generation of artificial intelligence platforms will be able to do. But all of them anticipated that the technology will continue to play a large role in academic life for the foreseeable future. 

In the meantime, generative AI will continue reshaping education, just as it has for the past two years. And with every math problem it solves, every source it finds, and every question it answers, AI platforms help professors and students alike offload menial tasks and focus on what’s important: the pursuit of knowledge.