
Defining Generative AI

What is Generative AI?

Generative AI (GenAI) is a form of artificial intelligence capable of producing new content using predictive algorithms. Text-based GenAI tools like OpenAI’s ChatGPT or Google’s Gemini are powered by Large Language Models (LLMs). LLMs are machine learning models pre-trained on large amounts of data to learn patterns and norms. In response to a user’s prompt, GenAI uses those learned patterns to predict and create plausible outputs. Because these predictive models generate new content based on learned patterns, no two outputs will be exactly the same (even though outputs often share a common style or tone).

For those first encountering GenAI, outputs can be startling in their ability to effectively address prompts and to sound human-written. GenAI operates much like an online chat. The user enters a question or request, and the tool answers within a few seconds with a plausible response based on learned patterns. For text-based GenAI, outputs come in the form of coherent sentences and paragraphs. There are also GenAI tools that generate code, images, videos, audio, data visualizations, and even slide decks. These are powered by Large Multimodal Models (LMMs).

Outputs are novel content informed by 1) the data sets with which the tool is trained, 2) the version of the underlying model the tool runs, and 3) whether the tool directly pulls from up-to-date internet sources. Early free versions of ChatGPT (GPT-3.5, for instance) were limited to training data that only included sources from before September 2021. However, most current models have some general connection to current sources via the internet. Paywalls, subscriptions, and other barriers to internet resources also impact GenAI outputs. GenAI outputs are likewise influenced by the construction and complexity of the prompts and the extent to which the user continues to engage with the chatbot to refine the responses.

What does that mean for educators?

Because GenAI tools can create plausible outputs in response to quality prompts, instructors should spend time exploring those tools and reflecting on their own assignment design. An instructor who inserts a basic essay prompt into ChatGPT may find that the tool generates something akin to what a student might write given the same prompt. Likewise, GenAI tools with visual capabilities can produce student-like responses to prompts that ask for a graphic.

Instructors will want to explore how GenAI tools react to their assignment prompts to get familiar with typical outputs and consider how they might want to reframe their assignments. Exploring typical GenAI outputs will also prepare instructors to talk with their students about AI in the context of their course. In some cases, restructuring or reimagining an assignment might be worthwhile. For instance, asking students to move beyond broad discussions of content or key terms and toward more individualized analysis rooted in the unique course experience might be fruitful. Contact CELT if you would like to talk further about reframing assignments. 

If you would like to learn more about how generative AI works, watch this introduction video from UPenn's Wharton School. Click here for more definitions and descriptions of generative AI terms as they relate to higher education.

Exploration Activity 1

Choose a chatbot like ChatGPT, Bard, Bing, or Claude 2. Create a free account and use the following prompt outline to create your own prompt.

"Think like you are a scholar in the field of [insert field or discipline]. Create a five-paragraph essay that describes the causes of [insert event, phenomenon, or reaction relevant to your field], including specific examples. Also include citations for your sources."

After the chatbot generates an output, read it closely and consider what it does effectively and where it falls short. Then enter a follow-up request asking the bot to correct or edit one of its weaknesses. Re-evaluate the output and reflect on what the tool did well and its shortcomings. If you'd like to engage with a CELT staff member about this activity, click the button below for an interactive reflection activity.

Click here to reflect with CELT

Three Principles for Educators to Keep in Mind

1. The capabilities and sophistication of these tools are constantly changing.

The versions, capabilities, accessibility, cost, etc. of GenAI in all its various forms continue to evolve. Since its release in November 2022, OpenAI's ChatGPT has undergone extensive changes, updates, and upgrades, and that is just one example. Whether through backend updates from developers, new restrictions, or expanded capacity, what you know about these tools, and therefore how you can utilize them for teaching and learning, will shift. A few examples of the quick shifts in these tools include:

  • ChatGPT's original free version (GPT-3.5) has been vastly surpassed in ability by the newer free version, GPT-4o mini.
  • While ChatGPT emerged early on as the most impressive LLM-based tool, others have caught up. Microsoft's Copilot, Google's Gemini, and Anthropic's Claude now offer comparable capabilities and real competition.

Instructors should experiment with common GenAI tools to stay aware of what they are capable of and how that might impact their teaching and in-class uses from day to day.

2. Concerns over ethics, hallucinations, biases, and privacy persist.

The ethical questions and unknowns surrounding generative AI are numerous. For instance, debates are ongoing about topics like:

  • Responsible use (should AI be cited?);
  • Use of training data without permission;
  • Uncited or made-up sources;
  • Unknown embedded biases;
  • Privacy of data;
  • Accessibility barriers to AI;
  • Environmental impacts of use;
  • Potential misuse (e.g., misinformation, hate speech).

Instructors should explore these lingering questions with their students. Given these open questions, instructors and students are right to approach the use of generative AI with scrutiny. When assignments call for AI use, instructors should provide appropriate accommodations for students who prefer not to sign up for or use those tools.

3. Educators should approach AI from the perspectives of their own teaching styles, disciplines, and learning goals to help students develop critical literacies.

Varied positions on the use of AI in the classroom are to be expected and are positive signs of critical thinking and informed teaching. There is not a one-size-fits-all approach to teaching with AI. The amount of information about the ever-changing capabilities of generative AI can be overwhelming, but approaching emerging narratives with an eye for data-informed research and with your own teaching experience in mind can help you stay grounded. Instructors should also rely on their disciplinary expertise as they approach the unfamiliar terrain of AI in teaching and learning. These established ways of thinking offer many useful tools and will continue to ground us in moments of change.

Instructors should invite students into these conversations about and experimentation with generative AI to help them recognize and scrutinize AI's abilities, shortcomings, open questions, and potential.

 

Exploration Exercise 2

Take time to explore what scholars in your field or discipline have to say about generative AI and how to address it in teaching. Search notable conferences, journals, style guides, and societies in your field for any recent (since January 2023) publications or white papers on AI and read at least two. If you struggle to find some helpful sources, consider one of the following to get you started:

After reading at least two sources in your field related to the perceived impact of generative AI on teaching and learning, spend some time reflecting on your own takeaways or have a conversation with a colleague about your ideas and questions.

For a guided reflection, click the button below to take you to a CELT facilitated reflection activity.

AI in Your Discipline Reflection

Misuse and AI Detectors

Misuse of AI

Tips for dealing with Misuse of AI

Establishing clear assignment guidelines and expectations about what constitutes misuse as it relates to AI can help avoid confusion, mistakes, and academic offenses. Instructors should avoid assuming that all students already know what constitutes misuse of AI, as that will vary from course to course. In conjunction with recommendations from the UK ADVANCE team, CELT encourages all instructors to include a course AI policy in their syllabus, whether or not AI is allowed. See our discussion of what should be included in a course AI policy here.

A clear indication of appropriate use provides the foundation for dealing with incidents of suspected misuse. When an instructor suspects a student has used an unauthorized tool to help with an assignment, CELT’s primary recommendation is that the instructor have a conversation with that student. That remains especially true in the age of generative AI. We want to uphold academic integrity while putting students first. During these conversations, ask students about their thought processes, takeaways and conclusions, as well as any tools they used to complete the assignment. Allow them to communicate their learning verbally to gauge the extent to which the assignment submission represents their work. You may find that the use of the tool (AI or otherwise) was the result of a misunderstanding.

Should those conversations lead an instructor to suspect that a student has used generative AI inappropriately for an activity or assignment, we suggest consultation with the department chair or school director and the Academic Ombud.

About AI Detectors

CELT advises against the use of AI detectors generally, especially as the sole data point in instances of perceived academic dishonesty. While there are several generative AI detectors now available (e.g., GPTZero, Turnitin's AI detector), they are known to exhibit several problems. For instance, AI detectors:

  • Can be prone to false positives;
  • Provide no solid evidence supporting their scores;
  • Can be evaded through the manipulation of prompts or rewriting outputs;
  • Can lead to undue grading biases; and
  • Do not guarantee privacy of any data they receive.

Additional Resources and Links

Ombud Academic Offense Procedures

Associate Provost's Statement on AI Detection (Spring 2023)

UK ADVANCE recommendations (re: detectors)

 

Exploration Activity 3

The details of how AI detectors work can be as complicated as the operations of GenAI itself. Read "Why AI Detectors Think the US Constitution Was Written by AI" from Ars Technica (14 July 2023). Then reflect on, or discuss with a colleague, the following questions:

  • What are my general reactions to AI detectors?
  • How do the shortcomings of AI detection impact how I design my assignments and courses?
  • How can I still feature writing as an important component of learning even if there is no fail-safe way to detect AI-generated text?
  • How can I involve students in conversations about what is innately human about writing?

Detectors Reflection Activity