
AI in the Classroom: Where Are We Now?

Professional Development Day – August 24, 2023 – Jerry Slezak

AI tools like ChatGPT are changing the way teaching and learning happen. While these tools can be helpful, there is considerable concern that AI allows students to submit assignments (writing, math, computer code, art, music, etc.) that shortcut learning. This presentation will update faculty on the state of AI, provide general strategies faculty can use now to prepare their courses, and present the ongoing plans to help faculty navigate the impacts of AI in the classroom.

What We Will Discuss Today:

  • Understanding what AI is, what it isn’t, and how you already use it
  • What does an AI do and how?
  • Limitations, ethical issues, and AI detectors
  • What can I do for my classes this semester?
  • Communicating with students and setting expectations
  • Looking forward
  • Your questions and comments

Brief Overview – What Is AI?

ChatGPT is what most people mean when they say "AI," but it does not encompass all that is AI. It is only one application that uses generative AI technology, though it is the one that has garnered the most press.

AI tools exist not just for writing, but also for creating computer code, music, images, and much more. The website AI Scout shows the breadth and depth of AI tools available.

AI isn’t new – you already use it in technologies like Maps and Navigation, Facial Detection and Recognition, text editors, spell checkers, next word prediction, etc.

This article compares the wave of technological change that is AI, and particularly ChatGPT, to something we have seen before: microwave ovens.

What Does ChatGPT Do, and How?

ChatGPT is known as a Large Language Model (LLM). It is not actually “intelligent” but more of a prediction machine. When ChatGPT is asked a question, it responds with what it thinks the most common response would be based on all of the writing it has been “trained” on. It continues to “predict” the next word until it has created “human readable” text.

ChatGPT produces coherent written responses to a user’s prompt, and can further refine the result based on feedback from the user. Responses are built by repeatedly predicting the next word that should follow the words generated so far, given the prompt. Responses are clearly structured and grammatically correct. It can also generate and debug computer code, do math problems (to varying degrees of accuracy), and proofread and edit your writing.

LLMs are trained on data. Their learning is a lot more like biological learning (e.g., evolution) than like a conventional computer program where someone writes explicit instructions in the code.
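The "predict the next word" idea can be sketched with a toy bigram model. This is a deliberate oversimplification for illustration only: real LLMs are neural networks trained on vast corpora and predict over sub-word tokens, not whole words, but the core mechanic is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (real models train on trillions of words).
corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word follows each word -- a bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """Repeatedly predict the most common next word, like a crude LLM."""
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]].most_common(1)
        if not candidates:
            break  # no known continuation
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-sounding, but no understanding behind it
```

The output is grammatical-looking text ("the cat sat on …") assembled purely from statistics about which word tends to follow which, with no model of what a cat or a mat actually is. That gap between fluency and understanding is also why hallucinations happen, as discussed below.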

Limitations of ChatGPT (and Other AIs)

ChatGPT doesn’t really know anything, except what text is likely to follow preceding text. This is why it “hallucinates” non-existent things (like science journal names, article titles, sources) into existence when it gets stuck – they are statistically plausible. (UPDATE: Article from CNBC on which AI model hallucinates most)

Limitations include:

  • creation of plausible-sounding but incorrect or false results
  • biased or inappropriate content
  • can spread misleading information (intentionally and unintentionally)
  • difficulties generating citations
  • not trained on data after September 2021 (its knowledge cutoff)
  • can get better (or worse) over time
  • may require accounts or paid memberships to use

Ethical Implications

  • Responses are based on the information that was used to train the AI. This can lead to racial and gender biased responses based on the training material used.
  • The sources used to generate responses are difficult or impossible to cite, as the system does not provide a citation for what sources it uses to create responses.
  • Intellectual property of others has been used to “train” the system without their permission
  • Since ChatGPT conversations are stored and used to train future models, any inaccurate information entered can later be reproduced as a response.

AI Detectors

  • False positives – no detection method is 100% accurate. Many claim accuracy over 90%, but this is difficult to verify.
  • False negatives – Detectors can be fooled by paraphrasing some or all of the text generated by an AI.
  • Detector creation becomes an “arms race” similar to hackers that write viruses and software that detects viruses. AI writing will get better over time, and there will be new strategies to foil the detectors.
  • Results from a detector alone are not sufficient evidence of an Honor Code violation at UMW.
  • Approaching students from a place of trust – what is the impact in your classroom of falsely accusing a student of cheating based on a faulty tool?
  • OpenAI took down its own AI-text detector because of accuracy issues.
  • AI-text detection tools are really easy to fool.

What Can I Do for My Classes This Semester?

  • Test your writing prompts in ChatGPT or other writing AI. Is the response that you get back something you would consider good? This can let you know the types of answers an AI might return, and if your assignment might benefit from some changes.
  • Make reflection or envisioning the future a regular part of writing assignments – AI is not very good at this (yet).
  • Modify your writing prompts to include citations (some AI will make them up, so you may need to check them for accuracy).
  • Chunk your large writing assignments – instead of one due date for a paper at the end, break the assignment into parts with separate due dates. This might include project outline, notes on research articles, first draft, and final paper.
  • Annotated Bibliographies are more difficult for AI to create. AI might hallucinate (make up sources or summaries).
  • Flip the classroom on occasion – provide content online that you might otherwise deliver as a lecture, and then do some in-person writing assignments in class instead.
  • Connect assignments to real-world experience that AI will not have, such as recent events, issues specific to local community, or build on discussions that took place in class.
  • Ask students to add a brief summary of their research or creation process – what did they do and how did they do it?
  • Ask students to respond to visuals like images or videos in their assignments.
  • Ask students to use reference materials, notes, discussions, or sources not freely available on the internet as the basis of the assignment.
  • Consider alternative assignments – replace a writing assignment with a digital one like a podcast, video, or infographic (see our DLS workshop series on Going Digital for resources).
  • Reward students for engaging with the process of learning – If a perfect paper is the only path to an A, students are more likely to resort to cheating. Consider evaluation of the process needed to be a strong learner (reading, viewing, speaking, improving, reflection, etc.).

See also the AI teaching resources from Montclair State University and Harper College.

Most Important: Talk to Your Students About AI and Your Expectations

  • Include a syllabus statement on acceptable use of AI.
  • If students are permitted to use AI, hold them responsible for accuracy and bias in what gets generated.
  • Consider adding a note on acceptable AI use to the instructions for every assignment. Students have multiple classes that might be using AI in different ways, so making your expectations clear on every assignment can keep students from getting confused or misunderstanding what you expect.

Looking Forward

  • AI is here…
  • We need to better understand it and teach our students to use it ethically – they will be using AI after graduation.
    • This article by Ryan Watkins lays out an overall approach for the use of AI in the classroom – Conversations, Ethical Boundaries, and Putting It Into Practice.
    • Short-cutting learning vs. short-cutting process might be a way to think through good uses of AI in your classroom.

UMW AI Working Group

This academic year Provost O’Donnell has created an AI Working Group charged with following current developments in AI to organize and provide faculty/staff/student learning opportunities, coordinate resource development, and, as appropriate, make recommendations regarding policies and processes. We hope you will engage with the AI Working Group this year and add your voice to the ongoing discussion. The group will be co-chaired by Victoria Russell, Director of the Center for Teaching, and Jerry Slezak, Director of Digital Learning Support. Please feel free to contact Victoria or Jerry with any questions or thoughts you might have.
