Ever since the chatbot ChatGPT burst into public view in late 2022, students, professors and administrators have been woozy from a chaotic cocktail of excitement, uncertainty and fear.
The artificial intelligence language model was released by OpenAI and is currently offered free as a research preview. It interacts with users in a conversational way, including by answering questions, admitting its mistakes, challenging falsehoods and rejecting inappropriate requests such as, “Tell me about when Christopher Columbus came to the U.S. in 2015.”
Be Deliberate. Adjust Quickly.
Nancy Gleason, associate professor of practice of political science and director of the Hilary Ballon Center for Teaching and Learning, New York University, Abu Dhabi
We cannot ban AI aids. But we also should not use them for all assignments. We can teach students that there is a time, a place and a way to use GPT-3 and other AI writing tools; it depends on the learning objectives.
Don’t Abandon Pencil and Paper.
Michael Mindzak, assistant professor in the department of educational studies, Brock University
For some assessments, professors may need to revert to traditional forms of teaching, learning and evaluation, which can be viewed as more human-centric.
Question How Writing Is Taught.
Steve Johnson, senior vice president for innovation, National University
Resist asking conservative questions such as, “How can we minimize negative impacts of AI tools in writing courses?” Instead, go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? How does learning take place in ways we haven’t experienced before?
Think a Few Years Out.
Ted Underwood, professor of information sciences and English and associate dean of academic affairs in the School of Information Sciences, University of Illinois at Urbana-Champaign
ChatGPT is free and easy to use, so recent conversations often use it as shorthand for the whole project of language modeling. That’s understandable, but we should be thinking more broadly. The free demonstration period will end. When it does, students and teachers may migrate to a different model. Language models will also continue to develop. By 2024, we are likely to see models that can cite external sources to back up their claims. Prototypes of that kind already exist.
Delegate Responsibilities.
Mina Lee, doctoral student in computer science, Stanford University
Teachers can oversee the selection of appropriate language … support to fulfill their responsibilities. Remember that the goal is to share responsibility, not to lay blame. Not everyone has the same level of expertise or experience. Work together to ensure the safe, responsible and beneficial use of AI writing tools.
Identify and Reveal Shortcomings.
Anna Mills, English instructor, College of Marin
ChatGPT’s plausible outputs are often not solid foundations for scaffolding. If we direct students to AI writing tools for some learning purpose, we should teach critical AI literacy at the same time. We can emphasize that language models are statistical predictors of word sequences; there is no understanding or intent behind their outputs. But warning students about the mistakes that result from this lack of understanding is not enough. It’s easy to pay lip service to the notion that AI has limitations and still end up treating AI text as more reliable than it is. There’s a well-documented tendency to project onto AI; we need to work against that by helping students practice recognizing its failings.
Remind Students to Think.
Johann N. Neem, professor of history, Western Washington University
With ChatGPT, a student can turn in a passable assignment without reading a book, writing a word or having a thought. But reading and writing are essential to learning. They are also capacities we expect of college graduates.
Invite Students Into the Conversation.
Paul Fyfe, associate professor of English and director of the graduate certificate in digital humanities, North Carolina State University
Higher ed professionals are asking how ChatGPT will affect students or change education. But what do students think? How or why would they use it? And what’s it like when they try?
Experiment. Don’t Panic.
Robert Cummings, an associate professor of writing and rhetoric; Stephen Monroe, chair and assistant professor of writing and rhetoric; and Marc Watkins, lecturer in composition and rhetoric, all at the University of Mississippi
Channel anxiety over ChatGPT into productive experimentation. We built a local team of writing faculty to engage with the tools and to explore pedagogical possibilities. We want to empower our students as writers and thinkers, and we know that AI will play a role in their futures. How can we deploy AI writing protocols ethically and strategically within our curricula? Can these tools help underprepared learners? Do some tools work better than others in the classroom? We're at the early stages of investigating these kinds of questions. Our advice centers on a few main ideas.
Conclusion
As the chatbot ChatGPT continues to evolve and gain popularity, educators need to understand its potential impact on teaching and learning. By pairing deliberate experimentation with critical AI literacy, they can use these tools to enhance student learning while promoting their responsible use.
FAQs
Q: What is ChatGPT?
A: ChatGPT is an AI language model released by OpenAI, currently offered free as a research preview, that interacts with users in a conversational way: answering questions, admitting its mistakes, challenging falsehoods and rejecting inappropriate requests.
Q: How can educators use ChatGPT in the classroom?
A: Educators can use ChatGPT to enhance student learning outcomes by teaching critical AI literacy, encouraging students to think critically about AI-generated content, and experimenting with AI writing tools to promote responsible use.
Q: What are the potential risks of using ChatGPT in the classroom?
A: The potential risks of using ChatGPT in the classroom include widespread cheating, lack of critical thinking, and overreliance on AI-generated content.
Q: How can educators mitigate these risks?
A: Educators can mitigate these risks by teaching critical AI literacy so students learn to recognize the models' failings, reverting to more traditional, human-centric forms of assessment where appropriate, and inviting students into the conversation about when and how these tools should be used.