Facebook founder Mark Zuckerberg once advised tech founders to “move fast and break things.” But in moving fast, some argue that he “broke” those young people whose social media exposure has led to depression, anxiety, cyberbullying, poor body image and loss of privacy or sleep during a vulnerable life stage.
Now, Big Tech is moving fast again, releasing sophisticated AI chat bots, not all of which have been adequately vetted before reaching the public.
OpenAI launched an artificial intelligence arms race in late 2022 with the release of ChatGPT—a sophisticated AI chat bot that interacts with users in a conversational way, but also lies and reproduces systemic societal biases. The bot became an instant global sensation, even as it raised concerns about cheating and how college writing might change.
In response, Google moved up the release of its rival chat bot, Bard, to Feb. 6, despite employee leaks that the tool was not ready. The company’s stock sank after the bot gave an inaccurate answer in a promotional demo. Then, a day later, and in an apparent effort not to be left out of the AI chat bot party, Microsoft launched its AI-powered Bing search engine. Early users quickly found that the eerily human-sounding bot produced unhinged, manipulative, rude, threatening, and false responses, which prompted the company to implement changes—and AI ethicists to express reservations.
Rushed decisions, especially in technology, can lead to what’s called “path dependence,” a phenomenon in which early decisions constrain later events or decisions, according to Mark Hagerott, a historian of technology and chancellor of the North Dakota University system who earlier served as deputy director of the U.S. Naval Academy’s Center for Cyber Security Studies. The QWERTY keyboard, by some accounts, was designed in the late 1800s to minimize jamming of high-use typewriter keys; historians dispute that origin story, but the layout persists even on today’s cellphone keyboards, despite its suboptimal arrangement of letters.
“Being deliberate doesn’t mean we’re going to stop these things, because they’re almost a force of nature,” Hagerott said about the presence of AI tools in higher ed. “But if we’re engaged early, we can try to get more positive effects than negative effects.”
AI Policies Take Shape—and Require Updates
When Emily Pitts Donahoe, associate director of instructional support at the University of Mississippi’s Center for Teaching and Learning, began teaching this semester, she understood that she needed to address her students’ questions and excitement surrounding ChatGPT. In her mind, the university’s academic integrity policy covered instances in which students, for example, copied or misrepresented work as their own. That freed her to craft a policy that began from a place of openness and curiosity.
Donahoe opted to co-create a course policy on generative AI writing tools with her students. She and the students engaged in an exercise in which they all submitted suggested guidelines for a class policy, after which they upvoted each other’s suggestions. Donahoe then distilled the top votes into a document titled “Academic integrity guidelines for use and attribution of AI.”
An Often-Missing Ingredient in AI Chat Bot Policy
Bing AI is “much more powerful than ChatGPT” and “often unsettling,” Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School, wrote in a tweet thread about his engagement with the bot before Microsoft imposed restrictions.
“I say that as someone who knows that there is no actual personality or entity behind a [large language model],” Mollick wrote. “But, even knowing that it was basically auto-completing a dialog based on my prompts, it felt like you were dealing with a real person. I never attempted to ‘jailbreak’ the chat bot or make it act in any particular way, but I still got answers that felt extremely personal, and interactions that made the bot feel intentional.”
Conclusion
As AI chat bots continue to infiltrate higher education, it is crucial that colleges and universities develop policies that address not only academic integrity and creative classroom uses but also the potential mental health risks associated with students’ emotional relationships to these tools. By acknowledging the limitations and biases of AI chat bots, educators can help students develop critical thinking skills and navigate the complex landscape of AI-infused learning.
FAQs
Q: What are the concerns surrounding AI chat bots in higher education?
A: Concerns include academic integrity, accuracy, bias, and potential mental health risks associated with students’ emotional relationships to these tools.
Q: What are some potential solutions to these concerns?
A: Solutions include developing policies that address academic integrity, accuracy, and bias, as well as providing students with AI literacy training to help them navigate their emotional responses to these tools.
Q: How can educators help students develop critical thinking skills in an AI-infused learning environment?
A: Educators can help by acknowledging the limitations and biases of AI chat bots and by encouraging students to question and evaluate the information these tools produce rather than accepting it at face value.
Q: What is path dependence, and how does it relate to AI chat bots in higher education?
A: Path dependence refers to the phenomenon in which early decisions constrain later events or decisions. In the context of AI chat bots, path dependence can lead to the persistence of suboptimal design choices, such as the QWERTY keyboard layout, despite the availability of better alternatives.