AI Policies Are Already Obsolete

The End of AI Policies?

For the past two years, many of us have written course, program, and university policies about generative artificial intelligence. Maybe you prohibited AI in your first-year composition course. Perhaps your computer science program takes a friendlier stance. And your campus information security and academic integrity offices may have guidelines of their own.

But Does It Matter?

Our argument is that the integration of AI technology into existing platforms has rendered these frameworks obsolete.

A World of Jagged Integration

We all knew this landscape would change. Some of us have been writing and speaking about "the switch": the moment when Gemini and Copilot are embedded in every version of the Google and Microsoft suites, and opening any new document prompts you with "What are we working on today?"

When AI is Everywhere

This world is here, sort of, but for the time being, we are in a moment of jagged integration. A year ago, Ethan Mollick started referring to the current AI models as a “jagged frontier,” with models being better suited to some tasks while other capabilities remained out of reach. We are intentionally borrowing that language to refer to this moment of jagged integration where the switch has not been flipped, but integration surrounds us in ways it was difficult to anticipate and impossible to build traditional guidance for.

Reframing the Conversation

Nearly every policy we have seen, reviewed, or heard about imagines a world where a student opens up a browser window, navigates to ChatGPT or Gemini, and initiates a chat. Our own suggested syllabus policies at California State University, Chico, policies we helped to draft, conceptualize this world with guidance like, “You will be informed as to when, where, and how these tools are permitted to be used, along with guidance for attribution.” Even the University of Pennsylvania guidelines, which have been some of our favorites from the start, have language like “AI-generated contributions should be properly cited like any other reference material”—language that assumes the tools are something you intentionally use.

But What About Unintentional Use?

That is how AI worked for about a year, but not in an age of jagged integration. Consider, for example, AI’s increasing integration in the following domains:

Research

When we open some versions of Adobe's PDF software, an embedded "AI Assistant" sits in the upper right-hand corner, ready to help you understand and work with the document. Open a PDF citation and reference application such as Papers, and you are greeted by an AI assistant ready to summarize your academic papers. A student who has read an article you assigned but cannot remember a key point can use that assistant to summarize the piece or locate the passage in question.

Development

The new iPhone was purpose-built for Apple Intelligence, which permeates the operating system and nearly every text input field, often working in ways that are not visible to the user. Apple Intelligence will help sort notes and ideas. According to CNET, "The idea is that Apple Intelligence is built into your iPhone, iPad, and Mac to help you write, get things done, and express yourself."

Production

Have you noticed that the autocomplete features in Google Docs and Word have improved over the last 18 months? That is because they are powered by better machine learning models adjacent to generative AI. Nearly any content production we do now includes autocomplete features.

Beyond Policy

We don’t mean to be flippant; these are incredibly difficult questions that undermine the policy foundations we were just starting to build. Instead of reframing policies, which will likely have to be rewritten again and again, we are urging institutions and faculty to take a different approach.

A Framework for Understanding

We propose replacing AI policies, especially syllabus policies, with a framework or a disposition. The most seamless approach would be to acknowledge that AI is omnipresent in knowledge production and that we are often engaging with these systems whether we want to or not.

Conclusion

There continues to be a mismatch between the pace of technological change and the relatively slow rate of university adaptation. Early policy creation followed the same frameworks and processes we have used for centuries—processes that have served us well. But what we are living through at the moment cannot be solved with Academic Senate resolutions or even the work of relatively agile institutions.

FAQs

Q: What is jagged integration?
A: Jagged integration refers to the moment when AI technology is integrated into existing platforms in ways that are difficult to anticipate and impossible to build traditional guidance for.

Q: Why do AI policies need to be rewritten?
A: AI policies need to be rewritten because the integration of AI technology into existing platforms has rendered these frameworks obsolete.

Q: Can we still use traditional syllabus policies?
A: No, traditional syllabus policies are no longer relevant in an age of jagged integration. Instead, we need to think about AI as an omnipresent technology that is part of our daily lives.

Q: What is the future of AI policies?
A: The future of AI policies is to move beyond policy and adopt a framework or disposition that acknowledges AI as an integral part of our lives and work.

Q: What should we do instead of policy?
A: Instead of policy, we should engage in ongoing conversations with students and colleagues about AI integration, acknowledging its omnipresence and encouraging responsible use.

Q: Is it still possible to encourage students to work independently of AI?
A: Yes, it is still possible to encourage students to work independently of AI, but this will require framing the conversation in a way that acknowledges AI as an integral part of our lives and work.

Q: What about Google NotebookLM?
A: Google NotebookLM is a remarkable platform that allows the user to upload a large volume of data and then the system generates summaries in multiple formats and answers questions. However, it is not designed to produce full essays; instead, it generates what we would think of as study materials.

Q: Is AI becoming too integrated into our lives?
A: AI is becoming so deeply integrated into our lives that it is increasingly difficult to separate what we do with AI from what we do without it.

Q: What can institutions do to adapt to these changes?
A: Institutions can adapt to these changes by acknowledging the omnipresence of AI in knowledge production and engaging in ongoing conversations with students and colleagues about AI integration.
