Discussions about Artificial Intelligence (AI) and its impact on work are now virtually inescapable. Of course, debates about how new technologies will change or reduce work, or replace workers, are hardly new. Warnings of robots and machines taking our jobs have been sounded since the Industrial Revolution. And yet, amid the maelstrom of AI hype, some things have already changed, higher education being one example. Research has revealed students to be early adopters of large language model chatbots such as ChatGPT, Google Gemini and Claude. The question is not whether students are using AI, but how.
In light of this, academia is currently wrestling with whether to prevent or embrace the use of AI in teaching and assessment. A key question is whether students’ use of AI can prepare them for their future careers, or whether it is instead preventing them from learning key skills required for the workplace. We explored this point in our research via interviews and focus groups with over 50 students. We draw on the idea of a ‘friend yet foe’ paradox to argue that students treat AI as a useful learning tool while simultaneously remaining sceptical of its ability to produce high-quality, reliable outputs. In some cases, students felt AI could undermine their learning, but used it nonetheless.
Students offered various examples of how AI can act as a ‘friend’. For instance, they prepare for seminars by answering questions with AI-generated talking points on the assigned readings, or they write essays by summarising literature, finding references or building on AI-generated structures. Students often suggested that using AI in these ways made them more efficient in completing their work.
One student told us that “[AI] can help inspire your thinking or provide fresh perspectives to stimulate your own ideas”. AI is thus becoming a learning assistant akin to a personal tutor available 24 hours a day. It can help students directly and immediately with their work and is capable of a range of tasks that are often time-consuming and intellectually challenging. In many cases, these tasks involve deep reading, argument construction, synthesis and the integration of large volumes of evidence. AI provides a shortcut by skipping aspects of this process to produce an output.
However, students also hint that AI can be a ‘foe’. We find clear evidence of their lack of trust in the technology. They question its reliability in using sources that meet standards of academic rigour, and recognise the risk of it producing hallucinated references. Moreover, students highlight AI’s limitations in meeting specific assignment requirements, noting that although it is largely effective at producing generic outputs, these outputs are ‘shallow’, ‘general’, ‘not innovative’ and even ‘not usable’.
Because of this scepticism, all of the students we spoke to use AI to help them write, but in ways that augment rather than replace their own work, retaining significant human oversight of the process. For example, one student told us: “I never let AI write large parts of my paper. I only use it in the early stages for suggestions”. Others highlighted how they would use AI to provide feedback, or even to mark their work, although they noted these marks were not commensurate with the ones they were ultimately awarded.
Where AI really becomes a foe is in relation to the learning process. We find that students’ criticism centres largely on AI’s outputs rather than on how using AI could undermine their skill and knowledge development. In fact, many seem to prioritise perceived efficiency over learning. As one student put it: “I don’t think information searching skills are that important relatively. If there’s a more efficient way, why would I insist on doing it the hard way?”. However, a small proportion of students acknowledge what they might lose by skipping the ‘old-fashioned’ way of learning: “Students from the past, who didn’t have AI and had to go to the library and search through books themselves, […] probably absorbed more of the knowledge deeply.”
Given the importance of developing knowledge and skills for future employment, it is concerning that students might be overlooking this in favour of taking shortcuts to produce outputs that they themselves question. So why do students still use AI? They believe AI has become a ubiquitous workplace tool and that they will be expected to be familiar with it, because AI is ‘required’ or ‘unavoidable’. However, in chasing efficiency, students are potentially denying themselves the opportunity to develop higher-order capacities, such as cognitive, critical and creative thinking, that are essential in the workplace but accumulated only through deliberate practice. Instead, they train themselves to ‘ask the right questions’ of AI.
It is of course worrying for educators if students are so focused on outputs that they overlook the development of their own knowledge and skills. But this should be equally worrying for employers: even prompt engineers will need to think critically and have subject knowledge to get the most out of AI. It is evident that AI is here to stay, and we must find ways to ensure that students understand that technology cannot replace their own thinking and skills development. One approach is to emphasise the human qualities that AI cannot replace, such as empathy, emotional intelligence, critical thinking and interpersonal skills. If students understand that these skills are required in order to complement – or make effective use of – AI, it might still be possible to accentuate the importance of the learning process while allowing them to develop the AI skills they feel they need for their future careers.
Xiaoting Luo is a Lecturer in Strategy and International Business Management at the University of Bristol Business School.
Christopher Pesterfield is a Lecturer in Management at the University of Bristol Business School.
Image credit: Igor Omilaev via Unsplash

