
Is AI dulling critical-thinking skills? As tech companies court students, educators weigh the risks
Artificial intelligence is the latest technology stoking fears of human decline. Early studies cast doubt on whether the fear is warranted

Joe Castaldo
The Globe and Mail
Published Yesterday

After Michael Gerlich published a study this year, his inbox was flooded. He got so many messages, mostly from teachers, that he considered closing the account. This was unusual for the professor, who teaches at SBS Swiss Business School in Zurich, where he heads the rather staid-sounding Center for Strategic Corporate Foresight and Sustainability.
The paper that triggered the response, published in a peer-reviewed journal called Societies, looked at the relationship between the use of generative artificial intelligence applications, such as ChatGPT, and critical-thinking skills. He had sensed during lectures that students didn’t seem to be thinking as deeply as they once had been, and wondered whether AI was playing a role. Judging from his inbox, he hadn’t been the only one.
Prof. Gerlich had seen firsthand how AI tools can offload the thinking process. Once, while listening to a guest lecturer, he peered over the shoulder of a student who prompted ChatGPT for questions to ask. “ChatGPT recommended a question that the student then asked, which had been answered extensively five minutes ago,” he recalled.
His research found a “significant negative correlation” between AI use and critical-thinking abilities: heavier dependence on AI tools was associated with lower critical-thinking scores, an association that was more pronounced among those aged 17 to 25.
Prof. Gerlich suggested a vicious cycle could play out: relying on generative AI reduces the need for deep analysis and thought, which leads to more reliance on AI. “It inadvertently fosters dependence, which can compromise critical thinking skills over time,” he wrote.
The paper includes quotes from some of the 666 participants that hew neatly to his hypothesis. “I sometimes feel like I’m losing my own problem-solving skills,” one participant said.
Michael Gerlich, a business-school professor in Zurich, tested ways that generative AI can 'inadvertently foster dependence' in its users. Christian Bobst/The Globe and Mail
It’s clear why the findings resonated with educators, some of whom have seen a deluge of AI-generated work from students and fret about how these young people are ever going to learn. A KPMG survey last fall found that 59 per cent of Canadians over the age of 18 use generative AI in their school work, with two-thirds of that group saying they don’t think they’re learning or retaining as much knowledge.
Meanwhile, tech companies are desperate to spur adoption. OpenAI gave postsecondary students in Canada and the U.S. free access to a premium version of ChatGPT earlier this year for a limited time, while other companies are stuffing AI features into e-mail clients, writing software and social-media platforms, offering to compose whatever you need so you don’t have to.
In a way, AI is just the latest technology stoking fears of human decline. Calculators, computers, internet search – even writing – have all forced us at one time or another to reckon with what we stand to gain and what we stand to lose, and to devise ways of preserving and enhancing our skills.
With AI, the evidence on its benefits for learning and education is mixed, and there are indeed studies showing the technology can have a positive effect. For now, we can only speculate about the long-term impact of AI on thinking, or really how much we’ll be thinking at all.



Automation is a funny thing. Lisanne Bainbridge, a professor at University College London, understood this well, and wrote a brief but influential paper about it in 1983. She considered industrial settings that used automated control systems. To monitor these systems effectively, workers need experience with the underlying tasks, which, ironically, they can’t get in an oversight role.
Meanwhile, skills deteriorate when they’re not used. “A formerly experienced operator who has been monitoring an automated process may now be an inexperienced one,” she wrote. “Another problem arises when one asks whether monitoring can be done by an unskilled operator.”
You could say the same about generative AI, which promises to automate a range of tasks, including anything that involves writing. The notion that relying on AI can inhibit your skills certainly feels true. But the slow pace of science – research, analysis and replication – not to mention the fact that generative AI is so new, means that showing an empirical effect one way or the other will take time.
Prof. Gerlich, for one, surveyed participants about their AI usage and gave them a questionnaire assessing critical-thinking abilities. The correlation he found – not causation, mind you – is that higher AI usage is associated with lower critical-thinking scores. He suggests that delegating important mental work to AI may weaken these skills.
Nick Byrd, an assistant professor of cognitive science at the Geisinger College of Health Sciences in Pennsylvania, has a less dire interpretation of this connection, particularly among young people. These are perhaps people who lack confidence, struggle academically and use AI as a tool to help, similar to university students accessing a campus writing centre.
“You’re going to find a bunch of correlation between going to the writing centre and subpar writing,” he said, “but it’s not like the writing centre is causing their writing to be worse.”
Researchers from China and Australia also explored a link in a study published in December. They asked 117 university students to write and revise an essay, and split them into four groups. One group could tap a human expert while another could ask ChatGPT for advice, but not to write the essay. Those in the latter group scored the best on their essays, but when tested on the topic as part of the experiment, they didn’t fare any better than other participants. AI delivers a short-term boost while carrying the potential for “long-term skill stagnation,” according to the study. (The researchers also said some participants appeared to copy and paste from ChatGPT, despite being told not to.)
Microsoft has done its own research into generative AI for education to see how it affects critical thinking. Gonzalo Fuentes/Reuters
Researchers at Microsoft Corp. issued a similar warning this year. They surveyed 319 knowledge workers about their use of generative AI, and found that people still deployed critical-thinking skills when writing prompts and when fact-checking and editing the outputs.
The more confident users are in the capabilities of AI, however, the less critical thinking they deploy, the survey found. And when they’re not confident in their own abilities, they’re more likely to rely on AI. “Users adopt a mental model that assumes AI is competent for simple tasks,” according to the study, which can “lead to overestimating AI capabilities.” The technology boosts efficiency, but “can inhibit critical engagement with work and can potentially lead to long-term overreliance.”
I e-mailed the lead author of the paper, a PhD student. He did not respond, but must have contacted Microsoft, because an e-mail soon arrived from a PR rep with a statement from Lev Tankelevitch, a co-author and Microsoft behavioural scientist.
“When studying human behaviour, seemingly opposing ideas can both be true,” he wrote. He pointed to another study showing that AI tutors – guided by human teachers, he emphasized – helped students in Nigeria achieve two years of learning progress in six weeks. His own study noted that participants tended to eschew critical thinking for low-stakes tasks. “When the stakes are higher, people naturally engage in more critical evaluation,” he wrote.
The paper itself echoed Prof. Bainbridge’s 40-year-old observation on that front. Without regular practice in menial or mundane tasks, “cognitive abilities can deteriorate over time, and thus create risks if high-stakes scenarios are the only opportunities available for exercising such abilities.”
Companies like Microsoft are also determined to make AI more powerful and trustworthy. It is in their financial interest for the technology to assume more responsibility for human labour. As it becomes more sophisticated, a high-stakes task today becomes low-stakes tomorrow. What becomes of our skills then?