Is AI dulling critical-thinking skills? As tech companies court students, educators weigh the risks

bnew

Veteran
Joined
Nov 1, 2015
Messages
64,580
Reputation
9,864
Daps
175,339

Is AI dulling critical-thinking skills? As tech companies court students, educators weigh the risks


In Depth



Your brain on AI


Will tools designed to help us instead atrophy our critical-thinking skills? As tech companies court students, educators calculate the long-term risks

Joe Castaldo

The Globe and Mail

Published Yesterday

Photo illustration by The Globe and Mail

After Michael Gerlich published a study this year, his inbox was flooded. He got so many messages, mostly from teachers, he wondered about closing his account. This was unusual for the professor, who teaches at SBS Swiss Business School in Zurich, where he heads the rather staid-sounding Center for Strategic Corporate Foresight and Sustainability.

The paper that triggered the response, published in a peer-reviewed journal called Societies, looked at the relationship between the use of generative artificial intelligence applications, such as ChatGPT, and critical-thinking skills. He had sensed during lectures that students didn’t seem to be thinking as deeply as they once had been, and wondered whether AI was playing a role. Judging from his inbox, he hadn’t been the only one.

Prof. Gerlich had seen firsthand how AI tools can offload the thinking process. Once, while listening to a guest lecturer, he peered over the shoulder of a student who prompted ChatGPT for questions to ask. “ChatGPT recommended a question that the student then asked, which had been answered extensively five minutes ago,” he recalled.

His research found a “significant negative correlation” between AI use and critical-thinking abilities: higher dependence on AI tools is associated with lower critical-thinking scores, an effect more pronounced among those aged 17 to 25.

Prof. Gerlich suggested a vicious cycle could play out: relying on generative AI reduces the need for deep analysis and thought, which leads to more reliance on AI. “It inadvertently fosters dependence, which can compromise critical thinking skills over time,” he wrote.

The paper includes quotes from some of the 666 participants that hew neatly to his hypothesis. “I sometimes feel like I’m losing my own problem-solving skills,” one participant said.


Michael Gerlich, a business-school professor in Zurich, tested ways that generative AI can 'inadvertently foster dependence' in its users. Christian Bobst/The Globe and Mail

It’s clear why the findings resonated with educators, some of whom have seen a deluge of AI-generated work from students and fret about how these young people are ever going to learn. A KPMG survey last fall found that 59 per cent of Canadians over the age of 18 use generative AI in their school work, with two thirds of that group saying they don’t think they’re learning or retaining as much knowledge.

Meanwhile, tech companies are desperate to spur adoption. OpenAI gave postsecondary students in Canada and the U.S. free access to a premium version of ChatGPT earlier this year for a limited time, while other companies are stuffing AI features into e-mail clients, writing software and social-media platforms, offering to compose whatever you need so you don’t have to.

In a way, AI is just the latest technology stoking fears of human decline. Calculators, computers, internet search – even writing – have all forced us at one time or another to reckon with what we stand to gain and what we stand to lose, and to devise ways of preserving and enhancing our skills.

With AI, the evidence for the benefits for learning and education is mixed, and there are indeed studies showing the technology can have a positive effect. For now, we can only speculate about the long-term impact of AI on thinking, or really how much we’ll be thinking at all.



Pupils at this primary-school class in Colomiers, France, have been AI pioneers since their teacher brought in software called Mathia, for math, and Lalilo, for French-language lessons. The French education ministry approved both AI tools to assist in teaching. Matthieu Rondel/AFP via Getty Images

AI has added a new level of strife to China’s college entrance exams, which these pupils were cramming for in Fuyang last month. Beijing is cracking down on AI-generated practice tests whose sellers claim they will anticipate the real questions. Some states will use AI-assisted surveillance to monitor for cheating. AFP via Getty Images



Automation is a funny thing. Lisanne Bainbridge, a professor at University College London, understood this well, and wrote a brief but influential paper about it in 1983. She considered industrial settings that used automated control systems. To monitor these systems effectively, workers need experience with the underlying tasks, which, ironically, they can’t get in an oversight role.

Meanwhile, skills deteriorate when they’re not used. “A formerly experienced operator who has been monitoring an automated process may now be an inexperienced one,” she wrote. “Another problem arises when one asks whether monitoring can be done by an unskilled operator.”

You could say the same about generative AI, which promises to automate a range of tasks, including anything that involves writing. The notion that relying on AI can inhibit your skills certainly feels true. But the slow pace of science – research, analysis and replication – not to mention the fact that generative AI is so new, means that showing an empirical effect one way or the other will take time.

Prof. Gerlich, for one, gave surveys to participants asking about their AI usage, along with a questionnaire and an assessment of critical-thinking abilities. The correlation he found – not causation, mind you – is that higher AI usage is associated with lower critical-thinking scores. He suggests that delegating important mental work to AI may weaken these skills.

Nick Byrd, an assistant professor of cognitive science at the Geisinger College of Health Sciences in Pennsylvania, has a less dire interpretation of this connection, particularly among young people. Perhaps these are people who lack confidence, struggle academically and use AI as a tool to help, similar to university students accessing a campus writing centre.

“You’re going to find a bunch of correlation between going to the writing centre and subpar writing,” he said, “but it’s not like the writing centre is causing their writing to be worse.”

Researchers from China and Australia also explored a link in a study published in December. They asked 117 university students to write and revise an essay, and split them into four groups. One group could tap a human expert while another could ask ChatGPT for advice, but not to write the essay. Those in the latter group scored the best on their essays, but when tested on the topic as part of the experiment, they didn’t fare any better than other participants. AI delivers a short-term boost while carrying the potential for “long-term skill stagnation,” according to the study. (The researchers also said some participants appeared to copy and paste from ChatGPT, despite being told not to.)

Microsoft has done its own research into generative AI for education to see how it affects critical thinking. Gonzalo Fuentes/Reuters

Researchers at Microsoft Corp. issued a similar warning this year. They surveyed 319 knowledge workers about their use of generative AI, and found people still deployed critical-thinking skills when writing prompts and when fact-checking and editing the outputs.

The more confident users are in the capabilities of AI, however, the less critical thinking they deploy, the survey found. And when they’re not confident in their own abilities, they’re more likely to rely on AI. “Users adopt a mental model that assumes AI is competent for simple tasks,” according to the study, which can “lead to overestimating AI capabilities.” The technology boosts efficiency, but “can inhibit critical engagement with work and can potentially lead to long-term overreliance.”

I e-mailed the lead author of the paper, a PhD student. He did not respond, but must have contacted Microsoft, because an e-mail soon arrived from a PR rep with a statement from Lev Tankelevitch, a co-author and Microsoft behavioural scientist.

“When studying human behaviour, seemingly opposing ideas can both be true,” he wrote. He pointed to another study showing that AI tutors – guided by human teachers, he emphasized – helped students in Nigeria achieve two years of learning progress in six weeks. His own study noted that participants tended to eschew critical thinking for low-stakes tasks. “When the stakes are higher, people naturally engage in more critical evaluation,” he wrote.

The paper itself echoed Prof. Bainbridge’s 40-year-old observation on that front. Without regular practice in menial or mundane tasks, “cognitive abilities can deteriorate over time, and thus create risks if high-stakes scenarios are the only opportunities available for exercising such abilities.”

Companies like Microsoft are also determined to make AI more powerful and trustworthy. It is in their financial interest for the technology to assume more responsibility for human labour. As it becomes more sophisticated, a high-stakes task today becomes low-stakes tomorrow. What becomes of our skills then?

Microsoft now offers an app called Copilot to help users talk through problems. Jason Redmond/The Associated Press

Mél Hogan wonders about that in terms of literacy, whenever she’s reading a student paper she suspects is partly or wholly AI-generated, which happens all the time. “It’s everywhere,” she said. “It’s too easy as a workaround.” An associate professor in the film and media studies department at Queen’s University, she has even had graduate students turn in papers that appear to be entirely written by ChatGPT.

“It’s terrible to teach in times of AI,” she said. “Despair” is the word she uses to describe how she feels grading something a student likely didn’t even write. “Maybe I just let everyone write using ChatGPT, and I use ChatGPT to grade and let the absurdity take over,” she said.

Generative AI can exploit a human weakness: an aversion to putting in effort. But it is similarly exposing problems with our education system. Prof. Hogan said that students increasingly view universities as a mere pipeline to internships and jobs. It’s about jumping through hoops to get to the next one.

“I sense more and more from both undergrad and grad students that as a professor, I stand in the way of their credentials so they can go to work,” she said. “It’s not a privilege to go to university. It’s an obstacle.”

What better way to smash through it than with a tool that can do the work for you?



Computers were once bulky machines meant for engineers and scientists: Running the magnetic tape banks of this IBM 700-series from the 1950s was not for an amateur. The transistor revolution made the technology more accessible to everyone. Supplied by Avro Aircraft

The Globe took a look inside this Texas Instruments Datamath for a 1974 series on pocket calculators, a technology new to Canadian consumers at the time. Educators gave reporter Arnold Bruner a mix of opinions about whether they would help or hinder students' grasp of math. James Lewcun/The Globe and Mail



Technology has of course stirred fear and anxiety for ages, with warnings that the new thing of the day will leave society worse off. Socrates, for one, is said to have admonished the written word, arguing the act of encoding information in this way weakens our memories. How do we know he thought this? Because his student, Plato, wrote it down.

In more modern times, calculators struck fear into the hearts of some teachers. These devices went from cumbersome machines to cheap, pocket-sized pieces of plastic by the 1970s. A Globe and Mail article from 1974 pondered: “Is it a fad – or get-rich-quick gimmick that will fade out when people get tired of it? Or are calculators of real value and here to stay?”

Some parents and educators worried children wouldn’t develop math skills and would instead rely on calculators. They would mindlessly push buttons and lack the understanding to know whether they had made a mistake, trusting the output of a machine. (Sound familiar?)

“It is going too far to allow the young public school student the use of the calculator in class. How much better for the development of the young mind to first learn the art of arithmetic with the head and pencil,” reads a 1975 letter to The Globe.

“Most of us face daily situations requiring a quick mental computation,” a math professor wrote in later. “Even when a calculator is handy, the user can only limp along unless he has enough intuition of numbers.”

Others recognized that since kids might have calculators at home, schools couldn’t ignore their existence. Calculators could open students up to more advanced forms of math, and they would be left behind if they didn’t learn new technology.

Some school curricula have struck a balance, emphasizing that kids still need to learn the basics. But it took time, research and debate, and it’s a tricky balance. Three decades after calculators took off, a group of public schools in Edmonton limited usage because some kids were helpless without them.

White-board desks give these Grade 3 and 4 pupils room to work out problems by hand. Fred Lum/The Globe and Mail

You will also find educators arguing that arithmetic skills really have eroded. Frances Woolley, an economics professor at Carleton University, has students who are dumbfounded by simple calculations. “If you want to make your students think that you’re brilliant, just do mental arithmetic,” she said. “Nobody needs to calculate HST in their head, but much more valuable skills are being lost, like being able to budget and make good financial decisions.”

Prof. Woolley has seen her share of what seems like banal AI-generated writing in student work, but isn’t entirely sure how to set them straight, at least not without drastic changes that would make students unhappy.

“When students delegate their work to AI, their skills atrophy,” she said. We all fall victim to something called hyperbolic discounting, she continued, meaning we don’t consider future costs and benefits and instead focus on the present. “It’s just really hard for anyone, in any circumstance, to incur pain now for uncertain distant future gains,” she said.

There is another, more recent technology that prompted similar concerns as generative AI: GPS. Its ubiquity led to speculation about how it would affect our spatial memory, which is governed by a part of the brain called the hippocampus. “People that constantly rely on GPS for navigation, they actually see a negative change in the hippocampus, or some form of atrophy,” said Oliver Hardt, an associate professor at McGill University who researches neuroscience and memory.

Those folks may build more grey matter density in other parts of the brain responsible for following directions, but the hippocampus is a powerful structure involved in learning and memory formation. A weak one can lead to less cognitive flexibility, Prof. Hardt said.

He, too, wonders about the harm some students may be incurring by relying on AI. “It’s really killing the ability to learn how to think, if they use it all the time.”

Writing is thinking, he said. The act of putting words to paper shows you what you know and what you don’t, and engages your mind in a way that daily life does not.

Prof. Hardt faulted some of the studies mentioned in this article for what he saw as questionable experimental design and analysis. Beyond being an impressive display of critical thinking, his assessment also illustrates how more study is needed. He does have advice on using new tech, though, whether it be GPS or AI. “If you don’t know how to do it, you shouldn’t use a tool to do it for you,” he said. “You have to be a master in something in order to outsource it.”



Satellite navigation technology is now a fact of life in many industries. The potato planter at this farm in Qingdao uses a Chinese-made system called Beidou to keep the rows straight over long distances. Costfoto/NurPhoto via Reuters Connect



Michael Gerlich, the SBS Swiss Business School professor who wrote about AI and critical thinking, does not want his research to be construed as doom-mongering. He thinks one positive way to engage with AI is to treat chatbots like intellectual sparring partners – push for evidence, ask for alternative views, look for logical gaps. That puts humans in control, rather than passively accepting answers.

“It can develop your own thinking to a different level, and that is where we will benefit,” he said. Students have to be taught to do so, however. “This is a concept that is new to young people.”

Nick Byrd, the Geisinger professor, advises his students to bounce ideas off one another to refine their thinking, but says they can do the same thing with a chatbot. “I struggle to imagine that just talking to a human or a machine is inherently problematic for critical thinking,” he said. “Talking to other humans tends to help people be more reflective. That’s kind of what Socrates was all about.”

The design of AI applications matters, too. Even before ChatGPT, humans tended to accept flawed or incorrect responses from AI applications. In 2021, researchers at Harvard University and Lodz University in Poland published a study about whether people could be forced to use their brains through different design strategies, including making them wait a bit for an answer from a simulated AI application, which allowed for a brief window for thought. In tests, participants were significantly less reliant on AI, but they did not enjoy the experience. We want our technology fast and efficient, and the researchers wrote that people are less likely to use software that introduces friction.

Tovi Grossman, an associate professor of computer science at the University of Toronto, is flipping the concept on its head: he wants students to teach the AI. It’s premised on the idea that the best way to know if you’ve mastered something is to teach it to others. “The AI would be trained in a way that has certain knowledge gaps, and you see if the student can fill those knowledge gaps in,” he said.

It’s hard to get an AI model to play dumb, however. “You’ll tell it that it doesn’t know anything about this topic, and you walk it through the first couple of steps,” Prof. Grossman said, “and then it writes it perfectly.”

He’s confident those kinks can be ironed out, and is working with a PhD student, Majeed Kazemitabaar, and a research scientist. For now, he is focused on developing the tool for computer coding education (it’s called CodeAid), though there is potential to expand. “It’s hard to think of an educational domain where you couldn’t apply the same concepts,” he said.

High schools are grappling with AI, too. David Hay can’t avoid incorporating the technology, as a teacher of robotics and computer science in Sherwood Park, Alta. The past few years, he’s noticed a larger portion of kids are slipping in terms of learning and achievement, a sentiment his colleagues share. There could be lots of reasons – the pandemic, the dopamine hits of social media – but the convenience of generative AI could be exacerbating the trend. “They realize, ‘Oh, I don’t really have to write an essay. I can just put a prompt into generative AI and I don’t have to engage my brain,’” he said.

It’s not an entirely new problem. There have always been those kids who soak up knowledge and those whose curiosity and drive have to be cultivated. “Teaching is about motivating students and curating experiences for them. They can get all the content they need from other sources,” Mr. Hay said. “How do we say to them, just like working out gives you stronger muscles, doing hard things will give you a better brain?”

Some are not all that interested in AI, perhaps surprisingly. Recently, Mr. Hay recommended an AI-powered coding tool to a student who wanted to make an interactive website. She tried it but the results were too generic. She decided to write the code herself.
 

Ghost Utmost

The Soul of the Internet
Supporter
Joined
May 2, 2012
Messages
20,203
Reputation
8,650
Daps
73,588
Reppin
the Aether
No way I was gonna read all that.

But AI makes me smarter.

As a kid the only thing was a set of encyclopedias for a quick lookup.

Google changed the random fact game, but it was still an index of other sources that you had to sift through.

I use AI at work to find instructions for consumer products, software, and error codes. The AI summary saves me countless steps.

Once I learn something I tend to retain it. The faster I can gather information the faster I can increase my overall knowledge.
 

Thavoiceofthevoiceless

Veteran
Supporter
Joined
Aug 26, 2019
Messages
45,370
Reputation
6,727
Daps
140,323
Reppin
The Voiceless Realm
No offense to you OP, but can’t take you serious anymore after what you were posting in that escapee thread.

Always trying to drop knowledge but don’t understand the concept of aiding and abetting :laff:
 

Piff Perkins

Veteran
Joined
May 29, 2012
Messages
53,604
Reputation
20,161
Daps
293,404
The answer is clearly yes but I'd argue this would be the case even if AI wasn't as prominent, because we're stacking generations that cannot and do not read. Large groups of people are no longer capable of understanding basic literary techniques. Allegory, repetition, foreshadowing, theme, etc. They cannot discern basic information from texts. They cannot follow basic instructions at work or school because they don't recognize a lot of words we take for granted.

Minds have already been weakened and AI is arriving at a perfect time to finish the job, basically. The thing that's most striking to me is the aggressive, almost violent stupidity. It's not just that they don't know basic things, they get angry at the mere suggestion that they need to know something, or that they should know something, or that they need to learn xyz. And when confronted with something they don't know they just double down on stupidity by dismissing it or asking AI for a half assed summary (that they don't even read in full). You can really see the path towards book burning, looting schools, etc. Like...the anger at education and expertise is gonna end in people just giving up all free thought and embracing AI to do everything for them.
 

Vandelay

Life is absurd. Lean into it.
Joined
Apr 14, 2013
Messages
25,376
Reputation
6,998
Daps
91,481
Reppin
Phi Chi Connection
Didn't read the whole thing, nor do I have to, to easily and emphatically say... "yeah." Getting answers without knowing how you got there... yes, that will eat your critical thinking skills and general intelligence. But convenience, right?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
64,580
Reputation
9,864
Daps
175,339
No offense to you OP, but can’t take you serious anymore after what you were posting in that escapee thread.

Always trying to drop knowledge but don’t understand the concept of aiding and abetting :laff:

I understand the concept of aiding and abetting. :comeon:

do you understand harm reduction?
 