Ninety per cent of faculty fear GenAI will erode critical thinking
Ninety per cent of over 1,000 faculty say they are concerned that generative artificial intelligence (GenAI) programmes like ChatGPT and Copilot will diminish students’ critical thinking skills, while 95% say such tools will increase students’ overreliance on AI, according to a recent survey.
The survey report, The AI Challenge: How College Faculty Assess the Present and Future of Higher Education in the Age of AI (AI Challenge), published last week by the American Association of Colleges and Universities (AAC&U), exposes what Professor Anna Mills, who teaches writing, reading, critical thinking and research skills at the College of Marin (Kentfield, California), believes is a crisis in need of a serious response.
“This survey shows we are in a crisis: despite all efforts of institutions and faculty to teach AI literacy, there is still growing over-reliance on it,” said Mills, who is the author of the textbook How Arguments Work: A Guide to Writing and Analysing Texts in College and a faculty member of the AAC&U’s Institute on AI, Pedagogy, and the Curriculum.
“There is learning loss and diminishment of critical thinking skills. We need to do more to protect learning and, while all the strategies that have been put forward help, the survey results highlight that much more is needed,” she said.
Professor Bryan Alexander, a futurist who teaches in Georgetown University’s (Washington) Learning, Design, and Technology programme and mentors faculty on AI through the AAUP Faculty Mentor Programme, told University World News the survey’s results “match what I’m seeing from a lot of college and university faculty, librarians, technologists, provosts and board members.
“There’s real anxiety, real dread and real confusion felt by a lot of faculty,” said Alexander, who, in his work as a futurist studying higher education, scans the horizon of academia in the United States and worldwide.
Faculty attitudes
AI Challenge was conducted by Elon University (Elon, North Carolina) between 29 October and 26 November 2025. Fifty-eight per cent of its respondents taught in the humanities, social sciences or communications departments, while 19% taught in biological, physical or computer sciences, agriculture or maths departments. The responses were anonymous and were not batched by university or college.
More than eight in 10 respondents said that “faculty resistance to using AI is a challenge to adopting tools in the courses in their departments”. Eighty-three per cent pointed to faculty unfamiliarity with the tools as the reason for this reluctance.
Only 12% said they use GenAI a lot for teaching and instructional activities, with another 34% saying they use it a little.
One respondent pithily answered the question about whether they used GenAI tools for teaching, learning or research by saying: “Never. AI diminishes human agency.” Another’s answer was more pungent: “I never use it. F*** AI. I have a brain and decades of training. I would only use it if I want to write like a B student who did not do the reading and made up all their references, which is never.”
Still, a solid majority, 69%, said they had addressed GenAI literacy issues in class and believed it is important for students to understand the systems’ “potential for bias, hallucinations, capacity to generate misinformation and deepfakes, privacy implications, cybersecurity problems and environmental issues”.
Presumably this last point encompasses the issue of the huge amount of electricity AI server farms consume and the water resources necessary for cooling such farms.
A ‘difference in kind, not degree’
While the survey did not ask specifically about students’ reading skills, its findings on critical thinking and attention spans bear directly on reading: poor reading, of course, impairs critical thinking, and 83% of instructors report that students’ attention spans have decreased.
Dr C Edward Watson, the AAC&U’s vice-president for digital innovation, who co-authored the AI Challenge with Lee Rainie, director of the Imagining the Digital Future Center at Elon University, told University World News that the student who uses GenAI to produce a one-page or half-page summary of an article could be said to be using AI productively, except that getting the gist is no replacement for “squeezing the juice out of an article” by reading it.
Relying on a GenAI summary “sidesteps the effort component” and impacts the student’s critical thinking skills “because he or she is not engaging with the content as richly,” Watson said.
One professor voiced their “anxiety that foundational literacies – deep reading, sustained attention, writing as thinking, and independent analysis – are actively undermined by AI.”
For her part, in distinguishing GenAI from earlier forms of media – comic books, television and video games – that were also seen as deleterious to both reading and critical thinking, Mills first broadly defined critical thinking as a two-step process.
“It is the ability to use a variety of cognitive strategies,” she said, “to understand what someone else is saying and to formulate your own assessment of that and your own response to that.”
The second part consists of using what you have understood “so that you can build a larger conversation and participate in a larger conversation that makes decisions about it”.
Generative AI is not a difference of degree, Mills told University World News; it is a difference in kind.
“It is different because these systems are so responsive to our requests at any given moment. So, whatever cognitive task we’re facing, we’re going to have the option to offload it – and that’s a problem for learning because learning happens through cognitive activity and engagement, through struggle, through friction generated by dealing with the material.
“With GenAI, the temptation is so much greater than ever before to offload at any moment of struggle or insecurity and to skip that thinking process that is essential for learning,” said Mills.
How to define cheating
GenAI has also created a culture where cheating is ubiquitous: 78% of survey respondents said cheating on their campus has increased since GenAI tools became widely available.
Yet the definition of cheating is not as clear-cut as the 78% figure suggests. While 76% think using an AI programme to write the first draft of a paper is cheating, 11% of instructors do not.
If, as seems safe, we apportion the 13% who say they do not know according to the 76%/11% ratio, roughly 13% of instructors do not consider having a GenAI programme write the first draft of a paper to be a grave violation of academic ethics.
Forty-five per cent think that inputting a paper into a GenAI programme and asking for its guidance to make sure the paper accords with the grading rubric is perfectly okay.
Thirty-four per cent consider this cheating, and 21% are not sure; if we apportion the unsure according to the 45%/34% ratio, roughly 57% of instructors – more than half – are fine with students asking GenAI to make sure their papers accord with the grading rubric.
Fully 60% of instructors are fine with their students using AI tools to fact-check, fix citations and “adjust a paper’s structure.”
Only a bare majority, 52%, of professors said using AI to develop a detailed outline was cheating.
‘Nostalgia is not a strategy’
Alexander poured cold water on those who hope that EdTech will solve EdTech’s problems.
“The problem of cheating, of academic integrity, is one which we’ve utterly failed to grapple with. There’s no technological solution. The GenAI (so-called) detectors are horrible. They give false positives and false negatives. They’re a mess, and there’s no sign that they’re getting better,” he said.
Nor does he think that returning to in-class written assignments, what North Americans of a certain vintage remember as the “blue books” used in proctored exams, will fix the problem.
“Nostalgia is not a strategy,” said Alexander, quoting Canadian Prime Minister Mark Carney’s speech at Davos last week in which Carney pronounced the rules-based trading system built up by the United States and Europe since the end of the Second World War to be over because of President Donald Trump’s tariffs.
Oral presentations too, he said, have problems, not least of which is that they are not scalable. “If you’ve got a seminar of a dozen students, that’s one thing. But, if you have a lecture hall of 400, that’s another,” he noted.
And that’s before, Alexander added, you get to the question of redesigning the pedagogy and the fact that most instructors are not trained to assess oral presentations. “There are also students with neurodivergency and [there’s] the fact that oral presentations penalise introverts or, at the very least, people with anxiety disorder,” he said.
In her discussion of academic integrity and grading, Mills referenced the University of Sydney’s “two-lane approach”: the first lane is a secure environment where access to AI can be controlled, and the other is an environment in which students “engage productively and responsibly with AI tools as part of their learning experience”. She also pointed to the work of Dr Tricia Bertram Gallant, director of the Academic Integrity Office and Triton Testing Center (University of California, San Diego) and author of The Opposite of Cheating: Teaching for Integrity in the Age of AI (University of Oklahoma Press, 2025).
“A big part of the crisis is that we don’t have [a] secure, valid means of assessment. And people are opposing it because they don’t like the idea of policing, of surveillance, of distrust,” Mills said.
“But what I think we are seeing in the survey is that we are in a situation where you need some guardrails.”
Mills was referring to the 68% of respondents who said that “their schools have not prepared faculty for using GenAI for effective teaching and mentoring of students.”
“You can’t just go on about revitalising pedagogy,” she said.
In addition to using some sort of in-person presentation and observed assessment, Mills told University World News that some form of online observation, such as process tracking in writing documents, could be used.
These methods are “not perfect and have to be used cautiously”, she added. By way of example, she pointed to “producing a process report on a Google Doc” or using a “lockdown browser” that tracks where students access information.
Google Docs and other programmes are not, she said, foolproof. “But they do provide some guardrail that makes it less tempting to cheat.”
Attitudes towards authorship
One of the most surprising results of the survey concerned faculty members’ reports of their own use of GenAI and their perceptions of what counts as cheating.
One half of the respondents said it was perfectly fine to use GenAI to produce a first draft of a syllabus; 30% said this was cheating, and 10% were not sure. Forty-six per cent said GenAI was a legitimate tool to produce PowerPoint presentations to use in class, while 32% felt this was cheating.
A bare majority, 52%, said using GenAI to answer students’ emails was illegitimate, but more than one quarter, 26%, believed this practice was legitimate. Two in 10 were unsure.
Using AI to grade was condemned by 71% of instructors, while 10% said it was legitimate. That 19% were not sure is surprising, given that every college or university has policies about grading and that grading is normally part of the employment contract, often with time allocated to it.
At first glance, the 82% of instructors who said that using GenAI “to write portions of an article that they then submit to a journal” was cheating would seem to put the issue to rest.
However, 7% said this was a legitimate use of GenAI, and 11% were not sure. If this 11% is apportioned according to the 82%/7% ratio, the percentage of instructors who see using GenAI to write publishable papers as legitimate approaches 8%.
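For readers who want to check this back-of-the-envelope arithmetic, the apportionment used here – and in the draft-writing and rubric figures above – can be reproduced in a few lines of Python. The sketch below is purely illustrative and is not part of the survey’s methodology; the function name and rounding are ours.

```python
def apportion(yes: float, no: float, unsure: float) -> tuple[float, float]:
    """Split the 'unsure' share between 'yes' and 'no' in proportion
    to the decided responses and return the adjusted percentages."""
    decided = yes + no
    return (yes + unsure * yes / decided,
            no + unsure * no / decided)

# First draft of a paper: 76% cheating, 11% not, 13% unsure
print(apportion(76, 11, 13))  # (~87.4, ~12.6): roughly 13% see no violation

# Rubric guidance: 45% fine, 34% cheating, 21% unsure
print(apportion(45, 34, 21))  # (~57.0, ~43.0): about 57% are fine with it

# Journal articles: 7% legitimate, 82% cheating, 11% unsure
print(apportion(7, 82, 11))   # (~7.9, ~92.1): legitimacy approaches 8%
```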
That roughly 8% of instructors were willing to say in a survey that using GenAI to write part of a paper meant for publication was acceptable did not surprise Mills.
“I think that attitudes to writing and authorship are changing so much in the larger culture, and so, to some extent, people are starting to see this in the academy,” she said.