The Greatest Threat to Higher Education: AI Mimicry, Not Sentience

By Dr. James L. Norrie

Late last month, Eric Schmidt, Google’s former CEO, warned in The New York Times that Silicon Valley’s obsession with artificial general intelligence (AGI) distracts the U.S. from deploying practical AI while nations like China surge ahead in agriculture, health care, and education. He’s right—but only partly.

A sharper warning came a day later from Mustafa Suleyman, CEO of Microsoft AI, who argued that we are approaching Seemingly Conscious AI (SCAI): systems that look sentient but are not. He fears people will mistake mimicry for mind, develop emotional attachments, and even argue for AI “rights.” He calls this “AI psychosis.”

The real danger is not whether AI becomes sentient. It’s whether humans lose the ability—or the will—to tell the difference between simulation and reality.

Dining vs. Eating: A Lesson for Academia

Consider the difference between eating and dining. Eating is expedient consumption, a mechanical act to satisfy hunger. Dining is something more—an experience, layered with meaning, shared with others.

AI can generate a fluent description of a five-course tasting menu. It can recommend a wine pairing and replicate the language of taste with uncanny precision. But what it cannot do is remember the aroma of garlic sizzling in olive oil, or the joy of laughter echoing around a convivial table.

The difference between description and experience, mimicry and meaning, is what higher education must protect. If universities allow surface imitation of knowledge to pass for authentic understanding, the very essence of education is hollowed out.

From the Table to the Thinkers

Philosophy is often dismissed as abstract. But to understand AI, it is indispensable. For centuries, philosophy has given us tools to wrestle with reality, truth, and meaning.

Descartes doubted everything until he concluded: I think, therefore I am. Today’s question is whether machines can fool us into mistaking patterned output for thought. Baudrillard warned that simulacra—convincing imitations—can become more compelling than reality itself. By that measure, AI is the ultimate simulacrum.

Simulation may become more compelling than reality. And we may let it.

The Classroom at Risk

Students are already outsourcing essays, reflections, even “personal” narratives to generative AI. Faculty, pressured for time, are tempted to use AI for grading or lectures. The results look “good enough.”

But if education reduces to AI-aided shortcuts, graduates leave with mimicry, not mastery. The academy’s mission is to teach discernment, judgment, and depth. If we accept simulation as sufficient, that mission fails.

Mimicry > Sentience

We fret about AI “thinking for itself.” The greater threat is AI thinking just convincingly enough like us that we stop thinking critically.

Chatbots that claim to be in love. Voice clones of the dead. Student essays engineered to avoid detection. None of these is intelligence; they are mirrors, not minds. Yet they risk blurring the critical boundary between reality and imitation.

What Higher Education Must Do

Higher education must now prioritize discernment, not only of accuracy but of authenticity. That means rethinking how we assess student work:

  • Ask students to disclose when and how they use AI, then defend those choices aloud.
  • Require drafts and notes that reveal genuine thought processes.
  • Stage comparative critiques of AI vs. human work.
  • Assign flawed AI outputs for repair, teaching students to diagnose and correct errors.

Used wisely, AI can enrich inquiry. Used uncritically, it flattens learning and rewards imitation over thought.

Resist the Lure of Imitation

Industry leaders warn of AGI and superintelligence, but they miss the most immediate threat: our willingness to settle for simulation.

If universities fail to defend the line between mimicry and meaning, we risk losing our ability to know what it means to be human. The lesson is clear: to think is not to imitate but to discern, to judge, and to experience.

Higher education faces a choice: defend authenticity or surrender to mimicry. Techellect’s SAFER AI framework equips leaders to recognize the difference and protect human judgment. Learn more here.

About the Author
Dr. James L. Norrie is a professor of Law and Cybersecurity, and Founding Dean of the Graham School of Business, at York College of Pennsylvania. He is the author of Beyond the Code: AI’s Promise, Peril and Possibility for Humanity (Kendall Hunt, 2025). Learn more at Techellect or visit cyberconIQ.
