The use of Artificial Intelligence (AI) in the classroom is something that all faculties, from elementary school to graduate school, need to address. Last week our upper school faculty broke into groups to do just this. The discussion seemed fruitful but nowhere near final. I’ll admit that I’m something of an AI skeptic. I won’t pretend that I understand how it all works, but I do try to read articles and listen to podcast episodes where experts address the rapid changes we’re seeing. To the best of my ability, I’ve formulated an opinion not so much on whether AI should be used in the classroom as on whether it should be used in my classroom. I want to put those thoughts down somewhere, so here we go.
What do we mean by “AI”?
One problem with this discussion is that everything seems to be “AI” now. As one podcast I was listening to pointed out, AI has become a marketing label. It’s useful for attracting venture capital. It’s helpful for selling your product. AI means so many different things (does Word use AI? Grammarly? ChatGPT? And are these products all doing the same thing?) that a blanket acceptance or denouncement is impossible. (I’m sure the episode is linked below, but I can’t remember which of the podcasts made this point!) Personally, I’m most concerned with “Large Language Models”, or “LLMs”.
Is AI’s relevance the same for all subjects?
One thing I noticed during our faculty discussion is that my colleagues who teach in our “English” or “Social and Religious Studies” departments emphasized the dangers of AI, while my colleagues who teach STEM subjects emphasized the benefits. The educational goals of the humanities stand in tension with many of the educational goals of STEM. I’ve noticed that many STEM teachers are prone to celebrate what humans can do with new scientific discoveries and technological advances, whereas many humanities teachers tend to sound the alarm about what these discoveries and advances might do to our humanity. (On this note, I highly recommend Scott Stephens and Shannon Vallor’s discussion: “What is AI doing to our humanity?”) This isn’t always the case. Some people in the humanities are convinced that the humanities need to embrace things like AI (e.g. “AI, reading, and the humanities”). They may be correct, though as I’ll discuss below, I think the answer to the question “Is AI good for us?” depends on the context in which it’s being asked.
Again, I return to my favorite “Jurassic Park” meme to explain how humanities teachers often feel about what’s happening in the world of STEM:

In a recent interview with Sean Illing (see “Yuval Noah Harari on the eclipsing of human intelligence”), Yuval Noah Harari talked about his new book Nexus: A Brief History of Information Networks from the Stone Age to AI. He frames history around information networks. Harari isn’t an alarmist, but he’s concerned about the impact of AI (one information network) on democracy (another information network). This goes beyond Russian spam bots on X/Twitter and other social media. If someone like Harari is sounding the alarm, we should listen. The more we teach our students to outsource their thinking to AI systems, or even Google search results, the less we should be surprised when we’re surrounded by people who are easily manipulated by technology for the simple reason that it’s technology!
For reasons like this, I won’t speak to what my colleagues in mathematics or the sciences are doing. I will say that those of us who teach students to read, write, philosophize, theologize, engage in politics, compile history, create art, etc., should be very concerned about what AI could do to our students’ brains.
Is AI’s dominance inevitable?
Another argument I heard for using AI in the classroom goes something like this: the dominance of AI is inevitable, it’s the future, so we had better spend time teaching students how to use it. I’m not convinced that this is true. One book I want to read soon is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. One of the authors, Prof. Arvind Narayanan of Princeton University, was interviewed by Antony Funnell (see “AI snake oil—its limits, risks, and its thirst for resources”), and I came away from that interview wondering if many of us are buying into the marketing campaigns of the Elon Musks and Sam Altmans of the world, who hope to keep profiting by convincing us that they can see the future. Musk has been promising self-driving Teslas for years, and we know that hasn’t been going well, but if Musk or Altman told investors and consumers that they don’t know if or when the technology will mature, they’d lose those investors and consumers. It’s important for them to convince us that we’re missing the train to the future and that they’re driving it!
Does AI need to be paired with maturity?
For the sake of argument, let’s concede that AI’s dominance is inevitable. That doesn’t automatically answer whether students should use these tools in our classrooms. There are many things that may be inevitable for our students when they’re older. I would be shocked to see a third grade teacher putting a kid behind the wheel of a car because driving is inevitable! Similarly, if students haven’t yet learned how to read, write, and analyze, it’s educational malpractice to emphasize tools they’re not ready for!
There are stages of our development when handwriting is really good for students (see “Handwriting is good for the brain”). There are stages of development when less is more with regard to technology use and accessibility (see “Anecdotal evidence about phones in the classroom”). And I think there are stages in our development when, once we’ve learned the basic skills the humanities teach us, we may be ready to use AI. Personally, I’m happy for my students to wait until college, and I’m satisfied with punting to the colleges and universities that have far more resources for dealing with student use of AI. When kids go to college, they have to make all sorts of decisions about how they spend their time, who they spend it with, etc., that we don’t ask them to make in high school.
I’ve heard some compare hesitancy to embrace AI with hesitancy to embrace the Internet in the 1990s. I don’t think the two are the same, but I do think such a comparison makes an unintentional point. All of us wish we had known how the Internet would be weaponized for things like misinformation, bullying, algorithms that feed on anger, etc. If we could go back and prepare ourselves for the ugly side of Internet use, we would. This is my warning! We know that LLMs bullshit (see “ChatGPT is Bullshit” by Michael Townsen Hicks, et al., and “Are LLMs Natural Born Bullshitters” by Anand Jayprakash Vaidya). They don’t know any better. If we don’t try to help our students develop skeptical thinking skills (see below), we’re feeding them to AI systems that have no way of caring whether what they say is true or false. As J. Aaron Simmons has written about bullshitters (see “I’d Rather Be a Liar”):
“In contrast to the liar, the bullshitter doesn’t even care about truth at all. They are not intending to deceive their audience, but rather the bullshitter attempts to motivate behavior in their audience that supports their own self-interest.”
Systems like ChatGPT have one “goal”: engagement. They’re not concerned with truth, as Vaidya wrote in the article linked above:
“Research suggests that LLMs, left to their own devices, are natural-born bullshitters. The tendency for LLMs to hallucinate has only been reduced through reinforcement learning from human feedback. Without human intervention, they appear to lack the ability to control or reduce their hallucinations through training unaided by humans. Even if their hallucination rate is low, it might be that they have a fundamental disposition to bullshit as a result of the fact that they think* as opposed to think as well as care* as opposed to care for the truth.”
In other words, whatever seems “human” about LLMs is there because we humans remain involved. One analogy Vaidya gives is helpful. He writes, “Just as we can say a car ‘runs’, when it is clear to everyone that the underlying mechanics of a functioning car and a running animal are fundamentally different, we can also apply words like ‘think’, ‘assert’, ‘understand’, and ‘know’ to LLMs without losing sight of the underlying mechanical and structural differences. Mental life need not be human mental life to be mental life.” Hence the asterisks next to “think” and “care” in the quote above: LLMs “think” and “care” like us only in the sense that cars “run” like us.
Creating Skeptical Thinkers/Avoiding AI’s “Mirror”
Personally, I don’t think many adolescents are ready to discern what bullshitters like ChatGPT are feeding them. This means that those of us who are fighting for the future of the humanities need to be very intentional about teaching our students to be skeptical thinkers. What do I mean by this? Well, I mean something like what Prof. Jamil Zaki of Stanford University calls “hopeful skepticism”, which he contrasts with cynicism:
“…hopeful skepticism is about applying a scientific mindset. Like a scientist, hopeful skeptics seek out facts and evidence instead of relying on feelings and fears. And rather than being fatalistic, they are critical and curious instead.”
We need to teach students to have a skeptical mindset that doesn’t just accept things at face value but, again, seeks “out facts and evidence” and is “critical and curious”. I can use ChatGPT this way. I can use Google search results this way. But my students could easily become susceptible to embracing whatever ChatGPT or Google feeds them. If we don’t prepare them for this (which may mean walking them through the use of LLMs in our classes, but doesn’t necessitate making that jump), we’ll be in trouble as a society. We’ll face a future where LLMs, like dogs returning to their vomit, consume AI-generated information so that the cycle of information is AI feeding AI feeding AI. As Shannon Vallor argues in The AI Mirror (another book I need to read), “today’s powerful AI technologies reproduce the past”. They reflect past, cumulative human knowledge (see the interview already linked above: “What is AI doing to our humanity?”). Whether they can create new knowledge remains to be determined, but we shouldn’t outsource the creativity of the human brain to AI any more than we should start talking to someone’s reflection in a mirror while ignoring the person/people being reflected. When it comes to thinking, we’re still superior.


