Does Gen Z have pre-Internet nostalgia?

I was listening to a recent episode of the Ezra Klein Show when something Klein’s guest Emily Jashinsky said caused me to pause and google. Jashinsky makes this claim about Gen Z’ers who are tired of social media and smartphones and may want to give them up (starting at about 15:34):

“Do you know what Gen Z is binging hours of on YouTube? It’s camcorder videos from the 1980s and 1990s of high schoolers. It’s the most boring camcorder videos on your old Sony that you could possibly imagine of people just at their lockers. No phones, just living in the moment, and Gen Z is binging these hard, and it goes beyond just the curiosity of these historical artifacts. I think actually, if you asked a 22-year-old that question, and it’s through the lens of what your everyday life would look like, and not just explicitly economic, I actually think a lot of them would take the deal. Not all of them, but the level of exhaustion with smartphones and social media…”

As she continues, she makes the case that younger conservatives, with whom she identifies, have a problem with modernity and would like to be free of some of its constraints, especially the dominance of technology and social media. I recommend listening to the episode yourself (embedded below) to hear her argument in its entirety, and I appreciate Klein, himself a progressive, hosting a conversation about the internal diversity of America’s conservative movement. I’m learning a lot as I listen, but that’s not what I want to address. I want to address this claim about Gen Z’s nostalgia.

Unfortunately, I can’t find any information about Gen Z binging camcorder videos. I’m not doubting the claim, per se, just saying I can’t link to any study or news article on the topic. If someone out there finds something, feel free to share in the comments and I’ll update this post! I want to know if this is true because it would be eye-opening, for one, but also affirming of the pedagogy I’ve implemented in my classroom.

What do I mean by this? Well, I share some of the concerns Jashinsky expressed about the Internet and social media. Obviously. In the past several weeks, I’ve written about how much better things seem at the school where I work since we banned smartphones and smartwatches during school hours (see “Anecdotal evidence about phones in the classroom”). I’m skeptical of Artificial Intelligence’s ability to contribute to my students’ education (see “AI in the/my classroom”). Instead, I encourage my students to handwrite almost everything at this stage (see “Handwriting is good for the brain”). I have zero interest in engaging with trendy social media platforms like Snapchat and TikTok, as evidenced by the fact that I’m blogging like it’s 2010 (see “Why do I blog?”). This means that almost no one hears my views on topics like this one, and I’m fine with that! It’s freeing to do this sort of thing mostly for myself, to process my own thoughts in writing, to help me become clearer about my reasoning. I mean, I confess: I despise what algorithms are doing to us and I’m happy to pretend that the Internet is something else, something freer than what it’s become:

But most importantly: is Jashinsky’s claim true that many Gen Z’ers wish they could have the lives we had in the 80s and 90s? Would they trade social media and smartphones for camcorders, landline phones, and getting their sports scores either on cable TV or from tomorrow’s newspaper?

I don’t think I’d make a 1-for-1 trade, but I do think there’s a lot about present modernity that we need to rethink with regard to smartphone use, the Internet, and social media, especially for young, developing minds. To clarify, I was raised (partially) in fringe religious circles. The Internet provided me with information, but also with dialogue partners, that made it impossible for me to continue in that religious movement once I became an adult. I imagine that pre-Internet, when your community was mostly people you knew in “real” life, I might have been more prone to settle for the sense of belonging that extreme religious groups can provide. But like the man being led out of Plato’s cave, the Internet gave me a map to freedom.

That being said, the Internet has also provided many people with a map into the cave. The conspiratorial thinking of QAnon is an Internet reality. Heaven’s Gate is famous for its use of the Internet to gain adherents and notoriety at the very beginning of the Internet era. So the Internet has been used for varied purposes from the start. It’s neither good nor bad in itself, nor are smartphones or social media.

But if Gen Z does have pre-Internet nostalgia, then we should pay attention to what it is they wish they had from the eras of our childhoods. (I’m an older Millennial, or a “Xennial” as we who were born in the early 1980s are called, so by “our” I mean the childhoods of the 80s and 90s.) It may tell us what our young people need, including Gen Alpha, who will be arriving in my classroom soon.

A final side note: I don’t remember being nostalgic for my parents’ youth. I had my own ups and downs as a kid and adolescent, but I enjoyed my era. I liked some of the music from my parents’ era, but I didn’t want to trade places. If even a sizable percentage of Gen Z does want to trade places with Millennials, or at least wishes they had some of what made our childhoods unique, then that tells us a lot about what’s gone wrong over the past two decades. It may give our collective culture a guide for how to course-correct. We should pay attention.

AI in the/my classroom

The use of Artificial Intelligence (AI) in the classroom is something that all faculties, from elementary to graduate school, need to address. Last week our upper school faculty broke into groups to do just this. It seemed fruitful but nowhere near final. I’ll admit that I’m something of an AI skeptic. I won’t pretend that I understand how it all works, but I do try to read articles and listen to podcast episodes where experts address the rapid changes we’re seeing. To the best of my ability, I’ve formulated an opinion not so much on whether AI should be used in the classroom but on whether it should be used in my classroom. I want to put those thoughts down somewhere, so here we go.

What do we mean by “AI”?
One problem with this discussion is that everything seems to be “AI” now. As one podcast I was listening to pointed out, AI has become a marketing label. It’s useful for gaining venture capital. It’s helpful for selling your product. AI means so many different things (does Word use AI? Grammarly? ChatGPT? and are these products all doing the same thing?) that a blanket acceptance or denunciation is impossible. (I’m sure it’s linked below, but I can’t remember which of the podcasts this point is from!) Personally, I’m most concerned with “Large Language Models,” or “LLMs.”

Is AI’s relevance the same for all subjects?
One thing I noticed during our faculty discussion is that my colleagues who teach in our “English” or “Social and Religious Studies” departments emphasized the dangers of AI, while my colleagues who teach STEM topics emphasized the benefits. The educational goals of the humanities stand in tension with many of the educational goals of STEM. I’ve noticed that many STEM teachers are prone to celebrate what humans can do with new scientific discoveries and technological advances, whereas many humanities teachers tend to sound the alarm about what these discoveries and advances might do to our humanity. (On this note, I highly recommend Scott Stephens and Shannon Vallor’s discussion: “What is AI doing to our humanity?”) This isn’t always the case. Some people involved in the humanities are convinced that the humanities need to embrace things like AI (e.g. “AI, reading, and the humanities”). They may be correct, though as I’ll discuss below, I think the answer to the question “Is AI good for us?” depends on the context in which it’s asked.

Again, I return to my favorite “Jurassic Park” meme to explain how humanities teachers often feel about what’s happening in the world of STEM:

In a recent interview with Sean Illing (see “Yuval Noah Harari on the eclipsing of human intelligence”), Yuval Noah Harari talked about his new book Nexus: A Brief History of Information Networks from the Stone Age to AI. He frames history around information networks. Harari isn’t an alarmist, but he’s concerned about the impact of AI (one information network) on democracy (another information network). This goes beyond Russian spam bots on X/Twitter and other social media. If someone like Harari is sounding the alarm, we should listen. The more we teach our students to outsource their own thinking to AI systems, or even to Google search results, the less we should be surprised when we’re surrounded by people who are easily manipulated by technology for the simple reason that it’s technology!

For reasons like this, I won’t speak to what my colleagues in mathematics or the sciences are doing. I will say that those of us who teach students to read, write, philosophize, theologize, engage in politics, compile history, create art, etc., should be very concerned about what AI could do to our students’ brains.

Is AI’s dominance inevitable?
Another argument I heard for using AI in the classroom goes something like this: the dominance of AI is inevitable, it’s the future, so we had better spend time teaching students how to use it. I’m not convinced this is true. One book I want to read soon is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. One of the authors, Prof. Arvind Narayanan of Princeton University, was interviewed by Antony Funnell (see “AI snake oil—its limits, risks, and its thirst for resources”), and I came away from that interview wondering if many of us are buying into the marketing campaigns of the Elon Musks and Sam Altmans of the world, who hope to keep profiting by convincing us that they can see the future. Musk has been promising self-driving Teslas for a while now, and we know that hasn’t been going well. But if Musk or Altman tells investors and consumers that he doesn’t know if and when the technology will mature, he’ll lose those investors and consumers. It’s important for them to convince us that we’re missing the train to the future and that they’re driving it!

Does AI need to be paired with maturity?
Let’s concede, for the sake of argument, that AI’s dominance is inevitable. This doesn’t automatically answer whether or not students should use these tools in our classrooms. There are many things that may be inevitable for our students when they’re older. I would be shocked to see a third-grade teacher putting a kid behind the wheel of a car because driving is inevitable! Similarly, if students haven’t yet learned how to read, write, and analyze, it’s educational malpractice to emphasize tools they’re not ready for!

There are stages of our development when handwriting is really good for students (see “Handwriting is good for the brain”). There are stages of development when less is more with regard to technology use and accessibility (see “Anecdotal evidence about phones in the classroom”). And I think there are stages in our development when, once we’ve learned the basic skills that the humanities teach us, we may be ready to use AI. Personally, I’m happy for my students to wait until college, and I’m satisfied with punting to the colleges and universities that have far more resources for dealing with student use of AI. When kids go to college, they have to make all sorts of decisions about how they spend their time, who they spend it with, etc., that we don’t ask them to make in high school.

I’ve heard some compare hesitancy to embrace AI with hesitancy to embrace the Internet in the 1990s. I don’t think the two are the same, but the comparison contains an unintended lesson. All of us wish we had known how the Internet would be weaponized for things like misinformation, bullying, and algorithms that feed on anger. If we could go back and prepare ourselves for the ugly side of Internet use, we would. This is my warning! We know that LLMs bullshit (see “ChatGPT is Bullshit” by Michael Townsen Hicks, et al., and “Are LLMs Natural Born Bullshitters” by Anand Jayprakash Vaidya). They don’t know any better. If we don’t try to help our students develop skeptical thinking skills (see below), we’re feeding them to AI systems that have no way of caring whether what they say is true or false. As J. Aaron Simmons has written about bullshitters (see “I’d Rather Be a Liar”):

“In contrast to the liar, the bullshitter doesn’t even care about truth at all. They are not intending to deceive their audience, but rather the bullshitter attempts to motivate behavior in their audience that supports their own self-interest.”

Systems like ChatGPT have one “goal”: engagement. They’re not concerned with truth, as Vaidya wrote in the article linked above:

“Research suggests that LLMs, left to their own devices, are natural-born bullshitters. The tendency for LLMs to hallucinate has only been reduced through reinforcement learning from human feedback. Without human intervention, they appear to lack the ability to control or reduce their hallucinations through training unaided by humans. Even if their hallucination rate is low, it might be that they have a fundamental disposition to bullshit as a result of the fact that they think* as opposed to think, as well as care* as opposed to care, for the truth.”

In other words, whatever seems “human” about LLMs is there because we humans remain involved. One analogy Vaidya gives is helpful. He writes, “Just as we can say a car ‘runs’, when it is clear to everyone that the underlying mechanics of a functioning car and a running animal are fundamentally different, we can also apply words like ‘think’, ‘assert’, ‘understand’, and ‘know’ to LLMs without losing sight of the underlying mechanical and structural differences. Mental life need not be human mental life to be mental life.” Hence the asterisks next to “think” and “care” in the quote above: LLMs “think” and “care” the way cars “run.”

Creating Skeptical Thinkers/Avoiding AI’s “Mirror”
Personally, I don’t think many adolescents are ready to discern what bullshitters like ChatGPT are feeding them. This means that those of us who are fighting for the future of the humanities need to be very intentional about teaching our students to be skeptical thinkers. What do I mean by this? Well, I mean something like what Prof. Jamil Zaki of Stanford University calls “hopeful skepticism,” which he contrasts with cynicism:

“…hopeful skepticism is about applying a scientific mindset. Like a scientist, hopeful skeptics seek out facts and evidence instead of relying on feelings and fears. And rather than being fatalistic, they are critical and curious instead.”

We need to teach students to have a skeptical mindset that doesn’t just accept things at face value but, again, seeks out “facts and evidence” and is “critical and curious.” I can use ChatGPT this way. I can use Google search results this way. But my students could easily become susceptible to embracing whatever ChatGPT or Google feeds them. If we don’t prepare them for this (which may mean walking them through the use of LLMs in our classes, but doesn’t necessitate making that jump), we’ll be in trouble as a society. We’ll face a future where LLMs, like dogs returning to their vomit, consume AI-generated information, so that the cycle of information is AI feeding AI feeding AI. As Shannon Vallor argues in The AI Mirror (another book I need to read), “today’s powerful AI technologies reproduce the past.” They reflect past, cumulative human knowledge (see the interview linked above: “What is AI doing to our humanity?”). Whether they can create new knowledge is to be determined, but we shouldn’t outsource the creativity of the human brain to AI any more than we should start talking to someone’s reflection in a mirror while ignoring the person being reflected. When it comes to thinking, we’re still superior.

Anecdotal evidence about phones in the classroom

I’m not a psychologist or a social scientist. But my own experience in the classroom has made me pay attention to the claims of people like Jonathan Haidt and Jean Twenge, both of whom have sounded the alarm about adolescent (over)use of smartphones. I’ve confiscated student phones only to have my pocket buzz incessantly, and I’ve wondered how anyone could focus with notification after notification from Snapchat, Instagram, and TikTok vying for their attention. I’ve seen my students sit around together, not speaking to each other, as each stared into their phone. Adults do this sort of thing too, but as Haidt, Twenge, and others have noted: we had the chance to live through our brains’ important developmental stages before getting smartphones. Gen Z didn’t get that opportunity. For this reason, Haidt, Twenge, et al., have argued that there is a causal link between smartphone use/addiction and the ongoing mental health crisis among America’s youth (for example, see Haidt’s “End the Phone-Based Childhood Now”).

My wife and I have seen the children of parents who raised their kids without smartphones and tablets, and the children of parents who allowed them. Our experience has shown us that there are drastic differences in these kids’ ability to wait, be patient, delay gratification, hold conversations, read books, be creative, and just enjoy being children with imaginations. Our kid won’t have a smartphone or a tablet at their disposal. If these devices are used at all in daycare or school, we’ll ask for limits. My plan is to keep these technologies out of their life as long as I can.

For this reason, I was surprised when a recent episode of Freakonomics (“Is Screen Time as Poisonous as We Think?”) featured an interview with Andrew K. Przybylski of Oxford University, who seemed to brush these concerns aside. I think his main point was that phones aren’t the be-all and end-all of Gen Z’s mental health crisis. But as I listened to him, what he was saying didn’t match my experience at all. You see, this year our school went phone-free. Now, I don’t know how many students are going to our student counselor, and I can’t tell you whether they feel happier in general. But I can tell you what I see in the classroom: they’re more focused; they contribute to class conversations more freely; they seem to have more patience when reading; they seem less stressed and distracted; they seem more in the moment. Several of my colleagues have noticed the same thing.

Our school is using Yondr, the lockable phone pouches. The kids were not happy about this at the beginning of the year, but more and more of them are admitting to my colleagues that they kind of enjoy the freedom. Maybe Przybylski would agree that this can be good. Maybe his point has little to do with phones in schools and more to do with the smartphone/mental-health causation argument. But a few weeks into this new school year, I think our school’s decision to remove phones has been one of the best we’ve made in years. The students seem happier!

Phones weren’t allowed last year, technically. We told the kids to keep them “off and away” during class, but they could take them out between classes. This meant that, in reality, many students still had their phones on their bodies all day, all those notifications endlessly grabbing at their attention from their pockets, making them want class to be over so they could hurry to check their social media. Now my students often lose track of time: they lack phones and smartwatches, I rarely use computers in my class, and I don’t even have a clock on my wall. The few students with traditional watches keep time, but quite often it’s clear that the rest of the class doesn’t know how much time has passed. This has made a huge difference.

I teach at a relatively affluent private school, so my experience is limited to one demographic of kids, and I don’t want to claim to be diving into the big-picture psychology and social science of adolescents and phones. But for our school, and for my students, the removal of phones has been a gift. As an adult, I’ve noticed that when I spend too much time on social media, I feel worse about things. When I stare at my phone for too long, it’s rarely a good sign. As I try to use my phone and social media less, my brain feels freer, happier. If this is how things are for my forty-two-year-old brain, I can’t imagine that a fourteen-to-eighteen-year-old brain doesn’t benefit at least as much from time away from phones and social media. For that reason, as the debate goes forward in universities and research labs, I’m going to go with my experience and root for limiting phone and social media use by young people.

Welcome Generation Alpha?

I realized that this year will be the first year (I think) that I teach students classified as “Generation Alpha,” according to the people who categorize this sort of thing. For example, the “social analyst and demographer” Mark McCrindle places Generation Alpha between the years 2010 and 2024. The logic behind these years is as follows:

“Generational definitions are most useful when they span a set age range and so allow meaningful comparisons across generations. That is why the generations today each span 15 years with Generation Y (Millennials) born from 1980 to 1994; Generation Z from 1995 to 2009 and Generation Alpha from 2010 to 2024. And so it follows that Generation Beta will be born from 2025 to 2039.”
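
Since the scheme is nothing more than fixed 15-year spans, it can be expressed as a simple lookup. Here’s a minimal sketch in Python using only the spans from the quote above (the function and its name are my own illustration, not anything McCrindle provides):

    # McCrindle's 15-year spans, exactly as quoted above.
    GENERATIONS = [
        ("Generation Y (Millennials)", 1980, 1994),
        ("Generation Z", 1995, 2009),
        ("Generation Alpha", 2010, 2024),
        ("Generation Beta", 2025, 2039),
    ]

    def generation_for(birth_year):
        # Walk the spans and return the label whose range contains the birth year.
        for label, start, end in GENERATIONS:
            if start <= birth_year <= end:
                return label
        return "outside the quoted spans"

    print(generation_for(1982))  # Generation Y (Millennials)
    print(generation_for(2012))  # Generation Alpha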

This sort of thing is pretty subjective. In her book Generations: The Real Differences Between Gen Z, Millennials, Gen X, Boomers, and Silents—and What They Mean for America’s Future, Jean Twenge offers a more concrete reason for arguing that “Generation Alpha” shouldn’t begin in 2010 but in 2012. Twenge calls “Generation Alpha” “Polars” because they were born into an era of extreme political polarization. I like Twenge’s name better, but then I also liked “iGen” better than “Gen Z,” and yet it’s clear that “Gen Z” is the more popular label. Anyway, for Twenge, “Gen Alpha/Polars” begins in 2012 for the following reasons (from pp. 451-452):

  1. Technology: “smartphone ownership crossed 50% in the U.S. between the end of 2012 and the beginning of 2013”.
  2. Black Lives Matter: “founded in 2013”; “gained widespread support before the first Polars entered kindergarten”.
  3. COVID: they are among the youngest who will remember the global pandemic, as Twenge argues that “the time before March 2020 will be only vaguely remembered by those under age 7 at the time”.

I appreciate Twenge’s taxonomy because it provides a rationale like this one. That doesn’t mean “generations” can be found in nature; they’re social constructs, and of a weaker variety, for sure. But they’re helpful for understanding trends and cultural transitions. That being said, they’re fragile. In many ways, when I was younger I shared in the optimism that was characteristic of the mid-2000s Millennial, but as I’ve aged I’ve hardened in ways that might place me among stereotypical Gen X’ers. I was born in 1982, so depending on who you ask, I’m one of the first Millennials. (Twenge marks 1980 as the start for Millennials.) But when I meet people born in the early-to-mid-90s, I have sometimes felt like there’s no way we’re from the same generational cohort. Often, I relate more closely to the slightly older Gen X folks in my circles. So, let’s continue to embrace the subjectivity while respecting the effort made by people like Twenge, who organize generations around important markers like major changes in technology (e.g. TV, home appliances, AC, birth control, computers, the Internet, social media) and, to a lesser extent, major events (e.g. the AIDS epidemic, 9/11, the Great Recession, the COVID-19 pandemic).

Maybe I’m teaching Gen Z for a couple more years. Either way, if the sociologists who study this topic are right to mark generational divisions about every 15 years, then we’re about to experience some transitions in the classroom. As Twenge writes, “generational differences are based on averages,” like how much time someone spends on the Internet or on a social media app. Those changes are real, and it’s best to be on the lookout for whatever is coming next (e.g. the AI revolution?) if we want to be prepared to educate tomorrow’s children.