AI in the/my classroom

The use of Artificial Intelligence (AI) in the classroom is something that all faculties, from elementary to graduate school, need to address. Last week our upper school faculty broke into groups to do just this. It seemed fruitful but nowhere near final. I’ll admit that I’m something of an AI-skeptic. I won’t pretend that I understand how it all works but I do try to read articles and listen to podcast episodes where experts address the rapid changes that we’re seeing. To the best of my ability, I’ve formulated an opinion not so much on whether AI should be used in the classroom but whether it should be used in my classroom. I want to put those thoughts down somewhere, so here we go.

What do we mean by “AI”?
One problem with this discussion is that everything seems to be “AI” now. As one podcast I was listening to pointed out: AI has become a marketing label. It’s useful for gaining venture capital. It’s helpful for selling your product. AI means so many different things (does Word use AI? Grammarly? ChatGPT? and are these products all doing the same thing?) that a broad acceptance or denouncement is impossible. (I’m sure it’s linked below, but I can’t remember which of the podcasts made this point!) Personally, I’m most concerned with “Large Language Models” or “LLMs”.

Is AI’s relevance the same for all subjects?
One thing I noticed during our faculty discussion is that my colleagues who teach in our “English” or “Social and Religious Studies” departments emphasized the dangers of AI while my colleagues who teach STEM topics emphasized the benefits. The educational goals of the humanities stand in tension with many of the educational goals of STEM. I’ve noticed that many STEM teachers are prone to celebrate what humans can do with new scientific discoveries and technological advances whereas many humanities teachers tend to sound the alarm with regard to what these discoveries and advances might do to our humanity. (On this note, I highly recommend Scott Stephens and Shannon Vallor’s discussion: “What is AI doing to our humanity?”) This isn’t always the case. Some people involved in the humanities are convinced that the humanities need to embrace things like AI (e.g. “AI, reading, and the humanities”). They may be correct, though, as I’ll discuss below, I think the answer to the question “Is AI good for us?” depends on the context in which it’s being asked.

Again, I return to my favorite “Jurassic Park” meme to explain how humanities teachers often feel about what’s happening in the world of STEM:

In a recent interview with Sean Illing (see “Yuval Noah Harari on the eclipsing of human intelligence”), Yuval Noah Harari talked about his new book Nexus: A Brief History of Information Networks from the Stone Age to AI. He frames history around information networks. Harari isn’t an alarmist, but he’s concerned about the impact of AI (one information network) on democracy (another information network). This goes beyond Russian spam bots on X/Twitter and other social media. If someone like Harari is sounding the alarm, we should listen. The more we teach our students to outsource their own thinking to AI systems, or even Google search results, the less surprised we should be to find ourselves surrounded by people who are easily manipulated by technology for the simple reason that it’s technology!

For reasons like this, I won’t speak to what my colleagues in mathematics or the sciences are doing. I will say that those of us who teach students to read, write, philosophize, theologize, engage in politics, compile history, create art, etc., should be very concerned about what AI could do to our students’ brains.

Is AI’s dominance inevitable?
Another argument I heard for using AI in the classroom goes something like this: the dominance of AI is inevitable, it’s the future, so we’d better spend time teaching students how to use it. I’m not convinced this is true. One book that I want to read soon is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. One of the authors, Prof. Arvind Narayanan of Princeton University, was interviewed by Antony Funnell (see “AI snake oil—its limits, risks, and its thirst for resources”), and I came away from that interview wondering if many of us are buying into the marketing campaigns of the Elon Musks and Sam Altmans of the world, who hope to keep profiting off of convincing us that they can see the future. Musk has been promising self-driving Teslas for a while now, and we know that hasn’t been going well, but if Musk or Altman told investors and consumers that they don’t know if and when the technology will mature, they’d lose those investors and consumers. It’s important for them to convince us that we’re missing the train to the future and that they’re driving it!

Does AI need to be paired with maturity?
Let’s concede, for the sake of argument, that AI’s dominance is inevitable. This doesn’t automatically answer whether or not students should use these tools in our classrooms. There are many things that may be inevitable for our students when they’re older. I would be shocked to see a third grade teacher putting a kid behind the wheel of a car because driving is inevitable! Similarly, if students haven’t yet learned how to read, write, analyze, etc., it’s educational malpractice to emphasize tools for which they’re not ready!

There are stages of our development when handwriting is really good for students (see “Handwriting is good for the brain”). There are stages of development when less is more with regard to technology use and accessibility (see “Anecdotal evidence about phones in the classroom”). And I think there are stages in our development when, once we’ve learned the basic skills that the humanities teach us, we may be ready to use AI. Personally, I’m happy for my students to wait until college, and I’m satisfied with punting to the colleges and universities that have far more resources for dealing with student use of AI. When kids go to college, they have to make all sorts of decisions about how they spend their time, who they spend it with, etc., that we don’t ask them to make in high school.

I’ve heard some compare hesitancy to embrace AI with hesitancy to embrace the Internet in the 1990s. I don’t think these are the same thing, but I do think such a claim makes an unintentional observation. All of us wish we would’ve known how the Internet would be weaponized for things like misinformation, bullying, algorithms that feed on anger, etc. If we could go back and prepare ourselves for the ugly side of Internet use, we would. This is my warning! We know that LLMs bullshit (see “ChatGPT is Bullshit” by Michael Townsen Hicks, et al., and “Are LLMs Natural Born Bullshitters” by Anand Jayprakash Vaidya). They don’t know any better. If we don’t try to help our students develop skeptical thinking skills (see below), we’re feeding them to AI systems that have no way of caring whether what is being said is true or false. As J. Aaron Simmons has written about bullshitters (see “I’d Rather Be a Liar”):

“In contrast to the liar, the bullshitter doesn’t even care about truth at all. They are not intending to deceive their audience, but rather the bullshitter attempts to motivate behavior in their audience that supports their own self-interest.”

Systems like ChatGPT have one “goal”: engagement. They’re not concerned with truth, as Vaidya wrote in the article linked above:

“Research suggests that LLMs, left to their own devices, are natural-born bullshitters. The tendency for LLMs to hallucinate has only been reduced through reinforcement learning from human feedback. Without human intervention, they appear to lack the ability to control or reduce their hallucinations through training unaided by humans. Even if their hallucination rate is low, it might be that they have a fundamental disposition to bullshit as a result of the fact that they think* as opposed to think as well as care* as opposed to care for the truth.”

In other words, whatever seems “human” about LLMs is so because we humans remain involved. One analogy Vaidya gives is helpful. He writes, “Just as we can say a car ‘runs’, when it is clear to everyone that the underlying mechanics of a functioning car and a running animal are fundamentally different, we can also apply words like ‘think’, ‘assert’, ‘understand’, and ‘know’ to LLMs without losing sight of the underlying mechanical and structural differences. Mental life need not be human mental life to be mental life.” Hence, the asterisks next to “think” and “care” in the above quote. LLMs “think” and “care” like us only in the way that cars “run” like us.

Creating Skeptical Thinkers/Avoiding AI’s “Mirror”
Personally, I don’t think many adolescents are ready to discern what bullshitters like ChatGPT are feeding them. This means that those of us who are fighting for the future of the humanities need to be very intentional about teaching our students to be skeptical thinkers. What do I mean by this? Well, I mean something like what Prof. Jamil Zaki of Stanford University calls “hopeful skepticism,” which he contrasts with cynicism:

“…hopeful skepticism is about applying a scientific mindset. Like a scientist, hopeful skeptics seek out facts and evidence instead of relying on feelings and fears. And rather than being fatalistic, they are critical and curious instead.”

We need to teach students to have a skeptical mindset that doesn’t just accept things at face value but, again, seeks “out facts and evidence” and is “critical and curious”. I can use ChatGPT this way. I can use Google search results this way. But my students could easily become susceptible to embracing whatever ChatGPT or Google feeds them. If we don’t prepare them for this (which may mean walking them through the use of LLMs in our classes but doesn’t necessitate making that jump), we’ll be in trouble as a society. We’ll face a future where LLMs, like dogs returning to their vomit, consume AI-generated information so that the cycle of information is AI feeding AI feeding AI. As Shannon Vallor argues in (another book I need to read) The AI Mirror, “today’s powerful AI technologies reproduce the past”. They reflect past, cumulative human knowledge (see the interview already linked above: “What is AI doing to our humanity?”). Whether they can create new knowledge remains to be determined, but we shouldn’t outsource the creativity of the human brain to AI any more than we should start talking to someone’s reflection in a mirror while ignoring the person being reflected. When it comes to thinking, we’re still superior.

Anecdotal evidence about phones in the classroom

I’m not a psychologist or a social scientist. But my own experience in the classroom has made me pay attention to the claims of people like Jonathan Haidt and Jean Twenge. Both have sounded the alarm with regard to adolescent (over)use of smartphones. I’ve confiscated student phones only to have my pocket buzz incessantly. I wondered how anyone could focus with notification after notification from Snapchat, Instagram, and TikTok vying for their attention. I’ve seen my students sit around together, not speaking to each other, each staring into their phone. Adults do this sort of thing too, but as Haidt, Twenge, and others have noted: we had the chance to live through our brain’s important developmental stages before getting smartphones. Gen Z didn’t get that opportunity. For this reason, Haidt, Twenge, et al., have argued for causation between smartphone use/addiction and the ongoing mental health crisis among America’s youth (for example, see Haidt’s “End the Phone-Based Childhood Now”).

My wife and I have seen the children of parents who raised their kids without smartphones and tablets and those who allowed them. Our experience has shown us that there are drastic differences in these kids’ ability to wait, be patient, delay gratification, hold conversations, read books, be creative, and just enjoy being children with imaginations. Our kid won’t have a smartphone or a tablet at their disposal. If they use one at all in daycare or school, we’ll ask for limits. My plan is to keep these technologies out of their lives as long as I can.

For this reason, I was surprised when a recent episode of Freakonomics (“Is Screen Time as Poisonous as We Think?”) featured Andrew K. Przybylski of Oxford University, who seemed to brush these concerns aside. I think his main point was that phones aren’t the end-all, be-all of Gen Z’s mental health crisis. But as I listened to him, I thought what he was saying didn’t match my experience at all. You see, this year our school went phone-free. I don’t know how many students are going to our student counselor, and I can’t tell you whether they feel happier in general. I can tell you what I see in the classroom, though: they’re more focused; they contribute to class conversations more freely; they seem to have more patience when reading; they seem less stressed and distracted; they seem more in the moment. Several of my colleagues have noticed the same thing.

Our school is using Yondr. The kids were not happy about this at the beginning of the year, but more and more of them are admitting to my colleagues that they kind of enjoy the freedom. Maybe Przybylski would agree that this can be good. Maybe his point has little to do with phones in schools and more to do with the smartphone-mental health causation argument. But a few weeks into this new school year, I think our school’s decision to remove phones has been one of the best we’ve made in years. The students seem happier!

Technically, phones weren’t allowed last year either. We told the kids to keep them “off and away” during class, but they could take them out between classes. This meant that in reality many students still had their phones on their bodies all day, all those notifications grabbing at their attention from their pockets, making them want class to be over so they could hurry to check their social media. Now my students often lose track of time: they lack phones and smartwatches, I rarely use computers in my class, and I don’t even have a clock on my wall. The few students with traditional watches keep time, but quite often it’s clear that they don’t know how much time has passed in class. This has made a huge difference.

I teach at a relatively affluent private school. My experience is limited to one demographic of kids, and I don’t want to claim to be diving into the big-picture psychology and social science of adolescents and phones. But for our school, and for my students, the removal of phones has been a gift. As an adult, I’ve noticed that when I spend too much time on social media, I feel worse about things. When I stare at my phone for too long, it’s rarely a good sign. As I try to use my phone and social media less, my brain feels freer, happier. If this is how things are for my forty-two-year-old brain, I can’t imagine that a fourteen-to-eighteen-year-old brain doesn’t benefit at least as much from time away from phones and social media. For that reason, as the debate goes forward in universities and research labs, I’m going to go with my experience and root for limiting phone/social media use by young people.

Gen Z, social media, and mental health

Recently it dawned on me that in a few short years I’ll be teaching so-called “Generation Alpha” (we’ve got to come up with better names for the post-Millennials!), but for now, my concern remains “Gen Z”. If you parent and/or work with Gen Z-ers (c. 1994/96-2010/12), I have a couple of podcast episodes worth listening to:

The argument that the link between smartphone/social media use and mental health is not just correlation but causation, and negative causation at that, seems to be strengthening.

On a slightly related note, I deleted my Twitter account today, probably for the last time. I did it back in 2016 and I don’t know why I rebooted it. It’s truly a terrible platform. If, like me, you keep your account private, then there’s almost nothing “social” about it.

Book Note: David J. Chalmers’ “Reality+”

David J. Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy (W.W. Norton, 2022).

(Amazon; Bookshop)

I’ve been intrigued by some form of simulation theory since I saw The Matrix a couple of decades ago. When I introduce Hinduism to my students, I connect simulation theory to the concept of “Brahman,” the name of existence itself, of which all of us are part. For many Indian philosophers, everything and everyone is Brahman since everything participates in “existence”. When Brahman is personified, the question arises as to why there is difference if all of us are ultimately the same thing. Two concepts answer this: lila and maya. Lila is “divine play,” in which Brahman “decides” to experience endless realities as a way of “enjoying” all the different perspectives that all of us create. Maya is the negative illusion that we’re individuals. Our stress and anxiety come from the false separation of “I” from everything else. So, lila and maya are two sides of the same coin. In order to enjoy our experience of reality, and for Brahman to have that experience, we must believe we are individuals, unique and distinct from the whole of reality in some way. But that sense of self, that illusion, also leads to our own entrapment in samsara, cycling through almost endless lives, until we can realize our oneness with Brahman, releasing ourselves from the illusion of distinction, and merging back into the whole. This is called “moksha”.

Hinduism is said to be “monistic”: there isn’t one “god,” as in the popular forms of Judaism, Christianity, and Islam, but just one “thing” or one “reality”. Again, that reality is Brahman.

Why do I connect this to simulation theory? Well, simulation theory asks whether or not we are in a simulation and if we could know if we were in one. I push my students to consider the possibility that we are in a simulation, or that we are emanations of Brahman, and then ask them whether discovering that we are simulated or emanated would change how they view themselves and their lives. Since many of my students have been raised in homes where Christianity is practiced, or where Christianity is the unspoken influence, they tend to think of themselves as creations distinct from a Creator—creations with a unique, eternal soul that will never lose its distinction. For these students, the concept of Brahman, and simulation theory, can be unnerving. For students who tend to be more naturalistic, who already see themselves as material beings emerging from a material world to which their bodies will return when they die, neither Brahman nor simulation theory causes much unease.

David J. Chalmers, one of the foremost philosophers in the study of mind, has written a wonderful book titled Reality+: Virtual Worlds and the Problems of Philosophy that deals a lot with simulation theory. When I’ve told people about the book, some of them say something like, “I can’t imagine reading a whole book on that topic.” But it isn’t about simulation theory only, just as my teaching of simulation theory is really about helping students conceptualize the Indian concept of Brahman. The book uses simulation theory as a gateway to many of the fascinating “problems of philosophy,” as the subtitle suggests. Chalmers has chapters on epistemology, ontology, and ethics that all use virtual worlds as thought experiments. When we ask whether we can know if we’re in a simulation, we’re jumping into a conversation about how we can know what we know, whether we can really know anything, and what we mean by the word “know”. When we consider simulation theory, we’re asking what is “real”. Is physics the only “real” world? Are our perceptions “real” or completely constructed? And when we consider what it would be like to see sentient life emerge in a simulation—whether we are the created or the creator—it forces us to consider our own ethical paradigms around how we treat other minds.

For this reason, the book can serve not only as a niche study of virtual worlds and how we should consider them—whether that be wearing an Oculus, enjoying whatever Meta is creating, or participating in Second Life—but also as a general introduction to many of the problems that philosophers have been addressing and will continue to address. Also, the illustrations found throughout the book are excellent, which makes it all the more effective at teaching difficult philosophical concepts.

Book Note: Carolyn Chen’s “Work Pray Code”

Carolyn Chen, Work Pray Code: When Work Becomes Religion in Silicon Valley (Princeton University Press, 2022).

(Amazon; Bookshop)

In Work Pray Code: When Work Becomes Religion in Silicon Valley, Carolyn Chen asks us (p. 196), “What happens to society when its members worship work?” Then she responds, “Silicon Valley offers us an answer.” The answer is, on the one hand, enlightening, and on the other, terrifying. It’s enlightening because it provides much-needed insight into the spirituality of the so-called “Nones” (i.e. those who answer the question “With what religion do you affiliate?” with “none”). When people hear “Nones” they may think of people with a religious void, or people who claim to be “spiritual-but-not-religious” (a claim founded on a misplaced concreteness regarding the word “religion”). But few “Nones” are religiously apathetic; they direct the energies that others devote to a church, synagogue, mosque, temple, etc., toward something else, with similar vigor and intent.

While Chen doesn’t provide a working definition of “religion” until Appendix A, her implied definition is clear and aligns with her stated one in the appendix. In short, Chen admits (p. 213), “To find ‘religion’ in Silicon Valley, I realized that I’d have to reexamine my assumptions about what is ‘secular’ and what is ‘religious’.” She says there “are two ways of studying religion empirically in a secular age”: through the clearly “religious” “religious traditions such as Hinduism, Buddhism, Christianity, Islam, and so on,” which doesn’t work well for finding religion in Silicon Valley, and through the less clear idea of “the sacred,” that is, “the institutions, ideas, practices, spaces and things a community sets aside as special and worthy of worship. Something is sacred because of the power it has over the members of the community.”

This reminds me a bit of Paul Tillich’s definition of “religion” in Theology of Culture (pp. 7-8): “Religion, in the most basic sense of the word, is ultimate concern. And ultimate concern is manifest in all creative functions of the human spirit.” For a Christian, ultimate concern may be a relationship with their god through the person of Jesus, resulting in the reward of eternal life; for a Buddhist, ultimate concern may be reaching nirvana, extinguishing the pain of suffering and dissatisfaction. For a Google employee, ultimate concern may be the creativity inherent in their job and the mission of the corporation of which they’re a part.

And this is where it’s terrifying. We should use the word “cult” cautiously because, as we know, it’s a pejorative that’s often used to dismiss or demonize a religious movement that seems fringe or unfamiliar. As it’s said, “cult plus time equals religion”. Most religious movements are seen as fringe and unfamiliar, dismissed and demonized, in their earlier stages but come to receive some level of “respectability” over time. But I do think that when most of us use the word “cult” casually, we’re expressing discomfort not only with the difference we’re observing but with the difference plus the level of demand. We’re used to Catholic priests and Buddhist bhikkhus giving their lives, but outside of these very old, well-established institutions, when a religious movement begins to demand all of someone’s life, especially when it results in that person becoming divorced from the world outside of their religious community, popular discourse refers to this level of control as “cultish” or “cult-like” and the group/community/organization as a “cult”.

As you read Chen’s account of how much tech employees pour of their lives into their place of work, and how much it shuts them off from the outside world, you’ll begin to understand why some people see the religious devotion of Silicon Valley workers to their companies as, at least, “cult-like”.

In the Introduction, “How Work is Replacing Religion”, and Chapter 1: “Losing My Religion…and Finding It at Work”, the reader comes to see how and why work has begun to fill the hole where religion used to reside in the hearts of many people. In Chapter 2: “Corporate Maternalism: Nurturing Body and Soul” and Chapter 3: “Managing Souls: The Spiritual Cultivation of Human Capital” we receive insight into how the tech industry sees its employees, in a competitive “knowledge economy”, as investments. Companies can’t burn out their workers when these are some of the best and brightest minds coming from the top universities and colleges, so they must invest in them and keep them healthy and happy. Pardon the analogy, but it’s like this: you won’t get as much from a cow if you work it to death, so for as much extraction as you may require, there had better be an investment. Similarly, free snacks and drinks, yoga lessons, on-site gyms, child care, etc., help corporations keep their employees happy and satisfied; in return, with every need met on location, the employee can put in more hours for the corporation.

Now, if this seems dehumanizing, remember what the introduction and first chapter establish: work has become one of the ultimate forms of fulfillment in our society. As I read these sections, I was reminded of something I’ve seen stated by people who survived the tragedies of Peoples Temple, the Branch Davidians of Waco, and Heaven’s Gate: those were some of the most exciting, fulfilling, best days of their lives, even if it all came crashing down on them. And many survivors of these movements, while recognizing that something went wrong, never could get the same high as when they were on a mission to save the world. Silicon Valley is full of companies that encourage their workers to see themselves as world-changers, so 12-16 hour days, 6 days a week, don’t feel like a sacrifice.

Chapter 4: “The Dharma according to Google” and Chapter 5: “Killing the Buddha” examine what happens when workplaces import religious practices while stripping them of their religious affiliations. This process is what Chen calls “the secular diffusion of religion” (p. 16). We’ve seen it: yoga has hardly anything to do with Hindu thought and practice in the minds of most Americans who practice it; mindfulness has little to do with the Buddhist meditative practices from which it derives. So, what happens when a Zen teacher is contracted by a corporation to come and teach mindfulness while leaving their Buddhism at the door? Often, cognitive dissonance. For many “spiritual coaches” in the Bay Area, there’s the pragmatic need to afford to live in one of the most expensive places in the world, so if they have to offer “Diet Buddhism”, so be it. For others, there’s a sense that some Buddhism, even if unnamed, does more for the world than no Buddhism. And for others it was too much: some tech workers, meditation teachers, etc., decided that their religion was being corrupted by its marriage to big tech and chose their religion over big tech.

Chapter 5: “Killing the Buddha” may be worth the price of the book (though you benefit from the rest of the book when it comes to understanding it)! Chen takes the Zen saying, “If you meet the Buddha, kill him” (which, while interpreted diversely, has come to mean for some that you should kill the “religious trapping in the practice of Buddhist meditation” [p. 155]), and shows that this can be very problematic when we observe how it’s applied. Chen discusses five types of Buddhism that emerge when corporations want the perks of Buddhist practice without anything that sounds or looks “religious”. “Hidden Buddhism” is Buddhism in which the practices are Buddhist but, out of fear of violating Title VII, must be done without reference to their origins. “Whitened Buddhism” is Buddhism stripped not only of its religiosity: “It erases the ‘ethnic’ and ‘religious’ Buddhism of Asians and Asian Americans in favor of the thinking and experience of White Westerners.” (p. 162) “Scientific Buddhism” is how CEOs or HR can be sold the benefits of Buddhism: by appealing to scientific studies that may indicate meditation/mindfulness has certain psychological and physical benefits that will serve the company (remember the themes of chapters 2 and 3). “Bottom-Line Buddhism” is directly connected to Scientific Buddhism: if workers are serene, peaceful, and free from anxiety, this will bring down lost hours, health-care costs, etc. So, Bottom-Line Buddhism is sold to corporations on the promise that it’ll increase productivity, reduce costs, and ultimately result in profit. Finally, “On-the-Go Buddhism” is just what it sounds like: a religion that may ask you to spend long stretches in meditation is squeezed into a fast-food version of itself, suitable for busy tech workers.

The Conclusion: “Techtopia: Privatized Wholeness and Public Brokenness” examines the fallout of this sort of work-as-religion worldview, ranging from work “colonizing” the time of its employees to the displacement and economic turmoil the tech industry has caused in the Bay Area. Now, since I’ve spent much more time on the negative impact of work-as-religion, I want to be clear that this book isn’t a hit job. It’s quite fair to the tech industry at many points. Chen embedded herself in that world for five years, so she got to know the people, the companies, and their culture. And as a former resident of San Francisco myself, I can resonate with the high of Bay Area life. It’s not just the West Coast of the United States; it often feels like you’re on the edge of the future, and I never even worked a job remotely related to tech. So, while we may be rightly concerned with people giving their everything to work so that they’re no longer part of a church, a PTA or HOA, local politics, etc., let’s remember that tech jobs do provide purpose and mission. Where many religious institutions have failed to show people their “purpose-driven life” (to borrow from Rick Warren’s 2000s approach to American Christianity), the tech industry has succeeded. As religion becomes less relevant in the lives of many Americans, new forms of “ultimate concern” are created and offered to seekers everywhere.

Would it be unethical to simulate the universe?

The past few weeks, I’ve been teaching a unit on Indian cosmology (think Brahman, Samsara, Moksha, etc.) while reading David J. Chalmers’ Reality+: Virtual Worlds and the Problems of Philosophy and watching a lot of Rick and Morty, so excuse the weirdness. Because of all this, I’ve been thinking a lot about Nick Bostrom’s famous “Are You Living in a Computer Simulation?” article (The Philosophical Quarterly, Volume 53, Issue 211, April 2003, pp. 243-255). In that article, Bostrom made the claim that “one of the following propositions is true: (1) the human species is very likely to become extinct before reaching a ‘posthuman’ stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.” (If you don’t have time to read about this theory, there are plenty of YouTube videos that provide decent summaries.) As things are looking now, (1) seems extremely probable. We can’t figure out climate change and we still have weapons that could wipe us out in minutes. But if we are going to survive and slip through our “Great Filters,” then (2) would seem extremely improbable. We’re already making simulations and have been for a while now. If technology continues to develop at the pace it has since I was a teen, I’d be shocked if we humans chose not to create advanced simulations. Except there’s one idea that has grabbed my attention: humans could choose not to create advanced simulations for ethical reasons.
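For what it’s worth, the trilemma isn’t just rhetoric; in the paper it falls out of a short piece of arithmetic. Here’s a rough sketch of the core fraction, lightly paraphrasing Bostrom’s notation (so treat this as my summary, not his exact wording):

\[
f_{\text{sim}} \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} \;+\; \bar{H}} \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
\]

where \(f_{\text{sim}}\) is the fraction of all human-type experiences that are simulated, \(f_P\) is the fraction of human-level civilizations that survive to reach a posthuman stage, \(\bar{N}\) is the average number of ancestor-simulations such a civilization runs, and \(\bar{H}\) is the average number of individuals who lived before that stage. If \(f_P\) is close to zero, you get proposition (1); if \(\bar{N}\) is close to zero, you get proposition (2); and if the product \(f_P\,\bar{N}\) is large, \(f_{\text{sim}}\) approaches 1, which is proposition (3).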

Chalmers (p. 94) puts it this way: “populations advanced enough to create sims will know how to create intelligent sims that aren’t conscious (while nevertheless serving many practical purposes) and will have strong reasons—perhaps ethical reasons—to do this.” I mean, if I look at contemporary humanity, this seems unlikely. We humans seem to have no problem (collectively) with causing suffering, whether we’re inflicting it on fellow humans or on non-human animals. So, there’s little reason to believe that future humans would be morally superior to us…but there’s one possibility I’ve been pondering.

As we look at the current state of our world, assuming it’s either (A) base reality or (B) a simulation of what base reality looked like in the early twenty-first century, it seems clear that if humans are going to make it, and make it in a way that doesn’t launch our descendants into a dystopic age where they’d have little time to worry about anything other than creating technologies to stave off extinction, we’re going to have to experience an evolutionary leap in ethics. I mean not just at the level of individuals recycling, buying electric vehicles, investing in renewable energies, and maybe going vegetarian, but at the international level, and hopefully in a way that includes democratic societies. (Though, as the Pill Pod discussed in their 64th episode, “Democracy Dieth in Darkness,” political scientists/philosophers like Ross Mittiga are already asking whether authoritarian power is ever a legitimate form of government, especially as climate catastrophe grows more probable: “Political Legitimacy, Authoritarianism, and Climate Change,” American Political Science Review [December 6, 2021], pp. 1-14.)

This feels improbable right now, but let’s assume it will happen (or happened, if this is a simulation based on base reality). What sort of collaboration would be demanded of humanity? What sort of transnational government structure would have to emerge? If we were capable of these things, would we be moving more toward the Star Trek vision of the future than the Don’t Look Up one? And if that were the case, doesn’t that raise the probability that humanity would become the type of species who, knowing the suffering they’d cause by creating advanced simulations with sentient creatures (who would have to live through the era we’re living through now), would choose to avoid inflicting that type of pain on their potential digitized creations?

I don’t know that the answer to this is “yes”, but it’s worth considering. It also leads to theological/theodicy questions and invites us to consider antinatalist ethics as well. First, if I’m assuming morally advanced humans would never create this reality intentionally, what does that say about a god who would create this reality? Now, I’m not actually opposed to this reality. In fact, I’m unsure that I can be, because it seems odd to use existence to argue against existence. And I guess questions around postmortem sentience and even multiverses muddy the waters here. But my underdeveloped line of thought does have me wondering: if I think that advanced humans wouldn’t inflict this suffering, what does that say about the idea of “god”, or about god if god exists?!

Also, back to afterlives: would it be ethically justifiable to run simulations like our world if you offered your digital creations an afterlife of bliss?

Finally, am I being too negative about our current state? If a global catastrophe is around the corner, would it be immoral to have children? Obviously, if humans had foreknowledge and knew with absolute certainty that everything was going to go to hell within the next half-century, then yes. But we don’t have that foreknowledge. So, it gets trickier.

And that takes me back to the question of simulation: what if this universe is an open-ended simulation? Our fate isn’t predetermined. Maybe there’s great joy in meeting the challenge of climate change and solving it? Maybe we actually do that or have the potential to do that? Then I guess we could leave the door open to the possibility that there’s nothing immoral about our universe being a simulation if indeed it is one!

Book Note: Pamela Paul’s “100 Things We’ve Lost to the Internet”

Pamela Paul, 100 Things We’ve Lost to the Internet (New York: Crown, 2021). (Amazon; Bookshop)

Pamela Paul is the editor of the New York Times Book Review. Her book 100 Things We’ve Lost to the Internet is a nostalgia trip for Gen Xers, Millennials, and I guess we can include Boomers too. It would make almost no sense to Gen Zers. To them, it would be a weird museum of outdated practices. But for those of us who remember the world before the Internet was in all of our homes, this book is a lot of fun.

Many of the topics Paul discusses are social, like experiencing boredom or losing track of ex-boyfriends; others are technological, like having your phone in the kitchen or having to use printed, paper maps. And many of them are a mixture of how our social and technological lives have changed since the Internet created our global hivemind.

There’s not a ton I can say about the book other than that it’s enjoyable to read, most of the “chapters” are very short (almost like reading short blog posts!), and the book is great for resurrecting old memories and creating conversation starters with your friends.

Facebook is already dead

First, I want to go on record saying I thought MySpace was better than Facebook, and I was resistant to joining Facebook. I mean, MySpace allowed you to learn basic coding skills! And MySpace gave you the option to have a song on your page! But alas, Facebook won…for now.

The other day I was watching a debate between Yanis Varoufakis and Gillian Tett where the topic was “Can We Fix Capitalism?”

I’m not an economist, so I’m not commenting on the debate itself. I watched it to learn and be informed. What I want to discuss here is a snippet of an exchange between Varoufakis and Tett that I thought mattered more than a quick glance would reveal. Varoufakis has been arguing that capitalism is already dying or is basically dead or has “evolved into another system”. He proposes that something he calls “technofeudalism” has taken its place. If you want to know his thoughts on the matter, here’s a clip where he shares his idea with Slavoj Žižek:

If you don’t want to watch the video, I’ll provide a very brief, very rough simplification: capitalism needs (A) “profit to drive it” and (B) that “exploitation takes place in markets”, but what we see with Amazon, Facebook, etc., is not a market, since people like Bezos and Zuckerberg use their digital platforms to predetermine what can be bought or sold. Varoufakis believes that this limiting power is more feudalistic than market-driven, and since it’s exercised through “platforms”, the people who decide what can be bought or sold are those who own the platforms (“one person owns the whole digital space”). (Varoufakis argues that once you get on Facebook, you’re already “outside capitalism”.)

While this is fascinating, I want to go back to the aforementioned debate. During the debate, in an attempt to defend capitalism’s redeemability, Tett points out change can happen, that these “platforms” don’t have to have the last word, and that, in fact, they’re already losing their grip. Her point to Varoufakis: Gen Z isn’t on Facebook.

Now, I’m not saying that this gives Tett the edge in the debate; I’m saying this one point is fascinating. Varoufakis’ observation seems valuable to me: something is changing. But Tett’s observation also matters: these multi-billion-dollar corporations aren’t invincible or eternal. In fact, as it did for Blockbuster, I think the clock is tick, tick, ticking on Facebook. Facebook may dominate the connectivity of Millennials, Gen X, etc., but Gen Z is on Instagram, Snapchat, and TikTok. While Facebook owns Instagram, it appears that TikTok’s model (and still YouTube’s) is becoming increasingly attractive. What’s clear is that Facebook itself is dying, and this became most apparent this week when Zuckerberg lost $29 billion and Meta lost $200 billion. Why? Facebook is seeing a drop in users.

I don’t see Facebook rebounding. I don’t think the “Meta” rebranding will work. Facebook has lost the next generation already. And I wouldn’t be surprised if they lose my generation as well. I mean, MySpace did.

So, Varoufakis may be correct. We may be moving into technofeudalism. But Tett is right about at least one thing: consumers still have the power to bring even giants like Facebook to their knees.

Teens on their phones: two interpretations

On the one hand, as a ‘Millennial’ who uses his phone too much, I’m sympathetic toward teens who seem to have something of a phone addiction. On the other hand, as a teacher, I’m grateful to our school’s administration for banning phone use in the classroom (unless the teacher gives permission to use it for something related to class). Teens on their phones can be learning more, faster, than most of us could at their age. Teens on their phones can also be zombies who fell down the rabbit hole of YouTube, TikTok, or Snapchat. Because of this, I got a kick out of two tweets commenting on a picture of teens sitting in a museum using their phones. (FWIW, these two tweets reinforce the ongoing Boomer v. Millennial battle.)

Tweet #1

Tweet #2

Interestingly, a side-by-side comparison of these tweets invites us to do something similar to what a walk through an art gallery does: it invites us to experience the subjectivity of our own interpretation and to reinterpret it in light of the interpretations of others.

I’ve heard it said ‘everything is ethics’ but I propose ‘everything is hermeneutics’.