Would it be unethical to simulate the universe?

For the past few weeks, I’ve been teaching a unit on Indian cosmology (think Brahman, Samsara, Moksha, and the like) while reading David J. Chalmers’ Reality+: Virtual Worlds and the Problems of Philosophy and watching a lot of Rick and Morty, so excuse the weirdness. Because of all this, I’ve been thinking a lot about Nick Bostrom’s famous “Are We Living in a Computer Simulation?” article (The Philosophical Quarterly, Volume 53, Issue 211, April 2003, pp. 243-255). In that article, Bostrom made the claim that “one of the following propositions is true: (1) the human species is very likely to become extinct before reaching a ‘posthuman’ stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.” (If you don’t have time to read about this theory, there are plenty of YouTube videos that provide decent summaries.) As things are looking now, (1) seems extremely probable. We can’t figure out climate change and we still have weapons that could wipe us out in minutes. But if we do survive and slip through our “Great Filters,” then (2) seems extremely improbable. We’re already making simulations and have been for a while now. If technology continues to develop at the pace it has since I was a teen, I’d be shocked if we humans chose not to create advanced simulations. Except there’s one idea that has grabbed my attention: humans could choose not to create advanced simulations for ethical reasons.
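For anyone who wants to see why those three propositions are supposed to exhaust the options, the trilemma rests on a small bit of bookkeeping about observers. The sketch below is my paraphrase of the formula in Bostrom’s paper (the symbols follow his in spirit, if not letter for letter):

```latex
% f_P : fraction of human-level civilizations that survive to reach a posthuman stage
% N   : average number of ancestor-simulations run by such a posthuman civilization
% H   : average number of individuals who lived before a civilization reaches that stage
% f_sim : fraction of all observers with human-type experiences who live in simulations
\[
f_{\mathrm{sim}}
  = \frac{f_P \, \bar{N} \, \bar{H}}{\left(f_P \, \bar{N} \, \bar{H}\right) + \bar{H}}
  = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
\]
```

If f_P is vanishingly small, that’s proposition (1); if N-bar is vanishingly small, that’s proposition (2); and if neither is tiny, f_sim gets pushed toward 1, which is proposition (3).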

Chalmers (p. 94) puts it this way: “populations advanced enough to create sims will know how to create intelligent sims that aren’t conscious (while nevertheless serving many practical purposes) and will have strong reasons—perhaps ethical reasons—to do this.” I mean, if I look at contemporary humanity, this seems unlikely. We humans seem to have no problem (collectively) with causing suffering, whether we’re inflicting it on fellow humans or on non-human animals. So, there’s little reason to believe that future humans would be morally superior to us…but there’s one consideration I’ve been pondering.

As we look at the current state of our world, assuming it’s either (A) base reality or (B) a simulation of what base reality looked like in the early twenty-first century, it seems clear that if humans are going to make it, and make it without launching our descendants into a dystopian age where they’d have little time to worry about anything other than creating technologies that help them stave off extinction, we’re going to have to experience an evolutionary leap in ethics. I mean, not just at the level of individuals recycling, buying electric vehicles, investing in renewable energies, and maybe going vegetarian, but at the international level, and hopefully in a way that includes democratic societies. (Though, as the Pill Pod discussed in their 64th episode, “Democracy Dieth in Darkness,” political scientists/philosophers like Ross Mittiga are already asking whether authoritarian power is ever a legitimate form of government, especially as climate catastrophe grows more probable: “Political Legitimacy, Authoritarianism, and Climate Change,” American Political Science Review [December 6, 2021], pp. 1-14).

This feels improbable right now, but let’s assume it will happen (or happened, if this is a simulation based on base reality). What sort of collaboration would be demanded of humanity? What sort of transnational governmental structure would have to emerge? And if we were capable of these things, would we be moving more toward the Star Trek vision of the future than the Don’t Look Up one? And if that were the case, doesn’t that raise the probability that humanity would become the type of species who, knowing the suffering they’d cause by creating advanced simulations with sentient creatures (who would have to live through the era we’re living through now), would choose to avoid inflicting that kind of pain on their potential digitized creations?

I don’t know that the answer to this is “yes,” but it’s worth considering. It also leads to theological/theodicy questions and invites us to consider antinatalist ethics. First, if I’m assuming morally advanced humans would never create this reality intentionally, what does that say about a god who would create this reality? Now, I’m not actually opposed to this reality. In fact, I’m unsure that I can be, because it seems odd to use existence to argue against existence. And I guess questions around postmortem sentience and even multiverses muddy the waters here. But my underdeveloped line of thought does have me wondering: if I think that advanced humans wouldn’t inflict this suffering, what does that say about the idea of “god,” or about god if god exists?!

Also, back to afterlives: would it be ethically justifiable to run simulations like our world if you offered your digital creations an afterlife of bliss?

Finally, am I being too negative about our current state? If a global catastrophe is around the corner, would it be immoral to have children? Obviously, if humans knew with absolute certainty that everything was going to go to hell within the next half-century, then yes. But we don’t have that foreknowledge. So, it gets trickier.

And that takes me back to the question of simulation: what if this universe is an open-ended simulation? Our fate isn’t predetermined. Maybe there’s great joy in meeting the challenge of climate change and solving it? Maybe we actually do that or have the potential to do that? Then I guess we could leave the door open to the possibility that there’s nothing immoral about our universe being a simulation if indeed it is one!

Pre-knowledge and reading

This morning I’ve been reading, slowly, through Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, and I’m reminded of something: pre-knowledge impacts how you read. I can hear you say, ‘Duh!’, but here’s why this matters to me. Every year I wrestle with what type of reading to assign to my students, both in class and as homework. Every year I revise both sets of readings, banking on a ‘less-is-more’ approach. In other words, I’d rather lower the page count and keep wrestling with ways to make the shorter reading more meaningful. My rationale is pretty simple: for my students, much of what I teach is brand new. Even when students take my Bible-related classes, they may come knowing basic stories and characters, but it’s rare that they’ve thought about reading the Bible in a way that is academic in nature (rather than liturgical, devotional, etc.). Since almost everything they are learning is brand new, it would be a mistake to try to introduce a ton of content.

Why do I argue this? Well, because of experiences like the one I’m having today. I know almost nothing about AI other than what I’ve seen in YouTube videos or heard on podcasts. Every page is filled with a ton of new information. Since I lack pre-knowledge, there are many times when I have to stop and look up things I don’t know. Now, while this makes for great learning, if I had to read large chunks of the book every day, I wouldn’t retain much.

In fact, when I try to speed read through books like this (where I’m unfamiliar with the content) I catch my eyes glazing over and moving without purpose. I’ll have ‘read’ a paragraph without actually having read the paragraph. If I do this as a teacher with years of academic training and experience doing research…then I’m guessing my teenage students are doing it too. Therefore, my own experience reminds me that while it may be easy for me to read ten or twenty pages on religion or Biblical Literature because I’ve been swimming in these thought-worlds for years, for my students it’s all new, and therefore they need more time to digest what they’re reading.