ER 19: On AI, extinction, and the existence spectrum
Or, "AI is an Existential Risk, But Not in the Way We’re Being Sold"
Welcome back to the Ethical Reckoner. Today, we’re going to talk about AI and the good news that it probably won’t wipe us out, but also the less-good news that the extinction framing is distracting us from AI’s real issues.
AI is not an extinction risk. Well, it is an extinction risk in the same way that an Earth-killing asteroid is an extinction risk: technically possible, enough so that when I saw NASA crash a spacecraft into an asteroid I thought, “I’m glad that someone is thinking about that,” but not enough that I lie awake at night worrying or want all of NASA’s resources devoted to it. There is no asteroid approaching Earth, and AI is not about to wipe out humanity (as most experts surveyed by IEEE agree).
This is what most people mean when they talk about AI “existential risk,” or “x-risk” (which falls under the umbrella of “AI safety” research)—existence in terms of our presence (or lack thereof) in this universe. In other words, extinction. This is what the Future of Life Institute meant when they wrote that “an existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population”; it’s what the prominent—and problematic—x-risk philosopher Nick Bostrom meant when he wrote the widely cited “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”; and it’s what dozens of “AI experts and public figures” meant when they signed the Center for AI Safety’s “Statement on AI Risk,” which states: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This statement and the Future of Life Institute’s “Pause Giant AI Experiments” letter have been criticized for not laying out how AI poses an extinction threat, and thus for creating a “generic sense of alarm” without possible actions to counter it. However, I think their bigger sin—and indeed, one of the x-risk field as a whole—is reducing existence to the extinction binary. Either we will exist, or we will not. Either AI will kill us all, or it won’t. But there are many degrees between those two extremes. Existence is a spectrum, and looking exclusively at the negative pole ignores the entire range of what AI can and most likely will do. That is as wasteful as dedicating NASA’s entire budget to asteroid deflection, or as researching only the positive pole of how AI might lead to a techno-utopia. It might, but we shouldn’t bet the house on that lottery ticket, just as we shouldn’t stake the health of our society on countering only the worst-case scenarios. Some x-risk scholars also discuss “catastrophic risks” that would cause “devastating consequences for vast numbers of people,” or define “existential risk” to include “catastrophes from which humanity would be unable to recover,” but this is still examining only the worst-case scenarios. It also implies that harm only matters if it happens on a large scale, to the majority. But AI harms will be concentrated in minority groups—both those we know about and those we don’t—so even when x-risk scholars seem to be nuancing the conversation, they aren’t.
Just as I empathize with the NASA anti-asteroid team, I empathize with those who are concerned with extinction risk—in fact, my first introduction to AI ethics was through long-term risk. However, their all-or-nothing viewpoint is counterproductive. We need to prioritize looking at AI not as a threat to whether we exist, but as a threat to the quality of our existence. AI is polarizing us, encoding biases in some of the fastest-adopted tools in history, subjecting us to biased decision-making, risking our election integrity, harming our children’s mental health, and threatening any woman who has a few photos of her face online with personal humiliation. Do we want to live in a world where recommendation algorithms are feeding suicidal teens pro-self-harm content? Or one where we’re never quite sure if our next election will be the first one tipped by AI-generated disinformation? After all, the same recommendation systems that feed us personalized advertisements could serve us targeted disinformation. This is, in fact, similar to one “catastrophic risk” envisioned by some x-risk scholars. It probably will not “debilitat[e] human society” as they warn, because we will learn which sources to trust (as we did when Photoshop was created), but it will likely make our online environment much less trustworthy in the short term. Crucially, it also shows that the AI ethics camp (which focuses on here-and-now threats, like bias and discrimination) and the x-risk camp are not diametric opposites, but different points along an existential risk spectrum—with a heavy concentration at one end.
Unfortunately, Kevin Roose wasn’t exaggerating when he described AI ethics and AI safety/x-risk as “warring factions.” Vox attributes the rancor between the two to competition for scarce attention and resources, and while this is likely part of it, another wedge is mutual disrespect. Some of this is merited; the x-risk community is part of a group of futurist philosophies linked to racism and eugenics, and it has done real harm to the AI ethics community, where many of the leading scholars are women of color. While I don’t agree that there is “no point in building bridges”—because these two communities sit on the same spectrum of concerns—the x-risk community must reckon with the historical and present injustices of its field and make amends, which will not be an easy process. I realize that this may sound patronizing coming from someone more on the AI ethics side of the spectrum, but it is critical to avoid further fragmentation that could damage attempts to mitigate the harms of AI.
AI will (most likely) not wipe us out. But it could make existence a lot less pleasant. As a recent op-ed noted, AI factionalism is really a fight over defining “our shared future.” We as a society need to decide where on the existential risk spectrum we want to be, and regulate—with all the tools at our disposal—accordingly.
Thanks to Jess Morley for her excellent feedback on an earlier draft.
Thumbnail generated by DALL-E with prompt “an abstract interpretation of the existential risk of AI”.