WR 5: Big moves on AI safety, weird moves from Elon Musk
Weekly Reckoning for the week of 30/10/23
Welcome back to the Ethical Reckoner. In this Weekly Reckoning, we’re basically covering AI (including the UK AI safety summit), war, and AI & war. To make up for it, here’s a picture from Leuven, where I live as of Wednesday (hence the slightly delayed Weekly Reckoning):
Alright, we’re a day late, so let’s get into it.
The Reckonnaisance
UK AI Safety Summit sparks action from US, China
Nutshell: In the lead-up to the UK’s Summit on AI Safety, the US and China launched new AI governance initiatives.
More: The UK is trying to position itself as a hub for AI safety, so it invited a lot of countries to a big summit. The US and China took it seriously and both launched new AI governance initiatives in the lead-up. The US put out a new Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” while China launched a “Global AI Governance Initiative.”1 At the summit, both ended up signing the “Bletchley Declaration,” which affirms the need to cooperatively address AI safety risks at the international level, especially “catastrophic” harm in domains like cybersecurity and biotechnology.
Why you should care: I’ve written before about how we need to consider risks from AI as a spectrum of existential risk. US Vice President Kamala Harris agrees; as she said in her speech on Wednesday, try telling the old man kicked off his health insurance by an algorithm that that isn’t an existential risk to him. The UK AI safety summit might shift global governance towards the catastrophic risk end of the spectrum, which is dangerous because the main risk from AI to you, dear reader, is not an autonomous Terminator robot or an AI-created bioweapon, but that you will be denied a job by an algorithm, or misdiagnosed by a biased AI system, or fooled by AI-generated misinformation (see #3), or threatened with explicit deepfakes, or be manipulated by what’s recommended to you. These are existential risks to us.
X wants to replace your bank, if you're dumb enough to let them
Nutshell: Musk has given X (formerly Twitter) employees a year to “replace your bank” and be your “entire financial life.”
More: This is part of Musk’s mission to a) revive the platform he’s slowly killing and b) launch his “everything app.” These apps are popular in China, but I’ve argued that they won’t work in the West for a variety of reasons:
WeChat succeeded in China because it filled the need for a) easy cashless transactions in a cash-based economy, and b) combining services into one app at a time when smartphones were low-powered and couldn’t support dozens of different apps.
X is trying to launch in markets that already have established cashless payments (Apple Pay, Venmo, plain old credit cards) and high-powered smartphones. Especially with seamless biometric/saved password logins, your smartphone is essentially a superapp.
Everyone already has their preferred messaging, banking, video, shopping, and social media apps. Centralization is convenient, but overcoming inertia is difficult.
Also, centralization is sometimes ugly and confusing—feature bloat is real.
Finally, centralization is dangerous because it creates a target, and it’s probably not a great idea to hand your personal information to a platform whose compliance and security teams have been gutted.
Why you should care: Honestly, you shouldn’t. Just don’t give him your credit card info.
The possibility of deepfakes is muddying the information ecosystem in the Israel-Hamas war
Nutshell: There haven’t been that many AI-generated images spreading, but the ones that have, plus the possibility that there could be more, have people unable to tell what’s real and what’s not.
More: We’re less likely to trust headlines labeled as AI-generated, even when they’re true (or actually human-written), so accusing something of being created by AI is an effective way to muddy the waters. Services that claim to identify AI-generated content are unreliable. What we really need is better information provenance tracing, but that’s tricky for its own reasons.
Why you should care: Even just the possibility of fake content is making figuring out what’s going on in a terrible conflict even harder. It’s possible to create deceptive images with AI that look extremely realistic, and because of this, it’s also possible to make people disbelieve real content by alleging it to be AI-generated. Unless we do something, photographic evidence may become meaningless: it’s going to become very easy to believe whatever content you want to believe and dismiss the rest as unreliable, reinforcing whatever your existing views are.
Starlink expands to another conflict zone
Nutshell: Elon Musk announced that Starlink, SpaceX’s satellite Internet service, will provide services to international aid groups in Gaza.
More: Starlink also provides services to Ukrainian forces. After the Ukrainian government asked Musk to allow them to use it to support an attack on the Russian naval fleet and he declined, concerns have been growing that Musk—an unelected, non-government businessman—is playing a huge role in shaping the conflict. Now, he may end up with a role in yet another war.
Why you should care: I’ve been critical of Starlink (and other services that promise to “close the digital divide”) before for potentially locking underserved communities into their services and subjecting them to the whims of whatever company/CEO controls it, but shaping the course of armed conflict is even more concerning. Of course private companies play a role in war (military contractors, etc.) but for a while SpaceX wasn’t under any sort of contract for its role in Ukraine, giving Musk enormous power and latitude.
Bonus: Sam Bankman-Fried convicted on all counts
Nutshell: In a verdict that was only surprising in the speed at which it was delivered, the jury in SBF’s trial found him guilty on all seven counts.
More: After a monthlong trial, the speed of the verdict (less than 5 hours) took some reporters by surprise; many had left the courthouse, expecting deliberations to run into the next day. The charges—for various forms of fraud and conspiracy related to the collapse of crypto exchange FTX last year—carry a combined maximum sentence of 115 years. Hopefully, this will provide some sense of closure to the victims and to residents of the Bahamas, whose economy and hopes have been impacted by this whole mess.
Why you should care: Justice served, and hopefully we won’t have to talk about crypto again for a while.
Extra Reckoning
There’s been a lot of AI in the newsletter today, and if that’s not your cup of tea I’m sorry about that, but I’m going to ask you to stick with me for a few more minutes. I’m also going to ask you a favor, which is a bit cheeky of me.
One of the things I’ve been thinking about recently is how normal people (defined here as “not AI researchers”2) can have their voices heard in AI governance. You may not care, and that’s perfectly valid, but AI is going to impact you whether you like it or not, and you probably don’t want governments and academics alone deciding what’s important. So, I wonder if you could share a few things. What concerns you about AI? And how did you come to be concerned about it? The comments are below, or get in touch with me privately.
I ask because apparently, one of the reasons Biden is interested in AI safety is that he watched a Mission Impossible movie where the villain is a sentient AI.
And while it’s great that he’s interested in AI, this is the kind of thing that drives those of us in AI ethics research crazy. Pop culture portrayals often lean on the killer AI narrative, which is more cinematic than “Alicia got denied a credit card that George got because of an algorithm.”
Biden is also worried about people being fooled by AI-generated vocal deepfakes, especially seniors—which is happening—and that was inspired by him hearing a deepfake of his own voice and asking “When the hell did I say that?” It’s this kind of demo that seems to be super effective in communicating some of the very proximate risks of AI to people, but it’s not accessible to the general public. So, I invite you to tell me, in the comments below or via email/Bluesky/X (ugh)/carrier pigeon, what worries you about AI, even if that’s nothing or even if it’s extinction.
I Reckon…
That AI research factional feuds might get even worse as everyone races to influence the US’s new AI Safety Institute.
What do you think? Comments are below.
It’s notable that the US’s is very internal-facing—it’s all about measures the US should take to boost safe development and use—while China’s is very external. The US has historically focused on global leadership without the actual regulations to back it, while China has a lot of concrete legislation and has recently started seeking an international role, positioning itself as the leader of developing countries.
Let’s be real, we’re not normal.
Thumbnail generated by DALL-E 2 with the prompt “An abstract painting of the concept of safety as it relates to technology”.
Thanks for following and reporting on these things, great to see a reasonable perspective!
One of the things that I think about when it comes to the introduction of AI in production is the narrative of what consequences it will have. Disregarding all the end-of-the-world stuff, one of the most common narratives, I would say, is the one that says AI will take people's jobs -- often paired with reassurances that this will not happen because of market forces.
I will grant that in any market worth its name, there should always, or at least most of the time, be some way for us humans to add value -- this is not the problem. The issue, as I see it, is instead that the introduction of a new technology by necessity means that the distribution of value in society will be renegotiated. (New technology means more value being added and old ways of production getting out-competed. In other words, if we are not careful, the people in control of the new technology not only benefit from their added value, but also directly take control of value that would otherwise have reached other people.)
My conclusion from this is not to worry about having a job to go to, but instead to worry very much about how fairly I will be compensated for the work that I will be doing. I know these aren't original ideas, but I am surprised at how rarely I see them presented in response to AI.