ER 40: DIVAS Dispatches
Or, how AI makes us all digitally vulnerable
Welcome back to the Ethical Reckoner. It’s been a minute! (Or a month… or three.) Sorry, the “finishing dissertation” hiatus turned into the “finishing dissertation and starting a new job¹ and traveling a bunch²” hiatus. But I’ve missed you all, so it’s good to be back. If you’re joining us for the first time, this is a (slightly atypical) edition of the Ethical Reckoner, our monthly longer format. Check out the back catalogue here, or dive in below!
This edition of the Ethical Reckoner is brought to you by… Burn Order (and a sticky shift key)
Today, we have an ER about the Digital Vulnerabilities in the Age of AI Summit (DIVAS), which happened earlier in December. I co-chaired it in my new capacity as a Fellow at the Yale Digital Ethics Center; you might remember the ER on the Summit on State AI Legislation, which I helped organize in the spring.
DIVAS was a two-day summit, held under the Chatham House Rule, featuring eight fabulous panels. Sixty-ish researchers, journalists, legislators, policy professionals, and more contributed to the conversations.
Curious? I’ve summarized the panels below:
Economic Harms: AI is enabling criminals to rapidly enhance scamming infrastructure, from real-time deepfakes to quickly spun-up fake IDs, leading to an estimated $1 trillion in losses. The panel concluded that until societal issues like loneliness and lack of critical thinking are addressed, and policy remedies like allowing scam losses to be claimed as a tax deduction are implemented, this cat-and-mouse game will continue.
Child Safety: In the wake of seven new lawsuits filed against chatbot companies over harms to children, the main discussion centered on companion chatbots—particularly their business models, which incentivize maximum engagement and may lead to attachment issues by misrepresenting the realities of real-life relationships through constant availability and sycophancy. Panelists stressed the need to teach children to engage critically and recognize uncertainty, while arguing that the burden of proof for safety should be placed on companies, particularly as these tools are increasingly used by young people.
AI Companions and Mental Health: The panel examined whether AI alleviates or causes the loneliness crisis, noting a risk of “contextually vulnerable” attachments, long-term dependency on perpetually available bots, and the difference between feeling less lonely and being less lonely. The discussion also highlighted a growing two-tiered mental healthcare system where human-led therapy is becoming a luxury, emphasizing the critical need for “zero misses” in safety planning for mental health bots.
Infrastructure Fragility: The panel focused on threats like the physical security of undersea cables and the concentration of power among the few players who control the entire opaque data lifecycle. Suggested policy solutions included breaking up this end-to-end control via open interfaces and addressing the fact that the penalty for cutting a subsea cable at a landing station, set by a law from the 1880s, is only $5,000 (!!!).
Information Environment Exploitation: A core threat is the “weaponization of doubt” and the resulting societal distrust amplified by algorithms, a problem more pressing than the sheer volume of synthetic content. Urgent changes involve establishing more adaptable legal frameworks and shifting the onus of trustworthiness from users to platforms, especially as LLMs make source attribution in search less visible.
Environmental Impacts: The panel examined AI’s environmental impacts from local and industrial-ecology perspectives, discussing the Colossus data center in South Memphis, where xAI’s operations have significantly increased air pollution and worsened community health outcomes while creating minimal economic benefit. Panelists saw an urgent need for energy permitting reform, environmental disclosures covering water impacts and energy use, and platforming local voices in these build-outs, framing the pressure to build quickly as analogous to the oil industry’s tactics.
Cyberconflict: Adversaries are increasingly leveraging AI for cybercrime, with lower-level hackers quickly adopting it for things like crypto crime and rapid lateral movement, though not always with innovative tactics. In cyberwarfare, a key concern was automation bias in targeting, which raises the possibility of removing humans from the loop and automating warfare.
AI Vulnerabilities: The discussion centered on the new challenges of dealing with non-deterministic AI, which makes traditional methods for identifying bias and vulnerabilities obsolete. The key policy issue is opacity; panelists stressed that policymakers need greater tech literacy and that industry must prioritize improving basic science, testing, and evaluation to address under-appreciated risks that will emerge over time.
My overall takeaway from DIVAS is that across all of these disparate issues, AI is making us vulnerable not in different ways, but in different contexts. What do I mean by this? One example: we’ve always been vulnerable to crime and scams. But we never thought we were vulnerable while on a video call with a loved one. With AI, now we are. We’ve always been vulnerable to changing economic conditions—but with AI, now this is happening rapidly and in many new fields. We’ve always been vulnerable to environmental conditions, but we didn’t expect massive datacenters to pollute our communities and increase our energy bills. And many of these things are making us more dependent on corporations—for chatbots, for Internet infrastructure, for keeping our children safe, our information reliable, and our neighborhoods clean.
AI is also dual-use for a lot of these problems. We’ve always been vulnerable to loneliness. A companion chatbot can make you more isolated if you become dependent on it—but it can also make you feel heard and give you a place to practice social skills for the physical world. AI can help the hackers, but also the defenders. And we need more research to find out not only what the potential harms are, but also how AI can help us.
At a high level, how do we address these new vulnerabilities? We adapt the context. At DIVAS, it was pretty universally acknowledged that the AI cat is out of the bag; while progress may slow, it’s unlikely to be rolled back. So, like it or not, we will have to change circumstances to mitigate these potential harms. Because the value chain is so immense, it’s easy to pass the buck with many of these issues—it’s not our responsibility to address misinformation; people should stop posting it—but we need to decide where the buck stops. We need to decide who is responsible for keeping children safe. We need to build better cyber defense systems. We need to make sure that the penalty for cutting an Internet cable is more than $5,000. We need to increase efforts to teach critical thinking. And we need to keep the human in the center of these conversations.
There will be a longer report coming out about DIVAS soon. And, like 007, the Ethical Reckoner will return.
¹ I’m now a researcher at the Safe AI Forum, a non-profit that does AI governance research and runs Track II dialogues between China and the West on AI safety topics.
² Brussels, Long Island, London, Wales, DC…

But here’s a querulous thought:
You say that there is a “critical need for ‘zero misses’ in safety planning for mental health bots.” But are we in danger of raising the self-driving car argument? We don’t have “zero misses” from human therapists, just as human drivers cause more accidents than autonomous vehicles do. I think we need to be clear with ourselves about why human behavior is inherently more important than machine behavior: because CHOICE and AGENCY are our responsibility, not theirs.
This is a stark message, as disturbing as it is necessary. The urgencies you explicitly call out need not only definitions but also actions to implement responses.