WR 53 (part 1): The UK wants your iCloud data
Part 1 of the Weekly Reckoning for the week of 10/2/25
Welcome back to the Ethical Reckoner. This week, you get double the ER (which could be good or bad depending on the state of your inbox). Today I’ve got the Reckonnaisance for you. Tuesday will be the Extra Reckoning about a super-secret report that I can’t share with you till tomorrow. But there’s plenty of interesting content today, from a new threat to encryption from the UK government to new avenues for sports gambling to the latest in AI development and safety.
This edition of the WR is brought to you by… the Super Bowl, I guess (jk, Severance)
The Reckonnaisance
UK demands access to encrypted iCloud data
Nutshell: A secret order from the UK government requires backdoor access to data stored in encrypted iCloud backups.
More: The unprecedented order, issued under the 2016 Investigatory Powers Act (aka the “Snoopers’ Charter”), not only threatens the integrity of end-to-end encryption for every iCloud Advanced Data Protection user across the world, but would also set a dangerous precedent for encrypted data storage services. If implemented, Apple would be unable to inform users that the backdoor exists. Apple might pull the ADP service from the UK, but this wouldn’t protect non-UK users.
Why you should care: The point of encryption is to protect your data. Even Apple isn’t supposed to have access to this data, much less the UK government. And even if you think you have nothing to hide—from Apple or the government—a backdoor for one is a backdoor for all, and this would make it far easier for hackers to steal sensitive data and use it for scamming or other nefarious purposes. And of course it’s concerning from a civil liberties perspective. Signal Foundation president Meredith Whittaker cautioned that this would turn the UK into a “tech pariah,” but that depends on the extent to which its allies oppose this; there are indications that they might quietly support it.
“Using technical capability notices to weaken encryption around the globe is a shocking move that will position the UK as a tech pariah, rather than a tech leader. If implemented, the directive will create a dangerous cyber-security vulnerability in the nervous system of our global economy.” - Meredith Whittaker
Crypto & prediction platforms move into sports gambling
Nutshell: Crypto.com and Kalshi did end-runs around regulators to offer “swaps” on the Super Bowl.
More: Neither of these platforms is technically a sportsbook, and they’re very careful in their wording:
“We do not offer sports-betting product. We offer tradeable cryptocurrency commodities and tradeable financial products, which differ from products offered by sportsbooks.” - Crypto.com spokesperson
Sure. But they definitely made it seem suspicious when Crypto.com filed its paperwork with the Commodity Futures Trading Commission the day before a possible government shutdown and described the contracts it wanted to offer as a chance to “‘express a market view related to the broad and varying economic and commercial impacts’ of ‘an association’s final title event’.” This follows a long history of prediction markets and crypto platforms doing sketchy things to enter new markets. Now the CFTC is reviewing these products; sportsbooks and sports leagues are hoping it will shut them down.
Why you should care: Sports gambling is highly regulated for a reason. It’s highly addictive, and regulations include measures to protect consumers and address potential harm (though they’re likely inadequate). But if these platforms aren’t technically sports betting, none of those apply. This could be a loophole that cross-pollinates between the crypto and gambling communities in ways that hurt both. It’s also just another sign of the obnoxious “anything goes” attitude amongst the crypto and prediction market bros.

The next AI buzzword? “Distillation”
Nutshell: Open-source AI models are advancing rapidly, in part because they’re siphoning data from closed models.
More: No, Big Tech isn’t about to release a line of distilled spirits. But they’re certainly not fans of distillation, which is basically where a large language model (LLM) trains on responses elicited from another LLM (akin to a “student model” asking a “parent model” a lot of questions, which sounds like me as a child, and probably quite annoying for the parent model). OpenAI and Google’s terms of service ban this practice, but there’s really no way for them to prevent it. The first company that made headlines for doing this was DeepSeek, a Chinese start-up, and that made a lot of people nervous. Now, a team from Stanford and the University of Washington has done the same, training a reasoning model for $50 in compute.
Why you should care: Like many technologies, distillation is a double-edged sword. It’s certainly advancing AI development, but that means it’s doing so for everyone, so both people you like and people you don’t will be using it. People are also claiming that this indicates open-source AI is catching up to closed-source AI. However, I think this is unlikely to hold. Distillation can get you close to cutting-edge models, but not quite there. Also, part of DeepSeek’s success was due to algorithmic advancements, which Meta (and surely others) now has four war rooms trying to figure out—and implement in its own AI products. Big companies will always have more computational power than start-ups and open-source developers, meaning that they’ll be able to reap the benefits of increased efficiency even more.
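For the curious, the core loop of distillation can be sketched in a few lines of Python. This is a deliberately toy illustration: a dictionary lookup stands in for actual fine-tuning, and the “teacher” is a canned stand-in for a closed model’s API. All names here are illustrative, not any real library’s interface.

```python
def teacher_model(prompt: str) -> str:
    # Stand-in for a closed model queried via API.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")


class StudentModel:
    """Toy 'student' that imitates a teacher's answers."""

    def __init__(self):
        self.memory: dict[str, str] = {}

    def distill_from(self, teacher, prompts):
        # The essence of distillation: elicit responses from the
        # teacher, then train (here: just memorize) on the
        # resulting (prompt, response) pairs.
        for p in prompts:
            self.memory[p] = teacher(p)

    def answer(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know")


student = StudentModel()
student.distill_from(teacher_model, ["capital of France?", "2 + 2?"])
```

In a real pipeline, the “memorize” step is a gradient-descent fine-tune of a smaller network on the teacher’s outputs, which is why terms of service can forbid it but not technically block it: from the teacher’s side, distillation queries look like ordinary usage.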

More steps towards AI safety
Nutshell: In advance of the Paris AI Action Summit, Meta put out an AI release framework and China launched an AI safety institute.
More: Meta’s framework joins Google’s Frontier Safety Framework, OpenAI’s Preparedness Framework, and Anthropic’s Responsible Scaling Policy, all documents governing how companies will evaluate and release new models. China’s new “China AI Safety and Development Association” joins a host of AI Safety Institutes across the world. These institutes represent their countries at events like the Paris AI Action Summit (a successor to the UK and Korea summits), where everyone will dialogue and debate (and probably not much will get done, unfortunately).
Why you should care: It’s good that Meta is thinking more substantially about safety and model release, and it’s good that China has an organization to engage with the global AI regulation debate (although the org still seems to be getting stood up). This does make me wonder if AI ethics is getting brought under the umbrella of AI safety (I’ve argued before that they should exist on a spectrum, but this is better than them being diametrically opposed). It also makes me wonder if AI governance is increasingly going to be done through agencies globally, rather than through law. Watch this space for more on these half-baked thoughts.
Extra Reckoning
Look out for a special Extra Reckoning tomorrow on a new report dropping in the morning. Hint: it has to do with DeepSeek and US-China competition.
I Reckon…
that this interview with Cynthia Erivo and Kara Swisher is a great listen.
Thumbnail generated by ChatGPT with the prompt “Please generate a brushy abstract Impressionist painting that represents the concept of friendly competition in shades of green and red”.