WR 6: Self-driving taxis are a scam and so is drone delivery, but EU tech legislation hopefully isn’t
Weekly Reckoning for the week of 6/11/23
Welcome back to the Ethical Reckoner. In this Weekly Reckoning, we’ll talk about how the public views AI regulation (interesting to me in light of last week’s musings on how normal people think about AI), how self-driving taxis aren’t really self-driving, and then have a slightly wonky (but amusing) discussion of the hijinks Big Tech companies are engaging in to do an end run around the EU’s new tech laws.
The Reckonnaisance
No one cares about AI regulation :(
Nutshell: In a survey that I’m taking as a personal attack, AI regulation ranked 11th (out of 15) political priorities for respondents in the US.
More: 27% of respondents said it was a top priority, and 35% said it was “important, but a lower priority.” Americans are roughly evenly split between thinking AI will make their lives better, thinking it will make them worse, thinking it will have no impact, and not knowing what to expect. There’s also a gender gap: women are more likely to say that it’s impossible to regulate AI and less likely to want to let their children use it. Interestingly, those who have used generative AI (disproportionately men) are more likely to believe both that it will improve their lives and that it needs regulation.
Why you should care: Regardless of whether or not you want your kids to use AI, they’re going to be exposed to it anyway—and so are you. AI isn’t just ChatGPT; it’s deciding what you see online and setting the price of your Uber and helping diagnose diseases and monitoring the power grid. So it will impact you whether you like it or not, and regulation is one way we can help assuage concerns that AI will make our lives worse or impact our jobs. Also, it’s a big part of what I’m studying, and I will be sad if no one cares.
“Driverless” cars are crying out for help
Nutshell: In the aftermath of the Cruise debacle, it’s being revealed that many “self-driving” cars may in fact rely on remote intervention to function.
More: This NY Times article really buried the lede, which is that Cruise intervened to remotely control cars every 2.5 to 5 miles. The Cruise CEO claimed in a comment on Hacker News that this is actually how often the cars reached out to their call center for help, and that many of those situations were resolved without human intervention. Even so, he said that Cruise cars are being remotely assisted 2-4% of the time, that optimizing to reduce this number wouldn’t be cost-effective, and that they have one assistance agent for every 15-20 cars. Still, it makes you wonder: if Cruise cars, which are highly optimized for the specific urban environments in which they operate, can’t achieve full autonomy, where does that leave the rest of us?
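If you want a feel for what those figures imply for staffing, here’s a quick back-of-envelope sketch. The 2-4% assist rate and the 15-20 cars per agent come from the numbers above; the assumption that assists are spread evenly over time is mine.

```python
# Back-of-envelope: how busy is one remote-assistance agent?
# The 2-4% assist rate and 15-20 cars per agent come from Cruise's figures;
# assuming assists are spread evenly over time is a simplification.

assist_rates = (0.02, 0.04)   # fraction of time each car needs remote assistance
cars_per_agent = (15, 20)     # cars covered by a single assistance agent

for rate in assist_rates:
    for cars in cars_per_agent:
        expected_concurrent = rate * cars  # average cars needing help at once
        print(f"{rate:.0%} assist rate, {cars} cars/agent -> "
              f"~{expected_concurrent:.1f} cars needing help at any given moment")
```

Even at the high end, that works out to less than one car per agent needing help at any given moment, which is presumably why Cruise says further optimization isn’t worth the cost.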
Why you should care: That 2-4% of the time is what stands between us and full self-driving, and it includes the hardest and most expensive situations to figure out. It’s possible that full self-driving is a limit function; in other words, we may keep getting closer and closer without ever quite getting there. As Gary Marcus said, “Uber and Lyft can stop worrying about being disintermediated by machines; they will still need human drivers for quite some time.” And even after that, there will likely be a need for remote driver assistance for longer still. If you’re hoping your car will be fully self-driving soon, you will probably be disappointed (I am too; it’s well-documented that I hate driving).
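For the mathematically inclined, here’s the “limit function” idea spelled out; r(t) is just an illustrative stand-in for “share of driving handled without remote help,” not something anyone has actually measured.

```latex
% r(t): share of driving handled without remote assistance at time t
% (an illustrative quantity, not a measured one)
\[
  r(t) < 1 \quad \text{for every finite } t,
  \qquad
  \lim_{t \to \infty} r(t) = 1 .
\]
```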
Amazon’s drone delivery is looking pretty pointless, unless all you need is peanut butter
Nutshell: Amazon has been promising drone delivery for years, and they’re finally delivering on that promise… kind of. If you live in College Station, Texas, you can get one item at a time, and it has to be something that can be dropped from 12 feet into an open space.
More: Drone delivery is free for Prime members where it operates, but it seems that the only way Amazon can drum up demand is to give things like peanut butter and floss picks away for free. The drones can’t carry more than one item or anything too big or heavy, can’t fly in bad weather, and need a whole yard to drop the package into. Amazon touts the service’s usefulness for prescription medications, and that does seem like an area where it could be legitimately helpful, but the kinks need to be ironed out: one College Station resident reported that his meds melted in the heat before he could retrieve them.
Why you should care: Obviously this is still experimental, but it makes you wonder whether drone delivery will ever be a thing if this is the result of a decade of research. Not everyone has an entire yard to devote to drone landings, not every product should be delivered in a giant box, and sometimes tech companies need to know when to cut their losses and stop trying to manufacture demand. That being said, lots of beta products get criticized before becoming successes… but this is not a good look for Amazon.
AI is helping decipher ancient languages
Nutshell: AI is helping read Greek papyrus scrolls that were carbonized in the eruption of Vesuvius, and is aiding researchers in a 100-year quest to decipher the ancient Indus script.
More: There are a bunch of Greek scrolls from Herculaneum (near Pompeii) that were carbonized and then buried in mud when Vesuvius erupted, so while they are technically still scrolls, they’re impossible to open without destroying them. Researchers have done CT scans and X-rays to “virtually unwrap” them, but any remaining ink isn’t visible. So a group including tech investor Nat Friedman launched the Vesuvius Challenge to open-source the problem of reading them, and two machine learning algorithms that look for tiny cracks in the scroll surfaces where ink used to be are starting to identify words. The first one? “Purple.” At the same time, AI is helping researchers decipher untranslated writing, including the ancient Indus script and a medieval manuscript, a process that can otherwise take decades of work.
Why you should care: This is honestly just super cool.
Extra Reckoning
The EU’s flagship Digital Services Act and Digital Markets Act are being implemented. These acts regulate large platforms and “gatekeepers” that provide core services like search engines, app stores, etc. The acts require platforms to increase interoperability, provide data access, keep ad archives, and stop anti-competitive behavior, among other requirements. There are a lot of provisions, and stiff penalties for noncompliance, so it’s in platforms’ interests to not be subject to them. As a result, we’re seeing some extremely interesting arguments from platforms trying to get themselves and their services exempted from the acts, including:
Apple insisting that there is no one Safari or App Store, but three different Safaris and five different App Stores that separately don’t meet the regulation threshold, only to get shot down by the European Commission, which pointed out that Apple itself advertises them as a single service.
Microsoft arguing that since basically no one uses Bing, it shouldn’t be subject to the laws (the EU is investigating).
Amazon claiming that they don’t sell enough advertisements to matter to the DSA (a piddly $12 billion last quarter)—they have an interim stay on part of the designation.
Meta saying that Instagram and Facebook are the same service to try and keep them from having to go through two separate compliance processes (it didn’t work).
Google lobbying the EU to include Apple’s iMessage under the DMA, which could force Apple to interoperate with Google’s preferred protocol (i.e., no more blue/green bubble hierarchy).
Apple’s legal filings are being called a “post-modernist triumph” for their incomprehensibility, and they may fall afoul of the laws’ anti-circumvention provisions, which were designed to prevent exactly this kind of maneuvering. But I think this provides a preview of what we might see with the AI Act and any other AI legislation that other countries might pass.
Punishments so far have mostly been slaps on the wrist and haven’t had much of an effect. This is a problem that China hasn’t faced with its tech regulations, where sanctions range from massive company and individual fines to IPO scuttling to disappearing executives. However, the DSA and DMA fines are much bigger than under past laws (up to 6% of global turnover for the DSA and 10% for the DMA), so once we get past the initial period of evasion attempts, these might actually be effective. It also bodes well for future tech regulation that the European Commission is seemingly having none of these tech companies’ shenanigans. The AI Act will likely see a similar period of attempted circumvention (especially if clauses pass that would let providers decide for themselves whether their AI is high-risk), but afterward, its fines of 5-7% of global turnover will hopefully be enough to keep companies in compliance.
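For a sense of scale, here’s a toy calculation of what those caps could mean; the €100 billion annual turnover figure is hypothetical, but the percentages are the ones above.

```python
# Toy illustration of the fine ceilings discussed above.
# The global annual turnover figure is hypothetical, purely for scale;
# the percentages (6% DSA, 10% DMA, 5-7% AI Act) are the caps from the text.

hypothetical_turnover_eur = 100_000_000_000  # €100 billion, made up for illustration

fine_caps = {
    "DSA (6%)": (0.06, 0.06),
    "DMA (10%)": (0.10, 0.10),
    "AI Act (5-7%)": (0.05, 0.07),
}

for law, (low, high) in fine_caps.items():
    low_fine = low * hypothetical_turnover_eur
    high_fine = high * hypothetical_turnover_eur
    if low == high:
        print(f"{law}: up to EUR {low_fine:,.0f}")
    else:
        print(f"{law}: up to EUR {low_fine:,.0f}-{high_fine:,.0f}")
```

Fines in the billions, in other words, which would be a very different deterrent than the wrist-slaps of the past.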
I Reckon…
that while the Hitchhiker’s Guide to the Galaxy is a great book, it is not the source of “the true nature of the universe.”
Thumbnail generated by DALL-E 2 with the prompt “an abstract painting of the concept of evasive technology, no tech, dreamy pastels”.