WR 24: The future of ridesharing, embryo health, sextortion, and data protection
Weekly Reckoning for the week of 15/4/24
Welcome back to the Ethical Reckoner. In this Weekly Reckoning, we cover some interesting future possibilities: for ridesharing (cooperative!), embryo screening (concerning!), sextortion (hopefully less!), and American data protection (hopefully more!). Then, a brief digression into European mother-tongue LLMs and how they may just reinforce English dominance.
This edition of the WR is brought to you by… sangria, churros, and hiking
The Reckonnaisance
Uber and Lyft Minneapolis exit creates opportunity for rideshare co-op
Nutshell: After Minneapolis passed an ordinance mandating higher pay for rideshare drivers, a driver-owned alternative plans to enter the market.
More: The ordinance, intended to boost driver pay to the city minimum wage after expenses, was opposed by Uber and Lyft, and state legislators may still pass a law to override it. As it stands, Uber and Lyft are planning to exit Minneapolis in July when the ordinance takes effect. The Drivers Cooperative, a driver-owned rideshare platform, is planning to establish itself in Minneapolis as an alternative; they claim to take just 15% of the ride fee (versus 25-30% for Uber and Lyft) and that drivers make 8-10% more per ride, plus profits returned as dividends. They’ve been operating in New York City and Denver, but this is the cooperative’s first opportunity to establish itself in an Uber-free environment.
Why you should care: Uber and Lyft have come under fire for many things, including not actually paying drivers that well (despite significantly increasing their prices recently). A driver-owned co-op could be a better alternative for riders and drivers by providing cheaper rides that pay drivers more. It sounds appealing, but despite spending significant amounts of time in NYC, I’ve never heard of the platform, likely because Uber and Lyft are so dominant. Minneapolis might be a proving ground that gets the cooperative a better national foothold and allows it to expand to markets where Uber and Lyft still operate.
Embryo health start-up raises concerns
Nutshell: Orchid Health claims to be able to screen embryos for neurodevelopmental disorders and hereditary diseases, but critics worry it’s the next Theranos.
More: The start-up’s website cheerfully advertises “have healthy babies” with “whole genome embryo reports”—basically, they take a few cells from embryos being used for in-vitro fertilization (IVF) and sequence 99% of the genome, looking for genetic abnormalities and variations that might indicate a likelihood of developing diseases down the line. Orchid charges $2,500 per embryo for this, which can add up, considering people using IVF often have multiple embryos to choose from.
Why you should care: The company has been linked to some sketchy practices, including using academic data embargoed from use in human genetic screening. On top of that, experts question the accuracy of the risk scores it uses to estimate the likelihood of developing polygenic (influenced by many genes) conditions, and whether the tests are even useful for non-white people, as the science of polygenic risk is based on data from people of primarily European descent. Finally, there’s concern over what one expert dubbed “liberal eugenics.” While at first glance decreasing the frequency of conditions like schizophrenia sounds appealing, in reality it sends the message that children with those conditions are undesirable. And people will still develop schizophrenia, so in trying to eradicate it, you’re creating a society that systematically undervalues and marginalizes those who have it even more than it already does.
Meta introducing new tools to counter sextortion
Nutshell: Meta is introducing new features to provide safety tips and encourage people to think twice before sending nude photos.
More: Sextortion is a scam where an abuser targets someone, usually a teenager, and convinces them to send nude pictures. Once they do, the abuser threatens to release the pictures unless the victim sends more pictures or money. Many sextortionists are part of criminal rings in Nigeria, India, and the Ivory Coast, which makes it difficult to hold them to account (although two men were arrested in Nigeria after an Australian boy killed himself over their threats). Meta’s new measures blur images detected as containing nudity, issue a pop-up when users are about to send a nude image, and show safety notices when people are chatting with possible sextortionists.
Why you should care: Sextortion is on the rise among kids, and it’s been linked to the suicides of at least 20 teen boys in the US alone. It’s a horrible crime, so any measures that can cut down on it are welcome, and the kinds of popups Meta plans to show add friction that might actually help teens when they’re being extorted. But I wonder about the measures they’re taking to make it harder for probable sextortionists to message other accounts. If a platform has probable cause to believe an account is a “potential sextortion account,” wouldn’t it be better to, say, investigate it, rather than just make it harder for them to message teens? It would be a tricky technical task because of message encryption, but platforms could more proactively reach out to suspected victims in a way that could encourage reporting and accountability.
Might the US actually get a data privacy bill?
Nutshell: The bipartisan “American Privacy Rights Act” would bring the US into the modern era of data protection.
More: The APRA would establish a national standard for data privacy, which is long overdue. The US is the only OECD country without a data protection agency and lacks a federal data protection law, meaning that amidst the hodgepodge of state laws, companies can use your data more or less however they want, using, selling, and sharing it with other companies at will. The bill would require more explicit consent, allow people to opt out of data transfers/sales and targeted advertising, and give them the right to sue if privacy rights are breached. In true American fashion, it’s framed as a consumer right rather than a fundamental right, as it is in Europe. The bill is likely to be opposed by tech companies, but it would arguably be partially good for business by unifying the patchwork of bills we have right now.
Why you should care: It’s hard to make “data protection legislation” sound exciting. But this is something that genuinely affects you. “Most people believe they’re protected, until they’re not,” and people have realized they’re not in really awful ways: opioid treatment apps sharing sensitive data with third parties, cell phone providers selling real-time location data to bounty hunters, and, of course, Cambridge Analytica influencing voter behavior through Facebook. But this doesn’t just affect you if you’re giving your data to these companies; because they can amass data with impunity, your data can be in untold numbers of places; remember the Equifax breach that impacted 148 million Americans, 14 million Brits, and 19,000 Canadians? While this bill would help fix a lot of the issues with the American data privacy regime, it still may not pass—“bipartisan support” doesn’t mean “enough-of-Congress” support (as we’re learning with the TikTok bill)—and, as with all laws, the devil is in the details (which is why the GDPR’s most visible impact is the scourge of cookie pop-ups).
Extra Reckoning
I’ve written before about how generative AI acts as an “averaging tool” that recreates the information, reasoning, and rhetoric of the majority—after all, large language models (LLMs) like ChatGPT function by predicting the bit of text most likely to come next based on their training data, which is mostly in English. LLMs have proven surprisingly good at translating between English and other languages because they do have some non-English training data, but they’re weaker operating in other languages. Also, research indicates that even if you prompt a mostly-English LLM in another language, its internal processes are linked to English. This means that whatever response you get will be tinged by the Anglocentricity of its training data; researchers found that LLMs are particularly bad at reflecting non-American cultural values.
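To make the “predicting what comes next” point concrete, here’s a toy sketch (my illustration, nothing like a real LLM’s architecture): a bigram counter that always emits the continuation it saw most often in training. The same dynamic is why a model trained mostly on English data tilts toward English-shaped outputs.

```python
from collections import Counter, defaultdict

# Toy "training data": whichever continuation is most frequent here wins.
corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the continuation seen most often after `word` in training.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # → "cat", because "the cat" outnumbers "the dog"/"the mat"/"the rug"
```

Real LLMs predict over subword tokens with learned probabilities rather than raw counts, but the principle is the same: the majority pattern in the training data dominates the output.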
Because of this, Europe is worried about “settling” for English in generative AI in a way that weakens European countries’ languages and cultures, with Politico presenting the provocative headline “Will American chatbots kill European culture?”. France is especially protective of its language—to the extent that it tries to officially eradicate loanwords—and the French Economy Minister is a champion of “mother-tongue” LLMs, or developing LLMs for each official EU language. However, the major question I have is: should European countries want these LLMs?
The first problem is that training LLMs is hugely water- and carbon-intensive, and training one for each of the 24 official EU languages would be even more so. The other problem is that these mother-tongue LLMs likely wouldn’t be as good as English LLMs, and there would be quality discrepancies within the group as well because of the different amounts of training data available for each language. Generally, the more training data you have, the better the LLM, and there’s more English content on the Internet than French, and more French content than Latvian. If mother-tongue models aren’t as good, the economic pressure will be to use English models or be less competitive, while the political pressure will be to use the mother-tongue LLMs. Because of this, developing a suite of European LLMs might just further reinforce a hierarchy of languages that LLMs are already promoting—with English at the top.
I Reckon…
that it’s not great that there are only 700 Americans studying in China.
Cookie website photo generated by DALL-E 3 via ChatGPT with the prompt “Make me an image of a website with a cookie consent bar that's covered in actual cookies”.
Thumbnail is a close-up of a Picasso painting in the Las Meninas series, taken by yours truly.