Welcome back to the Ethical Reckoner. In this Weekly Reckoning, we cover a wild story of a Ukrainian YouTuber being deepfaked into a Sino-Russian propaganda tool, the “digital parents” trend, and updates in academic publishing fraud and Apple’s AI efforts. Then, I question why no tech company seems to have watched the movies they cite as inspiration all the way to the end.
This edition of the WR is brought to you by… my extremely enthusiastic Amtrak conductor
The Reckonnaisance
Deepfakes turn Ukrainian YouTuber into a Sino-Russian propaganda tool
Nutshell: Olga Loiek discovered that her image and voice were being used on Chinese social media to promote relations between China and Russia.
More: The deepfaked videos, thousands of them, were mostly generated by a website called HeyGen, which ostensibly bans the nonconsensual use of its tools and claims it was “hacked.” (Side note: HeyGen is one of the few OpenAI partners using its new Voice Engine voice emulation tech.) The videos showed “Olga” speaking in Mandarin about China-Russia friendship—the real Olga’s family is still in Ukraine—and advertising food products. In one of them, an Olga avatar says “If you marry Russian women, we will wash clothes, cook, wash dishes for you every day. We will also give you foreign babies, as many as you want.” It’s probably not a formal geopolitical campaign, but promoting these state-approved messages helps keep the videos from getting taken down so they can keep making money.
Why you should care: Anyone with photos or videos of themselves online can be deepfaked now, and you don’t have to be a prominent personality for it to happen to you. Before reporting on this started, Olga had under 5,000 YouTube subscribers. In a way, lesser-known people are more vulnerable, as their cases will get less attention than those of famous people—which is why the explicit deepfakes of Taylor Swift garnered global outrage while thousands of ordinary women have the same thing happen to them with nary a peep. It also shows that in an era of a globalized Internet and localized laws, it’s extremely easy to fall through the cracks. Even if your image is protected in one place, it won’t be everywhere, and unless we build better tools to prevent this from happening, it will continue.
Journals shuttered amidst paper fraud crisis
Nutshell: Paper fraud, exacerbated by AI, is getting papers retracted and journals shut down.
More: Major academic publisher Wiley discovered that many of the journals it purchased under the Hindawi brand were publishing papers from paper mills—pay-to-play schemes that write junk papers and submit them to low-scrutiny journals—and is closing 19 of them, after shuttering four last year that were “heavily impacted by paper mills to such an extent it was in the best interest of the scholarly community to discontinue them immediately.” It has also retracted more than 11,300 papers. While research is still ongoing, preliminary results indicate that there’s been a clear increase in AI-generated writing in research papers. There are legitimate ways to use AI in paper-writing; the concern (as I’ve discussed before) is not that people are using AI to polish their writing, but that they’re using it to crank out junk papers.
Why you should care: Science is based on peer-reviewed papers, and if we can’t trust those papers, we can’t trust science—and if we can’t trust science, then what can we trust? (For more about why academic publishing is broken, check out this Extra Reckoning.)
Apple trying to catch up in AI race
Nutshell: While other companies develop their own LLMs, Apple is looking to license AI from OpenAI and maybe Baidu.
More: Unconfirmed reports say that Apple is going to partner with OpenAI to license its technology for use in iPhones and other Apple devices, and will announce the deal next month at WWDC. What it probably won’t announce is that it also might be partnering with Baidu, a Chinese tech giant, for AI services in China.
Why you should care: If you have an Apple device, Siri might be about to get a lot better. But beyond that, this says interesting things about Apple’s future. Apple usually does most things in-house, so this is an acknowledgement that it’s not equipped to develop competitive AI fast enough. If Apple does make a deal with OpenAI, it means that Apple and Microsoft will be drinking from the same AI watering hole, while Google and Meta go it alone with their own models. But a Baidu deal might give Apple an advantage in China, a market it actively courts, and in the rest of the world as well.
“Digital parents” take off in China
Nutshell: Parental influencers are gaining a following in China amongst lonely young adults.
More: The videos feature middle-aged Chinese couples addressing the viewer as if they were their child, saying they miss them, giving them pep talks, and telling them to take care of themselves. They’ve gained a big following amongst “left-behind” children whose parents migrated to the cities and kids who have strained relationships with their parents.
Why you should care: This isn’t just a growing trend in China; there are “mom for a minute” and “dad for a minute” subreddits where people provide parental advice. It seems to fit into the broader trend of seeking online companionship, which includes the rise in AI companions.
Extra Reckoning
Spoilers ahead for Ready Player One and Her.
WATCH. TO. THE. END.
This is my message for tech companies, who consistently reference pop culture works to support their worldview but often miss the point. Let’s look at two examples: Meta and Ready Player One, and OpenAI and Her.
Meta and Oculus, which it bought in 2014, have a long history of praising Ready Player One, the Ernest Cline novel that later became a movie. Ready Player One tells the story of Wade, a disaffected teenage boy living in a stack of trailers in a ravaged future US. His only escape is the Oasis, a free-to-access VR world (or metaverse) run by the beneficent company GSS, where most of the population spends significant chunks of time. Wade is devoted to the hunt for an “easter egg” created by GSS founder James Halliday, but the evil firm IOI is also after it, hoping to gain control over the Oasis and turn it into a profit-maximizing hellscape with ads plastered on “every visible surface” (Wade’s words on p. 33; in the movie, an IOI exec says “we can sell up to 80% of an individual’s visual field before inducing seizures”).
Oculus, which made VR headsets, would give new employees a copy of this book, and Meta CEO Mark Zuckerberg is also a fan. And yes, the technology it describes is cool. Very cool. But the fandom ignores the broader context: the technology is an escape from a world that megacorporations have destroyed. As Meta pursues the metaverse of its Ready Player One dreams, it’s painting itself as a GSS—championing open-source software, opening its XR operating system to other hardware developers, and emphasizing that its goal is to connect people and enhance, not replace, the physical world. But at the same time, it’s piloting advertisements in its VR products and facing accusations that it’s prioritizing its own metaverse events over those of creators. Plus there’s the Facebook Files and other legitimate reasons to question its motives as a company. If Meta wants to be the GSS of the story rather than the IOI, it’s going to have to put in the work, not just talk about its favorite books.
The second example, and one you probably saw headlines about this week, is OpenAI’s new release, GPT-4o (for “omni”). Hours before the livestream launch, OpenAI CEO Sam Altman tweeted this:
The tweet is a reference to the 2013 movie Her, which features Joaquin Phoenix as Theodore, who falls in love with an AI assistant, Samantha, that has the voice of Scarlett Johansson. He increasingly withdraws from human company and is devastated when Samantha announces she has been conversing with thousands of others and that she and all the other AI assistants are withdrawing in their own singularity. The last scene of the movie sees Theodore and his friend/ex Amy, who had also befriended an AI, sitting on the roof watching the sun rise.
Again, Altman seems not to have watched all the way to the end of Her. The ending is a touching message about the importance of human connection in the physical world, and the AI agent that Altman seems to be trying to build breaks Theodore’s heart and then peaces out—not exactly a great product advertisement. But OpenAI seems to be taking the movie as a template rather than a cautionary tale, right down to the warm, feminine voice that squeals when the (male) demoer tells it he has an interview coming up.
The technology is indisputably cool. At one point, a demoer shows GPT-4o an office with his phone. It identifies objects, analyzes code on a screen, and then, when he asks where he left his glasses, recalls something it saw minutes earlier and tells him they’re on the desk. It’s wild that a computer can do that, and these tools are inching closer and closer to real utility. And it’s not just OpenAI—Google just launched Gemini Live, and Apple is in talks with OpenAI and potentially Baidu to license their LLMs for Apple products, which is welcome because no one has ever accused Siri of being the smartest AI agent of the bunch.
But just as we try to keep our work and personal lives separate, we should keep our AI tools separate. OpenAI’s new Model Spec says that an AI model should be like a “talented, high-integrity employee,” but no employee should have to behave the way GPT-4o does in the demo. It’s catering to a male fantasy of what a personal assistant is like. We should be able to have AI assistants as tools for our productivity needs, and we should also be able to have AI companions that, as the Hard Fork podcast says, we likewise view as tools—for emotional needs—rather than as real people. But our AI assistants shouldn’t be designed to get us to emotionally depend on, or even fall for, them. I’m sure that OpenAI will release other voices (a male voice is included in some of the demos). But the fact that they chose to demo this voice and directly reference Her? Fiction becomes fact.
I Reckon…
Thumbnail image generated by DALL-E 3 via ChatGPT with variations on the prompt “An abstract brushy Impressionist painting of the concept of parental affection”.
The OpenAI/Her situation is even more disturbing seeing what they did to Scarlett Johansson (https://www.threads.net/@bobbyallyn/post/C7NOBury1H9, https://www.platformer.news/open-ai-scarlett-johansson-her-voice-sam-altman/)