Welcome back to the Ethical Reckoner. We’ve got two weeks to cover due to the Thanksgiving break, but while I could make this all about OpenAI, I won’t. Instead, check out the latest Ethical Reckoner here for my take on the situation (plus some bonus analysis of how the EU AI Act is in jeopardy thanks to corporate lobbying).
Today, we’ll talk about bad algorithms, disclosing AI-generated content, and queer music, plus a discussion on what corporations are actually worried about when their ads appear near toxic content, and whether or not I should still be on Twitter.
The Reckonnaisance
Instagram Reels algorithm makes disturbing recommendations to adults who follow teen influencers
Nutshell: An investigation by the Wall Street Journal found that Reels recommended sexual videos of adults and children to adult accounts that followed young gymnasts, cheerleaders, and influencers.
More: Meta claims that this was a “manufactured experience,” but it’s one that a user with an inappropriate interest in children could easily manufacture for themselves. It builds on the WSJ’s reporting that Instagram connects pedophilic networks, which Meta claims to have taken action on.
Why you should care: Just like the Facebook algorithm can infer that if you liked a picture of a puppy, you might like pictures of kittens, Instagram’s algorithm has made associations between adults who follow preteen girls and videos sexualizing children. And, just like Facebook will feed you more images of kittens and puppies to keep you engaged, Instagram keeps promoting child sexualization content to these accounts, potentially “recruit[ing] new members of online communities devoted to child sexual abuse” and putting children at risk.
YouTube will require creators to disclose AI-generated content
Nutshell: YouTube is adding new labels to disclose to viewers when content is altered or synthetically generated.
More: Creators will be required to disclose when they’ve altered or synthetically generated content. Since there’s no reliable way to detect altered or generated content, this is more or less an honor system, even though YouTube promises that “creators who consistently choose not to disclose” may be subject to penalties, and that even some disclosed synthetic media could still be removed if it violates YouTube’s Community Guidelines.
Why you should care: These labels are good for the health of the information ecosystem, but it remains to be seen how effective they will actually be. If a creator doesn’t disclose, there may not be any way to prove that content has been generated or altered. People can request the removal of content that simulates them and “music partners” can request the removal of unauthorized AI-generated music content, but these will likely have different impacts. Musicians can reasonably argue that they didn’t record the audio in question (and YouTube has a vested interest in keeping them happy), but regular people won’t have the same recourse—it’ll become a “creator said”/“subject said” situation.
Sports Illustrated caught not disclosing AI-generated content
Nutshell: Futurism reported that Sports Illustrated published AI-generated content without disclosure, including bios for authors who don’t appear to exist.
More: Sports Illustrated claimed that the content was provided by a third-party contractor and that only the author bios, not the articles themselves, were AI-generated. But so far the evidence amounts to each party taking the next one down the chain at its word, and Futurism claims to have the scoop otherwise, so watch this space.
Why you should care: Well, if contractors aren’t disclosing when they use AI, why should we expect that others (like YouTube creators) will? Also, the Sports Illustrated union isn’t happy about it, but unlike Hollywood writers, these writers have no union protections against AI.
Spotify Wrapped reveals the music tastes of the queers
Nutshell: Spotify Wrapped, the “cruel annual psychological evaluation” personalized music round-up, dropped yesterday, and its “Sound Towns,” which tell you what city your tastes align with, are establishing a “gay music Bermuda Triangle” of Cambridge, Massachusetts; Berkeley, California; and Burlington, Vermont.
More: The gays are going to Berkeley, the lesbians are going to Burlington, and the bis are going to Cambridge, and it’s creating many excellent memes. Listen to a lot of boygenius? You may find kindred spirits in Cambridge.
Why you should care: This is just a fun moment in queer Internet history that I wanted to share, even though I’m left out as an Apple Music person :( (no word on whether an Apple Replay permits entry to Burlington).
Bonus:
While everyone was looking at OpenAI, the Cruise CEO slipped out the side door.
Extra Reckoning
Recently, there’s been a lot of attention paid to how ads for major companies are appearing next to problematic social media content, including white supremacist content on X/Twitter and content sexualizing children on Instagram. Brands are issuing forceful statements about the “unacceptable situation” on X and canceling ads on X and Meta platforms. Everyone accepts that this is bad. But I can’t help but wonder: what exactly are they worried about?
At first glance, this situation seems to reflect more poorly on the platforms for allowing toxic content and making money off of it. It’s readily apparent that brands aren’t hand-picking the posts their ads appear next to; placement is entirely algorithmically determined. Still, there’s an association risk, and bad actors could point to ads near their content and claim endorsement.
But what’s more problematic is when the people posting that content are also making money off of the ads, as in 2017, when major brands pulled ads from YouTube because they were running on videos from extremist organizations. This wasn’t a problem on Old Twitter or Instagram, but X and Instagram Reels now both have ad revenue sharing programs that give creators a cut of the revenue from ads appearing next to their content. Suddenly, the people posting horrible content are making money off of it, meaning that money from brands is lining the pockets of bigots and extremists.
However, the Instagram situation will likely die down once Instagram figures out how to de-monetize (or better yet, remove) problematic content, as the YouTube situation did in 2017. Musk has created a bigger problem: through his own actions, he’s making X a fundamentally toxic place for advertisers. In recent weeks, he’s Tweeted approval of anti-Semitic conspiracies, defamed the Anti-Defamation League, and endorsed the Pizzagate conspiracy, adding to a long history of terrible Tweets. Now, by advertising on Twitter, brands aren’t just potentially funding problematic content, but a fundamentally problematic platform run by a fundamentally problematic man. Thankfully, many brands are not only stopping advertising (threatening $75 million in ad revenue), but stopping Tweeting altogether.1
This is something I’ve been struggling with. I don’t want to support X and its leadership. But at the same time, I’m a young academic and Twitter is still where a lot of other academics are. Selfishly, and perhaps self-delusively, I hope that sharing my work on Twitter outweighs the harm that I do by staying on the platform—I do block ads, so I’m hopefully not putting any pennies into Musk’s pocket. But, even though I’m not monetarily supporting it, I don’t want to implicitly endorse Musk and his views. I hope that my academic community (and, let’s be real, the F1 meme community) converges somewhere else. But right now, that’s not Threads (EU residents have access issues), Mastodon (too federated), or Bluesky (too small). I don’t know if any of these will become what Twitter used to be, but I hope academia will eventually re-form on one. And I hope there are F1 memes there, too.
Should I still be on Twitter? Are you? If you left, what was the final straw? Comments are below.
I Reckon…
That we also need to be worried about the content of social media ads (looking at you, Meta).
1. Last night at a New York Times event, he started cussing out advertisers, which certainly isn’t going to build goodwill for them to come back.
Thumbnail generated by DALL-E 3 via ChatGPT with the prompt “An abstract painting themed around 'Bad algorithms, scary platforms, great music.' The painting should feature a cool color palette with shades of blue, green, and purple, symbolizing the digital and technological aspects of algorithms and platforms. The composition should include abstract shapes and lines that evoke a sense of unease or fear, representing the 'bad' and 'scary' elements. Interspersed should be vibrant, flowing elements or musical notes to represent 'great music,' creating a contrast within the artwork.”