ER 12: On TikTok, Twitter, and Trust
Or, what should we make of the social media moment that we’re in?
Welcome back to the Ethical Reckoner. It’s been a while (moving across the ocean and starting a PhD program will do that to you). But it’s good to be back. Now, let’s have a chat about social media.
TikTok has been making lots of headlines lately. Sometimes it’s for fun reasons: dance trends, butter boards, Santa Claus getting verified. Recently, though, the major storyline in the US has been national security: the FBI director has “a number of concerns” about the app, state legislatures and the military are banning personnel from having it on their phones, and a bipartisan bill in the US Congress would ban the app altogether.
The reason for the histrionics is that TikTok is owned by the Chinese company ByteDance. TikTok is the international version of ByteDance’s Chinese app, Douyin (literally “shaking sound”). The two apps aren’t exactly the same—for example, Douyin has much more advanced e-commerce features—but both focus on short-form video content. TikTok has a full-screen, vertically scrolling display that shows users an infinite feed of videos they can swipe through at will. Through this feed, it collects data on the videos you see, how long you look at them, and your likes and comments, but it also collects things like your location, your messages, anything else you type or paste in the app, anything you do in its in-app browser, and potentially biometric faceprints and voiceprints.
It does all this to a) serve you videos that will keep you on the platform as long as possible, and b) serve you targeted ads. The first is in service of the second: the more ads you interact with, the more money TikTok makes. And TikTok is incredibly good at both, quickly becoming the most popular app in the world on the strength of its engagingness (or what some would call its addictiveness). TikTok is approaching 2 billion monthly active users, and the average teenager spends 91 minutes on the app every day (adults average 45 minutes). TikTok made $4 billion last year and is projected to gross $10 billion this year.
TikTok is laughing all the way to the bank, but US politicians aren’t. The national security arguments against TikTok are that the Chinese Communist Party could access this data and use it to surveil Americans, or that the CCP could pressure ByteDance to tweak the content recommendation algorithm to promote destabilizing content. These concerns are somewhat legitimate, and because of that, TikTok is taking measures to mitigate them—all US traffic is being routed through Oracle servers in the US, and TikTok is migrating US user data out of its own data centers and onto Oracle’s. Additionally, the US-based Oracle will be auditing TikTok’s algorithm and content moderation policies. There are still concerns that US user data will keep leaking to China, but in general these seem like good measures. I’m not saying that there’s zero national security risk from TikTok, but its newsroom is right when it says that TikTok is “among the most scrutinized platforms from a security standpoint,” and I believe that risks will be identified and mitigated, because TikTok above all else wants to make money, and the US is a major pool of users that it would lose access to if banned.
I want to argue, though, that the biggest risks of TikTok are to individuals,* not national security. Overshadowed by this week’s national security headlines are stories like this one: “Report: TikTok boosts posts about eating disorders, suicide.”
These headlines emerged from a Center for Countering Digital Hate study, which set up accounts posing as 13-year-old girls and found that eating disorder- and suicide-related content was recommended within minutes of scrolling the feed. After the accounts interacted with videos about body image, mental health, and eating disorders, body image-related videos were displayed every 83 seconds, and many of them promoted harmful weight loss strategies.
Question: where have we seen this before? Just last year, when it was revealed that Instagram can make body image issues worse for teenage girls and can trigger suicidal thoughts in teens. This is a byproduct of platforms algorithmically optimizing for engagement: whatever users interact with is what the algorithm will feed them more of, even if that content is harmful. And so, these are not TikTok-specific concerns; TikTok is not uniquely harmful. At its core, TikTok is no different from other social media platforms. It has merely taken the trend towards algorithmically mediated, maximally engaging content the farthest.

Social media content used to be served chronologically, and platforms used to claim they were trying to “connect people” by showing you what your friends and family were doing. Then, in 2011, Facebook made the News Feed display “relevant” content at the top, and we were off to the races. In 2016, Twitter removed the chronological timeline (though it eventually brought it back as an option). Instagram also used to be chronological, but has been trending more and more algorithmic. Earlier in 2022, it started rolling out a massive change to amplify short video content from recommended accounts (see: what TikTok does). It had to roll back the changes after users protested that they couldn’t see what their friends were posting, but its CEO said, “I need to be honest: more and more of Instagram is going to become video over time”… whether users like it or not. TikTok defines itself as an “entertainment platform,” not a social network, and so the gloss of “connecting people” is fading: social media is becoming more nakedly about connecting eyeballs to content, and thus to ads, regardless of the consequences for individuals. TikTok’s format and algorithm are not unique; they can and will be emulated.
When everything is becoming TikTok, banning TikTok won’t make a difference. Right now, TikTok is the worst platform for users because it is the most addictive.** Because it is the most addictive, it is the most profitable, and because it is the most profitable, everyone else wants to emulate it. New products like Instagram Reels, YouTube Shorts, Twitter Fleets (and maybe a Vine revival), and Snapchat Spotlight, along with algorithmic changes to existing products and feeds, are all attempts to cash in on the short-form video format, and some of them are working—YouTube Shorts has 1.5 billion monthly users. Existing platforms all have the ability to amplify harmful content, and this will only grow as they trend away from family-and-friends content and towards increased algorithmic mediation.
So, TikTok isn’t unique in terms of its product and the impact it has on individuals. But again, what about its owners? Well, TikTok isn’t even different from other platforms in terms of the misgivings we should have about who controls it. Concerned about ByteDance manipulating TikTok’s algorithm to promote content harmful to democracy? Zuckerberg actively decided to keep Facebook’s algorithm promoting divisive and hateful content,*** including misinformation about voter fraud. Worried that your personal data is being leaked? Instagram’s in-app browser tracks just as much as TikTok’s. Convinced that the CCP is quashing freedom of speech on TikTok? Elon Musk has apparently started purging journalists he doesn’t like from Twitter. A lot of the things we’re worried TikTok will do are already being done by other platforms.
TikTok is clearly getting the most scrutiny of any platform right now because of the geopolitical tensions between the US and China.**** However, what we should be most worried about is not its national security risks, but its risks to individuals. Because of that, we need to worry about every single other platform that is doing the exact same thing as TikTok. Every platform is a conduit for its overlord’s values, whether that be surveillance capitalism, "QAnonish reactionary[ism]," or… something else.
What could that “something else” be? Could platforms embody not exploitative, but empowering values? Mastodon is trying to be a community-centered, decentralized, demonetized platform for constructive connection. Post is trying to be a space to discover and discuss news “without the toxicity.” Even Substack is trying to portray itself as a place for “private social networks” where no “singular figure can hold dictatorial influence.”
All of these platforms are small, and I’m not saying that any of them is the solution to the myriad issues that social media causes and exacerbates—that’s a much larger conversation. But banning TikTok***** is not the solution, because none of these problems are unique to TikTok (or to Elon Musk buying Twitter, or to any other single action or moment). They are inherent to the way social media platforms currently operate. Under the current model, we cannot trust social media to act in the best interests of our society or ourselves, and changing that will require a massive reordering. Maybe a catastrophe of the necessary magnitude will occur and trigger a mass migration to new platforms. Maybe the juggernauts all crowding into TikTok’s competitive niche will starve some of their number out. But ultimately, what we need is not to transplant existing dynamics to new companies. What we need is a paradigm shift in the way we view social media, an interrogation of what role we want it to play in our lives, and perhaps even an optimistic dialogue about how social media can impact our lives and society for the better.
* I use the term “individuals” rather than “users” because a lot of these issues also have spillover effects on non-users—teenagers’ mental health crises affect their loved ones, a degraded democracy affects all of us, and so on.
** In fact, as part of efforts to curb “Internet addiction,” children under 14 are limited to 40 minutes a day on Douyin.
*** Context: Zuckerberg authorized temporary algorithmic changes after the 2020 US elections that ranked authoritative news sources higher… and then rolled them back because they decreased engagement, even though they vastly improved the quality of information users got.
**** The EU is also examining TikTok’s data practices, but without the rhetorical bombast of American politicians.
***** Banning TikTok-like platforms might be a step, but it would be practically unworkable because of free speech concerns and the difficulty of defining “TikTok-like” in the first place; plus, at best, it puts us back where we started before TikTok’s rise. Also, I have to wonder if one of the motivations behind “ban TikTok” is to let US-owned companies capture its revenue.
Thumbnail generated by DALL-E with the prompt “tiktok, twitter, trust, abstract oil painting”.
Thanks for reading.
Check out the archive of past Ethical Reckoner issues here. If you haven’t already, click the big “subscribe” button to get the Ethical Reckoner, and help the ER grow by spamming all your friends with the “share” button.
Any suggestions or comments? Let me know on Mastodon @emmiehine@dair-community.social, on Post @EmmieHine, or (if you must) on Twitter @EmmieHine.