Welcome back to the Ethical Reckoner! Thank you to everyone who filled out the reader survey last week. It was heartwarming to read the lovely bits of feedback, and I really appreciate that people put in the time to share their thoughts.
Based on the survey, I’m making a couple of changes. First, as you might be able to guess from the fact that it’s a Monday, I’m trying a new day for the newsletter! A lot of you said you want to get it on Mondays, and in the mornings, so while I have a lot of timezones to juggle, I’ll try to make sure it lands in your inbox so you can use it to procrastinate actually starting the work week when you get to your desk. Also, I was flattered by how many of you said you miss the longform Ethical Reckoners. And I miss them too! I had planned to keep the ER monthly, but as the lack of a December ER showed, I did not keep that promise (holidays, yes, but also the extra time a bonus newsletter takes). To keep the ERs flowing along with the WRs, the ER will replace the final WR of the month. Better hope those are slow news weeks…1
Anyway, that being said, let’s get to ethical reckoning.
Pop quiz!
1. Should children be safer online?
a. yes
b. no
That was the easy one, and I hope you all got it right (right?). Because the second one’s a doozy…
2. How?
*record scratch*
Children’s safety online is a huge concern, and one that’s been getting a lot of attention recently because everybody seems to have an opinion. Social media sites think it should be parents’ responsibility to monitor what their kids get up to online, and are investing in things like parental supervision tools and guides for parents with “Conversation Starters” to “ask your teen to ensure they’re having a positive experience.”2
Parents, on the other hand, want more legislative action to protect their kids. And legislators have picked up that mantle (because, as the hopefully 100% correct rate for #1 indicates, it’s easy to agree that kids should be safer online). Many US states are passing laws mandating site design changes to prevent kids from accessing harmful material or requiring that minors get parental consent before signing up, and 33 have signed on to a lawsuit against Meta alleging the company knowingly violated children’s privacy and targeted kids with addictive features.
National governments are also taking action. The US is debating the Kids Online Safety Act3 (KOSA) that would, among other things, require strong default privacy protections and reporting mechanisms, create transparency and safety audit standards, and place a duty of care on platforms to protect children from harm. This last clause has been controversial because interpreting specifically what causes harm is left up to the state attorneys general, and people are concerned that some states would use this measure to target LGBTQ+ content online (with the Heritage Foundation and Senator Marsha Blackburn explicitly saying it should be used to block trans content).
KOSA would not require explicit age verification, unlike the UK’s Online Safety Act, which became law in September. Sites frequented by minors—from social media sites to pornography sites to Wikipedia—will face criminal penalties if they don’t verify users’ ages and prevent minors from accessing “harmful” content, which includes, among other things, pornography, self-harm/eating disorder/suicide promotion, hate speech, cyberbullying, and graphic violence. And inputting a birthday or checking a box saying “I am over 18” won’t cut it—the verification has to be “reliable,” which most people interpret as requiring ID verification or biometric analysis. Wikipedia has refused to implement age verification, and the government has said that only the “highest risk” services will be required to use it, but as the bill is written, everyone technically has to. The law also requires scanning users’ private messages to ensure they aren’t transmitting illegal material, which is impossible to do for encrypted messages without breaking the encryption or installing client-side monitoring software that renders the encryption useless. WhatsApp and Signal threatened to pull out of the UK over it, and the government basically said that they won’t enforce it right now because the technology doesn’t exist (as Apple found out in 2022 when they tried and failed to build privacy-preserving client-side scanning for iCloud)—but it remains in the bill. Overall, online providers are very leery about relying on the goodwill of the government when enshrined in law are provisions that could leave them with criminal liability.
In China, which has a very different attitude towards privacy and a different governing system, there’s no such wiggle room. The Regulation on the Protection of Minors Online, which took effect on January 1, issues a bunch of relatively vague provisions requiring online service providers to prevent cyberbullying and the transmission of harmful material, protect children’s private information, and prevent addiction. Video game time is already regulated (although with questionable efficacy), but newly proposed legislation for a “Minors’ Mode” for smartphones would cap total smartphone use at between 40 minutes and 2 hours a day depending on the child’s age, and prohibit use (with some exceptions) between 10pm and 6am. It would go beyond the Online Safety Act’s prohibition on harmful content by creating “dedicated content pools” for minors containing specified “age-appropriate content” for each age bracket. For instance, children under 3 years old would get primarily audio content of children’s songs and elementary education, while kids 8-11 years old would get “general education, popular science, life skills, entertainment content with positive guidance, and news information suited to the cognitive abilities of that age range, etc.” Sounds like a blast.
These laws all, to varying extents, involve limiting what information kids can access online. And while kids should indisputably be protected from predators and bullying online, I worry these laws are wrapping children in bubble wrap in a potentially detrimental way. The Canadian Pediatric Society recently lamented the decline of opportunities for outdoor “risky play” and recommended that children be encouraged to engage in outdoor free play, even if it involves some risk of injury, for its benefits to physical, mental, and social-emotional health. Shouldn’t kids be allowed some “risky browsing”? I certainly did my fair share of slightly sketchy stuff online when I was a kid. And granted, the Internet was a lot different then—less social media, for one—but I think the online equivalent of scraped knees and twisted ankles helped me develop a sense of what the Internet is and the hazards that exist on it, just like playing Cops and Robbers in the woods with neighborhood kids probably helped me develop social skills and confidence as I negotiated alliances and climbed trees to escape my pursuers (and while I had many skinned elbows, I only got stuck once). Online, I probably read far too many Buzzfeed listicles, but my writing skills were honed in RPG forums that, yes, sometimes had edgy content. And as I was questioning my identity, being able to find information and resources online was invaluable. If I only had access to a content silo that the state—or a bigoted attorney general—decided was ok? It would have been a struggle.
There’s a lot about the Internet that’s bad for kids, like algorithms that feed kids pro-self-harm content when they’re depressed or social media encouraging FOMO and facilitating cyberbullying. But research indicates that to a certain extent, screen time actually benefits kids’ psychosocial functioning, meaning that they have better “social and emotional well-being,” and while social media can be really bad for vulnerable groups, it has a limited negative effect on adolescent well-being as a whole (less than smoking weed, bullying, or even wearing glasses, and approximately as much as eating potatoes). And while there have been tragic cases of online child grooming that make headlines, it’s like worrying that your child will be kidnapped while playing outside (cases of which have also certainly made headlines)—technically possible, but not likely enough that we should lock all kids inside to prevent it.
A lot of what’s bad about the Internet is how others interact with us on it, and there are steps that we can and should take to make that safer, but these laws also target how kids will be able to use it to access information, and that’s dangerous. I guess that what I’m saying is that the Internet in general is less risky for children than it seems, and that giving kids leeway to explore online is as important to their development in the modern era as letting them explore outside. I agree with a lot of the provisions of these laws—kids should absolutely have stronger default privacy settings on their social media accounts, for example—but trying to control what information they have access to is going to cause harm, not mitigate it, especially when what’s considered “harmful” is vulnerable to political manipulation. Kids should have just as much right to information access as adults, but in the rush of politicians falling over each other to be the ones seen as doing the most to protect kids, we seem to have forgotten that.
So, going back to our pop quiz. Should kids be safer online?
❌ yes
❌ no
✅ it depends.
It was not a slow news week, and I’m sure I’ll have more to say in the next WR. I am in awe of Taylor Swift’s power, but we also shouldn’t need a celebrity to go through something traumatic to get deepfake regulations.
If anyone has ever successfully initiated a conversation with a teen with one of these, please do let me know.
Should it be Kids’? Probably not Kid’s, but I feel like there needs to be an apostrophe somewhere.
Thumbnail generated by DALL-E 3 via ChatGPT with the prompt “Make an abstract Impressionist painting representing the concept of children's safety online.”
To the person who said they don’t get the art in the email: Substack says the images are the “social previews,” but I’m working on it!