ER 34: On the Future of American AI
Or, on the future of American AI... and why you should care about it
Welcome back to the Ethical Reckoner. You may have noticed a few things about AI in the news recently. This week, we’re diving in to discuss what’s going on with the new US administration and AI, where we’re going from here, and most importantly, why you should care about it.
This edition of the Ethical Reckoner is brought to you by… chai in Mumbai
For the last week, my timelines have been filled with breathless AI news. Some of it has been technical and some of it has been policy. We’re going to talk mostly about policy, but we’ll get into the technical news as well, because it bears on how the US will address AI ethics issues like bias and discrimination, as well as AI safety and international relations.
So. Changing administration, changing AI policy. I’ve written two papers on US and Chinese AI policy, but it’s too early to write a third. Still, I wanted to jot down some thoughts on how the second Trump administration’s AI policy is taking shape, especially in light of new developments from China. And, more importantly, I want to convey why you, dear reader, should care, when you may have a lot of other things on your mind right now.
So, here’s a quick primer on where the US is/has been in AI governance:
The US has mostly been regulating AI through the executive branch (because legislating in the US is hard).
During his first term, Trump signed an executive order on “maintaining American leadership in AI.” It was mostly ignored.
The Trump administration’s rhetoric around AI heavily focused on promoting innovation, AI with “American values,” and putting America’s interests above those of our allies.
The Biden administration was quite active in AI regulation. It issued a bunch of reports and a few Executive Orders, including one on “safe, secure, and trustworthy AI.”
It also issued the Blueprint for an AI Bill of Rights, which painted a vision of a fundamental rights-based approach to AI that focused on protecting communities from AI-related harms and included America and its allies.
You should care about being protected from AI-related harms. I believe that AI can do great things; I’m optimistic about technology.1 But it can also do bad things. For example:
Your self-driving car could try to drive over a fire hose in active use.
An LLM could be used by cyber-criminals to create malware and phishing scams (and genAI can also clone a loved one’s voice to scam you).
A biased algorithm could under-diagnose conditions in your racial group.
And these were just from a cursory glance at an incidents database!
The Biden administration tried to strike a balance between promoting development and protecting people. The EO on safe, secure, and trustworthy AI didn’t put many burdens on private companies: they just had to report massive computing clusters, disclose when they were training advanced models, and share the results of their safety tests. Otherwise, the EO mostly applied to the federal government, requiring agencies to create reports on how they could better use AI and promote safe and responsible development (including addressing bias and discrimination) within their agency remits.
It also proposed something called the National AI Research Resource (NAIRR), which would help increase access to data and compute for institutions and researchers with fewer resources.
The new Trump admin is going all in on innovation, but it’s not just de-prioritizing equity topics—it’s actively rolling them back. On his first day in office, Trump revoked Biden’s EO. This means that:2
Big tech firms don’t have to report on frontier model training, red-teaming, and compute cluster deployment.
Visa reform efforts trying to bring more talent into the US are at risk.3
Guidelines for government procurement of AI and making sure AI respects civil rights might get scrapped.
The NAIRR is probably safe because it’s being held up as a way to “accelerate groundbreaking research.”
The new stated policy is that the US should be “the global leader” in AI and that AI’s purpose is to “secure a brighter future for all Americans” (note this doesn’t include the US’s allies).
AI Ethics
After repealing Biden’s EO, Trump signed a new EO on “removing barriers to American leadership in AI.” This still seems like a placeholder (it’s light on specifics compared to the 36-page Biden EO) but offers some insights into where things are heading. It calls on agencies to review and “suspend, revise, or rescind” actions that go against the new “policy of the United States,” which is to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Which sounds ok, but Section 1 states that “to maintain this leadership, we must develop AI systems that are free from ideological bias or engineered social agendas” (emphasis mine). While we don’t have an exact interpretation of this, the fact that it treats an order on “safe, secure, and trustworthy AI” as a “barrier[] to American AI innovation” does not bode well for equitable and representative AI. Conservatives got up in arms about Google Gemini’s initial excessive wokeness (and, to be fair, generating diverse Nazis is beyond the pale), but Elon Musk has also taken aim at the general industry practice of training LLMs to output responses less likely to cause offense to specific groups.
These models are trained on huge swathes of the Internet, which is not known for being the friendliest of places for some groups. These biases are then reflected in unfiltered model outputs. Companies do things like reinforcement learning from human feedback (RLHF)4 and response filtering to mitigate this, which helps ensure that generative AI is better for more people to use and reflective of the society that we want to see. Or at least, the society that we wanted to see. Ideologically, the winds are turning away from debiasing and content filtering. In fact, de-biasing is now instilling “ideological bias.” X’s AI chatbot, Grok, is premised on being “politically unbiased” (or based, depending on who you ask), and the EO clearly wants other companies to move in this direction. But this creates PR conundrums. Musk doesn’t care if his chatbot can generate pictures of politicians doing insane things or Mickey Mouse with a machine gun or whatever, but Google and Microsoft and OpenAI sure do. Going forward, expect to see a push-pull between the “anti-woke” rhetoric (which some tech companies may adopt as a smoke screen) and companies’ actual actions, which may include quietly filtering more than any public statements would suggest.
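To make “response filtering” concrete, here’s a minimal sketch of the idea: generate a response, score it with a safety classifier, and regenerate (or refuse) if it scores too high. Everything here is an illustrative assumption; the function names, threshold, and retry loop are not any particular company’s pipeline.

```python
# Minimal sketch of post-hoc response filtering (illustrative only; real
# systems use trained safety classifiers on top of RLHF-tuned models).
from typing import Callable

def filtered_generate(
    generate: Callable[[str], str],      # stand-in for the base language model
    harm_score: Callable[[str], float],  # safety classifier: 0.0 (benign) to 1.0 (harmful)
    prompt: str,
    threshold: float = 0.5,              # assumed cutoff; tuned in practice
    max_retries: int = 3,
) -> str:
    """Regenerate until a response passes the classifier; refuse otherwise."""
    for _ in range(max_retries):
        response = generate(prompt)
        if harm_score(response) < threshold:
            return response
    return "Sorry, I can't help with that."  # fallback refusal

# Toy usage with stub components:
if __name__ == "__main__":
    toy_model = lambda p: f"Echo: {p}"  # pretend LLM
    toy_scorer = lambda r: 0.1          # pretend everything is benign
    print(filtered_generate(toy_model, toy_scorer, "Hello"))
```

The point is that this kind of filter sits outside the model itself, which is part of why companies can quietly tighten or loosen it without retraining anything, and without saying anything publicly.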
AI Safety
Beyond bias, this is also probably bad for AI safety. The reporting requirements weren’t much, but they were something, and now they’re gone. The administration may rely on its increasingly tight ties with Silicon Valley to keep tabs on what’s going on, but this sort of informal, personalist reporting is not how you do things if you want to be sure about safety. Under the Biden administration, NIST also released an AI Risk Management Framework offering guidance on how to make sure AI is trustworthy and safe. Biden’s EO also directed the Office of Management and Budget to release reports guiding agencies on how to advance responsible innovation, address AI risks, and acquire AI products responsibly. Trump’s EO requires that these memos be revisited and revised. Most likely, the provisions on civil rights, bias, discrimination, and supplier diversity will be eliminated, but broader provisions on safety could also be on the chopping block.
International Relations & Big Tech
Finally, we come to the international relations portion of the debate. It’s a good thing that this issue is running slightly late (Mumbai has been wonderful for morale if not deadlines) because by now, you’re probably aware that the stock market is having a bit of a rough start to the week. This is because a Chinese start-up called DeepSeek has released two open-source models that rival the state-of-the-art proprietary models from US Big Tech companies. Now, everyone is basically freaking out that US Big Tech is barking up the wrong tree, that semiconductor export controls have failed, and that China is “winning” the “AI race.” However, I think these concerns are overblown. First, there’s analysis suggesting that the export controls haven’t failed: DeepSeek’s progress was already underway before the controls took effect and would have happened either way, and the controls may still stall its further progress. Second, basically everyone is trying to figure out how DeepSeek did it; Meta has reportedly set up four war rooms to analyze it. And because DeepSeek’s models are open-source,5 their approach is more easily replicable. Big Tech companies still have more data and computing resources than DeepSeek, and they’ll leverage them (and their in-house talent) as much as they can, likely making significant progress especially as export controls make themselves felt. The US open-source AI community is also making progress; a UC Berkeley team released a model called Sky-T1 that’s competitive with OpenAI’s early o1 model and can be trained for just $450.
Regardless, this looks bad on the geopolitical front; US Big Tech has egg on its face. Recently, we’ve been seeing Big Tech CEOs try to get as close as they can to Trump (see: million-dollar inauguration donations), the bet being that the closer your personal ties are and the more he likes you, the better off your company will be. This was on full display when OpenAI’s Sam Altman, Oracle’s Larry Ellison, and SoftBank’s Masayoshi Son announced “Project Stargate,” a massive AI infrastructure investment, at the White House with Trump last week. There is no federal component to Stargate. The government has nothing to do with it. But, as Kara Swisher and Scott Galloway argued on Pivot, part of the CEO charm offensive is taking existing initiatives (Stargate has been in the works for a long time) and letting Trump take credit for them so that your company is protected.
China has a long history of appointing “national champions” to lead tech development. Basically, those (typically large) companies have the blessing and support of the government to lead specific technology development. But recently, China has been moving away from this model, and start-ups like DeepSeek are likely to accelerate that trend. At the same time, it (ironically) seems like the US is moving towards a model like this where specific Big Tech companies have the government’s support. And right now, US open-source AI is being led by Meta (although start-ups and academic groups are certainly playing a role). Regardless, US tech companies are going to be used as a vehicle for national power and thus will have the backing of the US government to pursue innovation full-throttle and beat China. Expect a tightening of ties between Big Tech and the administration, but also potentially interpersonal tensions flaring up as personalities and motivations clash.
So. Why should you care about where US AI governance is going? Even if you and your loved ones aren’t affected by any AI-related incidents (which are now more likely with fewer ethical and safety requirements), AI is going to drive a lot of the economy and geopolitics over the next few years. (And Europe, don’t think you’re immune.) Buckle up.
1. I don’t identify as a techno-optimist because of the connotations; I’ve written about the dangers of techno-optimism and techno-solutionism.
2. Biden’s EO was 36 pages and had 150 requirements; this is just a selection.
3. This has been a whole debate amongst the right; stay tuned for more.
4. Basically, you have humans look at responses and tell the model which ones are good so that it learns human preferences (a minimal sketch of this follows these footnotes).
5. There’s a lot of nuance to this term that we can’t get into here, but basically: you can look at the code.
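Since footnote 4 does a lot of work in this discussion, here’s a minimal sketch of the reward-modeling step at the heart of RLHF, assuming a standard pairwise (Bradley–Terry-style) preference loss. The names and numbers are illustrative, not any lab’s actual pipeline; real systems go on to fine-tune the LLM against the learned reward model.

```python
# Minimal sketch of the RLHF reward-modeling step: humans pick the better of
# two responses, and a reward model is trained so the chosen one scores higher.
# Illustrative only; real pipelines then optimize the LLM against this reward.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss: pushes reward(chosen) above reward(rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scores for a batch of (chosen, rejected) response pairs.
if __name__ == "__main__":
    chosen = torch.tensor([1.2, 0.3])    # reward scores for human-preferred responses
    rejected = torch.tensor([0.4, 0.9])  # reward scores for dispreferred responses
    print(preference_loss(chosen, rejected))  # smaller when chosen consistently wins
```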
Thumbnail generated by ChatGPT with the prompt “Generate an abstract brushy impressionist painting on the subject of an uncertain future”.