What’s Next for AI? 3 Policy Moves Expected from Trump’s Return

It’s been over a week since Donald Trump’s unexpected return to the White House. While Democrats begin to regroup, the new administration’s AI agenda is already taking shape.

One of Trump’s key messages during the campaign was his commitment to dismantling what he calls “overly restrictive” AI regulations. The Republican Party Platform leaves little ambiguity:

“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”

— 2024 Republican Party Platform

In this special edition of Aurora Insights, we explore the possible future of AI regulation in the U.S. by tracing back to Trump’s first term for clues on what might come next.

Donald Trump returns to office in a stunning show of force, capturing both the Electoral College and popular vote.

His first term offers some insight into what might come next.

/E.O. 13859 and Trump’s First Term

In May 2018, the Trump administration hosted its “White House Summit on Artificial Intelligence for American Industry,” featuring senior administration officials, academic leaders, and business executives. Organized by the Office of Science and Technology Policy (OSTP), the gathering drew prominent figures from the NSF, USDA, DOC, DOE, and more. Their focus: removing “barriers to AI innovation in the United States.”

2018 was a banner year for AI development. Google’s DeepMind was king, and breakthroughs in medical prediction, scheduling assistants, and financial security dominated headlines, building on the deep-learning advances of the early 2010s. These and many more were notable discussion topics for the agency heads and officials looking to accommodate Trump’s “America First” approach to innovation and development.

The product of this gathering, and the landmark AI moment of Trump’s first term, came in February 2019 with Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence,” which launched the American AI Initiative. The initiative set out five primary goals: increasing federal AI research funding, expanding access to government datasets, creating technical standards, developing an AI-skilled workforce, and establishing international AI partnerships.

Ever-bullish on AI development, the Trump administration pledged to double AI research investment and issued formal federal guidance on AI principles. For the first time, federal agencies were directed to deploy AI in areas like medical diagnostics and energy research. NIST even began developing formal technical standards (more on that soon) that would shape, and ultimately unify, the US government’s approach to AI governance around a single posture: “pro-business.”

/From NAIIO to AI.gov

The bridge between the Trump and Biden administrations came with the creation of the National Artificial Intelligence Initiative Office (NAIIO). Devised under the Trump administration and established in law through the FY2021 National Defense Authorization Act, the NAIIO helped centralize and consolidate federal AI policy coordination, reinforcing the United States’ role as a global leader in AI. Interestingly, even in a time of record division and distrust, the law remained one of the few bipartisan bright spots for the outgoing Congress.

By 2021, however, other nations were diverging significantly in their approaches to regulating AI systems. Early Large Language Models (LLMs) and other generative AI applications were entering the fray, and while China issued measures such as the “Ethical Norms for New Generation AI” to establish formal governance channels, the incoming Biden administration struck a markedly different tone.

Rather than scrapping what the Trump administration had built, senior officials in the Biden administration expanded the NAIIO’s focus to include more specific attention to risk management and civil rights protections. In 2022, the White House’s Blueprint for an AI Bill of Rights outlined specific protections for safety, privacy, and fairness in AI applications. Coupled with NIST’s AI Risk Management Framework, Biden’s policies culminated in a comprehensive 2023 executive order codifying requirements for rigorous testing, risk assessment, and transparency standards for AI systems used by federal agencies.

It’s this significant step that brings us to the present day. While the Biden administration has taken admirable steps to address the challenges and risks of widescale AI usage, no binding federal AI legislation has been enacted, even as landmark laws like the EU’s AI Act begin to take effect. That means the incoming administration will have a great deal of power over how US AI policy is crafted going forward.

/3 Key Predictions in Trump’s Second Term

1. AI Becomes “America First”

President Trump’s “America First” AI approach is expected to prioritize competition, especially with China, by reducing regulatory constraints. Unlike Europe’s comprehensive AI Act, which requires stringent risk assessments, Trump’s framework is likely to focus on accelerating development through a leaner, market-driven approach to innovation. Expect revisions in the form of future executive orders aimed specifically at bolstering US competitiveness in the space.

2. Trump Recasts “Innovation”

The Trump administration may target the AI Safety Institute (AISI) housed within NIST, viewing safety regulation as a potential obstacle to rapid AI advancement. Advocacy groups like Americans for Responsible Innovation argue that these safeguards are essential to prevent the adverse impacts of AI technology. Trump’s policy will likely shift safety to a secondary priority while selectively keeping certain protections to mitigate the risks of unchecked development.

3. Many Things Don’t Change

Despite expected policy shifts, Trump’s AI strategy may retain certain elements introduced by the Biden administration. Given the uniquely bipartisan nature of many components of the American AI apparatus (NIST standards, the NAIIO, etc.), much is likely to stay the same. Some of the Biden administration’s most lasting changes, including the Office of Management and Budget’s M-24-10 memorandum that introduced Chief AI Officers into federal agencies, will likely be reshaped by the Trump administration’s shifting priorities, but many of the positions may stay in place entirely.

/Implications for Us

It’s been four years without Donald Trump in the Oval Office. During that time, the AI landscape has changed, ballooning into the hype cyclone many of us are living through today. Since the release of ChatGPT in 2022, leading AI developers, including OpenAI and Anthropic, have operated within an increasingly regulated environment focused on safety, transparency, and anti-discrimination. These companies entered voluntary agreements with federal entities, granting model access for safety evaluations, a response to mounting expectations for responsible AI practices.

With Trump’s return, developers may see reduced compliance demands, potentially easing deployment timelines. That may prove a double-edged sword for leading players and innovators, however. OpenAI’s struggles with its own safety initiatives are well-documented (the head of ‘AGI Readiness’ resigned in October), and looser structural demands may leave organizations freer to misuse data, among a host of other poor consumer outcomes.

While Trump’s administration seems poised to adjust AI policy significantly, a degree of continuity is likely. Developers, regulators, and business leaders should prepare for a regulatory landscape that will favor accelerated innovation, while still including some safeguards to manage the broader societal impacts of AI. Yes, that means the future for young professionals in the US AI policy space will also change dramatically. No doubt, we will continue to live through disruptive times. 
