States Draw the Line on AI Preemption
The Public AI Brief · Issue No. 24
The fight over who gets to regulate AI moved from threat to reality this week. Following President Trump’s executive order blocking state AI laws, 24 state attorneys general told the Federal Communications Commission it lacks authority to preempt state protections. California’s Attorney General Rob Bonta was blunt: the FCC inquiry “follows the troubling pattern of the Trump Administration attempting to limit states’ ability to protect their residents.”
This isn’t posturing. States have been the only governments actually regulating AI while Congress has failed to pass a single comprehensive bill. They’ve enacted laws protecting children from AI chatbot harm, prohibiting deepfakes in elections, and requiring disclosure when consumers interact with AI systems. The executive order would wipe those protections away, leaving Americans, in the words of 40 attorneys general, “entirely unprotected from the potential harms of AI.”
This Week’s Key Developments:
24 state AGs challenge FCC authority to preempt state AI laws
StateScoop reports state leaders, civil rights groups call order “dangerous”
AI tops state CIO priorities for first time, overtaking cybersecurity
Treasury’s viral job posting requires 10-page Gatsby analysis for AI position
Federal agencies get procurement guardrails for buying AI tools
Federal
Congress’s refusal to regulate AI created the vacuum states rushed to fill. Now the administration wants to prevent states from acting without offering federal protections in return. The Department of Justice has 30 days to establish an AI Litigation Task Force whose sole purpose is challenging state laws. The Commerce Department must identify “onerous” state provisions within 90 days. The FTC has the same timeline to issue guidance on when state laws requiring truthful AI outputs might be preempted as “deceptive.”
The Office of Management and Budget set new procurement guardrails this week, directing agencies to ensure large language models they purchase are “truth seeking and ideologically neutral.” The memo says acquisition policies must prevent what OMB calls “biased” outputs from AI tools. The procurement guardrails arrive as agencies navigate contradictory workforce signals: the Office of Personnel Management launched a “Tech Force” initiative seeking early-career tech talent even as the IRS has shed at least 2,000 technology employees.
The Gatsby Test
The Treasury Department posted a job opening this week for an IT Specialist focused on artificial intelligence. The application requirements went viral on social media: write a 10-page analysis of metaphors in “The Great Gatsby,” convert it to a 200-word executive summary, translate both into Spanish and Mandarin, create a comparison table with three other novels, then rewrite the entire essay as a scientific paper with an abstract.
The posting appears designed to test whether applicants can effectively use AI tools. But as an AI expert told Nextgov, the skills being measured don’t align with the technical strategy and architecture work the position actually requires. The timing is awkward: President Trump hosted a “Great Gatsby”-themed Halloween party at Mar-a-Lago during the government shutdown, and Treasury is recruiting for AI expertise while simultaneously losing thousands of technology workers. The disconnect between stated workforce needs and actual hiring practices captures the federal government’s broader struggle to articulate what AI leadership actually requires.
Meanwhile, Nextgov reports the White House convened companies and researchers to discuss the Genesis Mission, the administration’s initiative connecting AI capabilities with scientific research. Radical AI’s CEO, who participated in the meeting, described its focus as goal-oriented and partnership-driven, centered on the Genesis Mission and the ways it can change how AI and science work together.
State
States Push Back Against Federal Preemption
State leaders and civil rights groups are responding forcefully to what they call a “dangerous” executive order banning state AI laws. The order, which comes after months of protest from state lawmakers, attorneys general, and civil rights organizations, potentially sets the stage for widespread legal challenges. The National Association of State Chief Information Officers released a statement in May expressing concern about the proposal’s impact on work states have done to regulate AI in the absence of federal laws.
Twenty-four state attorneys general filed comments with the FCC this week arguing the agency lacks statutory authority to preempt state AI laws. The letter responds to an FCC notice of inquiry from September suggesting the commission would use its regulatory authority to override state protections. The AGs argue federal preemption would harm state interests and leave residents unprotected. California’s Attorney General Bonta has been particularly vocal, having opposed multiple federal preemption attempts throughout 2025.
American Civil Liberties Union senior policy counsel Cody Venzke called the order “dangerous,” noting it doubles down on a policy that the Republican-led Congress rejected twice: “displacing states from their critical role in ensuring that AI is safe, trustworthy, and nondiscriminatory.” American Federation of Teachers President Randi Weingarten called it an “outrageous and likely illegal directive.”
States show no signs of backing down. Nearly 40 states adopted or enacted AI measures in 2025, with states and territories proposing more than 250 pieces of AI-related legislation. According to a November report from the Council of State Governments, states undertook this work because of federal inaction. The state laboratory of democracy is functioning exactly as designed—experimenting with approaches to emerging technology risks while the federal government debates whether to act at all.
Building Governance Capacity
AI has overtaken cybersecurity as the top priority for state CIOs in 2026, according to the National Association of State Chief Information Officers’ 20th annual survey. The shift reflects how quickly AI has moved from experimental to essential in state operations, representing what NASCIO calls “a pivotal shift in how leaders are preparing for the next era of gov tech.”
Illinois is searching for its first Chief AI Officer to lead the state’s artificial intelligence and machine learning strategy. The Department of Innovation and Technology is building out a formal AI office to coordinate deployment across agencies. California Governor Newsom announced a 30-member California Innovation Council including executives and leaders from the UC system, Stanford University, the Brookings Institution, and the California Chamber of Commerce. The council will advise on responsible AI deployment and help position California as a leader in AI governance.
New Jersey launched a $20 million AI fund backed by the state and private sector to help companies develop AI tools. The move signals New Jersey’s ambition to become a national AI leader. Route Fifty reports the state is prioritizing AI for better service delivery under a new grant program, recognizing pressure to innovate public benefit systems.
These investments in governance capacity—chief AI officers, advisory councils, startup funds—represent states taking institutional responsibility for AI deployment. They’re building the organizational muscle to move beyond pilots to scaled implementation.
Local
Cities are deploying AI across diverse use cases, from reducing first responder paperwork to streamlining permitting processes. Los Angeles is ramping up AI deployments ahead of hosting the World Cup, Super Bowl, Olympics and Paralympics, using global events as both deadline and justification for accelerated adoption. Smaller municipalities are finding AI helps stretch limited staff capacity, automating routine tasks so employees can focus on complex problems requiring human judgment.
Local governments are using AI to navigate the notoriously complex federal grant application process. Previously, employees sifted through hundreds of pages of requirements and filled out a unique application for each opportunity. AI has made the process more streamlined and less time consuming, though success still requires human oversight to ensure accuracy and alignment with grant requirements.
Not every community welcomes AI infrastructure. A Michigan township limited where data centers can be built, restricting the facilities to land zoned for industrial and commercial revitalization. The move reflects growing community concern about data centers’ energy consumption, water usage, and limited job creation relative to their physical footprint.
Key Insights for Practitioners
Federal vacuum creates state imperative: Congress’s failure to regulate AI hasn’t stopped AI deployment—it has forced states to act as the only layer of consumer protection. Without federal guardrails, states are operating as laboratories of democracy by necessity, not choice.
Action: Document the specific harms your constituents face from unregulated AI systems. State regulations work best when grounded in concrete problems affecting real people, not abstract technology policy.
Governance capacity precedes implementation success: States investing in chief AI officers, advisory councils, and formal AI offices are building institutional capacity to move from pilots to production. The organizational infrastructure matters as much as the technology itself.
Action: If your organization lacks dedicated AI governance leadership, identify who owns AI strategy and accountability now. Informal arrangements don’t scale—create formal reporting structures and decision rights before expanding AI use.
Workforce signals reveal strategic confusion: Federal agencies simultaneously recruit AI talent and shed technology workers. Job postings test AI tool proficiency through literary analysis while positions require technical architecture expertise. The disconnect suggests unclear thinking about what AI capabilities government actually needs.
Action: Define AI competencies your organization actually requires before recruiting or training. Distinguish between AI literacy (everyone needs some), AI tool proficiency (many roles need this), and AI technical expertise (few roles require deep skills). Don’t hire for one when you need another.
What I’m watching: How quickly the AI Litigation Task Force identifies its first targets for legal challenge. The 30-day deadline means we’ll see by mid-January which state laws DOJ considers most threatening to the administration’s “minimally burdensome” framework. The selection will signal whether this is about removing genuine regulatory barriers or simply preventing states from acting at all.
What’s your take on the federal-state preemption fight? Should states continue regulating AI even as the administration challenges their authority, or does fragmented state-by-state regulation create more problems than it solves? Share your perspective in the comments.