The Workforce Question Agencies Can't Dodge
The Public AI Brief · Issue No. 26
Congress is finally asking what agencies have been avoiding: can you execute an AI strategy when your technology teams are walking out the door? That tension framed this week’s developments, as federal adoption moves forward despite unprecedented workforce disruption, states navigate leadership churn while advancing governance models, and local governments demonstrate that practical AI deployment doesn’t require massive teams.
Federal workforce losses from DOGE-mandated reductions hit technology teams hardest, creating a paradox where agencies must accelerate AI adoption with diminishing human capacity to implement it. Meanwhile, states are shuffling their own leadership deck, with Texas and New York replacing chief AI officers as they attempt regulatory pushback against federal preemption. At the local level, cities are proving that AI for planning and permitting doesn’t need armies of specialists—just clear use cases and vendor partnerships.
This Week’s Key Developments:
Federal workforce crisis meets AI mandates: Democrats question how agencies can execute the AI Action Plan while losing technical talent
90% of agencies using AI: Google survey reveals adoption widespread but stuck in pilot purgatory due to security fears and skills gaps
Fraud detection proving value: PRAC’s AI engine trained on pandemic data could have flagged “tens of billions” before disbursement
State CIO musical chairs continues: Texas, New York shuffle AI and technology leadership amid governance buildout
Wyoming approves massive data center: What could become largest U.S. facility raises infrastructure cost questions
Local governments deploy AI for permitting: Honolulu, Pueblo among cities using automation to speed planning processes
Federal
The Talent Exodus Nobody’s Solving
House Democrats pressed White House Office of Science and Technology Policy Director Michael Kratsios on an uncomfortable question during Wednesday’s hearing: how do you build a tech-centric government workforce to advance AI when you’ve spent 2025 firing that workforce? Rep. Haley Stevens pointed to NIST’s proposed $325 million budget cut, resulting in approximately 500 job losses, arguing the cuts “weaken cybersecurity and privacy standards” and “limit advanced manufacturing, physical infrastructure and resilience innovation.” Rep. George Whitesides called the science workforce attacks “reprehensible,” noting they target “one of the core pillars of American strength.”
The administration’s answer appears to be Tech Force, the two-year rotation program for private sector technologists that Kratsios touted as receiving interest from 35,000 Americans. But that optimism collides with new survey data from Google Public Sector showing 55% of federal respondents cite lack of employees with skills and training as a major barrier to AI adoption. The survey found nearly 90% of agencies are “planning to or are already using AI,” but only 12% of civilian agencies and 2% of defense agencies report completed AI adoption plans. Security and adversarial risks remain the single biggest blocker at 48%, followed by reliability concerns at 35%.
The mismatch is stark. Agencies need sustained technical capacity to move AI from pilots to production, but DOGE-era workforce reductions have disproportionately affected mid-career technologists who bridge legacy systems knowledge with modern capabilities. Tech Force may inject talent, but two-year rotations don’t build institutional knowledge or maintain systems long-term. The question isn’t whether agencies can start AI projects—it’s whether they can sustain them.
When Fraud Detection Actually Works
While agencies struggle with AI strategy, the Pandemic Response Accountability Committee demonstrated concrete results with its AI-powered fraud prevention engine. Trained on 5 million pandemic-era relief applications, the system can review 20,000 applications per second and flag anomalies before payment. PRAC Executive Director Ken Dieffenbach told House lawmakers the engine would have flagged “at least tens of billions of dollars” in fraudulent claims had it existed in March 2020.
The engine combines unsupervised machine learning that detects anomalies, supervised models that identify patterns from known fraud cases, and rules-based checks that catch invalid Social Security and employer identification numbers. Small anomalies often reveal hidden connections, such as shared bank accounts among supposedly independent applicants. Treasury’s Do Not Pay system is expanding access across agencies, with full utilization expected by fiscal year end—up from just 4% of programs having full access in FY 2014.
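Two of the three layers described above can be sketched in miniature: a rules layer that catches structurally invalid identifiers, and a simple link-analysis pass that surfaces shared bank accounts among supposedly independent applicants. This is an illustrative sketch, not PRAC’s implementation; the field names and the simplified SSN rules are assumptions.

```python
from collections import defaultdict

def rule_flags(app):
    """Rules layer: flag structurally invalid SSNs (simplified illustration).

    Real validation is more involved; these checks mirror the widely known
    rules that area numbers 000, 666, and 900-999 are never issued, and that
    group/serial segments of all zeros are invalid.
    """
    flags = []
    area, group, serial = app["ssn"][:3], app["ssn"][3:5], app["ssn"][5:]
    if area in ("000", "666") or area >= "900":
        flags.append("invalid_ssn_area")
    if group == "00" or serial == "0000":
        flags.append("invalid_ssn_group_or_serial")
    return flags

def shared_account_flags(apps):
    """Link-analysis layer: group applications by bank account and flag any
    account reused across supposedly independent applicants."""
    by_account = defaultdict(list)
    for app in apps:
        by_account[app["bank_account"]].append(app["id"])
    return {acct: ids for acct, ids in by_account.items() if len(ids) > 1}

# Hypothetical applications for demonstration
apps = [
    {"id": "A1", "ssn": "123456789", "bank_account": "111"},
    {"id": "A2", "ssn": "666123456", "bank_account": "222"},  # invalid area 666
    {"id": "A3", "ssn": "234567890", "bank_account": "111"},  # shares account with A1
]
```

The third layer in PRAC’s description, unsupervised anomaly detection, would typically run a model such as an isolation forest over application features; it is omitted here to keep the sketch dependency-free.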
This represents the rare AI success story in government: clear problem definition, quality training data, measurable outcomes. GAO estimates the federal government loses $233 billion to $521 billion annually to fraud. PRAC has helped recover $500 million so far, a fraction of what pre-payment vetting could prevent. The challenge now is finding a permanent home for PRAC’s analytics capabilities before the committee sunsets, ensuring these tools outlive the crisis that created them.
Budget Realities Check AI Ambitions
Congress allocated $5 million for the Technology Modernization Fund in fiscal 2026 funding bills—far less than the administration requested and a fraction of what agencies need for AI infrastructure. The executive branch budget pact includes IT investments but reduces funding for the U.S. DOGE Service to less than half the request. Congressional appropriators directed OMB to produce guidance on AI-ready datasets at agencies and cloud infrastructure adoption, but didn’t provide explicit TMF reauthorization.
The funding gap compounds workforce challenges. GSA’s OneGov initiative is helping expand AI adoption by providing procurement pathways for agencies with “early, light contact” with AI technologies. Chief AI Officer Zach Whitman noted the deals help agencies that may lack expertise acquire tools at negotiated prices. But procurement shortcuts don’t solve the fundamental constraint: agencies need sustained funding for infrastructure, training, and technical staff to move beyond pilots.
State
The CIO Shuffle Continues
Texas named Chief AI and Innovation Officer Tony Sauerhoff as interim CIO after Amanda Crawford was appointed to head the state’s Insurance Department. Sauerhoff became Texas’ first chief AI officer in 2024, making him a rare official holding both technology leadership titles simultaneously. The move comes as Texas continues building out AI governance structures while Crawford transitions to insurance regulation, an odd lateral move for a sitting CIO.
New York replaced its chief AI officer less than a year after creating the position, appointing Eleonore Fournier-Tombs, a United Nations University researcher specializing in AI governance and climate adaptation. The state also named a new chief digital officer. Delaware’s CIO Greg Lane resigned after serving since June 2023, with Chief of Administration Jordan Schulties serving as interim replacement.
The pattern is concerning. States are building AI governance capacity while simultaneously losing the leaders meant to execute that vision. CIO tenure has always been short—averaging three to four years—but the AI governance layer adds complexity that benefits from continuity. When chief AI officers turn over annually, institutional knowledge evaporates. Mississippi CIO Craig Orgeron captured the challenge: “There’s a lot of hype” around AI, but success depends on building foundations state government needs to scale emerging technologies. That foundation-building requires sustained leadership.
States Push Back on Regulation
New York lawmakers are preparing another attempt to regulate AI after previous bills were watered down by industry lobbying. The state’s 2026 legislative agenda includes ambitious plans despite past failures to rein in the powerful industry. California Attorney General Rob Bonta opened an investigation into Elon Musk’s xAI after an “avalanche” of complaints about sexual content generated by the company’s AI image editing tool, examining whether it violates California law.
Kentucky’s attorney general sued an AI company, alleging it “preys” on youth, claiming violations of the Kentucky Consumer Protection Act and Kentucky Consumer Data Protection Act. Virginia’s social media law took effect limiting minors to one hour daily on platforms without parental approval via age verification, though an industry group is suing to block it. Arkansas faces similar legal challenges to its reworked social media law requiring age verification.
The state regulatory push continues despite Trump administration threats to preempt state AI laws. States are testing different approaches—content regulation, youth protection, consumer rights—creating a patchwork that industry opposes but that reflects genuine attempts to address harms that federal inaction has ignored. Whether states can maintain this authority against federal preemption attempts remains the year’s defining question.
Data Centers: Economic Promise, Infrastructure Reality
A Wyoming county approved construction of what could become the largest data center in the United States. The project could eventually consume electricity equivalent to the output of 10 nuclear power plants, boosting Wyoming’s energy industry while challenging emissions limits and stressing water supplies. The economic development promise is significant, but the infrastructure demands raise questions about who bears the costs.
The pattern repeats nationwide. Kansas approved a $6.6 billion data center in Independence, with construction starting this summer and continuing three to five years. These projects promise tax revenue and jobs, but Mississippi’s CIO noted such gains aren’t always easy to quantify. Policymakers can push developers to deliver, but often lack leverage once projects are approved.
New Jersey took a different approach, advancing legislation to charge data centers new tariffs for driving higher electric costs. The move acknowledges infrastructure strain rather than just celebrating economic development. As AI adoption accelerates data center demand, states must balance growth promises against grid capacity, water availability, and community impact.
Local
Practical AI: Permitting and Planning
Honolulu is using CivCheck’s platform to review applications and speed up the permitting process, joining cities bringing AI to planning and permitting workflows. Pueblo County, Colorado partnered with Blitz AI to make building permit processes more efficient through integration that automates formerly time-consuming manual application reviews. Bellevue, Washington already uses AI permitting tools, and Louisville, Kentucky will soon pilot them.
These deployments share characteristics: clear use cases, vendor partnerships, measurable efficiency gains. No massive internal AI teams required. No lengthy governance debates. Just practical automation of document-heavy processes that consume staff time without requiring professional judgment. The AI handles initial review, flagging issues for human assessment. Staff focus on edge cases and applicant interaction rather than checkbox verification.
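The human-in-the-loop split described above reduces to a simple triage pattern: automated checks clear complete applications and route everything else, with reasons attached, to a staff queue. A minimal sketch follows; the checklist contents and routing labels are hypothetical, not any vendor’s actual schema.

```python
# Hypothetical completeness checklist; real jurisdictions define their own.
REQUIRED_DOCS = {"site_plan", "structural_drawings", "owner_authorization"}

def triage_permit(application):
    """Automated first pass: clear complete applications, route the rest
    (the edge cases) to a human reviewer with the reasons attached."""
    missing = sorted(REQUIRED_DOCS - set(application.get("documents", [])))
    if missing:
        return {"route": "human_review",
                "issues": [f"missing:{doc}" for doc in missing]}
    return {"route": "auto_cleared", "issues": []}
```

The design choice that matters is that the automated layer only ever clears or escalates; it never rejects, which keeps final judgment with staff.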
Thurston County, Washington is considering formalizing AI surveillance guardrails through a draft ordinance regulating the county’s acquisition and use of AI-enabled surveillance technology. The approach balances deployment with oversight, acknowledging that technology decisions have civil liberties implications.
When Students Lose Access
Denver schools are blocking student access to ChatGPT over concerns about the chatbot’s new features enabling group chats and potentially exposing students to content related to self-harm, violence, and cyberbullying. District officials cited the popular AI tool’s evolution beyond its original question-and-answer format into social features that raise student safety concerns.
The move illustrates the whiplash schools face with consumer AI tools. Districts initially blocked ChatGPT over academic integrity concerns, then some reconsidered as AI literacy became important, and now safety concerns from feature additions prompt new restrictions. Schools lack control over product roadmaps, forcing reactive policy changes that confuse students and teachers. The challenge isn’t AI itself but rapidly evolving consumer products entering educational contexts without guardrails designed for minors.
Key Insights for Practitioners
Workforce capacity trumps AI strategy: Survey data showing 55% of agencies cite workforce skills gaps as a major AI barrier while simultaneously losing technical staff through DOGE reductions reveals a fundamental contradiction. No amount of strategic planning compensates for lack of people who can execute.
Action: Audit your current technical capacity against AI adoption goals. If gaps exist, determine whether you’re building internal expertise, relying on contractors, or pausing ambitions until staffing stabilizes. Wishful thinking about Tech Force or vendor solutions won’t maintain production systems.
Fraud detection offers the AI blueprint government needs: PRAC’s fraud engine demonstrates what successful government AI looks like—clear problem definition, quality training data, measurable outcomes, pre-payment intervention rather than post-payment recovery. The model applies beyond fraud to any high-volume decision process where patterns matter more than individual judgment.
Action: Identify processes in your organization that involve reviewing large volumes of applications, claims, or requests where anomaly detection could flag issues before action. These are your highest-value AI targets, not chatbots or general productivity tools.
Local deployment doesn’t require federal capacity: Cities implementing AI for permitting aren’t waiting for federal guidance, state frameworks, or massive internal teams. They’re partnering with vendors for narrow use cases that deliver measurable time savings.
Action: Stop treating AI as enterprise infrastructure requiring organization-wide strategy. Start with specific workflow pain points where automation reduces manual review time. Vendor partnerships accelerate deployment, but ensure contracts include performance metrics and exit clauses.
What I’m watching: How agencies manage technical staff losses through 2026 as AI ambitions scale. If DOGE reductions continue targeting technology teams while AI Action Plan demands accelerate, something breaks. Either agencies admit they lack capacity and pause deployment, or they proceed with insufficient staff and face system failures that undermine public trust.
What workforce challenges are you seeing in your organization as AI expectations increase? How are you balancing adoption ambitions against technical capacity? Share your experiences in the comments.

