Agentic AI Arrives, Data Centers Face Reckoning
The Public AI Brief · Issue No. 25
I hope everyone had a happy holiday season and a chance to recharge. Happy New Year! If the first week of 2026 is any indication, this year will be anything but quiet in the AI world. We’re seeing a major shift from AI as experimental tool to AI as autonomous agent, while communities across the country are drawing hard lines on data center expansion. The public sector is caught between embracing AI’s promise and managing its very real infrastructure and social costs.
While I took a short break from writing, one thing I noticed over the holidays was an uptick in discussions around agentic AI, likely because everyone wanted to get in their “2026 will be the Year of…” predictions. Agentic AI, the capability for AI systems to take action independently rather than just respond to prompts, is moving from concept to reality in government operations. At the same time, the data center boom that powers all this AI has been triggering a backlash that’s impossible to ignore. From New Jersey charging data centers for spiking electric costs to Michigan townships imposing moratoriums, the “build it and prosperity will follow” narrative is colliding with resident concerns about infrastructure strain, environmental impact, and who actually benefits.
This Week’s Key Developments:
Federal agencies embrace agentic AI for planning, casework, and operational efficiency
Industry predicts 2026 as breakthrough year for AI agents that act rather than just respond
Data center backlash intensifies with New Jersey tariffs, Michigan moratorium, Baltimore County permit halt
Louisville launches AI permitting pilot while hiring public-sector AI leader focused on affordable housing
States move to protect children from AI in toys and excessive screen time
Army creates new AI/ML career path for officers seeking specialization
Federal
Agentic AI Moves From Concept to Operations
The federal government is shifting from experimenting with AI to deploying systems that can act independently. Agentic AI is changing how government plans and prepares, moving beyond the chatbot model to systems that can manage complex workflows without constant human oversight. Industry leaders are calling 2026 the year of agentic AI, with major technology companies reporting client demand for AI solutions that can handle end-to-end processes rather than just answer questions.
This isn’t theoretical. The Government Publishing Office is using AI to enhance operations by converting internal documents into AI-generated podcasts through Google’s NotebookLM, making information more accessible for its workforce. Utah’s Office of AI Policy is working with an AI-powered health platform to streamline prescription renewals for residents with chronic conditions, reducing administrative burden while maintaining safety protocols.
The most compelling case for agentic AI may be in overwhelmed systems. The SNAP program faces a paper crisis as new federal mandates kick in, with caseworkers drowning in documentation requirements. AI systems that can process applications, flag issues, and route cases could help caseworkers focus on the human judgment calls that matter most. The Defense Logistics Agency is building its AI foundation on continuous training and integrated platforms, ensuring every employee can work effectively with AI tools rather than treating them as specialized technical capabilities.
The workforce implications are already visible. The Army launched an AI and machine learning career path for officers, with applications opening January 5 through the Voluntary Transfer Incentive Program. This signals a recognition that AI expertise needs to be embedded throughout the organization, not siloed in IT departments.
The shift to agentic AI raises fundamental questions about accountability and oversight. When AI systems can take action without human approval for every step, agencies need new frameworks for defining acceptable autonomy, monitoring decisions, and maintaining meaningful human control. State leaders are grappling with this transition in real time. At a recent Innovate(us) workshop I attended on AI governance, practitioners reported moving “from experimentation to more of a repeatable practice,” with the focus shifting from “is AI allowed?” to “under what conditions does this create value?” The challenge isn’t just deploying these systems but governing them responsibly while maintaining the agility to experiment.
State
The Data Center Reckoning
States are waking up to the hidden costs of the data center boom, and they’re not impressed with what they’re finding. But before we get to the backlash, it’s worth noting what’s working. State AI programs are maturing beyond policy documents into operational practice. Arizona established an Office of Digital Solutions and formed an AI steering committee with over 150 applicants representing municipalities, counties, industry, and academia. New York embedded AI governance as “a service, not a gate,” working alongside pilot projects rather than reviewing them after the fact. Utah built an open-source automated AI risk assessment tool because manual assessments were too time-consuming. The pattern is governance embedded in operations, not bolted on afterward.
Now for the reckoning. New Jersey lawmakers advanced a plan to charge data centers for spiking electric costs, imposing new tariffs on facilities that are driving utility rates higher for everyone else. It’s a direct challenge to the assumption that data centers are economic development wins without qualification.
The backlash is spreading fast. Michigan’s Springfield Township passed a 180-day moratorium barring data center plans from even being reviewed, with the possibility of extension if needed. Baltimore County is considering legislation to halt permits during impact reviews. Maryland lawmakers are signaling that more data center regulations are coming in 2026, acknowledging that the Virginia model of aggressive data center development is moving across state lines with consequences the state isn’t prepared to manage.
The consistent pattern I’ve seen is that communities are promised economic growth and tax revenue, then discover the infrastructure strain, environmental impact, and resource demands that come with massive energy-hungry facilities. The data center rush in Appalachia shows big tech eyeing coal country as AI demand soars, but rural communities are pushing back against the industry narrative. Georgia counties are taking vastly different approaches to managing the surge in data center proposals, with no state-level regulations to guide them.
I’m skeptical of the “gold rush” framing that dominates data center coverage. A lot of money will be made, certainly. But the benefits are flowing to a very narrow group of landowners, developers, and tech companies, while the infrastructure costs, environmental impacts, and electricity rate increases hit everyone else. The public sector is left managing the externalities while private companies capture the gains. States and localities that slow down to assess real costs and benefits, rather than racing to approve projects out of economic development desperation, are making the smarter long-term choice.
One bright spot: New Jersey codified its Office of Innovation into law, becoming the first state to enshrine its digital delivery team in statute. The office, now the New Jersey Innovation Authority, will continue into the new gubernatorial administration. This matters. Innovation and AI work shouldn’t disappear with political transitions. Recognizing that technology modernization is institutional, not political, and giving it structural permanence rather than treating it as a pet project is exactly the kind of governance maturity states need.
Local
Building AI Leadership Where It Matters
Louisville launched an AI-backed permitting test that represents something more significant than just another pilot program. The city recently hired a public-sector AI leader and is approaching AI deployment with a clear purpose: addressing the affordable housing shortage by streamlining the permitting process. This is AI leadership done right. Louisville isn’t deploying technology for technology’s sake. They’ve identified a concrete problem, the bottleneck in housing development, and they’re testing whether AI can help. They’re building internal expertise before scaling, not outsourcing strategic decisions to vendors.
The data center tensions playing out at the state level are even more acute locally. In Georgia, the data center rush is pitting local officials’ hunt for new revenue against residents’ concerns. Twiggs County’s situation highlights what happens when counties lack guidance or regulations to evaluate proposals that promise jobs and tax revenue but deliver infrastructure strain and environmental questions. Local governments are making billion-dollar decisions with limited information and no playbook.
Leadership transitions continue to reshape local government technology. New York City named a new acting CTO as Matthew Fraser stepped down after four years leading the city’s technology efforts. Meanwhile, Erie County, New York is scrutinizing biometric data use, with the county executive directing staff to pass a local law barring collection of such data. If enacted, Erie County would be in the vanguard on biometric data oversight, addressing privacy concerns before they become crises rather than after.
Education
Protecting Children in the AI Age
States are moving to address AI’s impact on children, though the approaches vary widely. California Senator Steve Padilla proposed a first-in-the-nation moratorium on AI chatbots in toys, building on his earlier work establishing chatbot protections. The concern is straightforward: toys with AI capabilities create privacy risks and developmental questions that haven’t been adequately studied, let alone regulated.
Alabama lawmakers are pushing for screen time limits for children, with research showing that reduced screen time from birth to age five helps build social skills. It’s a different approach than AI-specific regulation, but it reflects the same underlying anxiety about technology’s impact on child development.
California’s Department of Education continues updating its AI guidance and resources, providing districts with frameworks as they navigate these questions. The challenge for K-12 and higher education isn’t just setting rules but helping educators, parents, and students understand what AI means for learning and development.
The environmental dimension can’t be ignored either. State leaders are increasingly concerned about AI’s infrastructure demands. Arizona formed a task force specifically addressing data center expansion and environmental impacts, particularly around water and energy resources. As one state official put it at a recent workshop, “What are the policy changes that need to be made to make sure that everyone can come along in this fantastic ride?” The question applies to children’s development and environmental sustainability alike. We’re making decisions about children’s exposure to AI systems and about resource allocation for AI infrastructure with limited evidence about long-term effects, which should make everyone cautious about moving too fast.
Key Insights for Practitioners
Agentic AI requires new governance frameworks: Systems that act independently need different oversight than tools that only respond to prompts. Agencies can’t govern agentic AI with chatbot-era policies.
Action: Begin mapping which decisions you’re willing to delegate to AI systems and which require human judgment. Document the criteria now, before pressure to scale forces rushed choices.
Data center economics don’t benefit communities equally: The promise of jobs and tax revenue often obscures infrastructure costs, environmental impacts, and utility rate increases that affect everyone. The winners are concentrated, the costs are distributed.
Action: If your jurisdiction is evaluating data center proposals, demand comprehensive impact assessments that include electricity demand, water usage, infrastructure strain, and realistic job projections. Compare promised benefits against actual outcomes in communities that approved similar projects three to five years ago.
AI leadership means hiring for it: Louisville’s approach, hiring a public-sector AI leader before scaling deployments, inverts the usual pattern of deploying first and figuring out governance later. Building internal expertise gives agencies strategic capacity rather than vendor dependence. Workshop participants emphasized that successful AI adoption requires embedding governance deeply with day-to-day pilots, creating AI champions across agencies, and combining policy development with hands-on implementation.
Action: Identify whether your organization needs dedicated AI leadership or can integrate AI responsibilities into existing roles. If you’re deploying AI at scale, you need someone whose job is thinking strategically about AI’s role in your mission, not just implementing vendor solutions. Consider establishing AI leads within each department to build distributed expertise rather than centralizing all knowledge in IT.
What I’m watching: How states respond to the data center backlash, particularly whether Maryland and other states follow New Jersey’s lead in making facilities pay for infrastructure costs. If the “AI boom requires infinite data centers” narrative starts breaking down under scrutiny of who actually benefits and who pays, we’ll see a significant shift in how AI infrastructure gets built and where.
What’s your take on agentic AI in government? Are you seeing autonomous AI systems in your agency, or is this still mostly hype? And on data centers: should states be more aggressive about regulating them, or will market forces and community pushback provide enough check on expansion? Share your thoughts in the comments.