When Temporary Fixes Become the Plan
The Public AI Brief · Issue No. 20
This week, Congress ended the longest government shutdown in U.S. history: 43 days that exposed how fragile our public sector technology and cybersecurity infrastructure really is. The resolution funds the government through January 30 and temporarily extends two critical programs: the State and Local Cybersecurity Grant Program and the Cybersecurity Information Sharing Act. Temporary is the operative word. What happens in ten weeks when these deadlines return?
The shutdown revealed something uncomfortable: federal capacity for AI and cybersecurity isn’t just strained, it’s held together by short-term patches and handshake agreements. While federal agencies scrambled to restore basic operations, state governments formed regional coalitions, K-12 districts wrote their own AI policies, and cities deployed practical solutions to immediate problems. The gap between federal policy development and state-local implementation isn’t just widening, it’s becoming the new normal.
This Week’s Key Developments
43-day shutdown ends with temporary fixes: cyber grants and info-sharing law extended only through January 30
CISA lost 65% of workforce during shutdown; federal AI legislation stalled indefinitely
Six heartland states form AI caucus to drive regional policy while federal framework remains absent
Maryland launches multi-agency AI partnership to modernize benefits access and reduce child poverty
K-12 districts create vetting frameworks for AI tools as American Institutes for Research launches implementation studies
Federal
Congress ended the shutdown, but it didn't solve the underlying problem. The continuing resolution funds the government through January 30, 2026, giving lawmakers just ten weeks before they face the same fight again. What's more troubling for technology leaders is that the deal includes only temporary extensions of two programs that state and local governments depend on. The State and Local Cybersecurity Grant Program, which expired in September and provides critical funding for smaller jurisdictions, got a reprieve until January. So did the Cybersecurity Information Sharing Act, which shields companies from liability when they share threat intelligence with government partners. Both are stopgaps, not solutions.
The damage from 43 days of paralysis runs deeper than delayed paychecks. CISA lost 65% of its workforce during the shutdown: 1,651 employees were furloughed from a 2,540-person agency responsible for cybersecurity across all levels of government. Contractors who patch vulnerabilities and manage incident response stopped coming to work. State cybersecurity officials told reporters they felt the impact immediately, particularly in smaller jurisdictions that depend heavily on federal support and grant funding. Mike Hamilton, former CISO of Seattle, put it bluntly: "Cybersecurity isn't something that you can pause. Adversaries don't take days off."
The shutdown’s toll on AI policy may prove equally costly. Federal legislation on artificial intelligence, already moving slowly, stalled completely during the impasse. The National Defense Authorization Act, which typically includes AI provisions, got pushed aside. Industry groups warned that the prolonged closure threatens U.S. leadership in AI innovation. Lawmakers now face a compressed timeline to address not just funding but also the expiring Affordable Care Act subsidies, farm bill extensions, and energy credits. AI policy will compete for attention in an already crowded agenda. The temporary nature of the shutdown resolution guarantees this will all happen again in January.
State
States Fill the Federal Vacuum
While Congress debated, six states decided they couldn’t wait. Arkansas, Illinois, Louisiana, Ohio, Oklahoma, and Tennessee formed the Heartland AI Caucus, a bipartisan effort to shape regional AI strategies and drive innovation where federal policy remains absent. The mission of the caucus is “to advance practical AI policy that strengthens local economies, prepares workers and modernizes government systems across the region.” The caucus is a sign that states are moving from reactive compliance with federal mandates to proactive regional coordination. They’re not asking permission anymore.
Texas is taking a different approach, proposing an ethics code for government AI use that would apply to all state agencies and local entities. The code, developed by the state Department of Information Resources, is now open for public comment. It’s the kind of framework that might typically come from federal guidance, but Texas isn’t waiting. Neither is Virginia, but the state’s AI registry faces mounting criticism from agencies frustrated with transparency challenges and usability issues. Building governance tools is one thing; making them work is another.
The patchwork of state AI laws is growing more complicated. State leaders increasingly worry that a mosaic of different rules will create obstacles for technology developers and businesses operating across multiple jurisdictions. Without a unified federal approach, states are simultaneously innovating and creating potential compliance nightmares: regulation by geography rather than coherent national policy.
Maryland’s Multi-Agency Model
Here in my state of Maryland, following the lead of states like Pennsylvania and Colorado, we're moving from governance debates to actual implementation. The state launched a multi-agency AI partnership designed to bring AI tools directly to residents, with the aim of simplifying access to benefits, reducing child poverty, and improving housing access. The initiative embeds AI in daily workflows for staff across multiple agencies, moving beyond pilots to enterprise-scale deployment.
What makes Maryland’s approach notable isn’t just the technology. It’s the institutional coordination. Multi-agency efforts typically die in committee or fragment into competing priorities. Success here depends less on the AI itself than on whether state leaders can translate technical capabilities into sustained cross-agency collaboration. If Maryland can pull this off, it becomes a template for other states trying to move from proof-of-concept to production. If it stalls, it joins the long list of ambitious government tech projects that couldn’t scale.
Infrastructure Tensions and Privacy Concerns
State governments are grappling with harder questions about AI’s physical and social infrastructure. Most states don’t disclose which companies receive data center incentives, even though at least 36 states provide these subsidies. Only 11 reveal the recipients. Virginia’s incoming governor, Abby Spanberger, promises to reshape utility policy, targeting lower energy bills for residents while raising costs for data centers. The collision between AI’s energy demands and public infrastructure constraints is no longer theoretical.
In Arizona, AI-powered balloons have been photographing homes for insurance risk assessments, raising immediate privacy and policy concerns that state regulations haven’t caught up with. Pennsylvania legislators are working to close what they call a “loophole” for AI-generated child sexual abuse materials, recognizing that existing laws weren’t written with generative AI in mind. These aren’t abstract policy debates. They’re states scrambling to regulate technologies that arrived faster than the legal frameworks meant to govern them.
Education
K-12 Districts Improvising Policy
K-12 districts can’t afford to wait for federal or state AI guidance, so they’re writing their own rules in real time. The American Institutes for Research launched its AI in Education Network this week, aiming to give educators and policymakers clearer understanding of how AI tools perform in actual classroom settings. The initiative recognizes a gap: there’s plenty of vendor marketing about AI in education, but very little rigorous evidence about what works, what doesn’t, and what unintended consequences emerge when schools deploy these systems at scale.
Harlingen Independent School District in south Texas illustrates what homegrown policy looks like. The district developed digital responsibility guidelines and created a vetting process for AI tools before purchasing anything. Teachers now use several AI applications (Snorkl for engagement, Eureka Math for instant feedback) but only after the district established guardrails. It’s a practical middle path: not banning AI out of fear, not embracing every tool uncritically, but building institutional capacity to evaluate and deploy thoughtfully.
A new tool launched this week aims to help other districts follow Harlingen’s lead. Developed to address the opacity around AI products in education, it gives school leaders a framework to examine tools more closely when tech companies aren’t transparent about how their AI actually works. Meanwhile, Broken Arrow High School in Oklahoma is going a step further, offering an AI Foundations class starting this spring that includes lessons on coding and data storytelling. The implicit message: if you can’t wait for curriculum guidance from above, build your own.
Local
Cities Where AI Meets Service Delivery
Boston replaced its aging 311 system with an AI-powered, no-code platform that, unlike the old system, adapts as needs evolve. The previous system had become too rigid, unable to keep pace with how residents actually want to interact with city services. The new platform uses AI to help officials be more efficient and agile, letting them modify workflows without waiting for vendor updates or expensive customization. It's not flashy, but it solves a real problem. Hopefully, Philadelphia (where I work) and Baltimore (where I live) will follow suit, but I'm not holding my breath.
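To make "no-code, adaptable workflows" concrete, here's a minimal sketch of what a config-driven service-request system might look like. The schema, categories, and numbers are hypothetical illustrations, not Boston's actual platform; the point is that routing logic lives in editable configuration rather than hardcoded software.

```python
# Hypothetical sketch of a config-driven 311 workflow: staff edit the
# declarative config, not code, so routing changes without vendor updates.
# Schema and categories are illustrative, not Boston's actual platform.

WORKFLOWS = {
    "pothole": {
        "route_to": "public_works",
        "sla_hours": 72,
        "steps": ["triage", "dispatch_crew", "verify_repair", "close"],
    },
    "missed_trash_pickup": {
        "route_to": "sanitation",
        "sla_hours": 24,
        "steps": ["triage", "schedule_pickup", "close"],
    },
}

def open_request(category: str) -> dict:
    """Create a service request from the declarative workflow config."""
    spec = WORKFLOWS[category]
    return {
        "category": category,
        "assigned_dept": spec["route_to"],
        "sla_hours": spec["sla_hours"],
        "remaining_steps": list(spec["steps"]),
    }

# Changing a workflow means editing the config, not filing a vendor ticket:
WORKFLOWS["pothole"]["sla_hours"] = 48  # policy change, no redeployment

print(open_request("pothole"))
```

The design choice matters more than the code: when workflow definitions are data, the city, not the vendor, controls how fast services adapt.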
San Anselmo, California, ran the numbers on its AI-driven traffic signal pilot and decided to expand. The system detects traffic patterns and adjusts signals to speed up or slow down flow, decreasing wait times at a busy intersection by 25 to 30 percent. After proving the concept worked, the city is rolling it out to more locations. Vail, Colorado, is taking a broader swing, implementing what it calls an “agentic smart city” platform designed to improve government operations and boost customer experience across multiple services. The ambition is higher, the complexity greater, but the goal is the same: use technology to make government work better for residents.
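For a sense of the underlying logic, here's a toy sketch of queue-responsive signal timing. All thresholds and numbers are invented for illustration and are not drawn from San Anselmo's actual deployment, which would use detector data and tuned models.

```python
# Toy sketch of queue-responsive signal timing. All numbers are invented
# for illustration; real adaptive systems use live detector feeds and
# far more sophisticated optimization.

MIN_GREEN, MAX_GREEN = 10, 60  # seconds; safety bounds on any green phase

def next_green_duration(queue_main: int, queue_cross: int, base: int = 30) -> int:
    """Lengthen or shorten the main-street green based on relative demand."""
    total = queue_main + queue_cross
    if total == 0:
        return base  # free-flowing traffic: keep the default cycle
    share = queue_main / total  # fraction of waiting cars on the main street
    green = int(MIN_GREEN + share * (MAX_GREEN - MIN_GREEN))
    return max(MIN_GREEN, min(MAX_GREEN, green))

# A long queue on the main street earns a longer green phase:
print(next_green_duration(queue_main=18, queue_cross=4))   # -> 50
print(next_green_duration(queue_main=5, queue_cross=15))   # -> 22
```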
Portland, Oregon, offers a different kind of local AI story. The city council is reconsidering a ban on algorithms that set residential rents, a measure pulled from consideration this spring but now back on the table. The debate centers on whether prohibiting algorithmic rent-setting would discourage housing developers, potentially worsening Portland's housing crunch. It's a microcosm of a broader tension: cities trying to regulate AI applications that may harm residents while avoiding policies that stifle development and make other problems worse.
Key Insights for Practitioners
Temporary fixes guarantee permanent crisis management: The January 30 funding deadline means state and local leaders have ten weeks before facing renewed uncertainty about cyber grants and information-sharing protections.
Action: Treat the current continuing resolution as a planning window, not a solution. Identify which programs depend on federal funding that could lapse again in January, and develop contingency plans now for operating without those resources.
Regional coalitions are the new federal policy: The Heartland AI Caucus shows states aren’t waiting for national frameworks. They’re building regional coordination structures that may outlast whatever federal guidance eventually emerges.
Action: If your state isn’t part of a regional AI working group, explore creating or joining one. Interstate coordination on procurement standards, ethical frameworks, and implementation lessons learned provides leverage that individual states lack.
Build vetting capacity before buying tools: Harlingen’s approach (digital responsibility guidelines first, vetted AI tools second) offers a template that works across government levels. Virginia’s registry struggles show that governance tools built without user input often fail.
Action: Establish internal evaluation frameworks for AI tools before vendors arrive with proposals. Include frontline staff in developing vetting criteria. The people who will actually use these systems know which promises are realistic and which are marketing.
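As a starting point, here's a minimal sketch of how vetting criteria might be encoded as a weighted scoring rubric. The criteria, weights, and passing threshold are hypothetical placeholders loosely inspired by Harlingen-style guardrails, not any district's or agency's actual framework; a real version would be built with frontline staff, legal, and IT together.

```python
# Minimal sketch of a weighted vetting rubric for AI tools. Criteria,
# weights, and the passing bar are hypothetical placeholders, not any
# district's actual framework.

CRITERIA = {
    "data_privacy": 0.30,          # where student/resident data goes, retention
    "transparency": 0.25,          # vendor explains how the model actually works
    "evidence_of_efficacy": 0.20,  # independent results, not marketing claims
    "staff_usability": 0.15,       # can frontline staff actually use it?
    "total_cost": 0.10,            # licensing plus training and support
}

PASSING_SCORE = 0.70  # illustrative threshold for advancing to a pilot

def vet_tool(scores: dict) -> tuple:
    """Score a tool 0-1 on each criterion; return weighted total and verdict."""
    total = sum(CRITERIA[c] * scores[c] for c in CRITERIA)
    return round(total, 2), total >= PASSING_SCORE

# "ExampleTutorAI" is a hypothetical vendor used for illustration:
score, approved = vet_tool({
    "data_privacy": 0.9, "transparency": 0.5,
    "evidence_of_efficacy": 0.6, "staff_usability": 0.8, "total_cost": 0.7,
})
print(score, "pilot" if approved else "decline")
```

Even a crude rubric like this forces the right conversation: it makes the evaluation criteria explicit before the sales pitch, instead of after the contract.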
What I'm watching: Whether the January 30 deadline produces another shutdown or genuine long-term funding commitments. Having spent the last decade in the federal government, I'll let you guess where my bet lies. If states continue forming regional AI coalitions while federal policy stalls, we may be witnessing a fundamental shift in how technology governance happens in the U.S.: not top-down from Washington, but horizontally across state and regional partnerships. Wouldn't that be interesting to see?