November 2, 2025
Originally posted on LinkedIn
Federal
A Framework for Responsible AI in Homeland Security
A new five-step framework urges homeland security leaders to treat AI adoption as a strategic, mission-critical process rather than a routine IT procurement. The model emphasizes defining operational use cases, ensuring data integrity, building trusted vendor relationships, integrating across systems, and establishing strong governance.
Federal Judges Blame AI for Faulty Court Orders
Two U.S. district judges acknowledged that law clerks used generative AI tools like ChatGPT and Perplexity to draft legal orders that were later found to contain significant errors. The judges have since implemented stricter internal review processes and formal AI usage policies. The episode, the most recent in a string of similar cases, underscores the urgent need for clear, enforceable AI policies within the judiciary and other branches of government. As generative tools become more accessible, public institutions must balance innovation with safeguards that preserve trust, accuracy, and due process.
Federal Courts Issue Interim AI Guidance with Safeguards
The U.S. federal judiciary has released interim guidance allowing courts to experiment with AI tools while emphasizing security, ethical standards, and judicial accountability. The guidance, developed by an AI task force, cautions against delegating core judicial functions to AI and encourages independent verification of AI-assisted work. This measured approach reflects the judiciary’s need to balance innovation with its foundational principles of independence and integrity. By encouraging experimentation within clear boundaries, the courts are signaling openness to AI while reinforcing that responsibility for decisions remains human.
Why Federal Agencies Need AI Factories Now
A new article argues that federal agencies must move beyond scattered AI pilots and adopt “AI factories,” meaning integrated systems that align infrastructure, data, and workforce processes to scale AI across missions. This piece underscores a growing consensus: AI success in government depends less on tools and more on institutional readiness. Building AI factories is really about modernizing how agencies organize people, data, and decisions, rather than just deploying new tech.
Senator: Legal Immigration Key to U.S. AI Competitiveness
Sen. Mike Rounds (R-S.D.) argued that expanding legal immigration is essential for the U.S. to compete with China in AI, citing workforce shortages and the need for skilled labor in manufacturing and data center operations. He also called for stronger industry-education partnerships to prepare American workers for AI-era jobs. This is a rare bipartisan moment of clarity: the future of AI in the U.S. hinges not just on technology, but on people. A modern immigration policy and coordinated workforce development strategy are foundational to any serious national AI agenda.
Advocates Challenge Federal Use of Grok AI Tool
A coalition of advocacy groups is urging the Office of Management and Budget to suspend the federal government’s use of xAI’s Grok chatbot, citing concerns over ideological bias and antisemitic content. The push follows a General Services Administration contract allowing agencies to access Grok models at a reduced cost through 2027. This controversy highlights the growing tension between rapid AI adoption and the need for rigorous vetting aligned with public values. As agencies integrate generative tools, transparency in procurement decisions and adherence to ethical standards will be critical to maintaining public trust and institutional accountability.
AI Agents Touted as Solution to Shrinking Federal Workforce
As the federal government faces significant staffing reductions, Salesforce is promoting AI agents as a way to maintain service delivery. While still in pilot phases, these technologies are being tested for use in customer service and claims processing, with human oversight emphasized. This story underscores a growing tension in public administration: how to maintain service levels amid workforce cuts. AI agents may offer relief, but their deployment raises critical questions about accountability, oversight, and the long-term role of human expertise in government operations.
State
Maryland Rolls Out Google AI Tools to State Workforce
The State of Maryland has partnered with Google Public Sector to provide generative AI tools to 43,000 state employees, aiming to improve productivity and streamline government operations. This is one of the largest state-level deployments of generative AI to date, signaling a shift from pilot projects to enterprise-scale integration. The real test will be whether these tools enhance service delivery without compromising transparency or public trust.
State CIOs Shift From Technologists to Strategic Leaders
The role of state chief information officers has evolved from technical experts to strategic change leaders, according to the 2025 National Association of State Chief Information Officers (NASCIO) survey. With high turnover and upcoming gubernatorial elections, CIOs are now expected to bridge policy, technology, and communication across state agencies. This shift reflects a broader transformation in public sector leadership—where digital governance is no longer about managing infrastructure but about navigating complexity, building trust, and aligning technology with policy goals. As states adopt AI and modernize legacy systems, CIOs must act as both translators and tacticians.
Missouri Uses AI and Drones to Monitor Waterfowl
The Missouri Department of Conservation is piloting a system that combines drones and artificial intelligence to track waterfowl populations more efficiently and accurately than traditional methods. This is a smart example of how AI can support core public functions like wildlife management—enhancing data collection while reducing labor and environmental disruption. It also raises important questions about transparency and public trust when deploying emerging tech in the field.
Local
Covington Launches AI Policy for City Operations
The City of Covington, Kentucky, has introduced a formal policy to guide the use of artificial intelligence in local government, focusing on transparency, data governance, and responsible deployment. Covington’s proactive approach is a model for smaller municipalities navigating AI adoption. By setting clear guidelines early, the city is positioning itself to use AI tools effectively while maintaining public trust and accountability.
Denver Extends Use of AI License Plate Readers
Denver Mayor Mike Johnston has extended the City and County of Denver’s contract with Flock Safety, which provides AI-powered license plate readers, for an additional five months at no extra cost. Short-term contract extensions like this suggest cities are still weighing the trade-offs between public safety tools and civil liberties. It’s a reminder that AI deployments in public spaces demand ongoing oversight and community trust.
AI Offers Practical Gains in Local Government Procurement
Cities and counties are exploring AI tools to streamline procurement tasks like drafting RFPs, summarizing vendor responses, and tracking contract deadlines. The structured nature of procurement makes it a low-risk entry point for AI adoption in local government. Procurement is one of the few areas in government where AI can deliver measurable value without overhauling policy or risking public trust. Starting here allows agencies to build internal capacity and confidence before expanding AI use to more complex domains.
Vail Uses Agentic AI to Modernize City Services
Vail, Colorado, has deployed an agentic AI platform to integrate and manage municipal services including housing, emergency response, and transportation systems. This is a promising example of a small town using AI not just for efficiency, but to rethink how services are coordinated. It raises important questions about how local governments can responsibly adopt emerging tech without overextending their capacity to govern it.
Los Angeles Adds Generative AI to City Staff Tools
The City of Los Angeles has begun integrating generative AI tools, including Google Gemini, into its standard suite of software available to city employees. The move follows similar efforts in other jurisdictions exploring AI to improve productivity and service delivery. As cities like Los Angeles adopt generative AI, the real test will be whether these tools are deployed with clear governance, transparency, and measurable public value. This is less about tech adoption and more about institutional readiness to manage change responsibly.
International
Canada Criticized for Inaction on AI Surveillance Safeguards
A University of Toronto director has warned that Canada is falling behind in regulating government use of AI surveillance technologies, raising concerns about privacy and civil liberties. This highlights a growing accountability gap in how democratic governments deploy AI tools. Without clear oversight, public trust in digital governance will erode, especially when surveillance is involved.
Business
Amazon Uses AI to Cut 14,000 Corporate Jobs
Amazon is leveraging generative AI to automate tasks in its corporate offices, resulting in the elimination of 14,000 jobs as part of a broader restructuring effort. This move underscores how AI is not just transforming frontline services but also reshaping white-collar work. Public sector leaders should take note: workforce planning and reskilling strategies must now account for AI-driven shifts in administrative and knowledge-based roles.
Public Sector
New Fund Targets $120M to Modernize Government Services
The Recoding America Fund, a new bipartisan nonprofit, has launched with a goal of raising $120 million over six years to improve state and federal government capacity. Led by former public officials from both parties, the fund will invest in talent, technology, and operational models to accelerate digital transformation and service delivery. This fund reflects a growing recognition that effective governance depends on institutional capacity, not just policy. By focusing on the “plumbing”—talent, tools, and delivery models—it offers a pragmatic, cross-partisan path to rebuilding trust in government performance amid political disruption.
Microsoft Restructures to Focus on AI and Public Sector
Microsoft has expanded Judson Althoff’s role as CEO of Commercial Business in a major reorganization aimed at better serving commercial and public sector clients through AI integration and workforce transformation. This move signals Microsoft’s intent to deepen its role as a strategic partner to governments navigating AI adoption. Public leaders should note how tech firms are aligning their leadership and services to meet the operational and workforce needs of the public sector.
Veritone Exec Discusses AI Use in Public Sector
Jon Gacek, General Manager of Veritone’s Public Sector division, outlines how the company’s aiWARE platform supports public agencies with AI-powered tools for transcription, redaction, and evidence management. This interview highlights the growing role of AI in operational tasks like document processing and digital evidence handling—areas where efficiency gains can free up staff for more complex work. But as adoption grows, so does the need for clear governance and transparency in how these tools are deployed.
Lockheed Martin Adopts Google’s Generative AI Tools
Lockheed Martin is partnering with Google Public Sector to integrate generative AI tools, including the Gemini model, into its internal workflows and operations. This collaboration signals growing acceptance of generative AI in high-stakes, regulated environments. For public agencies, it raises important questions about vendor partnerships, data governance, and the operational readiness of AI tools in mission-critical contexts.
Google Event Highlights AI Use in Public Sector
Karen Dahut shared insights from a Google Public Sector event, noting that many public sector organizations have deployed more than 10 AI agents in their operations. The rapid adoption of AI agents across public agencies signals a shift from experimentation to operational integration. Leaders should now focus on governance frameworks to ensure these tools enhance service delivery without compromising accountability.
Public Sector Lags in Global Customer Experience Rankings
KPMG’s 2025–2026 Global Customer Experience Excellence report finds that public sector organizations score 9.4% below the global average in delivering customer experience, particularly in areas enhanced by proactive and predictive AI technologies. This gap underscores the urgency for public institutions to modernize service delivery using AI not just for efficiency, but to meet rising citizen expectations. Trust and legitimacy increasingly hinge on how well governments can match the responsiveness of the private sector.
AI Streamlines Public Sector Case Management
A new report from Public Sector Network highlights how AI-powered workflows are improving case management by increasing speed, accuracy, and efficiency in public service delivery. This is a practical example of AI’s value in reducing administrative burden and improving responsiveness in government services. As agencies face growing caseloads and limited staff, smart automation can help maintain service quality without sacrificing accountability.

