October 26, 2025
Originally posted on LinkedIn
Last week, I was invited to participate in the inaugural OpenAI Forum Higher Education Guild at OpenAI in San Francisco. I got to share how I’m integrating AI into Public Administration education through my AI for Public Sector Leadership course at the Fels Institute of Government at the University of Pennsylvania.
The most energizing part of attending was getting to know new colleagues from various disciplines. From neuroscience to the arts, the humanities and social sciences, and leadership initiatives, there was nothing artificial about the intelligent applications of AI by many of the people in the room. I’m looking forward to finding new opportunities to partner with Tina Austin, Katherine Elkins, Siobahn Grady, Ph.D., Jen García, Daniel Albert, and so many others.
Special thanks to Natalie Cone PMP, Alex Nawar, and Jane Kim from OpenAI for organizing such an inspiring event!
Federal
Unions Sue State, DHS Over AI Social Media Surveillance
Three major labor unions have filed a lawsuit against the Departments of State and Homeland Security, alleging that AI-powered surveillance tools are being used to monitor and suppress the online speech of noncitizens and university-affiliated visa holders. The suit claims the program violates First Amendment rights and has chilled lawful organizing and expression. This case raises urgent questions about how AI is being deployed in federal surveillance and the potential for overreach. Public agencies must tread carefully to balance national security with constitutional protections, especially when automated tools risk amplifying bias and suppressing civic participation.
DOE Seeks Partners for AI Data Centers at Oak Ridge
The Department of Energy has issued a request for proposals to build and operate AI-focused data centers and energy infrastructure at Oak Ridge National Laboratory. The RFP encourages bids from experienced private firms and consortia, and acknowledges potential environmental and community impacts. This initiative reflects the federal government’s growing commitment to AI infrastructure, but also highlights the tension between innovation and sustainability. Public leaders should watch how DOE balances technological ambition with environmental stewardship and local accountability.
Federal R&D Key to Sustaining U.S. AI Leadership
A Center for Strategic and International Studies (CSIS) blog post argues that continued federal investment in research and development is essential for maintaining U.S. competitiveness in artificial intelligence, emphasizing that public and private R&D efforts reinforce each other. This is a timely reminder that public funding doesn’t just fill gaps—it sets the foundation for long-term innovation ecosystems. For public leaders, sustained R&D investment is not optional if we want AI systems that reflect democratic values and serve public needs.
What Federal Buyers Need for AI-Driven Procurement
A recent FCW article outlines the skills, tools, and governance structures federal procurement officials need to effectively adopt AI-enabled systems, particularly as generative AI evolves toward more autonomous ‘agentic’ models. As AI tools become more autonomous, procurement professionals must evolve from compliance enforcers to strategic stewards of risk and innovation. This shift demands not just technical literacy, but also institutional support for responsible experimentation.
State
States Find Over 100 Uses for Generative AI
State governments have identified more than 100 applications for generative AI, ranging from document drafting to language translation, according to reporting from the National Association of State Chief Information Officers’ annual conference in Denver. This growing list of use cases reflects a shift from experimentation to operational integration. The challenge now is ensuring these tools align with public values as adoption scales across state agencies.
New Jersey Shares Guide for Public-Sector GenAI Projects
New Jersey has published a generative AI guide aimed at helping other states and agencies avoid common pitfalls when developing AI tools for government use. This kind of knowledge-sharing is exactly what the public sector needs. It’s a reminder that intergovernmental collaboration is a powerful tool for institutional learning.
Nevada Launches Cybersecurity Projects After Major Attack
Following a significant cyberattack, Nevada’s CIO Timothy Galluzi secured $300,000 in state funding to launch two new cybersecurity initiatives aimed at strengthening the state’s digital defenses. This is a timely example of how crisis can accelerate investment in digital resilience. State leaders should view cybersecurity not just as an IT issue, but as a core component of public trust and service continuity.
Maryland Launches Statewide Cyber Vulnerability Program
The State of Maryland has established a Vulnerability Disclosure Program (VDP) and is mandating participation in its Information Sharing and Analysis Center (MD-ISAC) for all state and local government entities. This is a smart move toward institutionalizing cybersecurity accountability across all levels of state government. By formalizing vulnerability reporting and centralizing threat intelligence, Maryland is setting a practical example of how to build cyber resilience into public infrastructure.
Local
Sonoma County Upgrades Wildfire Response with AI Tools
County of Sonoma officials are using AI and geospatial technology to improve emergency communication and evacuation planning ahead of future wildfires. Lessons from the 2017 Tubbs Fire have led to investments in platforms that integrate data from multiple sources and support real-time coordination across agencies. This case highlights how local governments can evolve from past crises by integrating AI into daily operations and emergency planning. The emphasis on interagency coordination and keeping humans in the loop reflects a mature approach to technology adoption in public safety.
San Jose Turns to AI to Ease Staff Workload
The City of San José is seeking generative AI tools that would let city employees build custom digital assistants to automate routine tasks and reduce burnout. This is a pragmatic move by a city under pressure to use AI not for flash, but to support an overstretched workforce. It’s a reminder that the most meaningful public sector AI applications often start with internal operations, not citizen-facing services.
International
OpenAI Wins Second Exclusive Contract with Australian Government
OpenAI secured a second contract with the Australian Government after being the sole company invited to bid, raising questions about procurement transparency and vendor competition. Single-vendor deals can expedite adoption but risk undermining public trust and market fairness. Governments need clear, accountable frameworks for AI procurement that balance innovation with open competition.
Nearly Half of Canadian Workers Fear AI Job Loss
A recent survey found that 46% of employed Canadian job seekers are concerned that artificial intelligence could eliminate their current roles. Despite this anxiety, many respondents support using generative AI tools during the job search process. This data highlights a growing tension in the workforce: AI is seen both as a threat to job security and a tool for career advancement. Public sector leaders should take note that addressing these fears through transparent workforce planning and upskilling initiatives will be critical to maintaining trust and stability.
AI Innovation Risks Being Co-opted by Nationalist Agendas
A blog post by Vinay Lohar explores how artificial intelligence, once a symbol of global innovation, is increasingly being shaped by nationalist policies and governance failures, particularly in developing nations. The piece argues that wealthier countries with strong nationalist leaders may attract global talent while limiting open collaboration. This perspective underscores a growing tension: as AI becomes more strategic, governments face pressure to balance openness with control. For public leaders, the challenge is to foster innovation ecosystems that are both globally connected and locally accountable.
Education
AI Surveillance in NY Classrooms Raises Privacy Concerns
The Plainedge Union Free School District deployed the AI-powered XSponse surveillance system in classrooms without public notice, prompting criticism from the ACLU over student privacy and transparency. This case underscores the growing tension between safety technologies and civil liberties in schools. Public institutions must prioritize transparency and community trust when introducing AI tools, especially those that monitor vulnerable populations like students.
Public Interest Tech Law Emerges as Career Path
A Harvard Law School event highlights the growing need for legal professionals focused on public interest technology, particularly in response to AI and data governance challenges. As governments grapple with AI’s societal impacts, there’s a clear need for legal minds who understand both technology and public values. This signals a shift in legal education toward roles that bridge law, policy, and digital ethics in service of the public good.
Rhode Island College Opens Cybersecurity Training Center
Rhode Island College has launched a new cyber range to simulate real-world cyberattacks and train students in high-pressure incident response. The facility uses IBM’s Cloud Range platform and is part of the college’s Institute for Cybersecurity & Emerging Technologies. This investment reflects a growing recognition that cybersecurity readiness must start at the local and institutional level. By preparing students with hands-on, scenario-based training, Rhode Island is building the kind of workforce resilience that public agencies increasingly depend on in the face of escalating digital threats.
Public Sector
New Report Urges Protections for Public Workers Amid AI Adoption
A recent report outlines strategies for governments and labor unions to collaborate on responsible AI implementation, emphasizing worker protections, retraining, and transparency in deployment decisions. It is a timely reminder that AI adoption in government isn’t just a tech issue; it’s a workforce issue. Public leaders need to prioritize inclusive planning and labor engagement to ensure AI enhances, rather than erodes, public service jobs.
Panel Discusses AI Evaluation in Public Sector
A recent panel hosted by Think Digital Partners brought together experts to discuss how governments can evaluate and monitor AI systems used in public services. As public agencies adopt AI, the conversation is rightly shifting toward oversight and quality assurance. Panels like this help surface practical approaches to responsible deployment, something every government leader should be thinking about now, not later.
How AI Is Reshaping Utilities and Energy Operations
A recent article from FTI Consulting outlines how utilities and energy companies are leveraging AI to streamline operations, improve customer service, and enhance grid reliability. Case examples include Entergy’s deployment of AI tools to optimize service delivery and reduce operational costs. While the focus is on private utilities, the lessons are highly relevant for public energy providers and regulators. AI’s role in infrastructure resilience and service efficiency should be on the radar of any public leader overseeing critical systems.
AI Chatbots Face Growing Legal and Compliance Scrutiny
As generative AI chatbots become more common, legal experts warn of increasing regulatory and compliance risks, particularly around data privacy, consumer protection, and misinformation. State and local governments are among those navigating how to deploy these tools responsibly. Public agencies experimenting with AI chatbots must now weigh not just technical feasibility but also legal exposure. This is a reminder that innovation in government must be accompanied by rigorous oversight and clear accountability frameworks.