Accountability Arrives: AI Meets the Jobs Question
The Public AI Brief · Issue No. 19
Friends, welcome to the new home of The Public AI Brief on Substack. After building this community on LinkedIn, I’m excited to bring you deeper analysis and a better reading experience here. If you are following me from LinkedIn, I’d love to hear what you think of the new format so I can make sure you’re getting the very best coverage of the latest developments in public sector AI.
Also, do you know someone who wants to begin a career in public service or nonprofit leadership? The Fels Institute of Government at the University of Pennsylvania is now accepting applications for Fall 2026 for the Master of Public Administration and Executive Master of Public Administration programs.
This week, two bipartisan bills would force the federal government to answer a question it has been avoiding: how many jobs is AI actually replacing? Meanwhile, states and cities navigate the messier reality of deploying AI without clear answers on workforce impact, student wellbeing, or vendor accountability.
Federal
Congress is finally asking the question agencies have been dodging: what happens to workers when AI takes their jobs? Two bipartisan bills introduced this week would require federal agencies and major private firms to report quarterly on AI-driven layoffs and workforce displacement. Sens. Mark Warner (D-Va.) and Josh Hawley (R-Mo.) aren’t waiting for voluntary transparency. Their legislation would mandate disclosure when AI replaces a federal position, a data point agencies currently don’t track and certainly don’t publicize.
The bills arrive as AI optimism collides with institutional denial. Agencies tout efficiency gains while refusing to quantify headcount impacts. The legislation won’t stop automation, but it would force a public accounting: if you’re deploying AI that eliminates roles, Congress expects you to say so. The federal workforce has operated in a data vacuum on this question. These bills would turn the lights on.
State
Forty state attorneys general drew a line this week: no federal preemption of state AI laws. The coalition called a proposed 10-year moratorium on state AI enforcement “sweeping and wholly destructive of reasonable state efforts to prevent known harms.” The moratorium, tucked into federal budget reconciliation, would leave Americans “entirely unprotected” while providing no replacement regulatory scheme. With 48 states and Puerto Rico introducing AI legislation in 2025, and 26 states adopting at least 75 new AI measures, states aren’t backing down from their consumer protection role. More than 140 civil rights and consumer protection organizations joined the opposition. The message: states have been filling the regulatory void Congress created, and they’re not stopping now.
This federal-state tension played out vividly in California, where tech companies are threatening to leave the state if legislators don’t back down from restrictive AI regulation. The message is blunt: regulate us too hard, and we’ll take our jobs and tax revenue elsewhere. OpenAI’s recent deal with California’s attorney general, under which it converts to for-profit status while settling an investigation, has critics pointing to holes in the agreement. Meanwhile, Ohio lawmakers are proposing penalties of up to $50,000 per violation when chatbots promote self-harm, a direct response to documented cases of AI tools encouraging dangerous behavior. The pattern is clear: policymakers are chasing problems that have already materialized, while industry demands regulatory forbearance.
States are building AI governance infrastructure in real time, and the GovAI Coalition Summit in San Jose this week showcased both the urgency and the improvisation. San Jose announced a new public-private partnership to bring AI skills training to any resident who wants it, a recognition that workforce transformation isn’t optional. Summit attendees described an “industrial revolution” underway in local government, with service delivery and workforce upskilling taking center stage. Meanwhile, Oakland is taking a more cautious approach, issuing an RFI that invites innovators to test AI solutions before the city commits to buying anything. Test before procurement: a refreshingly skeptical posture in a landscape dominated by vendor promises.
The challenge isn’t just capacity; it’s knowing what capacity to build. Pennsylvania formalized this approach with a Cooperative Agreement for Artificial Intelligence Advising Services with the University of Pennsylvania, enabling Penn faculty experts to serve as official advisers to state government on AI policy, strategy, risk assessment, and governance frameworks. The partnership builds on Pennsylvania’s existing AI initiatives, including a 2023 Generative AI Governing Board and an OpenAI pilot that found employees saved an average of 95 minutes per day. Rather than build redundant internal expertise, Pennsylvania is leveraging academic research capacity, a model other states are watching closely.
I’m personally enthusiastic about this approach. As the professor teaching Penn’s course on AI for Public Sector Leadership, I expect this partnership to directly enhance our Fels Institute students’ ability to engage with state government on using AI for the public good. It’s one thing to teach AI governance in a classroom. It’s another to have students working alongside state officials on real policy challenges, risk assessments, and implementation strategies. This is how you build the next generation of public sector leaders who understand both the technology and the institutional realities.
A new AI Readiness Project aims to help states, territories and tribal governments use AI responsibly through convenings, knowledge sharing and pilots. The initiative recognizes what’s increasingly obvious: most governments lack the institutional muscle to evaluate AI claims, design governance frameworks, or navigate vendor markets.
While some states build capacity, others are already deploying at scale, and the results are mixed. California’s Employment Development Department rolled out AI-powered identity verification for benefits applications, evaluating devices, IP addresses, and risk signals to combat fraud. Maryland is considering a statewide AI-enabled nonemergency phone system to ease the burden on 911; at an estimated $2.5 million over two years, it would be the first such system in the nation. Indiana took a quieter approach, using generative AI to revamp content for its notary education system. These deployments share a common thread: states are moving from pilots to production, often without waiting for federal guidance or comprehensive risk assessments.
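To make the “risk signals” idea behind systems like California’s identity verification concrete, here is a minimal sketch in Python of how signals such as device history and IP mismatch might be combined into a single fraud score that routes applications to human review. This is not EDD’s system; the signal names, weights, and threshold are all assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical weights and threshold; a production system would derive
# these from labeled fraud data rather than hand-picked constants.
WEIGHTS = {
    "new_device": 0.30,     # device not previously associated with this account
    "ip_mismatch": 0.35,    # IP geolocation far from the applicant's stated address
    "velocity_flag": 0.35,  # many recent applications from the same device or IP
}
REVIEW_THRESHOLD = 0.5

@dataclass
class ApplicationSignals:
    new_device: bool
    ip_mismatch: bool
    velocity_flag: bool

def risk_score(signals: ApplicationSignals) -> float:
    """Weighted sum of binary risk signals, between 0 and 1."""
    return (
        WEIGHTS["new_device"] * signals.new_device
        + WEIGHTS["ip_mismatch"] * signals.ip_mismatch
        + WEIGHTS["velocity_flag"] * signals.velocity_flag
    )

def route_application(signals: ApplicationSignals) -> str:
    """Send high-risk applications to manual identity verification
    rather than denying them automatically."""
    if risk_score(signals) >= REVIEW_THRESHOLD:
        return "manual_review"
    return "standard_processing"

if __name__ == "__main__":
    example = ApplicationSignals(new_device=True, ip_mismatch=True, velocity_flag=False)
    print(route_application(example))  # -> manual_review
```

The design point worth noticing: a high score triggers additional verification by a person, not an automatic denial, which is the posture agencies should insist on when fraud models touch benefits eligibility.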
Local
Cities are where AI theory meets service delivery reality. Los Angeles is ramping up AI deployments ahead of hosting the World Cup, Super Bowl, Olympics and Paralympics, using the global spotlight as both deadline and justification for accelerated adoption. The city is betting that AI can improve services under the pressure of massive events. A broader survey of cities large and small shows AI being used for everything from reducing first responder paperwork to streamlining permitting. These aren’t moonshot projects. They’re operational tools addressing immediate friction points in service delivery.
New York City’s fire department deployed AI-powered cameras at city parks for early detection of brush fires. Following a busy brush fire season in 2024, FDNY updated eight existing cameras across five locations with AI detection capabilities powered by solar panels. When the AI detects smoke or fire, it triggers a notification to the on-duty officer at FDNY’s Command Center, who assesses the situation and determines whether fire companies need to be dispatched. The system augments rather than replaces existing infrastructure and personnel, with human oversight remaining central to decision-making.
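For readers curious what this kind of human-in-the-loop pipeline looks like in code, here is a minimal Python sketch of the detect-notify-review pattern described above. It is purely illustrative, not FDNY’s actual system; the names (Detection, notify_command_center) and the confidence threshold are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical confidence threshold; a real deployment would tune this
# against false-alarm rates observed during brush fire season.
ALERT_THRESHOLD = 0.85

@dataclass
class Detection:
    camera_id: str
    location: str
    label: str          # e.g. "smoke" or "fire"
    confidence: float   # model confidence between 0 and 1
    timestamp: datetime

def notify_command_center(detection: Detection) -> None:
    """Stand-in for the notification step: in the reported workflow, the
    on-duty officer reviews the alert and decides whether to dispatch
    fire companies. The model never dispatches units on its own."""
    print(
        f"[ALERT {detection.timestamp.isoformat()}] "
        f"{detection.label} detected by camera {detection.camera_id} "
        f"at {detection.location} (confidence {detection.confidence:.2f}). "
        "Awaiting officer review before any dispatch."
    )

def handle_detection(detection: Detection) -> bool:
    """Route a model detection: alert a human reviewer only if the
    confidence clears the threshold. Returns True if an alert was sent."""
    if detection.label in {"smoke", "fire"} and detection.confidence >= ALERT_THRESHOLD:
        notify_command_center(detection)
        return True
    return False

if __name__ == "__main__":
    sample = Detection(
        camera_id="cam-03",
        location="park-sector-5",
        label="smoke",
        confidence=0.91,
        timestamp=datetime.now(timezone.utc),
    )
    handle_detection(sample)
```

The point of the sketch is structural: the software only surfaces alerts, and the dispatch decision stays with a person, mirroring how FDNY describes the system.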
Massachusetts is taking a different approach, awarding seven grants through its AI Models program to university-led research projects in manufacturing, energy and climate resilience. The state is using academia as an R&D engine before scaling AI into government operations, a model that prioritizes risk assessment and engineering rigor over speed.
Education
Higher education is discovering that AI policy is inseparable from student mental health. A panel at EDUCAUSE ‘25 highlighted how punitive, fear-driven AI policies deepen mistrust, stress, and disconnection among students. Institutions that lead with prohibition rather than pedagogy are creating anxiety, not learning. Meanwhile, a survey found that students overwhelmingly want schools to incorporate AI into learning, but they’re anxious: they fear being accused of plagiarism, letting AI think for them, and not knowing where the boundaries are. The data is clear: schools are lagging behind their students in using AI.
The K-12 reality is messier. Nearly half of educators say their district has no AI policy, just 40% of states offer AI guidance according to SETDA surveys, and only Ohio and Tennessee require districts to adopt comprehensive AI policies. Districts are taking wildly divergent approaches: Tucson convened a task force of more than 40 people over two years to develop comprehensive policy, while Arlington opted for a flexible “framework” with continuous website updates instead of formal policy. Districts emphasize that professional development must accompany policy implementation, but most lack resources for both.
In Decatur, Alabama, educators agreed at a State of Education forum that AI has already become essential for both teachers and students. The consensus: embrace it, don’t ban it. A counterpoint from a college administrator argues that students need not panic about AI automation. The key is building the right skills and relationships to turn uncertainty into advantage. But this assumes students have access to clear guidance and thoughtful policies, which most don’t. The gap between student demand, educator acceptance and institutional policy is widening, and it’s students who bear the cost of that misalignment.
Key Insights for Practitioners
Transparency isn’t optional anymore: The federal job displacement bills signal a broader shift. Stakeholders expect public accounting of AI’s workforce impact, not just efficiency narratives. Action: Begin tracking and documenting where AI is replacing, augmenting or transforming roles in your organization now, before disclosure requirements force rushed assessments.
Test-before-buy beats vendor promises: Oakland’s RFI approach, inviting solutions to prove value before procurement, should become standard practice, not an outlier. Action: Build evaluation frameworks that require vendors to demonstrate outcomes in your operational environment before you sign contracts or commit budgets.
Student mental health is an AI governance issue: Punitive AI policies in education create anxiety and disconnection, undermining learning outcomes. Action: Review your institution’s AI policies through a mental health lens. If they lead with fear rather than pedagogy, revise them to emphasize learning opportunities and clear boundaries instead of prohibition.
What I’m watching: Federal guidance on workforce transition planning as AI deployments scale. If agencies begin publicly tracking job displacement, expect states and municipalities to face pressure to do the same, and for collective bargaining agreements to start addressing AI’s role in workforce transformation.

