<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Public AI Brief]]></title><description><![CDATA[The Public AI Brief explores how artificial intelligence is transforming government, policy, and public administration through practical insights for public leaders.]]></description><link>https://brief.dylanhayden.com</link><image><url>https://substackcdn.com/image/fetch/$s_!oITZ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F940a9714-b1e7-4e9f-b268-15077469c0ad_1024x1024.png</url><title>The Public AI Brief</title><link>https://brief.dylanhayden.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 20 Apr 2026 07:07:26 GMT</lastBuildDate><atom:link href="https://brief.dylanhayden.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dylan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[dylan@dylanhayden.com]]></webMaster><itunes:owner><itunes:email><![CDATA[dylan@dylanhayden.com]]></itunes:email><itunes:name><![CDATA[Dylan Hayden]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dylan Hayden]]></itunes:author><googleplay:owner><![CDATA[dylan@dylanhayden.com]]></googleplay:owner><googleplay:email><![CDATA[dylan@dylanhayden.com]]></googleplay:email><googleplay:author><![CDATA[Dylan Hayden]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[To Serve Man: What a 1962 Twilight Zone Episode Tells Us About the AI Industry in 2026]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 
27]]></description><link>https://brief.dylanhayden.com/p/to-serve-man-what-a-1962-twilight</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/to-serve-man-what-a-1962-twilight</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 11 Apr 2026 13:05:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ab8d228a-9205-4449-b777-ae1913b1b92f_3158x2400.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div><hr></div><p>I have been away from this newsletter for a few months. Not because there has been nothing to say about artificial intelligence in the public sector. Quite the opposite. I have been watching the news accumulate, thinking about what kind of publication this should be, and trying to find a frame worth writing about.</p><p>This week, I found one.</p><p>A colleague of mine, an economist at Penn, raised a question that has been bothering me for a while. I&#8217;m paraphrasing, but her question was this: if these AI tools are so valuable, why are the companies building them burning through cash at a rate that would sink any normal business? They are not making money. In most cases they are not close to making money. And yet they keep building, keep offering, keep giving things away. What exactly is the business model here?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>It seems like some economists are just now catching up to questions science fiction was asking over sixty years ago. And that question sent me back to a show I have thought about many times since I was a kid.</p><div><hr></div><p><strong>The Episode</strong></p><p>In March of 1962, a CBS anthology series called <em>The Twilight Zone</em> aired an episode titled &#8220;To Serve Man.&#8221; If you have not seen it, the setup is this: an alien species called the Kanamits arrives on Earth. They are enormous, impassive, and apparently benevolent. They end world hunger. They share technology that eliminates the need for weapons. They offer free trips to their home planet. Humanity, understandably, is overwhelmed with gratitude.</p><p>The Kanamits leave behind a book. Cryptographers work furiously to decode it. They manage the title first: <em>To Serve Man.</em> Reassured, people begin boarding spacecraft for the Kanamit home world in large numbers.</p><p>Then, as the episode&#8217;s narrator is himself about to board, a colleague comes running across the tarmac. She has decoded the rest of the book.</p><p>"Mr. Chambers, don't get on that ship! 
The book &#8212; To Serve Man &#8212; it's a cookbook!"</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tn3W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tn3W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 424w, https://substackcdn.com/image/fetch/$s_!Tn3W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 848w, https://substackcdn.com/image/fetch/$s_!Tn3W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 1272w, https://substackcdn.com/image/fetch/$s_!Tn3W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Tn3W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp" width="363" height="268.4594594594595" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ff557336-87be-4c53-9909-371008902e29_407x301.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:301,&quot;width&quot;:407,&quot;resizeWidth&quot;:363,&quot;bytes&quot;:2296940,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.dylanhayden.com/i/193843319?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Tn3W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 424w, https://substackcdn.com/image/fetch/$s_!Tn3W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 848w, https://substackcdn.com/image/fetch/$s_!Tn3W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 1272w, https://substackcdn.com/image/fetch/$s_!Tn3W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff557336-87be-4c53-9909-371008902e29_407x301.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The question I have: &#8220;Are we being fattened up to be devoured?&#8221;</p><p>Rod Serling&#8217;s <em>Twilight Zone</em> was, at its best, a machine for making people uncomfortable about the present by disguising it as the future. The show ran from 1959 to 1964 and managed to smuggle serious commentary about McCarthyism, nuclear anxiety, racism, conformity, and the dehumanizing effects of technology past network censors and corporate sponsors who might have killed it outright had it been less clever about the packaging. Serling fought those fights constantly. The science fiction frame was not just an aesthetic choice. It was a survival strategy.</p><p>We don&#8217;t have anything quite like it today. 
We have prestige television with bigger budgets and longer episode counts, but very little that does what the <em>Twilight Zone</em> did at its peak: take a mainstream audience, meet them where they are, and then turn the lights on in a room they did not realize they were sitting in.</p><p>I have been thinking about that room a lot lately.</p><div><hr></div><p><strong>The Cookbook</strong></p><p>Here is the thing about the Kanamits that makes the episode work as horror rather than just plot twist: they were not lying. The book really was about serving man. It said exactly what it said on the cover. The problem was not deception in the conventional sense. The problem was that humanity accepted surface-level evidence of benevolence and stopped asking harder questions. What kind of serving? Serving whom? Serving toward what end?</p><p>My economist colleague&#8217;s question deserves the same scrutiny applied to the current AI moment.</p><p>The major AI companies are not profitable on their core products. They are offering tools at prices that do not reflect actual costs, in some cases offering them free, and absorbing staggering losses in the process. This is not a secret. It is discussed openly in financial reporting and in industry media, occasionally with a kind of admiring wonder at the boldness of the bet. The assumption embedded in most of that coverage is that the losses are temporary, that scale will eventually produce unit economics that make the math work.</p><p>What gets less attention is that the acclimation to higher costs is already underway, and it is happening on multiple fronts simultaneously. As the saying goes, &#8220;if it&#8217;s free, you&#8217;re the product.&#8221; AI companies have made this explicit. Don&#8217;t want your data used to train their frontier models? Pay for the privilege of privacy. Want real functionality rather than the stripped-down version? That will run you upwards of $200 a month. 
That&#8217;s more than a CrossFit gym membership, more than most streaming subscriptions combined, and a price point that has somehow become normalized in the span of a few years.</p><p>But the costs don&#8217;t stop at the subscription price. Even if you never use these tools, you are absorbing them. <a href="https://www.consumerreports.org/data-centers/ai-data-centers-impact-on-electric-bills-water-and-more-a1040338678/">Residential electricity prices jumped 7.1 percent in 2025</a>, more than double the inflation rate, and topped 20 percent in some states. In the PJM grid region, which covers 13 states from Illinois to the Atlantic Coast, <a href="https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/">data centers drove an estimated $9.3 billion increase in capacity market costs</a>, costs that are then socialized across every household and business on the grid. <a href="https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/">Wholesale electricity prices near data center clusters have risen as much as 267 percent since 2020</a>. I live in Maryland. Just this month, my utility bill went up by $100. I didn&#8217;t get a vote on that.</p><p>The mechanisms are different but the logic is the same. None of this is happening by accident. These are policy choices, about who bears the cost of infrastructure, about how public resources get allocated, about which industries get subsidized and which populations absorb the tab. I tell my students that public administration is fundamentally about who gets what, when, and how. Right now, the answer to that question is being written into utility rate structures, federal procurement decisions, and agency budget lines, mostly without the public realizing it is being asked.</p><p>My colleague&#8217;s answer is that the industry losses are not a problem to be solved later. They are the strategy right now. The goal is integration. 
The goal is dependency. The goal is to get these tools woven into workflows, institutions, and human habits deeply enough that the cost of leaving becomes prohibitive. At that point, the pricing conversation changes entirely, and it changes in favor of the people who own the infrastructure.</p><p>This is not a novel observation about technology markets. It is, more or less, the history of enterprise software. But the speed and depth of the current integration push is different in ways that matter for government specifically.</p><div><hr></div><p><strong>This Week&#8217;s News Feed</strong></p><p>I don&#8217;t need to speculate about the direction of travel. This week&#8217;s news is the evidence.</p><p>The General Services Administration <a href="https://www.nextgov.com/artificial-intelligence/2026/04/gsa-require-agencies-pay-usai-after-launching-it-free-service/412678/">launched a platform called USAi last year</a>, billed explicitly as a way to &#8220;accelerate AI adoption across the government.&#8221; It was free. This week came the announcement that GSA will now require agencies to pay for it. That arc, from free adoption accelerant to paid dependency, took less than a year. The cookbook was right there in the title. It was called an adoption tool. Nobody asked: adoption toward what, and at whose eventual expense?</p><p>The CIA, <a href="https://www.govexec.com/technology/2026/04/cia-plans-ai-coworkers-deputy-director-says/412757/">according to reporting this week</a>, recently used AI to generate an intelligence report for the first time. Deputy Director Michael Ellis announced that agency staff will eventually manage teams of AI agents. The <a href="https://www.nextgov.com/artificial-intelligence/2026/04/secret-service-embedding-ai-experts-across-agency/412646/">Secret Service is embedding AI experts across the entire organization</a>. 
The <a href="https://www.govexec.com/technology/2026/04/vas-fy27-budget-proposal-seeks-funding-additional-ai-adoption/412699/">VA is requesting a 10.9% budget increase</a> driven primarily by what its own documents call &#8220;AI Infrastructure.&#8221; <a href="https://www.govtech.com/artificial-intelligence/new-orleans-is-latest-to-answer-nonemergency-calls-with-ai">New Orleans has trained an AI agent on three years of 311 call data</a> and is preparing to hand it the phones. <a href="https://www.govtech.com/artificial-intelligence/after-pilot-nys-expands-staff-ai-training-to-over-100-000">New York State is scaling AI training to over 100,000 employees</a>.</p><p>Each of these stories is being written as progress. And in many ways they are progress. I am not making the argument that these tools have no value. I use them constantly, I teach with them, and I have built programs around helping public servants use them well.</p><p>But <a href="https://www.theregister.com/2026/04/10/ai_roi_kpmg/">research from KPMG and The Register this week</a> found that organizational leaders are continuing to increase AI spending even when they cannot demonstrate return on investment. That is not a story about confident investment in a proven technology. That is a story about momentum that has decoupled from evidence. And when spending decouples from evidence in government, it is worth asking who benefits from that momentum and who is building dependency on whom.</p><div><hr></div><p><strong>What the Twilight Zone Understood</strong></p><p>Rod Serling was not a pessimist about technology. He was a skeptic about power, and about the human tendency to accept gifts without examining the terms. 
The <em>Twilight Zone</em> was full of wishes granted in ways that destroyed the wisher, of machines that served their stated purpose perfectly while missing the point entirely, of futures that arrived exactly as advertised and turned out to be unbearable.</p><p>The show trusted its audience to sit with that discomfort. It did not resolve the tension. It presented the evidence and let people draw their own conclusions. That is a rarer thing than it sounds.</p><p>I&#8217;m going to try to do something in that spirit going forward. Just the lens, applied honestly to what is actually happening.</p><p>This week, what is actually happening is that governments at every level are accelerating their integration into AI systems at a moment when the long-term pricing and dependency implications of that integration remain almost entirely unexamined. The tools are real. The value is real in many cases. The Kanamits were not wrong that they intended to serve man. They were just more literal about it than anyone thought to ask.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IhsY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IhsY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 424w, https://substackcdn.com/image/fetch/$s_!IhsY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!IhsY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!IhsY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IhsY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg" width="403" height="303.41698841698843" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:195,&quot;width&quot;:259,&quot;resizeWidth&quot;:403,&quot;bytes&quot;:6208,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.dylanhayden.com/i/193843319?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!IhsY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!IhsY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 848w, https://substackcdn.com/image/fetch/$s_!IhsY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!IhsY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8f0606c-6763-4142-9964-e474495c19e7_259x195.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>It is worth reading further before we board the ship.</p>]]></content:encoded></item><item><title><![CDATA[The Workforce Question Agencies Can't Dodge]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 
26]]></description><link>https://brief.dylanhayden.com/p/the-workforce-question-agencies-cant</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/the-workforce-question-agencies-cant</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Mon, 19 Jan 2026 20:31:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bacf8fbf-c70d-4c02-9222-f90d36631dcd_1200x675.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Congress is finally asking what agencies have been avoiding: can you execute an AI strategy when your technology teams are walking out the door? That tension framed this week&#8217;s developments, as federal adoption moves forward despite unprecedented workforce disruption, states navigate leadership churn while advancing governance models, and local governments demonstrate that practical AI deployment doesn&#8217;t require massive teams.</p><p>Federal workforce losses from DOGE-mandated reductions hit technology teams hardest, creating a paradox where agencies must accelerate AI adoption with diminishing human capacity to implement it. Meanwhile, states are shuffling their own leadership deck, with Texas and New York replacing chief AI officers as they attempt regulatory pushback against federal preemption. At the local level, cities are proving that AI for planning and permitting doesn&#8217;t need armies of specialists&#8212;just clear use cases and vendor partnerships.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>This Week&#8217;s Key Developments:</h3><ul><li><p><strong>Federal workforce crisis meets AI mandates</strong>: Democrats question how agencies can execute AI Action Plan while losing technical talent</p></li><li><p><strong>90% of agencies using AI</strong>: Google survey reveals adoption widespread but stuck in pilot purgatory due to security fears and skills gaps</p></li><li><p><strong>Fraud detection proving value</strong>: PRAC&#8217;s AI engine trained on pandemic data could have flagged &#8220;tens of billions&#8221; before disbursement</p></li><li><p><strong>State CIO musical chairs continues</strong>: Texas, New York shuffle AI and technology leadership amid governance buildout</p></li><li><p><strong>Wyoming approves massive data center</strong>: What could become largest U.S. facility raises infrastructure cost questions</p></li><li><p><strong>Local governments deploy AI for permitting</strong>: Honolulu, Pueblo among cities using automation to speed planning processes</p></li></ul><div><hr></div><h2>Federal</h2><h3>The Talent Exodus Nobody&#8217;s Solving</h3><p>House Democrats pressed White House Office of Science and Technology Policy Director Michael Kratsios on an uncomfortable question during Wednesday&#8217;s hearing: how do you build a <a href="https://www.nextgov.com/artificial-intelligence/2026/01/democrats-question-white-house-tech-lead-how-workforce-churn-will-impact-ai-action-plan/410689/">tech-centric government workforce to advance AI</a> when you&#8217;ve spent 2025 firing that workforce? Rep. 
Haley Stevens pointed to NIST&#8217;s proposed $325 million budget cut, resulting in approximately 500 job losses, arguing the cuts &#8220;weaken cybersecurity and privacy standards&#8221; and &#8220;limit advanced manufacturing, physical infrastructure and resilience innovation.&#8221; Rep. George Whitesides called the science workforce attacks &#8220;reprehensible,&#8221; noting they target &#8220;one of the core pillars of American strength.&#8221;</p><p>The administration&#8217;s answer appears to be Tech Force, the two-year rotation program for private sector technologists that Kratsios touted as receiving interest from 35,000 Americans. But that optimism collides with <a href="https://www.nextgov.com/artificial-intelligence/2026/01/report-workforce-shortages-security-fears-among-biggest-hindrances-agency-ai-adoption/410627/">new survey data from Google Public Sector</a> showing 55% of federal respondents cite lack of employees with skills and training as a major barrier to AI adoption. The survey found nearly 90% of agencies are &#8220;planning to or are already using AI,&#8221; but only 12% of civilian agencies and 2% of defense agencies report completed AI adoption plans. Security and adversarial risks remain the single biggest blocker at 48%, followed by reliability concerns at 35%.</p><p>The mismatch is stark. Agencies need sustained technical capacity to move AI from pilots to production, but DOGE-era workforce reductions have disproportionately affected mid-career technologists who bridge legacy systems knowledge with modern capabilities. Tech Force may inject talent, but two-year rotations don&#8217;t build institutional knowledge or maintain systems long-term. 
The question isn&#8217;t whether agencies can start AI projects&#8212;it&#8217;s whether they can sustain them.</p><h3>When Fraud Detection Actually Works</h3><p>While agencies struggle with AI strategy, the <a href="https://federalnewsnetwork.com/big-data/2026/01/pandemic-watchdog-builds-fraud-prevention-ai-engine-trained-on-millions-of-covid-program-payments/">Pandemic Response Accountability Committee demonstrated concrete results</a> with its AI-powered fraud prevention engine. Trained on 5 million pandemic-era relief applications, the system can review 20,000 applications per second and flag anomalies before payment. PRAC Executive Director Ken Dieffenbach told House lawmakers the engine would have flagged &#8220;at least tens of billions of dollars&#8221; in fraudulent claims had it existed in March 2020.</p><p>The engine combines unsupervised machine learning to detect anomalies, supervised models identifying patterns from known fraud cases, and rules-based flags catching invalid Social Security and employer identification numbers. Small anomalies often reveal hidden connections like shared bank accounts among supposedly independent applicants. <a href="https://fedscoop.com/government-fraud-ai-data-tech-agencies/">Treasury&#8217;s Do Not Pay system is expanding access</a> across agencies, with full utilization expected by fiscal year end&#8212;up from just 4% of programs having full access in FY 2014.</p><p>This represents the rare AI success story in government: clear problem definition, quality training data, measurable outcomes. GAO estimates the federal government loses $233 to $521 billion annually to fraud. PRAC has helped recover $500 million so far, a fraction of what pre-payment vetting could prevent. 
The challenge now is finding a permanent home for PRAC&#8217;s analytics capabilities before the committee sunsets, ensuring these tools outlive the crisis that created them.</p><h3>Budget Realities Check AI Ambitions</h3><p>Congress allocated <a href="https://fedscoop.com/technology-modernization-fund-2026-approps-budget/">$5 million for the Technology Modernization Fund in fiscal 2026 funding bills</a>&#8212;far less than the administration requested and a fraction of what agencies need for AI infrastructure. The <a href="https://fedscoop.com/white-house-tech-budget-doge-ai-it-cloud/">executive branch budget pact</a> includes IT investments but reduces funding for the U.S. DOGE Service to less than half the request. Congressional appropriators directed OMB to produce guidance on AI-ready datasets at agencies and cloud infrastructure adoption, but didn&#8217;t provide explicit TMF reauthorization.</p><p>The funding gap compounds workforce challenges. <a href="https://www.nextgov.com/artificial-intelligence/2026/01/onegov-deals-helping-expand-agencies-ai-adoption-gsa-official-says/410597/">GSA&#8217;s OneGov initiative is helping expand AI adoption</a> by providing procurement pathways for agencies with &#8220;early, light contact&#8221; with AI technologies. Chief AI Officer Zach Whitman noted the deals help agencies that may lack expertise acquire tools at negotiated prices. But procurement shortcuts don&#8217;t solve the fundamental constraint: agencies need sustained funding for infrastructure, training, and technical staff to move beyond pilots.</p><div><hr></div><h2>State</h2><h3>The CIO Shuffle Continues</h3><p>Texas <a href="https://statescoop.com/texas-tony-sauerhoff-chief-ai-innovation-officer-interim-cio/">named Chief AI and Innovation Officer Tony Sauerhoff as interim CIO</a> after Amanda Crawford was <a href="https://statescoop.com/texas-cio-amanda-crawford-insurance-commissioner/">appointed to head the state&#8217;s Insurance Department</a>. 
Sauerhoff became Texas&#8217; first chief AI officer in 2024, making him a rare official holding both technology leadership titles simultaneously. The move comes as Texas continues building out AI governance structures while Crawford transitions to insurance regulation, an odd lateral move for a sitting CIO.</p><p>New York replaced its chief AI officer less than a year after creating the position, <a href="https://statescoop.com/new-york-state-chief-ai-officer-eleonore-fournier-tombs/">appointing Eleonore Fournier-Tombs</a>, a United Nations University researcher specializing in AI governance and climate adaptation. The state also named a new chief digital officer. Delaware&#8217;s <a href="https://statescoop.com/delaware-cio-greg-lane/">CIO Greg Lane resigned</a> after serving since June 2023, with Chief of Administration Jordan Schulties serving as interim replacement.</p><p>The pattern is concerning. States are building AI governance capacity while simultaneously losing the leaders meant to execute that vision. CIO tenure has always been short&#8212;averaging three to four years&#8212;but the AI governance layer adds complexity that benefits from continuity. When chief AI officers turn over annually, institutional knowledge evaporates. Mississippi CIO Craig Orgeron <a href="https://www.govtech.com/artificial-intelligence/theres-a-lot-of-hype-mississippis-cio-on-ai-growth">captured the challenge</a>: &#8220;There&#8217;s a lot of hype&#8221; around AI, but success depends on building foundations state government needs to scale emerging technologies. That foundation-building requires sustained leadership.</p><h3>States Push Back on Regulation</h3><p><a href="https://www.route-fifty.com/artificial-intelligence/2026/01/new-york-lawmakers-are-ready-try-regulating-ai-industry-again/410702/">New York lawmakers are preparing another attempt</a> to regulate AI after previous bills were watered down by industry lobbying. 
The state&#8217;s 2026 legislative agenda includes ambitious plans despite past failures to rein in the powerful industry. California Attorney General Rob Bonta <a href="https://www.route-fifty.com/artificial-intelligence/2026/01/california-investigates-elon-musks-ai-company-after-avalanche-complaints-about-sexual-content/410704/">opened an investigation</a> into Elon Musk&#8217;s xAI after an &#8220;avalanche&#8221; of complaints about sexual content generated by the company&#8217;s AI image editing tool, examining whether it violates California law.</p><p><a href="https://www.route-fifty.com/artificial-intelligence/2026/01/kentucky-attorney-generals-lawsuit-says-ai-company-preys-youth/410581/">Kentucky&#8217;s attorney general sued an AI company</a>, alleging it &#8220;preys&#8221; on youth, claiming violations of the Kentucky Consumer Protection Act and Kentucky Consumer Data Protection Act. <a href="https://www.route-fifty.com/digital-government/2026/01/virginia-social-media-law-takes-effect-amid-legal-challenge/410642/">Virginia&#8217;s social media law took effect</a> limiting minors to one hour daily on platforms without parental approval via age verification, though an industry group is suing to block it. <a href="https://www.route-fifty.com/management/2026/01/tech-industry-group-seeks-block-reworked-arkansas-social-media-law/410640/">Arkansas faces similar legal challenges</a> to its reworked social media law requiring age verification.</p><p>The state regulatory push continues despite Trump administration threats to preempt state AI laws. States are testing different approaches&#8212;content regulation, youth protection, consumer rights&#8212;creating a patchwork that industry opposes but that reflects genuine attempts to address harms that federal inaction has ignored. 
Whether states can maintain this authority against federal preemption attempts remains the year&#8217;s defining question.</p><h3>Data Centers: Economic Promise, Infrastructure Reality</h3><p><a href="https://www.route-fifty.com/infrastructure/2026/01/wyoming-county-approves-construction-what-could-become-largest-data-center-us/410671/">Wyoming County approved construction</a> of what could become the largest data center in the United States. The project could eventually consume electricity equivalent to 10 nuclear power plants, boosting Wyoming&#8217;s energy industry while challenging emissions limits and stressing water supplies. The economic development promise is significant, but the infrastructure demands raise questions about who bears the costs.</p><p>The pattern repeats nationwide. <a href="https://www.govtech.com/artificial-intelligence/6-6b-data-center-a-major-investment-in-independence-kan">Kansas approved a $6.6 billion data center</a> in Independence, with construction starting this summer and continuing three to five years. These projects promise tax revenue and jobs, but <a href="https://www.govtech.com/artificial-intelligence/data-center-projects-and-the-benefits-they-promise-the-public">Mississippi&#8217;s CIO noted</a> such gains aren&#8217;t always easy to quantify. Policymakers can push developers to deliver, but often lack leverage once projects are approved.</p><p><a href="https://www.route-fifty.com/infrastructure/2026/01/nj-lawmakers-ok-plan-charge-data-centers-spiking-electric-costs/410580/">New Jersey took a different approach</a>, advancing legislation to charge data centers new tariffs for driving higher electric costs. The move acknowledges infrastructure strain rather than just celebrating economic development. 
As AI adoption accelerates data center demand, states must balance growth promises against grid capacity, water availability, and community impact.</p><div><hr></div><h2>Local</h2><h3>Practical AI: Permitting and Planning</h3><p><a href="https://www.govtech.com/artificial-intelligence/honolulu-is-among-cities-bringing-ai-to-planning-permitting">Honolulu is using CivCheck&#8217;s platform</a> to review applications and speed up the permitting process, joining cities bringing AI to planning and permitting workflows. <a href="https://www.govtech.com/artificial-intelligence/pueblo-county-colo-joins-localities-using-ai-for-permitting">Pueblo County, Colorado partnered with Blitz AI</a> to make building permit processes more efficient through integration that automates formerly time-consuming manual application reviews. Bellevue, Washington already uses AI permitting tools, and Louisville, Kentucky will soon pilot them.</p><p>These deployments share characteristics: clear use cases, vendor partnerships, measurable efficiency gains. No massive internal AI teams required. No lengthy governance debates. Just practical automation of document-heavy processes that consume staff time without adding judgment value. The AI handles initial review, flagging issues for human assessment. Staff focus on edge cases and applicant interaction rather than checkbox verification.</p><p><a href="https://www.govtech.com/artificial-intelligence/washington-county-considers-formalizing-ai-surveillance-guardrails">Thurston County, Washington is considering formalizing AI surveillance guardrails</a> through a draft ordinance regulating the county&#8217;s acquisition and use of AI-enabled surveillance technology. 
The approach balances deployment with oversight, acknowledging that technology decisions have civil liberties implications.</p><h3>When Students Lose Access</h3><p><a href="https://www.route-fifty.com/artificial-intelligence/2026/01/denver-schools-blocking-students-access-chatgpt-over-concerns-about-group-chats-adult-content/410670/">Denver schools are blocking student access to ChatGPT</a> over concerns about the chatbot&#8217;s new features enabling group chats and potentially exposing students to content related to self-harm, violence, and cyberbullying. District officials cited the popular AI tool&#8217;s evolution beyond its original question-and-answer format into social features that raise student safety concerns.</p><p>The move illustrates the whiplash schools face with consumer AI tools. Districts initially blocked ChatGPT over academic integrity concerns, then some reconsidered as AI literacy became important, and now safety concerns from feature additions prompt new restrictions. Schools lack control over product roadmaps, forcing reactive policy changes that confuse students and teachers. The challenge isn&#8217;t AI itself but rapidly evolving consumer products entering educational contexts without guardrails designed for minors.</p><div><hr></div><h2>Key Insights for Practitioners</h2><p><strong>Workforce capacity trumps AI strategy</strong>: Survey data showing 55% of agencies cite workforce skills gaps as a major AI barrier while simultaneously losing technical staff through DOGE reductions reveals a fundamental contradiction. No amount of strategic planning compensates for lack of people who can execute.</p><p>Action: Audit your current technical capacity against AI adoption goals. If gaps exist, determine whether you&#8217;re building internal expertise, relying on contractors, or pausing ambitions until staffing stabilizes. 
Wishful thinking about Tech Force or vendor solutions won&#8217;t maintain production systems.</p><p><strong>Fraud detection offers the AI blueprint government needs</strong>: PRAC&#8217;s fraud engine demonstrates what successful government AI looks like&#8212;clear problem definition, quality training data, measurable outcomes, pre-payment intervention rather than post-payment recovery. The model applies beyond fraud to any high-volume decision process where patterns matter more than individual judgment.</p><p>Action: Identify processes in your organization that involve reviewing large volumes of applications, claims, or requests where anomaly detection could flag issues before action. These are your highest-value AI targets, not chatbots or general productivity tools.</p><p><strong>Local deployment doesn&#8217;t require federal capacity</strong>: Cities implementing AI for permitting aren&#8217;t waiting for federal guidance, state frameworks, or massive internal teams. They&#8217;re partnering with vendors for narrow use cases that deliver measurable time savings.</p><p>Action: Stop treating AI as enterprise infrastructure requiring organization-wide strategy. Start with specific workflow pain points where automation reduces manual review time. Vendor partnerships accelerate deployment, but ensure contracts include performance metrics and exit clauses.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> How agencies manage technical staff losses through 2026 as AI ambitions scale. If DOGE reductions continue targeting technology teams while AI Action Plan demands accelerate, something breaks. Either agencies admit they lack capacity and pause deployment, or they proceed with insufficient staff and face system failures that undermine public trust.</p><div><hr></div><p>What workforce challenges are you seeing in your organization as AI expectations increase? How are you balancing adoption ambitions against technical capacity? 
Share your experiences in the comments.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Agentic AI Arrives, Data Centers Face Reckoning]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 24]]></description><link>https://brief.dylanhayden.com/p/agentic-ai-arrives-data-centers-face</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/agentic-ai-arrives-data-centers-face</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 10 Jan 2026 17:23:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c64ab6da-bc49-49c4-9009-0da84249077e_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I hope everyone had a happy holiday season and a chance to recharge. Happy New Year! If the first week of 2026 is any indication, this year will be anything but quiet in the AI world. We&#8217;re seeing a major shift from AI as experimental tool to AI as autonomous agent, while communities across the country are drawing hard lines on data center expansion. 
The public sector is caught between embracing AI&#8217;s promise and managing its very real infrastructure and social costs.</p><p>While I took a short break from writing, one thing I noticed over the holidays was an uptick in the discussions around agentic AI, likely because everyone wanted to get in their &#8220;2026 will be the Year of&#8230;&#8221; predictions. Agentic AI, the capability for AI systems to take action independently rather than just respond to prompts, is moving from concept to reality in government operations. At the same time, the data center boom that powers all this AI has been triggering a backlash that&#8217;s impossible to ignore. From New Jersey charging data centers for spiking electric costs to Michigan townships imposing moratoriums, the &#8220;build it and prosperity will follow&#8221; narrative is colliding with resident concerns about infrastructure strain, environmental impact, and who actually benefits.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>This Week&#8217;s Key Developments:</h3><ul><li><p><strong>Federal agencies embrace agentic AI</strong> for planning, casework, and operational efficiency</p></li><li><p><strong>Industry predicts 2026 as breakthrough year</strong> for AI agents that act rather than just respond</p></li><li><p><strong>Data center backlash intensifies</strong> with New Jersey tariffs, Michigan moratorium, Baltimore County permit halt</p></li><li><p><strong>Louisville launches AI permitting pilot</strong> while hiring public-sector AI leader focused on affordable housing</p></li><li><p><strong>States move to protect children</strong> from AI in toys and excessive screen time</p></li><li><p><strong>Army creates new AI/ML career path</strong> for officers seeking specialization</p></li></ul><div><hr></div><h2>Federal</h2><h3>Agentic AI Moves From Concept to Operations</h3><p>The federal government is shifting from experimenting with AI to deploying systems that can act independently. <a href="https://federalnewsnetwork.com/artificial-intelligence/2026/01/from-gift-lists-to-government-systems-agentic-ai-is-changing-how-we-plan-and-prepare/">Agentic AI is changing how government plans and prepares</a>, moving beyond the chatbot model to systems that can manage complex workflows without constant human oversight. 
Industry leaders are calling <a href="https://www.route-fifty.com/artificial-intelligence/2026/01/2026-set-be-year-agentic-ai-industry-predicts/410473/">2026 the year of agentic AI</a>, with major technology companies reporting client demand for AI solutions that can handle end-to-end processes rather than just answer questions.</p><p>This isn&#8217;t theoretical. The <a href="https://www.nextgov.com/artificial-intelligence/2026/01/how-government-publishing-office-using-ai-enhance-operations/410520/">Government Publishing Office is using AI to enhance operations</a> by converting internal documents into AI-generated podcasts through Google&#8217;s NotebookLM, making information more accessible for its workforce. <a href="https://www.govtech.com/artificial-intelligence/utah-looks-to-ai-to-make-prescription-renewals-more-efficient">Utah&#8217;s Office of AI Policy is working with an AI-powered health platform</a> to streamline prescription renewals for residents with chronic conditions, reducing administrative burden while maintaining safety protocols.</p><p>The most compelling case for agentic AI may be in overwhelmed systems. The <a href="https://federalnewsnetwork.com/management/2026/01/the-snap-program-is-under-pressure-and-states-are-drowning-in-paper-as-new-mandates-kick-in/">SNAP program faces a paper crisis as new federal mandates kick in</a>, with caseworkers drowning in documentation requirements. AI systems that can process applications, flag issues, and route cases could help caseworkers focus on the human judgment calls that matter most. The <a href="https://federalnewsnetwork.com/ask-the-cio/2026/01/dlas-foundation-to-use-ai-is-built-on-training-platforms/">Defense Logistics Agency is building its AI foundation on continuous training and integrated platforms</a>, ensuring every employee can work effectively with AI tools rather than treating them as specialized technical capabilities.</p><p>The workforce implications are already visible. 
The <a href="https://federalnewsnetwork.com/army/2025/12/army-launches-ai-and-machine-learning-career-path-for-officers/">Army launched an AI and machine learning career path for officers</a>, with applications opening January 5 through the Voluntary Transfer Incentive Program. This signals a recognition that AI expertise needs to be embedded throughout the organization, not siloed in IT departments.</p><p>The shift to agentic AI raises fundamental questions about accountability and oversight. When AI systems can take action without human approval for every step, agencies need new frameworks for defining acceptable autonomy, monitoring decisions, and maintaining meaningful human control. State leaders are grappling with this transition in real time. At a recent <a href="https://innovate-us.org/">Innovate(us)</a> workshop I attended on AI governance, practitioners reported moving &#8220;from experimentation to more of a repeatable practice,&#8221; with the focus shifting from &#8220;is AI allowed?&#8221; to &#8220;under what conditions does this create value?&#8221; The challenge isn&#8217;t just deploying these systems but governing them responsibly while maintaining the agility to experiment.</p><div><hr></div><h2>State</h2><h3>The Data Center Reckoning</h3><p>States are waking up to the hidden costs of the data center boom, and they&#8217;re not impressed with what they&#8217;re finding. But before we get to the backlash, it&#8217;s worth noting what&#8217;s working. State AI programs are maturing beyond policy documents into operational practice. Arizona established an Office of Digital Solutions and formed an AI steering committee with over 150 applicants representing municipalities, counties, industry, and academia. New York embedded AI governance as &#8220;a service, not a gate,&#8221; working alongside pilot projects rather than reviewing them after the fact. 
Utah built an open-source automated AI risk assessment tool because manual assessments were too time-consuming. The pattern is governance embedded in operations, not bolted on afterward.</p><p>Now for the reckoning. <a href="https://www.route-fifty.com/infrastructure/2026/01/nj-lawmakers-ok-plan-charge-data-centers-spiking-electric-costs/410580/">New Jersey lawmakers advanced a plan to charge data centers for spiking electric costs</a>, imposing new tariffs on facilities that are driving utility rates higher for everyone else. It&#8217;s a direct challenge to the assumption that data centers are economic development wins without qualification.</p><p>The backlash is spreading fast. <a href="https://www.govtech.com/artificial-intelligence/michigan-township-passes-temporary-data-center-moratorium">Michigan&#8217;s Springfield Township passed a 180-day moratorium</a> barring data center plans from even being reviewed, with the possibility of extension if needed. <a href="https://www.govtech.com/policy/baltimore-county-md-may-halt-permits-during-data-center-reviews">Baltimore County is considering legislation to halt permits during impact reviews</a>. <a href="https://www.route-fifty.com/infrastructure/2026/01/2026-more-data-center-regulations-could-be-coming-maryland/410446/">Maryland lawmakers are signaling that more data center regulations are coming in 2026</a>, acknowledging that the Virginia model of aggressive data center development is moving across state lines with consequences the state isn&#8217;t prepared to manage.</p><p>The consistent pattern I&#8217;ve seen is that communities are promised economic growth and tax revenue, then discover the infrastructure strain, environmental impact, and resource demands that come with massive energy-hungry facilities. 
<a href="https://www.route-fifty.com/artificial-intelligence/2026/01/data-center-rush-appalachia/410504/">The data center rush in Appalachia</a> shows big tech eyeing coal country as AI demand soars, but rural communities are pushing back against the industry narrative. <a href="https://www.route-fifty.com/infrastructure/2026/01/data-center-gold-rush-pits-local-officials-hunt-new-revenue-against-residents-concerns/410474/">Georgia counties are taking vastly different approaches</a> to managing the surge in data center proposals, with no state-level regulations to guide them.</p><p>I&#8217;m skeptical of the &#8220;gold rush&#8221; framing that dominates data center coverage. A lot of money will be made, certainly. But the benefits are flowing to a very narrow group of landowners, developers, and tech companies, while the infrastructure costs, environmental impacts, and electricity rate increases hit everyone else. The public sector is left managing the externalities while private companies capture the gains. States and localities that slow down to assess real costs and benefits, rather than racing to approve projects out of economic development desperation, are making the smarter long-term choice.</p><p>One bright spot: <a href="https://www.route-fifty.com/customer-experience/2026/01/new-jerseys-innovation-office-first-be-enshrined-state-law/410507/">New Jersey codified its Office of Innovation into law</a>, becoming the first state to enshrine its digital delivery team in statute. The office, now the New Jersey Innovation Authority, will continue into the new gubernatorial administration. This matters. Innovation and AI work shouldn&#8217;t disappear with political transitions. 
Recognizing that technology modernization is institutional, not political, and giving it structural permanence rather than treating it as a pet project is exactly the kind of governance maturity states need.</p><div><hr></div><h2>Local</h2><h3>Building AI Leadership Where It Matters</h3><p><a href="https://www.govtech.com/biz/louisville-govstream-ai-launch-ai-backed-permitting-test">Louisville launched an AI-backed permitting test</a> that represents something more significant than just another pilot program. The city recently hired a public-sector AI leader and is approaching AI deployment with a clear purpose: addressing the affordable housing shortage by streamlining the permitting process. This is AI leadership done right. Louisville isn&#8217;t deploying technology for technology&#8217;s sake. They&#8217;ve identified a concrete problem, the bottleneck in housing development, and they&#8217;re testing whether AI can help. They&#8217;re building internal expertise before scaling, not outsourcing strategic decisions to vendors.</p><p>The data center tensions playing out at the state level are even more acute locally. <a href="https://www.govtech.com/policy/data-center-rush-pits-hunt-for-revenue-against-resident-concerns">In Georgia, the data center rush is pitting local officials&#8217; hunt for new revenue against residents&#8217; concerns</a>. Twiggs County&#8217;s situation highlights what happens when counties lack guidance or regulations to evaluate proposals that promise jobs and tax revenue but deliver infrastructure strain and environmental questions. Local governments are making billion-dollar decisions with limited information and no playbook.</p><p>Leadership transitions continue to reshape local government technology. <a href="https://www.govtech.com/workforce/nyc-names-new-acting-cto-as-matthew-fraser-steps-down">New York City named a new acting CTO as Matthew Fraser stepped down</a> after four years leading the city&#8217;s technology efforts. 
Meanwhile, <a href="https://www.govtech.com/policy/biometric-data-use-gets-scrutiny-in-erie-county-n-y">Erie County, New York is scrutinizing biometric data use</a>, with the county executive directing staff to pass a local law barring collection of such data. If enacted, Erie County would be in the vanguard on biometric data oversight, addressing privacy concerns before they become crises rather than after.</p><div><hr></div><h2>Education</h2><h3>Protecting Children in the AI Age</h3><p>States are moving to address AI&#8217;s impact on children, though the approaches vary widely. <a href="https://sd18.senate.ca.gov/news/20260102-author-nations-first-chatbot-protections-proposes-first-nation-moratorium-ai-chatbots">California Senator Steve Padilla proposed a first-in-the-nation moratorium on AI chatbots in toys</a>, building on his earlier work establishing chatbot protections. The concern is straightforward: toys with AI capabilities create privacy risks and developmental questions that haven&#8217;t been adequately studied, let alone regulated.</p><p><a href="https://www.route-fifty.com/digital-government/2026/01/alabama-lawmaker-pushes-screen-time-limits-children/410445/">Alabama lawmakers are pushing for screen time limits for children</a>, with research showing that reduced screen time from birth to age five helps build social skills. It&#8217;s a different approach than AI-specific regulation, but it reflects the same underlying anxiety about technology&#8217;s impact on child development.</p><p><a href="https://www.cde.ca.gov/re/di/ai/">California&#8217;s Department of Education continues updating its AI guidance and resources</a>, providing districts with frameworks as they navigate these questions. The challenge for K-12 and higher education isn&#8217;t just setting rules but helping educators, parents, and students understand what AI means for learning and development.</p><p>The environmental dimension can&#8217;t be ignored either. 
State leaders are increasingly concerned about AI&#8217;s infrastructure demands. Arizona formed a task force specifically addressing data center expansion and environmental impacts, particularly around water and energy resources. As one state official put it at a recent workshop, &#8220;What are the policy changes that need to be made to make sure that everyone can come along in this fantastic ride?&#8221; The question applies to children&#8217;s development and environmental sustainability alike. We&#8217;re making decisions about children&#8217;s exposure to AI systems and about resource allocation for AI infrastructure with limited evidence about long-term effects, which should make everyone cautious about moving too fast.</p><div><hr></div><h2>Key Insights for Practitioners</h2><p><strong>Agentic AI requires new governance frameworks</strong>: Systems that act independently need different oversight than tools that only respond to prompts. Agencies can&#8217;t govern agentic AI with chatbot-era policies.</p><p>Action: Begin mapping which decisions you&#8217;re willing to delegate to AI systems and which require human judgment. Document the criteria now, before pressure to scale forces rushed choices.</p><p><strong>Data center economics don&#8217;t benefit communities equally</strong>: The promise of jobs and tax revenue often obscures infrastructure costs, environmental impacts, and utility rate increases that affect everyone. The winners are concentrated, the costs are distributed.</p><p>Action: If your jurisdiction is evaluating data center proposals, demand comprehensive impact assessments that include electricity demand, water usage, infrastructure strain, and realistic job projections. 
Compare promised benefits against actual outcomes in communities that approved similar projects three to five years ago.</p><p><strong>AI leadership means hiring for it</strong>: Louisville&#8217;s approach, hiring a public-sector AI leader before scaling deployments, inverts the usual pattern of deploying first and figuring out governance later. Building internal expertise gives agencies strategic capacity rather than vendor dependence. Workshop participants emphasized that successful AI adoption requires embedding governance deeply with day-to-day pilots, creating AI champions across agencies, and combining policy development with hands-on implementation.</p><p>Action: Identify whether your organization needs dedicated AI leadership or can integrate AI responsibilities into existing roles. If you&#8217;re deploying AI at scale, you need someone whose job is thinking strategically about AI&#8217;s role in your mission, not just implementing vendor solutions. Consider establishing AI leads within each department to build distributed expertise rather than centralizing all knowledge in IT.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> How states respond to the data center backlash, particularly whether Maryland and other states follow New Jersey&#8217;s lead in making facilities pay for infrastructure costs. If the &#8220;AI boom requires infinite data centers&#8221; narrative starts breaking down under scrutiny of who actually benefits and who pays, we&#8217;ll see a significant shift in how AI infrastructure gets built and where.</p><div><hr></div><p>What&#8217;s your take on agentic AI in government? Are you seeing autonomous AI systems in your agency, or is this still mostly hype? And on data centers: should states be more aggressive about regulating them, or will market forces and community pushback provide enough check on expansion? 
Share your thoughts in the comments.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[States Draw the Line on AI Preemption]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 24]]></description><link>https://brief.dylanhayden.com/p/states-draw-the-line-on-ai-preemption</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/states-draw-the-line-on-ai-preemption</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Fri, 19 Dec 2025 12:43:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0b491d79-3889-42a4-ac79-4cf71ce206fe_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The fight over who gets to regulate AI moved from threat to reality this week. Following President Trump&#8217;s executive order blocking state AI laws, <a href="https://oag.ca.gov/news/press-releases/attorney-general-bonta-opposes-fcc%E2%80%99s-inquiry-state-ai-preemption">24 state attorneys general told the Federal Communications Commission</a> it lacks authority to preempt state protections. 
California&#8217;s Attorney General Rob Bonta was blunt: the FCC inquiry &#8220;follows the troubling pattern of the Trump Administration attempting to limit states&#8217; ability to protect their residents.&#8221;</p><p>This isn&#8217;t posturing. States have been the only governments actually regulating AI while Congress has failed to pass a single comprehensive bill. They&#8217;ve enacted laws protecting children from AI chatbot harm, prohibiting deepfakes in elections, and requiring disclosure when consumers interact with AI systems. The executive order would wipe those protections away, leaving what 40 attorneys general called Americans &#8220;entirely unprotected from the potential harms of AI.&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>This Week&#8217;s Key Developments:</h3><ul><li><p><strong>24 state AGs challenge FCC authority</strong> to preempt state AI laws</p></li><li><p><strong>StateScoop reports state leaders, civil rights groups</strong> call order &#8220;dangerous&#8221;</p></li><li><p><strong>AI tops state CIO priorities</strong> for first time, overtaking cybersecurity</p></li><li><p><strong>Treasury&#8217;s viral job posting</strong> requires 10-page Gatsby analysis for AI position</p></li><li><p><strong>Federal agencies get procurement guardrails</strong> for buying AI tools</p></li></ul><div><hr></div><h2>Federal</h2><p>Congress&#8217;s refusal to regulate AI created the vacuum states rushed to fill. Now the administration wants to prevent states from acting without offering federal protections in return. The Department of Justice has 30 days to establish an AI Litigation Task Force whose sole purpose is challenging state laws. The Commerce Department must identify &#8220;onerous&#8221; state provisions within 90 days. 
The FTC has the same timeline to issue guidance on when state laws requiring truthful AI outputs might be preempted as &#8220;deceptive.&#8221;</p><p>The Office of Management and Budget <a href="https://federalnewsnetwork.com/acquisition-policy/2025/12/omb-sets-procurement-guardrails-for-buying-ai-tools/">set new procurement guardrails this week</a>, directing agencies to ensure large language models they purchase are &#8220;truth seeking and ideologically neutral.&#8221; The memo says acquisition policies must prevent what OMB calls &#8220;biased&#8221; outputs from AI tools. The procurement guardrails arrive as agencies navigate contradictory workforce signals: the Office of Personnel Management <a href="https://federalnewsnetwork.com/hiring-retention/2025/12/opm-seeks-early-career-talent-for-tech-force-federal-hiring-initiative/">launched a &#8220;Tech Force&#8221; initiative</a> seeking early-career tech talent even as the IRS has shed at least 2,000 technology employees.</p><h3>The Gatsby Test</h3><p>The Treasury Department <a href="https://www.nextgov.com/artificial-intelligence/2025/12/want-ai-job-treasury-write-10-page-analysis-great-gatsby/410212/">posted a job opening</a> this week for an IT Specialist focused on artificial intelligence. The application requirements went viral on social media: write a 10-page analysis of metaphors in &#8220;The Great Gatsby,&#8221; convert it to a 200-word executive summary, translate both into Spanish and Mandarin, create a comparison table with three other novels, then rewrite the entire essay as a scientific paper with abstract.</p><p>The posting appears designed to test whether applicants can effectively use AI tools. But as an AI expert told Nextgov, the skills being measured don&#8217;t align with the technical strategy and architecture work the position actually requires. 
The timing is awkward&#8212;President Trump hosted a &#8220;Great Gatsby&#8221;-themed Halloween party at Mar-a-Lago during the government shutdown, and Treasury is recruiting for AI expertise while simultaneously losing thousands of technology workers. The disconnect between stated workforce needs and actual hiring practices captures the federal government&#8217;s broader struggle to articulate what AI leadership actually requires.</p><p>Meanwhile, <a href="https://www.nextgov.com/artificial-intelligence/2025/12/inside-white-house-meeting-its-ai-genesis-mission/410277/">Nextgov reports</a> the White House convened companies and researchers to discuss the Genesis Mission, the administration&#8217;s initiative connecting AI capabilities with scientific research. Radical AI&#8217;s CEO, who participated in the meeting, said there was a goal-oriented, partnership-driven focus for Genesis Mission and the ways it can change how AI and science work together.</p><div><hr></div><h2>State</h2><h3>States Push Back Against Federal Preemption</h3><p><a href="https://statescoop.com/trump-state-ai-law-order-clash-between-states-industry/">State leaders and civil rights groups are responding forcefully</a> to what they call a &#8220;dangerous&#8221; executive order banning state AI laws. Following months of protest from state lawmakers, attorneys general, and civil rights organizations, the order potentially sets the stage for widespread legal challenges. The National Association of State Chief Information Officers released a statement in May expressing concern about the proposal&#8217;s impact on work states have done to regulate AI in the absence of federal laws.</p><p><a href="https://oag.ca.gov/news/press-releases/attorney-general-bonta-opposes-fcc%E2%80%99s-inquiry-state-ai-preemption">Twenty-four state attorneys general filed comments</a> with the FCC this week arguing the agency lacks statutory authority to preempt state AI laws. 
The letter responds to an FCC notice of inquiry from September suggesting the commission would use its regulatory authority to override state protections. The AGs argue federal preemption would harm state interests and leave residents unprotected. California&#8217;s Attorney General Bonta has been particularly vocal, having opposed multiple federal preemption attempts throughout 2025.</p><p>American Civil Liberties Union senior policy counsel Cody Venzke <a href="https://statescoop.com/trump-state-ai-law-order-clash-between-states-industry/">called the order &#8220;dangerous,&#8221;</a> noting it doubles down on a policy that the Republican-led Congress rejected twice: &#8220;displacing states from their critical role in ensuring that AI is safe, trustworthy, and nondiscriminatory.&#8221; American Federation of Teachers President Randi Weingarten called it an &#8220;outrageous and likely illegal directive.&#8221;</p><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/12/states-will-keep-pushing-ai-laws-despite-trumps-efforts-stop-them/410194/">States show no signs of backing down</a>. Nearly 40 states adopted or enacted AI measures in 2025, with states and territories proposing more than 250 pieces of AI-related legislation. According to a November report from the Council of State Governments, states undertook this work because of federal inaction. The state laboratory of democracy is functioning exactly as designed&#8212;experimenting with approaches to emerging technology risks while the federal government debates whether to act at all.</p><h3>Building Governance Capacity</h3><p><a href="https://www.govtech.com/artificial-intelligence/ai-overtakes-cybersecurity-in-state-cio-priorities-for-2026">AI has overtaken cybersecurity</a> as the top priority for state CIOs in 2026, according to the National Association of State Chief Information Officers&#8217; 20th annual survey. 
The shift reflects how quickly AI has moved from experimental to essential in state operations, representing what NASCIO calls &#8220;a pivotal shift in how leaders are preparing for the next era of gov tech.&#8221;</p><p><a href="https://www.govtech.com/workforce/illinois-seeks-to-appoint-states-first-chief-ai-officer">Illinois is searching</a> for its first Chief AI Officer to lead the state&#8217;s artificial intelligence and machine learning strategy. The Department of Innovation and Technology is building out a formal AI office to coordinate deployment across agencies. California Governor Newsom <a href="https://www.govtech.com/education/higher-ed/gov-newsom-taps-universities-for-state-ai-council">announced a 30-member California Innovation Council</a> including executives and leaders from the UC system, Stanford University, the Brookings Institution, and the California Chamber of Commerce. The council will advise on responsible AI deployment and help position California as a leader in AI governance.</p><p>New Jersey <a href="https://www.govtech.com/biz/new-jersey-launches-ai-fund-for-innovative-startups">launched a $20 million AI fund</a> backed by the state and private sector to help companies develop AI tools. The move signals New Jersey&#8217;s ambition to become a national AI leader. <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/new-jerseys-new-years-resolution-tap-ai-better-service-delivery/410244/">Route Fifty reports</a> the state is prioritizing AI for better service delivery under a new grant program, recognizing pressure to innovate public benefit systems.</p><p>These investments in governance capacity&#8212;chief AI officers, advisory councils, startup funds&#8212;represent states taking institutional responsibility for AI deployment. 
They&#8217;re building the organizational muscle to move beyond pilots to scaled implementation.</p><div><hr></div><h2>Local</h2><p><a href="https://www.govtech.com/artificial-intelligence/beyond-limits-cities-large-and-small-put-ai-to-use">Cities are deploying AI across diverse use cases</a>, from reducing first responder paperwork to streamlining permitting processes. Los Angeles is ramping up AI deployments ahead of hosting the World Cup, Super Bowl, Olympics and Paralympics, using global events as both deadline and justification for accelerated adoption. Smaller municipalities are finding AI helps stretch limited staff capacity, automating routine tasks so employees can focus on complex problems requiring human judgment.</p><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/12/how-ai-helping-local-governments-access-federal-grant-funding/410254/">Local governments are using AI to navigate the notoriously complex federal grant application process</a>. Before, employees sifted through hundreds of pages and filled out unique applications for each opportunity. AI has made the process more streamlined and less time consuming, though success still requires human oversight to ensure accuracy and alignment with grant requirements.</p><p>Not every community welcomes AI infrastructure. <a href="https://www.govtech.com/artificial-intelligence/michigan-township-limits-where-data-centers-can-be-built">A Michigan township limited where data centers can be built</a>, restricting the facilities to land zoned for industrial and commercial revitalization. 
The move reflects growing community concern about data centers&#8217; energy consumption, water usage, and limited job creation relative to their physical footprint.</p><div><hr></div><h2>Key Insights for Practitioners</h2><p><strong>Federal vacuum creates state imperative</strong>: Congress&#8217;s failure to regulate AI hasn&#8217;t stopped AI deployment&#8212;it has forced states to act as the only layer of consumer protection. Without federal guardrails, states are operating as laboratories of democracy by necessity, not choice.</p><p>Action: Document the specific harms your constituents face from unregulated AI systems. State regulations work best when grounded in concrete problems affecting real people, not abstract technology policy.</p><p><strong>Governance capacity precedes implementation success</strong>: States investing in chief AI officers, advisory councils, and formal AI offices are building institutional capacity to move from pilots to production. The organizational infrastructure matters as much as the technology itself.</p><p>Action: If your organization lacks dedicated AI governance leadership, identify who owns AI strategy and accountability now. Informal arrangements don&#8217;t scale&#8212;create formal reporting structures and decision rights before expanding AI use.</p><p><strong>Workforce signals reveal strategic confusion</strong>: Federal agencies simultaneously recruit AI talent and shed technology workers. Job postings test AI tool proficiency through literary analysis while positions require technical architecture expertise. The disconnect suggests unclear thinking about what AI capabilities government actually needs.</p><p>Action: Define AI competencies your organization actually requires before recruiting or training. Distinguish between AI literacy (everyone needs some), AI tool proficiency (many roles need this), and AI technical expertise (few roles require deep skills). 
Don&#8217;t hire for one when you need another.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> How quickly the AI Litigation Task Force identifies its first targets for legal challenge. The 30-day deadline means we&#8217;ll see by mid-January which state laws DOJ considers most threatening to the administration&#8217;s &#8220;minimally burdensome&#8221; framework. The selection will signal whether this is about removing genuine regulatory barriers or simply preventing states from acting at all.</p><div><hr></div><p>What&#8217;s your take on the federal-state preemption fight? Should states continue regulating AI even as the administration challenges their authority, or does fragmented state-by-state regulation create more problems than it solves? Share your perspective in the comments.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Shot Across the Bow]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 
23]]></description><link>https://brief.dylanhayden.com/p/the-shot-across-the-bow</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/the-shot-across-the-bow</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 13 Dec 2025 00:42:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6897be4a-fce4-4e7f-b8cd-64e7a66fe52d_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Apologies for the nautical references, I&#8217;ve been deep into the Patrick O&#8217;Brian series lately. For months, we&#8217;ve watched two ships approach each other: states building AI governance frameworks to protect constituents, and a federal government threatening to sweep those efforts aside in the name of innovation. This week, the collision happened. President Trump signed an executive order targeting what the White House calls &#8220;cumbersome&#8221; state AI regulation, instructing federal agencies to identify which state laws undermine national competitiveness. Legal experts immediately called the action illegal. State leaders and civil rights groups called it dangerous. California called it corruption.</p><p>The irony is hard to miss. Just days before Trump&#8217;s order, <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/age-darkness-and-deceit-desantis-proposes-ai-bill-rights-crack-down/409963/">Florida Governor Ron DeSantis proposed an &#8220;AI bill of rights&#8221;</a> prioritizing citizen protection over Silicon Valley preferences. DeSantis&#8217;s Florida House now plans <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/ai-week-crashed-trump-desantis-battle-over-regulation-whats-next/410090/">a comprehensive, multi-committee study</a> of AI&#8217;s implications, even as the Trump administration moves to block exactly that kind of state-level scrutiny. 
The Republican coalition on technology policy isn&#8217;t just fractured; it&#8217;s at war with itself.</p><p>Meanwhile, states aren&#8217;t backing down. New York Governor Kathy Hochul is <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/hochul-and-legislative-leaders-play-game-chicken-ai-regulations/410123/">playing hardball with legislative leaders</a> over rewriting the RAISE Act. <a href="https://www.govtech.com/workforce/louisville-ky-s-first-ai-officer-comes-from-private-sector">Louisville hired its first Chief AI Officer</a> with a $2 million budget. <a href="https://www.govtech.com/artificial-intelligence/officials-consider-adding-agentic-ai-to-myalaska-portal">Alaska is exploring agentic AI</a> for its citizen services portal. Perhaps their CIO, Bill Smith, is just making Alaska the <em>next</em> frontier instead of the last. The executive order may have been signed, but the fight over who governs AI in America is just beginning.</p><h3>This Week&#8217;s Key Developments:</h3><ul><li><p><strong>Trump signs executive order</strong> targeting state AI regulation; legal experts call it illegal, states call it dangerous</p></li><li><p><strong>DeSantis breaks with Trump</strong> as Florida plans comprehensive AI study while White House moves to block state action</p></li><li><p><strong>Louisville names first Chief AI Officer</strong> with $2M budget as cities continue building capacity despite federal threats</p></li><li><p><strong>New York plays hardball</strong> on AI legislation as Hochul and lawmakers clash over RAISE Act rewrite</p></li></ul><div><hr></div><h2>Federal</h2><p><a href="https://www.govtech.com/artificial-intelligence/trump-signs-executive-order-reining-in-state-ai-regulation">President Trump signed an executive order</a> this week seeking to limit states&#8217; abilities to enact AI-related policy that could be deemed &#8220;burdensome.&#8221; The order instructs certain federal agencies to identify which state laws 
undermine federal efforts to help the U.S. lead globally in AI. Experts argue the action is illegal. <a href="https://statescoop.com/trump-state-ai-executive-order-ban/">StateScoop reported</a> state leaders and civil rights groups responding to what they call a &#8220;dangerous&#8221; order, while <a href="https://news.google.com/rss/articles/CBMimwFBVV95cUxObG5IZG1LblkxcVFmX2pVa2hQNk9EVGMzeGFNeWVpM0tfNU55TURDV0hvZE4tRlR6VEFxZjdBQUl4emN4dDNLMi1jd0lvY29oYzhLVkdnWWxVVDlRNlZuSzZscTN2NFBYU0xvWXFPd1h0VWkwd2lPQVZRRzJreGZsY1d1RVNxYmpMcjIyVm5abW04M0dkMXNPTkMxUQ?oc=5">California issued a statement</a> declaring Trump&#8217;s order &#8220;advances corruption, not innovation.&#8221;</p><p>The order has <a href="https://www.govtech.com/artificial-intelligence/one-rule-ai-regulation-might-be-bad-news-for-california">particular implications for California&#8217;s SB 53</a>, which set safety disclosure requirements for companies operating AI models. The law represents exactly the kind of state-level AI oversight the Trump administration now seeks to preempt. 
The tech industry&#8217;s Center for Data Innovation <a href="https://datainnovation.org/2025/12/the-white-house-ai-order-sends-the-right-message-on-fragmented-state-laws/">welcomed the order</a>, arguing it &#8220;sends the right message on fragmented state laws.&#8221; But <a href="https://federalnewsnetwork.com/commentary/2025/12/ai-executive-order-could-deepen-trust-crisis-not-solve-it/">Federal News Network&#8217;s analysis</a> suggests the order &#8220;could deepen trust crisis, not solve it,&#8221; noting that if the regulatory ecosystem was already messy, a nationwide legal battle over AI and federalism will only intensify the problem.</p><p>The White House also <a href="https://www.nextgov.com/artificial-intelligence/2025/12/white-house-instructs-agencies-stop-using-biased-ai/410135/">instructed agencies to stop using &#8220;biased&#8221; AI</a> (for my students, that&#8217;s a political statement and has no relation to any reality), with the Office of Management and Budget clarifying steps agencies must take to ensure contracted large language models do not produce &#8220;woke&#8221; outputs. The directive arrives as <a href="https://www.nextgov.com/artificial-intelligence/2025/12/bipartisan-bicameral-bill-looks-help-government-hire-more-ai-talent/410092/">a bipartisan bill looks to help government hire more AI talent</a> following the exodus of hundreds of thousands of government employees under the administration&#8217;s push to shrink the workforce.</p><p>Congress established <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/new-house-commission-scrutinize-ai-impact-economy/410061/">a new House commission to scrutinize AI&#8217;s impact on the economy</a>, addressing policy issues like guardrails for AI and its economic, safety and health impacts. 
Meanwhile, the <a href="https://www.nextgov.com/artificial-intelligence/2025/12/ndaa-includes-directive-dod-prioritize-use-ai-mental-health-needs/410084/">National Defense Authorization Act includes a directive</a> for DOD to prioritize AI for mental health needs, with the House Armed Services Committee stating &#8220;the rate of military suicide is unacceptably high and a new approach is required.&#8221;</p><div><hr></div><h2>State</h2><h3>The Trump-DeSantis Split</h3><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/12/ai-week-crashed-trump-desantis-battle-over-regulation-whats-next/410090/">Florida&#8217;s &#8220;AI Week&#8221; was crashed</a> by the Trump-DeSantis battle over regulation. Just days before Trump&#8217;s executive order, <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/age-darkness-and-deceit-desantis-proposes-ai-bill-rights-crack-down/409963/">DeSantis proposed an &#8220;AI bill of rights&#8221;</a> marking a sharp departure from the administration&#8217;s deregulatory posture. The Florida House now plans a comprehensive, multi-committee study of AI&#8217;s implications, examining exactly the kind of state-level policy questions the White House executive order seeks to prevent.</p><p>The conflict reveals a deeper fracture in Republican governance philosophy. DeSantis framed his proposal around an &#8220;age of darkness and deceit,&#8221; arguing citizen protection should take precedence over industry preferences. The Trump administration&#8217;s order explicitly prioritizes industry concerns about &#8220;50 discordant&#8221; state laws hindering competitiveness. 
One governor sees constituent protection as the primary responsibility; the other sees it as an obstacle to innovation.</p><h3>New York Plays Hardball</h3><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/12/hochul-and-legislative-leaders-play-game-chicken-ai-regulations/410123/">Governor Kathy Hochul is playing a game of chicken</a> with legislative leaders over AI regulations. Hochul wants to rewrite the RAISE Act with language nearly identical to California&#8217;s law: a hard no for lawmakers, particularly now that California&#8217;s approach faces direct federal challenge. The standoff demonstrates how Trump&#8217;s executive order complicates state-level AI policymaking even in blue states that might otherwise align on regulatory approaches.</p><p>Hochul isn&#8217;t retreating entirely from AI investment. <a href="https://www.govtech.com/artificial-intelligence/new-york-state-to-invest-40m-in-ai-training-nuclear-energy">New York committed $40 million</a> to AI training and nuclear energy, pouring resources into clean energy workforce development with aims of expanding the state&#8217;s nuclear and AI capabilities. But the governor&#8217;s administration is simultaneously <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/lawmakers-push-regulate-ai-advertising-new-york-agency-running-ai-ads/410033/">running AI-generated advertisements</a> featuring AI-created faces without disclosing the technology to viewers, even as lawmakers push to regulate exactly that kind of AI use in advertising. 
The disconnect between the state&#8217;s regulatory ambitions and its own practices highlights the governance challenges AI creates.</p><h3>States Building Despite Federal Threats</h3><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/12/still-early-days-state-chief-ai-officers/410091/">Route Fifty examined how state Chief AI Officers</a> are defining their roles in real time, noting it remains early days as states stay uncertain about the technology&#8217;s future and how to address its impacts. <a href="https://www.govtech.com/artificial-intelligence/officials-consider-adding-agentic-ai-to-myalaska-portal">Alaska is considering adding agentic AI modules</a> to the myAlaska app, which residents use for key services, with a recent request for information seeking industry input.</p><p><a href="https://www.route-fifty.com/digital-government/2025/12/coge-report-recommends-new-hampshire-split-apart-health-department-embrace-ai/410125/">New Hampshire&#8217;s Commission on Government Efficiency</a> recommended splitting apart the health department and embracing AI, with the commission&#8217;s final report calling for breaking the department into smaller agencies to improve accountability. The recommendation reflects growing recognition that AI adoption requires organizational restructuring, not just technology deployment.</p><div><hr></div><h2>Local</h2><p><a href="https://www.govtech.com/workforce/louisville-ky-s-first-ai-officer-comes-from-private-sector">Louisville hired its first Chief AI Officer</a>, with Pamela McKnight coming from the private sector to lead the city&#8217;s AI strategy. Officials announced plans to hire a CAIO and build out an AI team earlier this year, powered by a $2 million budget expansion. Louisville&#8217;s investment demonstrates cities&#8217; willingness to build governance capacity even as the federal-state regulatory landscape grows more chaotic.</p><p>The local government tech sector continues attracting investment. 
<a href="https://www.govtech.com/biz/local-government-tech-startup-madison-ai-raises-3-5m">Madison AI raised $3.5 million</a> for its chatbot and AI-backed services to cities, counties and local agencies. The young company&#8217;s backers include several government technology veterans along with officials from Nevada, suggesting confidence in municipal AI adoption regardless of federal-state tensions.</p><p>Data center tensions persist at the local level. <a href="https://www.govtech.com/products/residents-push-indiana-community-toward-halt-on-data-centers">Indiana residents are pushing</a> for a one-year moratorium on large hyperscale data centers in Starke County, with the proposal headed to the County Board of Commissioners for consideration. <a href="https://www.route-fifty.com/artificial-intelligence/2025/12/google-data-centers-will-bring-nuclear-power-back-tornado-country/409995/">Google&#8217;s plans to reopen Iowa&#8217;s nuclear plant</a> to power nearby data centers raise questions about extreme weather threatening reactor safety, particularly after a 2020 storm prematurely shut down the state&#8217;s only nuclear facility.</p><p><a href="https://www.route-fifty.com/infrastructure/2025/12/data-centers-ai-could-nearly-triple-san-joses-energy-use-who-foots-bill/410032/">San Jose faces energy concerns</a> as AI&#8217;s planned data center boom could nearly triple the city&#8217;s energy use, straining California&#8217;s grid forecasts and raising fears customers could pay for upgrades if projects never materialize. 
These local land use and energy decisions remain firmly in municipal hands, one area where Trump&#8217;s executive order cannot reach.</p><div><hr></div><h2>Education</h2><h3>Universities Navigate Uncertain Terrain</h3><p><a href="https://www.govtech.com/education/higher-ed/texas-christian-university-commits-10m-to-expand-ai-use">Texas Christian University committed $10 million</a> to expand AI use through a partnership with Dell, accelerating AI deployment on campus while implementing systems that keep critical data in-house. The private research university&#8217;s investment reflects higher education&#8217;s push to build AI infrastructure despite regulatory uncertainty.</p><p><a href="https://www.govtech.com/education/higher-ed/2-years-into-nairr-pilot-shared-infrastructure-boosts-ai-innovation">Two years into the NAIRR pilot</a>, shared infrastructure is boosting AI innovation. The National Artificial Intelligence Research Resource pilot connects researchers, educators and industry partners, providing shared computing power, AI tools and educational support for pushing boundaries with the technology.</p><h3>Federal Scrutiny of EdTech</h3><p><a href="https://www.route-fifty.com/people/2025/12/feds-float-tying-kids-screen-time-school-subsidies/410064/">Federal officials are floating tying kids&#8217; screen time to school subsidies</a>, with NTIA Administrator Arielle Roth stating the agency will study whether schools are too reliant on educational technology and if spending has resulted in bad outcomes for students. 
The proposal represents federal willingness to intervene in educational technology adoption even as the administration moves to block state AI regulation in other contexts.</p><p><a href="https://www.route-fifty.com/management/2025/12/away-day-indiana-lawmakers-take-bill-expanding-cellphone-ban-entire-school-day/410063/">Indiana lawmakers are taking up a bill</a> expanding cellphone bans to the entire school day, building on existing state law restricting student use during instructional time. The &#8220;away for the day&#8221; approach shows states addressing technology&#8217;s educational impacts through straightforward restrictions rather than complex AI governance frameworks.</p><div><hr></div><h2>Key Insights for Practitioners</h2><p><strong>Legal challenges will define AI governance for years, not months</strong>: Trump&#8217;s executive order won&#8217;t settle the federal-state question - it opens a legal battle that will work through courts while agencies continue deploying AI systems. State and local governments should build governance capacity assuming regulatory uncertainty persists.</p><p>Action: Establish AI governance frameworks that can adapt to changing federal-state dynamics. Focus on internal policies, procurement standards, and workforce development that provide value regardless of how preemption battles resolve.</p><p><strong>Republican fracture creates state-level opportunity</strong>: DeSantis&#8217;s break with Trump on AI regulation reveals Republican governors aren&#8217;t uniformly aligned with federal deregulation. States led by governors prioritizing constituent protection over industry preferences may find unexpected allies across party lines.</p><p>Action: Monitor which Republican governors follow DeSantis&#8217;s model versus Trump&#8217;s approach. 
Build coalitions based on shared governance priorities rather than partisan alignment.</p><p><strong>Local governments hold leverage federal orders can&#8217;t touch</strong>: Cities and counties control land use permitting, energy infrastructure decisions, and procurement choices. Trump&#8217;s executive order targets state AI laws, not municipal authority over data centers, energy use, or vendor selection.</p><p>Action: Assert local control over AI infrastructure questions. Use land use authority, energy planning, and procurement standards to shape AI deployment in your community regardless of federal-state battles.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> Which states sue to challenge the executive order, and whether Republican attorneys general join the legal fight. The Trump-DeSantis split suggests at least some Republican states might defend their AI regulatory authority rather than defer to federal preemption. If red states and blue states both challenge the order, the legal landscape gets significantly more complicated for the White House.</p><div><hr></div><p>The federal government just fired a shot across the bow. How are states responding in your jurisdiction? Are local leaders building AI capacity or waiting for federal-state clarity? Share your observations in the comments.</p>]]></content:encoded></item><item><title><![CDATA[Building Capability, Not Just Buying Technology]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 
22]]></description><link>https://brief.dylanhayden.com/p/building-capability-not-just-buying</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/building-capability-not-just-buying</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sun, 30 Nov 2025 03:07:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b6b2983b-a850-49da-b5d4-bd7b21260783_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The federal workforce question finally got asked out loud this week: what happens to workers after AI takes over the mundane tasks? It&#8217;s the question agencies have been dodging, and the answer reveals whether AI becomes genuine capability transformation or just another efficiency-driven headcount reduction exercise.</p><p>This connects to a broader pattern I see across state and local governments. The most serious AI efforts prioritize organizational capability over technology acquisition. Georgia is training public employees statewide on AI literacy. Tennessee released a four-pillar action plan that treats workforce development and governance as equal priorities with pilots and infrastructure. El Paso created an AI apprenticeship program to build local talent rather than compete for expensive outside hires. The common thread is recognition that technology without organizational readiness consistently produces expensive failures.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Perhaps they&#8217;re realizing what I&#8217;ve been saying for a while: AI can be great for task completion, but deep human subject matter expertise is the key to unlocking its true potential. An organization of experts leveraging the power of AI will be light years ahead of short-sighted organizations that see AI simply as a means of reducing headcount.</p><h3>This Week&#8217;s Key Developments:</h3><ul><li><p><strong>Federal workforce transformation</strong>: Agencies confront what comes after AI automates routine work</p></li><li><p><strong>Georgia launches statewide AI literacy</strong> initiative to train public employees across all agencies</p></li><li><p><strong>Tennessee releases action plan</strong> with four strategic pillars including workforce development and governance</p></li><li><p><strong>El Paso creates AI apprenticeship</strong> program building local government talent</p></li><li><p><strong>AWS commits $50B</strong> to government AI infrastructure while data center resource concerns intensify</p></li></ul><div><hr></div><h2>Federal</h2><p>The <a href="https://federalnewsnetwork.com/artificial-intelligence/2025/11/what-comes-next-for-federal-workers-after-ai-takes-over-the-mundane-tasks/">conversation about federal workers and AI automation</a> moved beyond displacement fears to capability transformation this week. Federal leaders acknowledge AI will handle routine tasks, but the harder question is whether agencies can successfully transition workers to higher-value activities.
The challenge isn&#8217;t technological. It&#8217;s also not new. Even before ChatGPT came on the scene, senior leadership across the federal government struggled to deploy the workforce to truly innovate, often settling for the tried and tested. The real question now is whether AI-promised efficiency gains become workforce development opportunities or just headcount reduction targets dressed up as modernization.</p><p>This connects directly to ongoing <a href="https://federalnewsnetwork.com/it-modernization/2025/11/doge-and-its-long-term-counterpart-remain-with-a-full-slate-of-modernization-projects-underway/">DOGE and U.S. Digital Service modernization work</a>. DOGE continues as a temporary organization within USDS, with Amy Gleason as acting head, though its long-term structure remains unclear. What matters more than organizational charts is the full slate of modernization projects underway, suggesting federal AI adoption continues regardless of political messaging around efficiency versus transformation.</p><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/11/aws-invest-50b-ai-and-supercomputing-infrastructure-government-customers/409776/">Amazon Web Services announced $50 billion in AI and supercomputing infrastructure investment</a> specifically for government customers, granting expanded access while scaling supporting infrastructure. The investment signals vendor confidence in sustained public sector AI demand, though it raises questions about whether agencies are building genuine capability or deepening cloud dependency. Given last month&#8217;s AWS outage and last week&#8217;s Cloudflare failure, maybe we should think a little harder about what those implications could be.
Separate reporting on <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/how-make-data-centers-less-thirsty/409729/">reducing data center water consumption</a> highlights the environmental costs, suggesting infrastructure decisions must balance economic development incentives with resource constraints.</p><h2>State</h2><h3>Building Organizational Capacity</h3><p><a href="https://www.govtech.com/artificial-intelligence/georgia-moves-to-expand-ai-literacy-across-state-agencies">Georgia is partnering with InnovateUS</a> to train public employees statewide on working with AI. CIO Shawnzia Thomas emphasized that empowering people is central to digital transformation, not an afterthought. The literacy initiative recognizes what failed IT projects consistently demonstrate: technology adoption without workforce capability building produces expensive failures, not efficiency gains.</p><p><a href="https://www.govtech.com/artificial-intelligence/tennessee-action-plan-looks-to-guide-how-ai-will-be-used">Tennessee released an AI action plan</a> built on four strategic pillars: pilots, infrastructure, workforce development, and governance. The framework emphasizes organizational discipline over shiny tools, a refreshing contrast in a landscape dominated by vendor promises. The plan aims to modernize services and strengthen the economy, though success depends on whether the state can execute across all four pillars simultaneously or whether governance gets deprioritized when budget pressures hit.</p><p><a href="https://www.route-fifty.com/digital-government/2025/11/digital-front-door-helps-new-mexico-boost-customer-experience-and-staff-productivity/409812/">New Mexico&#8217;s Health Care Authority launched a &#8220;digital front door&#8221;</a> to streamline resident access to public assistance programs and call centers. 
The initiative focuses on customer experience and staff productivity simultaneously, avoiding the trap of optimizing one at the other&#8217;s expense. As with the <a href="https://benefits.maryland.gov/home/#/">Benefits Portal</a> we have here in Maryland, I think success will depend on whether the technology actually reduces friction or creates digital barriers for residents with limited connectivity or digital literacy.</p><h3>Workforce Challenges</h3><p><a href="https://www.route-fifty.com/people/2025/11/government-workers-young-and-old-need-shared-purpose-say-local-leaders/409778/">State and local leaders emphasized &#8220;shared purpose&#8221;</a> as essential for managing multi-generational workforces amid retirements and private sector competition. Agencies must appeal to civic duty while wrestling with AI&#8217;s role in workforce transformation. The challenge intensifies as AI potentially displaces some roles while creating demand for new capabilities that existing workers may or may not develop.</p><p>Leaders at the <a href="https://www.route-fifty.com/digital-government/2025/11/amid-distrust-and-volatility-leaders-urge-governments-walk-talk/409731/">GOVIT Summit urged governments to &#8220;walk the talk&#8221;</a> amid public distrust and volatility. Communication and keeping promises matter more than ambitious AI strategies that fail at delivery. States navigating AI adoption while maintaining public trust need consistent execution, not just compelling vision statements.</p><h2>Local</h2><h3>Workforce Development Models</h3><p><a href="https://www.govtech.com/biz/supercity-ai-crafts-el-paso-ai-apprenticeship-program">El Paso partnered with SuperCity AI</a> to create an AI apprenticeship program training applicants with AI skills for local government work. The initiative represents an alternative to traditional hiring, building capability from within the community rather than competing for experienced talent in expensive markets.
Success depends on whether apprenticeships actually lead to government employment or just provide training that benefits private sector employers.</p><p><a href="https://www.govtech.com/artificial-intelligence/in-long-beach-calif-ai-is-part-of-digital-skills-training">Long Beach wrapped workshops</a> teaching residents digital skills including AI usage and how the city deploys it. The initiative combines resident education with transparency about city AI applications, recognizing that public trust requires understanding. Whether workshops reach beyond early adopters to residents with limited digital access remains an open question.</p><h3>Governance Structures</h3><p><a href="https://www.govtech.com/artificial-intelligence/new-york-city-council-sets-up-a-new-ai-oversight-office">New York City Council passed legislation</a> creating an Office of Algorithmic Accountability to audit, monitor and regulate city agency AI tools. A separate initiative aims to educate the public on AI. This is NYC&#8217;s second attempt at AI regulation after previous efforts stalled, so the real test is whether this office gains actual enforcement authority or becomes another advisory body that agencies ignore when convenient.</p><h2>Education</h2><p><a href="https://www.govtech.com/education/higher-ed/new-jersey-community-college-launches-ai-robotics-program">Warren County Community College in New Jersey launched an associate degree program</a> combining five technology fields, including AI and robotics, into one curriculum. The program aims to prepare students for automation and manufacturing careers in a region where traditional pathways are disappearing. 
Community colleges often lead in pragmatic workforce development because they&#8217;re directly accountable to local employers and students who can&#8217;t afford credential programs that don&#8217;t lead to jobs.</p><div><hr></div><h2>Key Insights for Practitioners</h2><p><strong>Capacity building beats technology acquisition</strong>: Georgia&#8217;s statewide literacy initiative and Tennessee&#8217;s four-pillar framework both prioritize workforce development and governance alongside pilots and infrastructure. Technology without organizational capability consistently fails.</p><p>Action: Conduct an honest assessment of your organization&#8217;s AI literacy across all levels, not just technical staff. Invest in training before expanding deployments, even if that slows adoption.</p><p><strong>Workforce transition planning is unavoidable</strong>: The federal question about what comes after AI automation applies to every government level. Organizations must decide now whether efficiency gains fund capability development or just become headcount reductions.</p><p>Action: Begin documenting where AI is replacing, augmenting, or transforming roles in your organization. Build transition plans that treat workforce development as essential infrastructure, not optional overhead.</p><p><strong>Community-based workforce development works</strong>: El Paso&#8217;s apprenticeship model and Long Beach&#8217;s resident workshops show alternatives to competing for expensive outside talent. Building local capability creates sustainable capacity and strengthens community connections.</p><p>Action: Explore partnerships with community colleges, nonprofits, or workforce development organizations to create local talent pipelines. Focus on converting existing community members rather than recruiting from competitive markets.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> How agencies actually handle workforce transitions as AI deployments scale beyond pilots. 
If automation gains only fund more automation rather than worker development, expect unions and employee groups to push back hard on future AI initiatives.</p><div><hr></div><p>How is your organization approaching AI workforce development? Are you investing in capability building before technology deployment, or running pilots first and figuring out the people side later? Share your experiences in the comments.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[When "States' Rights" Meets Silicon Valley]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 21]]></description><link>https://brief.dylanhayden.com/p/when-states-rights-meets-silicon</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/when-states-rights-meets-silicon</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 22 Nov 2025 03:16:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2768885b-cf79-437a-90ee-35c14241d8a4_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The party of small government just discovered it prefers big government when tech billionaires ask for it. 
This week, the Trump administration doubled down on efforts to block state AI regulation, with the White House reportedly drafting an executive order that would withhold federal funding from states with AI laws deemed too restrictive. Meanwhile, a new 50-state policy scan reveals just how much regulatory activity the administration wants to preempt: states aren&#8217;t waiting for federal leadership, and they&#8217;re not backing down.</p><p>The political irony is hard to miss. Republicans traditionally champion states&#8217; rights and local control, but those principles become flexible when Silicon Valley lobbying aligns with federal power. The result is an unusual intra-party rift, with Republican state officials defending their authority to regulate AI while their party&#8217;s administration works to strip it away.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Beyond the federalism fight, states are building governance capacity in real time while grappling with data center infrastructure demands that utilities can&#8217;t meet and communities didn&#8217;t consent to. 
The governance gap is widening faster than the technology gap.</p><h3>This Week&#8217;s Key Developments:</h3><ul><li><p><strong>Federal preemption escalates</strong>: White House drafts executive order targeting state AI laws</p></li><li><p><strong>State policy landscape mapped</strong>: New scan shows widespread state regulatory activity despite federal pressure</p></li><li><p><strong>Oklahoma elevates AI leadership</strong>: First state Chief AI and Technology Officer signals governance evolution</p></li><li><p><strong>Data center power crunch</strong>: Utilities cite availability as top challenge while communities demand transparency</p></li><li><p><strong>Washington revives union bill</strong>: Renewed push to require bargaining over public sector AI adoption</p></li><li><p><strong>Education moves beyond basics</strong>: K-12 districts embedding AI in career pathways, not just prompting skills</p></li></ul><div><hr></div><h2>Federal</h2><h3>The Preemption Push Intensifies</h3><p>The White House isn&#8217;t waiting for Congress to act. <a href="https://www.nextgov.com/artificial-intelligence/2025/11/white-house-considers-order-preempt-state-ai-laws/409657/">A draft executive order</a> circulating this week would withhold federal funding from states with AI regulations deemed overly punitive or in violation of the First Amendment. <a href="https://www.govtech.com/policy/trump-urges-congress-to-block-state-level-ai-legislation">President Trump publicly called for Congress</a> to establish a federal standard governing AI oversight, warning that varied state regulation risks slowing development. 
The administration is simultaneously <a href="https://www.govtech.com/policy/white-house-continues-push-for-ai-regulations-ban">pushing for preemption language</a> in the annual defense policy bill or through executive action directing the Justice Department to challenge state laws in court.</p><p>This escalation comes months after 40 state attorneys general, including Republicans from Ohio, Tennessee, Arkansas, Utah and Virginia, urged Congress to reject a 10-year moratorium on state AI enforcement. The bipartisan coalition called the proposal &#8220;sweeping and wholly destructive of reasonable state efforts to prevent known harms.&#8221; Now the administration is pursuing the same goal through executive channels.</p><p>The federalism tension here is striking. The party that built its brand on limiting federal power and respecting state sovereignty is advocating for sweeping federal preemption of state consumer protection laws. Tech industry lobbying appears to have reordered traditional Republican priorities, and Republican state officials are pushing back against their own party&#8217;s federal leadership.</p><h3>Federal Operations Continue</h3><p>While the policy battle rages, agencies continue deploying AI. <a href="https://www.nextgov.com/artificial-intelligence/2025/11/ai-enhances-defense-logistics-agencys-end-end-operations-cio-says/409683/">The Defense Logistics Agency&#8217;s CIO emphasized</a> that accelerating Pentagon-wide AI adoption is critical to keeping pace with adversaries in China and Russia. <a href="https://federalnewsnetwork.com/federal-newscast/2025/11/u-s-cyber-command-has-a-new-chief-artificial-intelligence-officer/">U.S. Cyber Command appointed a new Chief Artificial Intelligence Officer</a> amid leadership turnover at the military&#8217;s cyber enterprise. 
<a href="https://www.nextgov.com/artificial-intelligence/2025/11/lawmakers-propose-grant-program-boost-ai-training-medical-schools/409645/">Lawmakers proposed a grant program</a> that would enable qualified medical schools to receive up to $100,000 in funding to promote AI literacy, while <a href="https://www.nextgov.com/artificial-intelligence/2025/11/lawmakers-signal-support-using-ai-prevent-veteran-suicides-fy26-va-funding-bill-reports/409576/">House appropriators signaled support</a> for using AI to prevent veteran suicides in the FY26 VA funding bill.</p><p><a href="https://www.nextgov.com/artificial-intelligence/2025/11/white-house-official-lawmaker-call-amplifying-us-tech-policy-abroad/409697/">OSTP Director Michael Kratsios and Sen. Ted Budd advocated</a> for amplifying U.S. tech policy abroad and maintaining a light-touch regulatory regime that has become a centerpiece of the Trump administration&#8217;s approach. The message is consistent across federal touchpoints: regulation should enable, not constrain, and states shouldn&#8217;t complicate that vision.</p><div><hr></div><h2>State</h2><h3>Mapping the State Policy Landscape</h3><p><a href="https://www.govtech.com/artificial-intelligence/on-ai-states-generally-seek-innovation-with-protection">A new state-by-state AI policy scan</a> from the Council of State Governments offers a clear view of what the federal government wants to preempt: states are actively seeking innovation with protection, building regulatory frameworks even as the administration eyes restrictions. The scan reveals widespread state activity on AI governance, demonstrating that states aren&#8217;t waiting for federal leadership and showing no signs of backing down despite pressure from Washington.</p><p>The public remains skeptical of AI, according to the report&#8217;s findings, which helps explain why state legislatures and attorneys general continue moving forward with consumer protections. 
States have been filling the regulatory void Congress created through inaction on data privacy, social media harms, and now AI. The pattern is consistent: federal paralysis creates state activity, then federal officials complain about the resulting &#8220;patchwork.&#8221;</p><h3>Building Governance Capacity</h3><p>States aren&#8217;t just resisting federal overreach. They&#8217;re building institutional capacity to govern AI effectively. <a href="https://www.govtech.com/workforce/oklahoma-appoints-first-chief-ai-and-technology-officer">Oklahoma appointed its first Chief AI and Technology Officer</a> this week, elevating Tai Phan to a dual role leading responsible AI adoption and statewide technology strategy. The move signals how the CIO position is evolving from technical implementer to strategic translator between technology capabilities and policy realities. Phan brings experience in both public and private sector technology modernization, positioning Oklahoma to navigate AI adoption with organizational discipline rather than vendor-driven enthusiasm.</p><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/11/new-jersey-uses-ai-tool-boost-resident-and-staff-experiences/409636/">New Jersey released a report</a> highlighting how AI is improving service delivery for critical resources like food assistance and unemployment insurance. The report demonstrates states are using AI to solve real operational challenges, not chase technology for its own sake. 
<a href="https://www.route-fifty.com/workforce/2025/11/west-virginia-turns-tech-implement-new-child-care-payment-model/409672/">West Virginia deployed a data platform</a> to address unemployment resulting from child care shortages, using technology to keep more residents in the workforce by filling childcare gaps.</p><p><a href="https://www.route-fifty.com/artificial-intelligence/2025/11/report-how-public-sector-can-help-drive-ai-innovation/409712/">A new report offers strategies</a> for government leaders to encourage AI innovation that prepares communities for an AI-ready future, emphasizing that public sector leadership matters in shaping how AI develops locally.</p><h3>Workforce Tensions Surface</h3><p><a href="https://www.route-fifty.com/workforce/2025/11/washington-lawmakers-consider-requiring-union-talks-over-government-use-ai/409696/">Washington lawmakers plan to reintroduce a bill</a> requiring government agencies to bargain with public sector unions over AI adoption. If passed, Washington would become the first state to explicitly mandate AI bargaining under the Public Employees&#8217; Collective Bargaining Act. The bill stalled in 2025 but its prime sponsor is trying again in 2026, recognizing that AI deployment decisions are becoming labor relations issues, not just IT procurement choices. Workers want a voice in how AI changes their roles before systems go live, not after.</p><h3>The Data Center Dilemma</h3><p>States face mounting pressure around AI infrastructure, and the conflicts are intensifying. <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/utilities-struggling-deal-data-center-power-demand-report-says/409535/">More than half of utility leaders</a> in a Black &amp; Veatch report said available power is the biggest challenge to getting data centers online, with more proactive planning needed. 
The gap between AI ambitions and electrical grid capacity is real, and utilities are struggling to keep pace with demand.</p><p>Communities are pushing back on projects imposed without their input. Energy bills, water use and noise are driving <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/data-center-growth-drives-locals-fight-more-say/409563/">locals to fight for more say</a> in data center approvals. As municipalities move to enact ordinances, some communities are turning to ballot measures while state lawmakers rush to pass legislation that may favor developers over residents.</p><p><a href="https://www.govtech.com/artificial-intelligence/top-of-mind-for-data-center-best-practices-its-transparency">Experts emphasized</a> that governments and communities must work together to ensure AI data center projects meet residents&#8217; current and future needs. Transparency isn&#8217;t a courtesy, it&#8217;s a requirement for projects that will affect everyone&#8217;s utility bills, water supply and quality of life. The economic benefits are real, but so is the backlash when communities discover they&#8217;re bearing costs for infrastructure they never approved.</p><h3>Innovation in Practice</h3><p>Not all state news involves conflict. <a href="https://www.govtech.com/artificial-intelligence/this-colorado-ski-town-is-adopting-a-suite-of-ai-tools">Vail, Colorado, is adopting agentic AI tools</a> for fire detection and public engagement, focusing on efficiency with everyday tasks. <a href="https://www.govtech.com/artificial-intelligence/ai-and-technology-help-manage-traffic-in-raleigh-n-c">Raleigh is bringing together GIS, AI and other tools</a> to develop a traffic management system that improves safety for all road users. 
These deployments reflect what&#8217;s possible when technology serves clear operational needs rather than searching for problems to justify the investment.</p><div><hr></div><h2>Education</h2><h3>Beyond &#8220;Googlification&#8221;</h3><p>K-12 districts are figuring out what AI means for career preparation, and the answer goes deeper than teaching students to prompt ChatGPT. <a href="https://www.govtech.com/education/k-12/what-is-ai-bringing-to-career-and-technical-education">An official from the Association for Career and Technical Education</a> discussed CTE programs moving beyond the &#8220;Googlification&#8221; of AI, examining its impact on culinary arts, HVAC programs and other vocational tracks. The question isn&#8217;t whether students can use AI tools, it&#8217;s whether they understand how AI will transform the careers they&#8217;re training for and whether they&#8217;re developing skills AI can&#8217;t easily replicate.</p><p><a href="https://www.govtech.com/education/k-12/how-ai-is-transforming-career-prep-at-bentonville-ark-schools">Bentonville Public Schools in Arkansas</a> added an AI component to its &#8220;Ignite&#8221; career-track program, helping students understand how technology is transforming their potential future jobs. This is the practical application that matters: students need to see AI in the context of actual work environments and career pathways, not as an abstract technology topic disconnected from their futures.</p><h3>Rural Schools Get Strategic Support</h3><p><a href="https://www.govtech.com/education/k-12/university-researchers-to-create-ai-strategy-for-rural-schools">Washington State University researchers</a> received $82,500 from Microsoft to develop an AI integration roadmap for rural K-12 schools in three northwestern states. Rural districts face distinct challenges around capacity, connectivity and resources that urban and suburban districts don&#8217;t experience. 
A one-size-fits-all approach won&#8217;t work, and this research recognizes that rural schools need strategies designed for their specific constraints and opportunities.</p><div><hr></div><h2>Key Insights for Practitioners</h2><p><strong>Infrastructure decisions require community consent</strong>: Data center conflicts reveal what happens when major infrastructure projects that affect everyone&#8217;s utilities and quality of life get imposed without meaningful community input. The backlash isn&#8217;t about opposing AI, it&#8217;s about demanding shared decision-making on projects with long-term public impact.</p><p>Action: If you&#8217;re planning AI infrastructure investments, engage community stakeholders and utility providers at the beginning of the process, not when you need final approvals. Build timelines that accommodate public input and treat community concerns as legitimate planning considerations, not obstacles to overcome.</p><p><strong>CIO roles are fundamentally changing</strong>: Oklahoma&#8217;s creation of a Chief AI and Technology Officer position reflects a broader shift. The CIO role is evolving from managing IT systems to translating technology capabilities into policy realities and organizational strategy. Success increasingly depends on governance capacity, not just technical implementation.</p><p>Action: Evaluate whether your organization&#8217;s technology leadership structure matches the governance demands AI creates. If your CIO is still primarily focused on keeping systems running, you may need to rethink how technology leadership connects to policy, risk management and strategic planning.</p><p><strong>Career preparation means understanding AI&#8217;s industry impact</strong>: The shift from &#8220;Googlification&#8221; to career-context AI education matters. 
Students don&#8217;t just need to know how to use AI tools, they need to understand how AI is transforming specific industries and careers, and which skills remain distinctly human.</p><p>Action: If you&#8217;re involved in workforce development or career training programs, audit whether your AI education focuses on tools or on career transformation. Partner with industry to understand how AI is actually changing specific jobs, not just general workforce trends.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> Whether the draft executive order on state AI law preemption moves forward, and if so, whether it triggers immediate legal challenges from states. The constitutional questions around using federal funding as leverage to override state consumer protection laws are substantial, and Republican state attorneys general defending state sovereignty against their own party&#8217;s administration would create fascinating federalism case law.</p><div><hr></div><p>What&#8217;s your take on the federal preemption push? Should the administration use executive power and funding threats to override state AI laws, or do states have legitimate authority to protect residents when Congress won&#8217;t act? Share your perspective in the comments below.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[When Temporary Fixes Become the Plan]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 20]]></description><link>https://brief.dylanhayden.com/p/when-temporary-fixes-become-the-plan</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/when-temporary-fixes-become-the-plan</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Fri, 14 Nov 2025 12:55:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7f7bd76e-df24-44bd-a77d-626161c92c2d_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This week, Congress ended the longest government shutdown in U.S. history: 43 days that exposed how fragile our public sector technology and cybersecurity infrastructure really is. The resolution funds the government through January 30 and temporarily extends two critical programs: the <a href="https://www.route-fifty.com/cybersecurity/2025/11/government-funding-deal-reups-cyber-grant-program/409489/">State and Local Cybersecurity Grant Program</a> and the <a href="https://www.nextgov.com/cybersecurity/2025/11/bill-end-shutdown-includes-temporary-cyber-info-sharing-law-extension/409442/">Cybersecurity Information Sharing Act</a>. Temporary is the operative word. What happens in ten weeks when these deadlines return?</p><p>The shutdown revealed something uncomfortable: federal capacity for AI and cybersecurity isn&#8217;t just strained, it&#8217;s held together by short-term patches and handshake agreements. 
While federal agencies scrambled to restore basic operations, state governments formed regional coalitions, K-12 districts wrote their own AI policies, and cities deployed practical solutions to immediate problems. The gap between federal policy development and state-local implementation isn&#8217;t just widening, it&#8217;s becoming the new normal.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>This Week&#8217;s Key Developments</h3><ul><li><p><strong>43-day shutdown ends</strong> with temporary fixes: cyber grants and info-sharing law extended only through January 30</p></li><li><p><strong>CISA lost 65% of workforce</strong> during shutdown; federal AI legislation stalled indefinitely</p></li><li><p><strong>Six heartland states form AI caucus</strong> to drive regional policy while federal framework remains absent</p></li><li><p><strong>Maryland launches multi-agency AI partnership</strong> to modernize benefits access and reduce child poverty</p></li><li><p><strong>K-12 districts create vetting frameworks</strong> for AI tools as American Institutes for Research launches implementation studies</p></li></ul><div><hr></div><h2>Federal</h2><p>Congress ended the shutdown, but it didn&#8217;t solve the 
underlying problem. The continuing resolution funds the government through January 30, 2026, giving lawmakers just ten weeks before they face the same fight again. What&#8217;s more troubling for technology leaders is that the deal includes only temporary extensions of two programs that state and local governments depend on. The <a href="https://www.route-fifty.com/cybersecurity/2025/11/government-funding-deal-reups-cyber-grant-program/409489/">State and Local Cybersecurity Grant Program</a>, which expired in September and provides critical funding for smaller jurisdictions, got a reprieve until January. So did the <a href="https://www.nextgov.com/cybersecurity/2025/11/bill-end-shutdown-includes-temporary-cyber-info-sharing-law-extension/409442/">Cybersecurity Information Sharing Act</a>, which shields companies from liability when they share threat intelligence with government partners. Both are stopgaps, not solutions.</p><p>The damage from 43 days of paralysis runs deeper than delayed paychecks. CISA lost 65% of its workforce during the shutdown: 1,651 employees were furloughed from a 2,540-person agency responsible for cybersecurity across all levels of government. Contractors who patch vulnerabilities and manage incident response stopped coming to work. <a href="https://www.govinfosecurity.com/state-cyber-teams-brace-for-impact-us-government-shutdown-a-29644">State cybersecurity officials told reporters</a> they felt the impact immediately, particularly in smaller jurisdictions that depend heavily on federal support and grant funding. Mike Hamilton, former CISO of Seattle, put it bluntly: &#8220;Cybersecurity isn&#8217;t something that you can pause. Adversaries don&#8217;t take days off.&#8221;</p><p>The shutdown&#8217;s toll on AI policy may prove equally costly. 
<a href="https://www.nextgov.com/artificial-intelligence/2025/10/shutdown-could-delay-congress-getting-serious-about-ai-policy/408730/">Federal legislation on artificial intelligence</a>, already moving slowly, stalled completely during the impasse. The National Defense Authorization Act, which typically includes AI provisions, got pushed aside. Industry groups warned that the prolonged closure threatens U.S. leadership in AI innovation. Lawmakers now face a compressed timeline to address not just funding but also the expiring Affordable Care Act subsidies, farm bill extensions, and energy credits. AI policy will compete for attention in an already crowded agenda. The temporary nature of the shutdown resolution guarantees this will all happen again in January.</p><h2>State</h2><h3>States Fill the Federal Vacuum</h3><p>While Congress debated, six states decided they couldn&#8217;t wait. Arkansas, Illinois, Louisiana, Ohio, Oklahoma, and Tennessee <a href="https://www.govtech.com/artificial-intelligence/heartland-states-form-ai-group-to-drive-policy-opportunity">formed the Heartland AI Caucus</a>, a bipartisan effort to shape regional AI strategies and drive innovation where federal policy remains absent. The mission of the caucus is &#8220;to advance practical AI policy that strengthens local economies, prepares workers and modernizes government systems across the region.&#8221; The caucus is a sign that states are moving from reactive compliance with federal mandates to proactive regional coordination. They&#8217;re not asking permission anymore.</p><p>Texas is taking a different approach, <a href="https://www.govtech.com/artificial-intelligence/texas-contemplates-ethics-code-for-government-use-of-ai">proposing an ethics code for government AI use</a> that would apply to all state agencies and local entities. The code, developed by the state Department of Information Resources, is now open for public comment. 
It&#8217;s the kind of framework that might typically come from federal guidance, but Texas isn&#8217;t waiting. Neither is Virginia, but the state&#8217;s <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/virginias-artificial-intelligence-registry-faces-transparency-challenges-study-shows/409459/">AI registry faces mounting criticism</a> from agencies frustrated with transparency challenges and usability issues. Building governance tools is one thing; making them work is another.</p><p>The <a href="https://www.govtech.com/artificial-intelligence/will-patchwork-of-state-ai-laws-inhibit-innovation">patchwork of state AI laws is growing more complicated</a>. State leaders increasingly worry that a mosaic of different rules will create obstacles for technology developers and businesses operating across multiple jurisdictions. Without a unified federal approach, states are simultaneously innovating and creating potential compliance nightmares. Regulation by geography rather than coherent national policy.</p><h3>Maryland&#8217;s Multi-Agency Model</h3><p>Here in my state of Maryland, following the lead of states like Pennsylvania and Colorado, we&#8217;re moving from governance debates to actual implementation. The state <a href="https://www.govtech.com/artificial-intelligence/maryland-leverages-ai-partnership-to-update-public-services">launched a multi-agency AI partnership</a> designed to bring AI tools directly to residents with the aim of simplifying access to benefits, reducing child poverty, and improving housing access. The initiative embeds AI in daily workflows for staff across multiple agencies, moving beyond pilots to enterprise-scale deployment.</p><p>What makes Maryland&#8217;s approach notable isn&#8217;t just the technology. It&#8217;s the institutional coordination. Multi-agency efforts typically die in committee or fragment into competing priorities. 
Success here depends less on the AI itself than on whether state leaders can translate technical capabilities into sustained cross-agency collaboration. If Maryland can pull this off, it becomes a template for other states trying to move from proof-of-concept to production. If it stalls, it joins the long list of ambitious government tech projects that couldn&#8217;t scale.</p><h3>Infrastructure Tensions and Privacy Concerns</h3><p>State governments are grappling with harder questions about AI&#8217;s physical and social infrastructure. <a href="https://www.route-fifty.com/digital-government/2025/11/most-states-dont-disclose-which-companies-get-data-center-incentives-report-finds/409488/">Most states don&#8217;t disclose which companies receive data center incentives</a>, even though at least 36 states provide these subsidies. Only 11 reveal the recipients. Virginia&#8217;s incoming governor, Abby Spanberger, <a href="https://www.route-fifty.com/infrastructure/2025/11/democrats-charge-virginia-spanberger-targets-lower-energy-bills-and-higher-costs-data-centers/409393/">promises to reshape utility policy</a>, targeting lower energy bills for residents while raising costs for data centers. The collision between AI&#8217;s energy demands and public infrastructure constraints is no longer theoretical.</p><p>In Arizona, <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/ai-powered-balloons-have-been-photographing-arizona-homes-insurance-risk-assessments/409487/">AI-powered balloons have been photographing homes</a> for insurance risk assessments, raising immediate privacy and policy concerns that state regulations haven&#8217;t caught up with. 
Pennsylvania legislators are working to <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/pennsylvania-bill-seeks-close-loophole-ai-generated-child-sexual-abuse-materials/409458/">close what they call a &#8220;loophole&#8221; for AI-generated child sexual abuse materials</a>, recognizing that existing laws weren&#8217;t written with generative AI in mind. These aren&#8217;t abstract policy debates. They&#8217;re states scrambling to regulate technologies that arrived faster than the legal frameworks meant to govern them.</p><h2>Education</h2><h3>K-12 Districts Improvising Policy</h3><p>K-12 districts can&#8217;t afford to wait for federal or state AI guidance, so they&#8217;re writing their own rules in real time. The <a href="https://www.govtech.com/education/k-12/nonprofit-launches-studies-to-assess-ais-role-in-k-12">American Institutes for Research launched its AI in Education Network</a> this week, aiming to give educators and policymakers clearer understanding of how AI tools perform in actual classroom settings. The initiative recognizes a gap: there&#8217;s plenty of vendor marketing about AI in education, but very little rigorous evidence about what works, what doesn&#8217;t, and what unintended consequences emerge when schools deploy these systems at scale.</p><p><a href="https://www.govtech.com/education/k-12/how-harlingen-texas-schools-are-evolving-with-ai">Harlingen Independent School District in south Texas</a> illustrates what homegrown policy looks like. The district developed digital responsibility guidelines and created a vetting process for AI tools before purchasing anything. Teachers now use several AI applications (Snorkl for engagement, Eureka Math for instant feedback) but only after the district established guardrails. 
It&#8217;s a practical middle path: not banning AI out of fear, not embracing every tool uncritically, but building institutional capacity to evaluate and deploy thoughtfully.</p><p>A <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/new-tool-aims-help-schools-vet-ai-tech-education/409414/">new tool launched this week</a> aims to help other districts follow Harlingen&#8217;s lead. Developed to address the opacity around AI products in education, it gives school leaders a framework to examine tools more closely when tech companies aren&#8217;t transparent about how their AI actually works. Meanwhile, <a href="https://www.govtech.com/education/k-12/oklahoma-high-school-to-offer-class-on-ai">Broken Arrow High School in Oklahoma</a> is going a step further, offering an AI Foundations class starting this spring that includes lessons on coding and data storytelling. The implicit message: if you can&#8217;t wait for curriculum guidance from above, build your own.</p><h2>Local</h2><h3>Cities Where AI Meets Service Delivery</h3><p><a href="https://www.govtech.com/artificial-intelligence/boston-311-services-shaped-by-adaptable-ai-powered-platform">Boston replaced its aging 311 system</a> with an AI-powered, no-code platform that, unlike the old system, adapts as needs evolve. The previous system had become too rigid, unable to keep pace with how residents actually want to interact with city services. The new platform uses AI to help officials be more efficient and agile, letting them modify workflows without waiting for vendor updates or expensive customization. It&#8217;s not flashy, but it solves a real problem. Hopefully, Philadelphia (where I work) and Baltimore (where I live) will follow suit, but I&#8217;m not holding my breath.</p><p><a href="https://www.govtech.com/transportation/san-anselmo-calif-to-expand-ai-driven-traffic-signals">San Anselmo, California</a>, ran the numbers on its AI-driven traffic signal pilot and decided to expand. 
The system detects traffic patterns and adjusts signals to speed up or slow down flow, decreasing wait times at a busy intersection by 25 to 30 percent. After proving the concept worked, the city is rolling it out to more locations. <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/colorado-town-strives-become-agentic-smart-city/409475/">Vail, Colorado</a>, is taking a broader swing, implementing what it calls an &#8220;agentic smart city&#8221; platform designed to improve government operations and boost customer experience across multiple services. The ambition is higher, the complexity greater, but the goal is the same: use technology to make government work better for residents.</p><p><a href="https://www.govtech.com/policy/rent-setting-algorithm-ban-re-emerges-in-portland-ore">Portland, Oregon</a>, offers a different kind of local AI story. The city council is reconsidering a ban on algorithms that set residential rents, a measure pulled from consideration this spring but now back on the table. The debate centers on whether prohibiting algorithmic rent-setting would discourage housing developers, potentially worsening Portland&#8217;s housing crunch. It&#8217;s a microcosm of the broader tension caused by cities trying to regulate AI applications that may harm residents while avoiding policies that stifle development and make other problems worse.</p><h2>Key Insights for Practitioners</h2><p><strong>Temporary fixes guarantee permanent crisis management</strong>: The January 30 funding deadline means state and local leaders have ten weeks before facing renewed uncertainty about cyber grants and information-sharing protections.</p><p>Action: Treat the current continuing resolution as a planning window, not a solution. 
Identify which programs depend on federal funding that could lapse again in January, and develop contingency plans now for operating without those resources.</p><p><strong>Regional coalitions are the new federal policy</strong>: The Heartland AI Caucus shows states aren&#8217;t waiting for national frameworks. They&#8217;re building regional coordination structures that may outlast whatever federal guidance eventually emerges.</p><p>Action: If your state isn&#8217;t part of a regional AI working group, explore creating or joining one. Interstate coordination on procurement standards, ethical frameworks, and implementation lessons learned provides leverage that individual states lack.</p><p><strong>Build vetting capacity before buying tools</strong>: Harlingen&#8217;s approach (digital responsibility guidelines first, vetted AI tools second) offers a template that works across government levels. Virginia&#8217;s registry struggles show that governance tools built without user input often fail.</p><p>Action: Establish internal evaluation frameworks for AI tools before vendors arrive with proposals. Include frontline staff in developing vetting criteria. The people who will actually use these systems know which promises are realistic and which are marketing.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> Whether the January 30 deadline produces another shutdown or genuine long-term funding commitments. I&#8217;ve spent the last decade in the federal government, so you can guess what my bet is. If states continue forming regional AI coalitions while federal policy stalls, we may be witnessing a fundamental shift in how technology governance happens in the U.S. Not top-down from Washington, but horizontally across state and regional partnerships. 
Wouldn&#8217;t that be interesting to see?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Accountability Arrives: AI Meets the Jobs Question]]></title><description><![CDATA[The Public AI Brief &#183; Issue No. 19]]></description><link>https://brief.dylanhayden.com/p/the-public-ai-brief</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/the-public-ai-brief</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 08 Nov 2025 13:02:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1aeaf35c-74a7-4b9d-b5ef-2100bc47b77f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Friends, welcome to the new home of <em>The Public AI Brief</em> on Substack. After building this community on LinkedIn, I&#8217;m excited to bring you deeper analysis and a better reading experience here. If you are following me from LinkedIn, I&#8217;d love to hear from you about the new format to make sure you&#8217;re getting the very best content about the latest developments in public sector AI.</p><p>Also, do you know someone who wants to begin a career in public service or nonprofit leadership? 
The Fels Institute of Government at the University of Pennsylvania is now accepting applications for Fall 2026 for the Master of Public Administration and Executive Master of Public Administration programs. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fels.upenn.edu/admissions&quot;,&quot;text&quot;:&quot;Click here for more information!&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fels.upenn.edu/admissions"><span>Click here for more information!</span></a></p><p>This week, two bipartisan bills force the federal government to answer what it&#8217;s been avoiding: how many jobs is AI actually replacing? Meanwhile, states and cities navigate the messier reality of deploying AI without clear answers on workforce impact, student wellbeing, or vendor accountability.</p><h2>Federal</h2><p>Congress is finally asking the question agencies have been dodging: what happens to workers when AI takes their jobs? <a href="https://www.nextgov.com/artificial-intelligence/2025/11/lawmakers-propose-requiring-agencies-major-firms-report-ais-job-impact/409331/">Two bipartisan bills introduced this week would require federal agencies and major private firms to report quarterly on AI-driven layoffs and workforce displacement</a>. Sens. Mark Warner (D-Va.) and Josh Hawley (R-Mo.) aren&#8217;t waiting for voluntary transparency. <a href="https://federalnewsnetwork.com/federal-newscast/2025/11/a-new-bill-would-require-agencies-to-disclose-when-ai-replaces-a-federal-job/">The legislation mandates disclosure when AI replaces a federal position</a>, a data point agencies currently don&#8217;t track and certainly don&#8217;t publicize.</p><p>The bills arrive as AI optimism collides with institutional denial. Agencies tout efficiency gains while refusing to quantify headcount impacts. This legislation won&#8217;t stop automation, but it forces a public accounting. 
If you&#8217;re deploying AI that eliminates roles, Congress now expects you to say so. The federal workforce has operated in a data vacuum on this question. These bills would turn the lights on.</p><h2>State</h2><p>Forty state attorneys general drew a line this week: <strong>no federal preemption of state AI laws</strong>. <a href="https://statescoop.com/state-attorneys-general-reject-federal-ai-law/">The coalition called a proposed 10-year moratorium on state AI enforcement &#8220;sweeping and wholly destructive of reasonable state efforts to prevent known harms&#8221;</a>. The moratorium, tucked into federal budget reconciliation, would leave Americans &#8220;entirely unprotected&#8221; while providing no replacement regulatory scheme. With 48 states and Puerto Rico introducing AI legislation in 2025, and 26 states adopting at least 75 new AI measures, states aren&#8217;t backing down from their consumer protection role. More than 140 civil rights and consumer protection organizations joined the opposition. The message: states have been filling the regulatory void Congress created, and they&#8217;re not stopping now.</p><p>This federal-state tension played out vividly in California. <a href="https://www.govtech.com/policy/california-ai-laws-causing-tension-with-tech-giants">Tech companies are threatening to leave the state if legislators don&#8217;t back down from restrictive AI regulation</a>. The message is blunt: regulate us too hard, and we&#8217;ll take our jobs and tax revenue elsewhere. <a href="https://www.route-fifty.com/artificial-intelligence/2025/10/openai-just-cut-deal-california-critics-say-its-full-holes/409210/">OpenAI&#8217;s recent deal with California&#8217;s attorney general, converting to for-profit status while settling an investigation, has critics calling out holes in the agreement</a>. 
Meanwhile, <a href="https://www.govtech.com/artificial-intelligence/ohio-lawmakers-seek-penalties-when-chatbots-promote-self-harm">Ohio lawmakers are proposing penalties up to $50,000 per violation when chatbots promote self-harm</a>, a direct response to documented cases of AI tools encouraging dangerous behavior. The pattern is clear: policymakers are chasing problems that have already materialized, while industry demands regulatory forbearance.</p><p>States are building AI governance infrastructure in real time, and the GovAI Coalition Summit in San Jose this week showcased both the urgency and the improvisation. <a href="https://www.govtech.com/artificial-intelligence/san-jose-is-introducing-new-ai-skills-training-for-residents">San Jose announced a new public-private partnership to bring AI skills training to any resident who wants it</a>, a recognition that workforce transformation isn&#8217;t optional. <a href="https://www.govtech.com/artificial-intelligence/for-local-government-challenge-and-opportunity-in-ai">Summit attendees described an &#8220;industrial revolution&#8221; underway in local government</a>, with service delivery and workforce upskilling taking center stage. <a href="https://www.govtech.com/artificial-intelligence/oakland-calif-s-ai-experiment-testing-before-buying-it">Meanwhile, Oakland is taking a more cautious approach, issuing an RFI that invites innovators to test AI solutions before the city commits to buying anything</a>. Test before procurement: a refreshingly skeptical posture in a landscape dominated by vendor promises.</p><p>The challenge isn&#8217;t just capacity, it&#8217;s knowing what capacity to build. 
<a href="https://www.govtech.com/artificial-intelligence/university-of-pennsylvania-partners-with-the-state-on-ai">Pennsylvania formalized this approach with a Cooperative Agreement for Artificial Intelligence Advising Services with the University of Pennsylvania</a>, enabling Penn faculty experts to serve as official advisers to state government on AI policy, strategy, risk assessment, and governance frameworks. <a href="https://www.govtech.com/artificial-intelligence/university-of-pennsylvania-partners-with-the-state-on-ai">The partnership builds on Pennsylvania&#8217;s existing AI initiatives, including a 2023 Generative AI Governing Board and an OpenAI pilot that demonstrated employees saved an average of 95 minutes per day</a>. Rather than build redundant internal expertise, Pennsylvania is leveraging academic research capacity, a model other states are watching closely.</p><p>I&#8217;m personally enthusiastic about this approach. As the professor teaching Penn&#8217;s course on <em><strong>AI for Public Sector Leadership</strong></em>, I expect this partnership to directly enhance the ability of our students at the Fels Institute to engage with state government on using AI for the public good. It&#8217;s one thing to teach AI governance in a classroom. It&#8217;s another to have students working alongside state officials on real policy challenges, risk assessments, and implementation strategies. This is how you build the next generation of public sector leaders who understand both the technology and the institutional realities.</p><p><a href="https://www.govtech.com/artificial-intelligence/initiative-aims-to-help-state-governments-build-ai-capacity">A new AI Readiness Project aims to help states, territories and tribal governments use AI responsibly through convenings, knowledge sharing and pilots</a>. 
The initiative recognizes what&#8217;s increasingly obvious: most governments lack the institutional muscle to evaluate AI claims, design governance frameworks, or navigate vendor markets.</p><p>While some states build capacity, others are already deploying at scale, and the results are mixed. <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/california-rolls-out-ai-powered-id-verification-benefits/409236">California&#8217;s Employment Development Department rolled out AI-powered identity verification for benefits applications</a>, evaluating devices, IP addresses and risk signals to combat fraud. <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/work-group-calls-ai-enabled-nonemergency-phone-system-statewide-ease-burden-911/409282/">Maryland is considering an AI-enabled nonemergency phone system statewide to ease the burden on 911</a>. At an estimated $2.5 million for two years, the state would be the first in the nation to implement such a system. <a href="https://statescoop.com/indiana-gen-ai-notary-content/">Indiana took a quieter approach, using generative AI to revamp content for its notary education system</a>. These deployments share a common thread: states are moving from pilots to production, often without waiting for federal guidance or comprehensive risk assessments.</p><h2>Local</h2><p>Cities are where AI theory meets service delivery reality. <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/game-la-ramps-tech-ahead-major-sporting-events/409257/">Los Angeles is ramping up AI deployments ahead of hosting the World Cup, Super Bowl, Olympics and Paralympics</a>, using the global spotlight as both deadline and justification for accelerated adoption. The city is betting that AI can improve services under the pressure of massive events. 
<a href="https://www.govtech.com/artificial-intelligence/beyond-limits-cities-large-and-small-put-ai-to-use">A broader survey of cities large and small shows AI being used for everything from reducing first responder paperwork to streamlining permitting</a>. These aren&#8217;t moonshot projects. They&#8217;re operational tools addressing immediate friction points in service delivery.</p><p><a href="https://www.firehouse.com/technology/artificial-intelligence/article/55328688/fdny-enhances-brush-fire-response-with-new-ai-system-and-existing-cameras">New York City&#8217;s fire department deployed AI-powered cameras at city parks for early detection of brush fires</a>. Following a busy brush fire season in 2024, FDNY updated eight existing cameras in five locations with AI detection capabilities, powered by solar panels. When the AI detects smoke or fire, it triggers a notification to the on-duty officer at FDNY&#8217;s Command Center, who assesses the situation and determines if fire companies need to be dispatched. The system augments rather than replaces existing infrastructure and personnel, with human oversight remaining central to decision-making.</p><p><a href="https://www.govtech.com/education/higher-ed/mass-ai-challenge-awards-universities-for-risk-assessment-engineering">Massachusetts is taking a different approach, awarding seven grants through its AI Models program to university-led research projects in manufacturing, energy and climate resilience</a>. The state is using academia as an R&amp;D engine before scaling AI into government operations, a model that prioritizes risk assessment and engineering rigor over speed.</p><h2>Education</h2><p>Higher education is discovering that AI policy is inseparable from student mental health. 
<a href="https://www.govtech.com/education/higher-ed/educause-25-how-ai-policies-affect-student-mental-health">A panel at EDUCAUSE &#8216;25 highlighted how punitive, fear-driven AI policies deepen mistrust, stress and disconnection among students</a>. Institutions that lead with prohibition rather than pedagogy are creating anxiety, not learning. <a href="https://www.route-fifty.com/artificial-intelligence/2025/11/students-want-schools-incorporate-ai-learning-express-some-fears/409255/">Meanwhile, a survey found that students overwhelmingly want schools to incorporate AI into learning, but they&#8217;re afraid</a>. They fear being accused of plagiarism, letting AI think for them, not knowing where the boundaries are. The data is clear: schools are lagging behind their students in using AI.</p><p>The K-12 reality is messier. <a href="https://www.edweek.org/technology/schools-ai-policies-are-still-not-clear-to-teachers-and-students/2025/01">Nearly half of educators say their district does not have an AI policy</a>, with only Ohio and Tennessee requiring districts to have comprehensive AI policies. <a href="https://www.edweek.org/technology/how-school-districts-are-crafting-ai-policy-on-the-fly/2025/10">Districts are taking wildly divergent approaches: Tucson created a task force of 40+ people over two years to develop comprehensive policy, while Arlington opted for a flexible &#8220;framework&#8221; with continuous website updates instead of formal policy</a>. <a href="https://www.edweek.org/technology/how-school-districts-are-crafting-ai-policy-on-the-fly/2025/10">Just 40% of states have AI guidance according to SETDA surveys</a>. 
Districts emphasize that professional development must accompany policy implementation, but most lack resources for both.</p><p><a href="https://www.govtech.com/education/k-12/decatur-educators-forum-encourages-embracing-ai">In Decatur, Alabama, educators agreed at a State of Education forum that AI has already become essential for both teachers and students</a>. The consensus: embrace it, don&#8217;t ban it. <a href="https://www.govtech.com/education/higher-ed/opinion-college-students-need-not-panic-about-ai">A counterpoint from a college administrator argues that students need not panic about AI automation</a>. The key is building the right skills and relationships to turn uncertainty into advantage. But this assumes students have access to clear guidance and thoughtful policies, which most don&#8217;t. The gap between student demand, educator acceptance and institutional policy is widening, and it&#8217;s students who bear the cost of that misalignment.</p><h2>Key Insights for Practitioners</h2><p><strong>Transparency isn&#8217;t optional anymore</strong>: The federal job displacement bills signal a broader shift. Stakeholders expect public accounting of AI&#8217;s workforce impact, not just efficiency narratives. Action: Begin tracking and documenting where AI is replacing, augmenting or transforming roles in your organization now, before disclosure requirements force rushed assessments.</p><p><strong>Test-before-buy beats vendor promises</strong>: Oakland&#8217;s RFI approach, inviting solutions to prove value before procurement, should become standard practice, not an outlier. Action: Build evaluation frameworks that require vendors to demonstrate outcomes in your operational environment before you sign contracts or commit budgets.</p><p><strong>Student mental health is an AI governance issue</strong>: Punitive AI policies in education create anxiety and disconnection, undermining learning outcomes. 
Action: Review your institution&#8217;s AI policies through a mental health lens. If they lead with fear rather than pedagogy, revise them to emphasize learning opportunities and clear boundaries instead of prohibition.</p><div><hr></div><p><strong>What I&#8217;m watching:</strong> Federal guidance on workforce transition planning as AI deployments scale. If agencies begin publicly tracking job displacement, expect states and municipalities to face pressure to do the same, and for collective bargaining agreements to start addressing AI&#8217;s role in workforce transformation.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[November 7, 2025]]></title><description><![CDATA[Originally published on LinkedIn]]></description><link>https://brief.dylanhayden.com/p/november-7-2025</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/november-7-2025</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 08 Nov 2025 01:05:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/20a6bd3b-d336-40e8-8cc8-bf370f7248f5_1024x1024.png" length="0" 
type="image/png"/><content:encoded><![CDATA[<h2><strong>Federal</strong></h2><p><strong><a href="https://www.nextgov.com/artificial-intelligence/2025/10/lawmaker-advocacy-leader-underscore-legal-immigration-central-us-ai-dominance/409189/">Leaders Link AI Growth to Legal Immigration and Jobs</a></strong></p><p>At a Washington, D.C. conference, Sen. Mike Rounds and manufacturing leader <strong><a href="https://www.linkedin.com/in/jaytimmons/">Jay Timmons</a></strong> emphasized that legal immigration and expanded manufacturing jobs are essential to sustaining U.S. leadership in artificial intelligence. They argued that AI infrastructure demands will require both skilled trades and permitting reform to accelerate development. This discussion underscores a critical but often overlooked reality: AI innovation isn&#8217;t just about algorithms, it depends on people, infrastructure, and policy. For public leaders, aligning immigration, workforce development, and permitting processes with AI goals is a strategic imperative, not a side issue.</p><p><strong><a href="https://fedscoop.com/ai-job-cuts-senate-bill-mark-warner-josh-hawley/">Senate Bill Seeks AI Job Impact Reporting from Agencies</a></strong></p><p>A bipartisan bill from Senators Mark Warner and Josh Hawley would require federal agencies and major companies to report quarterly on AI-related job losses, new hires, retraining efforts, and unfilled positions due to automation. The <strong><a href="https://www.linkedin.com/company/u-s-department-of-labor/">U.S. Department of Labor</a></strong> would compile and publish the data with input from other federal offices. This proposal reflects growing concern about the opaque effects of AI on employment and the need for data-driven workforce policy. 
For public leaders, it&#8217;s a reminder that transparency and proactive planning are essential to managing technological disruption in both government and the broader economy.</p><p><strong><a href="https://fedscoop.com/pure-storage-high-performance-storage/">Modernizing Federal Data Storage for AI and Analytics</a></strong></p><p>Federal agencies are exploring high-performance storage solutions to support growing demands from AI, analytics, and cloud-based operations. Vendors like <strong><a href="https://www.linkedin.com/company/purestorage/">Pure Storage</a></strong> are promoting scalable, energy-efficient systems tailored to government needs. As agencies modernize their digital infrastructure, storage is often overlooked but foundational. Smart investments here can unlock better data use, reduce energy costs, and future-proof systems for AI-driven services.</p><p><strong><a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/ai-guardrails-and-the-national-security-implications/">AI Guardrails Tied to National Security Concerns</a></strong></p><p>A recent article explores how the U.S. government is accelerating AI adoption for cyber defense while grappling with the need for regulatory guardrails to manage national security risks. As AI becomes embedded in national defense strategies, the challenge isn&#8217;t just technical&#8212;it&#8217;s institutional. 
Public leaders must ensure that oversight mechanisms evolve alongside deployment to maintain trust and democratic accountability.</p><h2><strong>State</strong></h2><p><strong><a href="https://statescoop.com/state-local-government-ai-beeck-center/">Beeck Center Supports AI Use in State and Local Government</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/beeckcenter/">Beeck Center for Social Impact + Innovation</a></strong> at <strong><a href="https://www.linkedin.com/school/georgetown-university/">Georgetown University</a></strong> is launching initiatives to guide state and local governments as they expand use of generative AI tools, focusing on responsible deployment and public value. This kind of institutional support is critical as governments move from AI policy talk to practical implementation. The Beeck Center&#8217;s involvement signals a growing recognition that public sector innovation needs both ethical guardrails and operational guidance.</p><p><strong><a href="https://statescoop.com/ai-readiness-project-ccf-state-local-government/">AI Readiness Project Expands to State Governments</a></strong></p><p>The AI Readiness Project, initially focused on local governments, is now offering support to state agencies to help them build foundational AI capabilities through training, experimentation, and peer learning. This expansion reflects growing demand among state leaders for structured, low-risk ways to build AI literacy and governance capacity. 
As states face pressure to modernize services, initiatives like this can help bridge the gap between interest and implementation.</p><h2><strong>Local</strong></h2><p><strong><a href="https://www.route-fifty.com/artificial-intelligence/2025/10/tucson-pd-used-border-security-money-controversial-surveillance-software/409170/">Tucson Police Used Border Funds for AI Surveillance Tool</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/tucson-police-department/">Tucson Police Department</a></strong> used $277,500 from Arizona&#8217;s Border Security Fund to purchase AI-powered surveillance software from Cobwebs Technologies, despite not using it for border-related crimes. The software, capable of tracking individuals via social media and mobile data, has raised legal and privacy concerns among lawmakers and civil rights advocates. This case highlights the growing disconnect between the intended use of public safety funds and how emerging surveillance technologies are actually deployed. It also underscores the urgent need for clearer legal frameworks and oversight mechanisms to govern AI surveillance in local law enforcement.</p><h2><strong>International</strong></h2><p><strong><a href="https://americanbazaaronline.com/2025/11/05/nvidias-2-billion-bet-on-india-why-other-ai-titans-are-investing-too-469657/">Nvidia Invests $2 Billion in India&#8217;s AI Sector</a></strong></p><p><strong><a href="https://www.linkedin.com/company/nvidia/">NVIDIA</a></strong> is committing $2 billion to expand its presence in India, joining other major tech firms investing in the country&#8217;s growing AI ecosystem. The Indian government is also increasing funding for deep tech initiatives to position the country as a global AI hub. India&#8217;s strategic push into AI, backed by both public and private investment, signals a shift in global innovation centers. 
For governments elsewhere, it&#8217;s a reminder that national AI capacity is increasingly tied to coordinated public-private action and long-term infrastructure planning.</p><p><strong><a href="https://www.globalgovernmentforum.com/skills-gaps-harming-public-sector-ai-adoption-warns-uk-spending-watchdog/">UK Audit Office Flags AI Skills Gap in Government</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/naoorguk/">UK National Audit Office</a></strong> warns that a lack of digital and data skills is hindering the public sector&#8217;s ability to adopt and govern AI technologies effectively. Several departments are experimenting with tools like Microsoft Copilot, but progress is uneven due to workforce limitations. This is a familiar challenge across governments: the technology moves faster than institutional capacity. Without strategic investment in digital skills, public agencies risk both underutilizing AI and mismanaging its risks.</p><h2><strong>Education</strong></h2><p><strong><a href="https://www.euronews.com/next/2025/11/05/anthropic-to-bring-its-ai-to-hundreds-of-teachers-in-iceland-to-help-them-prepare-lessons">Icelandic Teachers to Use Anthropic AI for Lesson Planning</a></strong></p><p><strong><a href="https://www.linkedin.com/company/anthropicresearch/">Anthropic</a></strong> is partnering with Iceland&#8217;s public sector to provide AI tools to hundreds of teachers, aiming to streamline lesson preparation and reduce administrative burdens. This initiative reflects a growing trend of using AI to <em><strong>support </strong></em>educators. 
For public education systems, the key challenge will be ensuring these tools enhance equity and align with national curriculum goals, not just efficiency.</p><p><strong><a href="https://www.route-fifty.com/artificial-intelligence/2025/10/future-workforce-cant-wait-why-states-must-redefine-student-success-now/409171/">States Urged to Redefine Student Success for AI Era</a></strong></p><p>A new commentary highlights the growing disconnect between traditional academic metrics and the skills needed in an AI-driven economy, calling on states to adopt broader definitions of student success. Efforts like the <strong><a href="https://www.linkedin.com/company/national-governors-association/">National Governors Association</a></strong>&#8217;s &#8216;Let&#8217;s Get Ready&#8217; roadmap and state-level initiatives in Kentucky, New York, and North Carolina aim to integrate durable skills and AI literacy into K&#8211;12 education. This piece underscores a critical shift in public education: moving from test-based achievement to competency-based readiness. For state leaders, the challenge is not just policy reform but building the infrastructure, including curriculum, assessments, and teacher support, that makes 21st-century skills real and measurable in every classroom.</p><h2><strong>Public Sector</strong></h2><p><strong><a href="https://hls.harvard.edu/events/how-ai-is-changing-big-law/">AI Is Reshaping Legal Work in Major Law Firms</a></strong></p><p>A <strong><a href="https://www.linkedin.com/school/harvard-law-school/">Harvard Law School</a></strong> event explored how artificial intelligence is transforming operations in large law firms, with insights from leaders at <strong><a href="https://www.linkedin.com/company/aoshearman/">A&amp;O Shearman</a></strong> on global AI strategy and workforce impact. 
While focused on private law firms, the discussion offers a preview of how AI could streamline legal workflows in public agencies&#8212;from contract review to case analysis&#8212;raising important questions about legal workforce readiness and ethical oversight.</p><p><strong><a href="https://www.linkedin.com/posts/suellenventura_ai-publicsector-techforgood-activity-7391903297613443072-Joh-">AI Learning Series Highlights Tech&#8217;s Role in Public Services</a></strong></p><p><strong><a href="https://www.linkedin.com/in/suellenventura/">Suellen Ventura</a></strong> is hosting a series of 30-minute sessions introducing various AI technologies and their applications in public sector services, aiming to showcase their potential impact on government operations. Short, focused learning formats like this are a smart way to build AI literacy across public agencies. As governments navigate digital transformation, accessible education is key to turning curiosity into capability.</p><p><strong><a href="https://www.kearney.com/industry/aerospace-defense/article/see-threats-before-they-strike-with-advanced-ai-security">AI Tools Aim to Predict Security Threats Early</a></strong></p><p>A new report from consulting firm <strong><a href="https://www.linkedin.com/company/kearneyandcompany/">Kearney &amp; Company</a></strong> explores how advanced AI systems are being used in aerospace and defense to detect and respond to threats before they materialize. The approach emphasizes public&#8211;private collaboration and real-time data analysis to enhance national security readiness. While the defense sector often leads in AI adoption, the broader public sector should take note&#8212;early threat detection isn&#8217;t just about military readiness. 
These tools could inform emergency management, cybersecurity, and critical infrastructure protection across all levels of government.</p><p><strong><a href="https://www.causeartist.com/causeartist-weekly-297-food-waste-ai-global-climate-funding-and-100k-accelerator-grants/">AI Startups Target Food Waste and Climate Solutions</a></strong></p><p>This week&#8217;s Causeartist roundup highlights <strong><a href="https://www.linkedin.com/company/fruitscout/">FruitScout</a></strong>, a U.S.-based startup that raised $4.8 million to use AI in reducing food waste, alongside updates on global climate funding and $100K accelerator grants for impact-driven ventures. AI-driven solutions to food waste and climate resilience are increasingly relevant to public agencies facing sustainability mandates. Governments should watch how these startups scale and consider partnerships or pilots that align with public goals.</p><p><strong><a href="https://mickryan.substack.com/p/confronting-complacency">Analytical AI Advances Pose Challenge to Complacency</a></strong></p><p>A recent essay by strategist Mick Ryan argues that while generative AI garners attention, the rapid progress in analytical AI deserves equal scrutiny, especially for its implications in defense and public sector planning. This is a timely reminder that public institutions must not fixate solely on flashy AI tools like chatbots. 
Analytical AI, used in forecasting, logistics, and decision support, may have the most immediate impact on how governments operate and serve communities.</p><p><strong><a href="https://markets.financialcontent.com/wral/article/bizwire-2025-11-5-generative-ai-cybersecurity-research-and-forecast-report-2025-2031-key-market-dynamics-case-study-analysis-technology-insights-regulatory-landscape-researchandmarketscom">Generative AI Spurs Growth in Cybersecurity Tools</a></strong></p><p>A new market report forecasts increased adoption of generative AI in cybersecurity from 2025 to 2031, particularly in sectors like government, healthcare, and finance where sensitive data is prevalent. The report highlights growing use of static application security testing (SAST) to manage AI-related risks. As governments integrate generative AI into operations, the parallel investment in cybersecurity signals a maturing understanding of AI&#8217;s dual-use nature. Public leaders should treat cybersecurity as foundational infrastructure, not an afterthought, in AI deployment.</p><p><strong><a href="https://www.brookings.edu/articles/the-future-of-data-centers/">Data Centers Face Growing Strain from AI Demands</a></strong></p><p>A Brookings analysis highlights how the rise of generative AI is driving massive increases in energy and infrastructure needs for data centers, raising concerns about sustainability and national competitiveness. As governments adopt AI, they must also grapple with the physical infrastructure it requires. 
Public leaders should be thinking not just about digital policy, but also about energy grids, zoning, and environmental impact tied to AI&#8217;s backend systems.</p><p><strong><a href="https://statescoop.com/ppg-nonprofit-announcement-2025/">New Nonprofit to Support Government Modernization Efforts</a></strong></p><p><strong><a href="https://www.linkedin.com/company/partnersforpublicgood/">Partners for Public Good</a></strong>, a newly launched nonprofit, will collaborate with state and local governments to address persistent challenges in procurement, budgeting, staffing, and technology modernization. This initiative reflects a growing recognition that structural barriers are holding back public sector innovation. By focusing on the nuts and bolts of government operations, the nonprofit could help build the institutional capacity needed for long-term change.</p>]]></content:encoded></item><item><title><![CDATA[November 2, 2025]]></title><description><![CDATA[Originally posted on LinkedIn]]></description><link>https://brief.dylanhayden.com/p/november-2-2025</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/november-2-2025</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Mon, 03 Nov 2025 01:12:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/62741c22-f1c5-4424-8ebf-1cf6444d722b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Federal</strong></h2><p><strong><a href="https://fedscoop.com/homeland-security-artificial-intelligence-tools-framework/">A Framework for Responsible AI in Homeland Security</a></strong></p><p>A new five-step framework urges homeland security leaders to treat AI adoption as a strategic, mission-critical process rather than a routine IT procurement. 
The model emphasizes defining operational use cases, ensuring data integrity, building trusted vendor relationships, integrating across systems, and establishing strong governance.</p><p><strong><a href="https://fedscoop.com/perplexity-chatgpt-error-ridden-orders-federal-judges/">Federal Judges Blame AI for Faulty Court Orders</a></strong></p><p>Two U.S. district judges acknowledged that law clerks used generative AI tools like ChatGPT and <strong><a href="https://www.linkedin.com/company/perplexity-ai/">Perplexity</a></strong> to draft legal orders that were later found to contain significant errors. The judges have since implemented stricter internal review processes and formal AI usage policies. This incident, the most recent in a string of such cases, underscores the urgent need for clear, enforceable AI policies within the judiciary and other branches of government. As generative tools become more accessible, public institutions must balance innovation with safeguards that preserve trust, accuracy, and due process.</p><p><strong><a href="https://fedscoop.com/interim-ai-guidance-us-courts-aims-experimentation-guardrails/">Federal Courts Issue Interim AI Guidance with Safeguards</a></strong></p><p>The U.S. federal judiciary has released interim guidance allowing courts to experiment with AI tools while emphasizing security, ethical standards, and judicial accountability. The guidance, developed by an AI task force, cautions against delegating core judicial functions to AI and encourages independent verification of AI-assisted work. This measured approach reflects the judiciary&#8217;s need to balance innovation with its foundational principles of independence and integrity. 
By encouraging experimentation within clear boundaries, the courts are signaling openness to AI while reinforcing that responsibility for decisions remains human.</p><p><strong><a href="https://fedscoop.com/from-proof-of-concept-to-powerhouse-why-federal-agencies-need-ai-factories/">Why Federal Agencies Need AI Factories Now</a></strong></p><p>A new article argues that federal agencies must move beyond scattered AI pilots and adopt &#8220;AI factories,&#8221; integrated systems that align infrastructure, data, and workforce processes to scale AI across missions. This piece underscores a growing consensus: AI success in government depends less on tools and more on institutional readiness. Building AI factories is really about modernizing how agencies organize people, data, and decisions, rather than just deploying new tech.</p><p><strong><a href="https://fedscoop.com/ai-workforce-legal-immigration-mike-rounds/">Senator: Legal Immigration Key to U.S. AI Competitiveness</a></strong></p><p>Sen. Mike Rounds (R-S.D.) argued that expanding legal immigration is essential for the U.S. to compete with China in AI, citing workforce shortages and the need for skilled labor in manufacturing and data center operations. He also called for stronger industry-education partnerships to prepare American workers for AI-era jobs. This is a rare bipartisan moment of clarity: the future of AI in the U.S. hinges not just on technology, but on people. 
A modern immigration policy and coordinated workforce development strategy are foundational to any serious national AI agenda.</p><p><strong><a href="https://fedscoop.com/gsa-grok-for-government-public-citizen-xai-elon-musk/">Advocates Challenge Federal Use of Grok AI Tool</a></strong></p><p>A coalition of advocacy groups is urging the <strong><a href="https://www.linkedin.com/company/office-of-management-and-budget/">Office of Management and Budget</a></strong> to suspend the federal government&#8217;s use of xAI&#8217;s Grok chatbot, citing concerns over ideological bias and antisemitic content. The push follows a General Services Administration contract allowing agencies to access Grok models at a reduced cost through 2027. This controversy highlights the growing tension between rapid AI adoption and the need for rigorous vetting aligned with public values. As agencies integrate generative tools, transparency in procurement decisions and adherence to ethical standards will be critical to maintaining public trust and institutional accountability.</p><p><strong><a href="https://www.govexec.com/technology/2025/10/salesforce-pitches-ai-agents-government-sheds-staff/409017/">AI Agents Touted as Solution to Shrinking Federal Workforce</a></strong></p><p>As the federal government faces significant staffing reductions, Salesforce is promoting AI agents as a way to maintain service delivery. While still in pilot phases, these technologies are being tested for use in customer service and claims processing, with human oversight emphasized. This story underscores a growing tension in public administration: how to maintain service levels amid workforce cuts. 
AI agents may offer relief, but their deployment raises critical questions about accountability, oversight, and the long-term role of human expertise in government operations.</p><h2><strong>State</strong></h2><p><strong><a href="https://www.streetinsider.com/Corporate+News/Maryland+deploys+Google+AI+tools+to+43%2C000+state+employees/25516269.html">Maryland Rolls Out Google AI Tools to State Workforce</a></strong></p><p><strong><a href="https://www.linkedin.com/company/state-of-maryland/">State of Maryland</a></strong> has partnered with <strong><a href="https://www.linkedin.com/showcase/google-public-sector/">Google Public Sector</a></strong> to provide generative AI tools to 43,000 state employees, aiming to improve productivity and streamline government operations. This is one of the largest state-level deployments of generative AI to date, signaling a shift from pilot projects to enterprise-scale integration. The real test will be whether these tools enhance service delivery without compromising transparency or public trust.</p><p><strong><a href="https://www.route-fifty.com/people/2025/10/technologist-digital-governor-state-cio-role-has-evolved-dramatically/409009/">State CIOs Shift From Technologists to Strategic Leaders</a></strong></p><p>The role of state chief information officers has evolved from technical experts to strategic change leaders, according to the 2025 <strong><a href="https://www.linkedin.com/company/nascio/">National Association of State Chief Information Officers (NASCIO)</a></strong> survey. With high turnover and upcoming gubernatorial elections, CIOs are now expected to bridge policy, technology, and communication across state agencies. This shift reflects a broader transformation in public sector leadership&#8212;where digital governance is no longer about managing infrastructure but about navigating complexity, building trust, and aligning technology with policy goals. 
As states adopt AI and modernize legacy systems, CIOs must act as both translators and tacticians.</p><p><strong><a href="https://statescoop.com/missouri-ai-drones-track-waterfowl/">Missouri Uses AI and Drones to Monitor Waterfowl</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/missouri-department-of-conservation/">Missouri Department of Conservation</a></strong> is piloting a system that combines drones and artificial intelligence to track waterfowl populations more efficiently and accurately than traditional methods. This is a smart example of how AI can support core public functions like wildlife management&#8212;enhancing data collection while reducing labor and environmental disruption. It also raises important questions about transparency and public trust when deploying emerging tech in the field.</p><h2><strong>Local</strong></h2><p><strong><a href="https://linknky.com/news/2025/10/29/covington-is-using-ai-heres-how/">Covington Launches AI Policy for City Operations</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/city-of-covington/">City of Covington, Kentucky</a></strong> has introduced a formal policy to guide the use of artificial intelligence in local government, focusing on transparency, data governance, and responsible deployment. Covington&#8217;s proactive approach is a model for smaller municipalities navigating AI adoption. 
By setting clear guidelines early, the city is positioning itself to use AI tools effectively while maintaining public trust and accountability.</p><p><strong><a href="https://www.govtech.com/public-safety/denver-mayor-extends-citys-use-of-flock-license-plate-readers">Denver Extends Use of AI License Plate Readers</a></strong></p><p><strong><a href="https://www.linkedin.com/company/city-and-county-of-denver/">City and County of Denver</a></strong> Mayor <strong><a href="https://www.linkedin.com/in/mike-johnston-6738b212/">Mike Johnston</a></strong> has extended the city&#8217;s contract with Flock Safety, which provides AI-powered license plate readers, for an additional five months at no extra cost. Short-term contract extensions like this suggest cities are still weighing the trade-offs between public safety tools and civil liberties. It&#8217;s a reminder that AI deployments in public spaces demand ongoing oversight and community trust.</p><p><strong><a href="https://www.route-fifty.com/artificial-intelligence/2025/10/how-ai-can-aid-procurement-and-purchasing-cities-and-counties/409014/">AI Offers Practical Gains in Local Government Procurement</a></strong></p><p>Cities and counties are exploring AI tools to streamline procurement tasks like drafting RFPs, summarizing vendor responses, and tracking contract deadlines. The structured nature of procurement makes it a low-risk entry point for AI adoption in local government. Procurement is one of the few areas in government where AI can deliver measurable value without overhauling policy or risking public trust. 
Starting here allows agencies to build internal capacity and confidence before expanding AI use to more complex domains.</p><p><strong><a href="https://statescoop.com/vail-colorado-agentic-ai-municipal-services/">Vail Uses Agentic AI to Modernize City Services</a></strong></p><p><strong><a href="https://www.linkedin.com/in/vail-colorado-0509a9319/">Vail Colorado</a></strong> has deployed an agentic AI platform to integrate and manage municipal services including housing, emergency response, and transportation systems. This is a promising example of a small town using AI not just for efficiency, but to rethink how services are coordinated. It raises important questions about how local governments can responsibly adopt emerging tech without overextending their capacity to govern it.</p><p><strong><a href="https://statescoop.com/los-angeles-maryland-google-gemini-ai/">Los Angeles Adds Generative AI to City Staff Tools</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/city-of-los-angeles/">City of Los Angeles</a></strong> has begun integrating generative AI tools, including Google Gemini, into its standard suite of software available to city employees. The move follows similar efforts in other jurisdictions exploring AI to improve productivity and service delivery. As cities like Los Angeles adopt generative AI, the real test will be whether these tools are deployed with clear governance, transparency, and measurable public value. 
This is less about tech adoption and more about institutional readiness to manage change responsibly.</p><h2><strong>International</strong></h2><p><strong><a href="https://financialpost.com/technology/canada-ai-government-surveillance-uoft-director">Canada Criticized for Inaction on AI Surveillance Safeguards</a></strong></p><p>A <strong><a href="https://www.linkedin.com/school/university-of-toronto/">University of Toronto</a></strong> director has warned that Canada is falling behind in regulating government use of AI surveillance technologies, raising concerns about privacy and civil liberties. This highlights a growing accountability gap in how democratic governments deploy AI tools. Without clear oversight, public trust in digital governance will erode, especially when surveillance is involved.</p><h2><strong>Business</strong></h2><p><strong><a href="https://www.clickorlando.com/video/news/2025/10/28/amazon-is-implementing-ai-to-downsize-corporate-ranks-14000-jobs-cut-WyeFO/">Amazon Uses AI to Cut 14,000 Corporate Jobs</a></strong></p><p><strong><a href="https://www.linkedin.com/company/amazon/">Amazon</a></strong> is leveraging generative AI to automate tasks in its corporate offices, resulting in the elimination of 14,000 jobs as part of a broader restructuring effort. This move underscores how AI is not just transforming frontline services but also reshaping white-collar work. 
Public sector leaders should take note: workforce planning and reskilling strategies must now account for AI-driven shifts in administrative and knowledge-based roles.</p><h2><strong>Public Sector</strong></h2><p><strong><a href="https://fedscoop.com/in-the-age-of-doge-new-venture-aims-war-chest-at-government-efficacy/">New Fund Targets $120M to Modernize Government Services</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/recoding-america-fund/">Recoding America Fund</a></strong>, a new bipartisan nonprofit, has launched with a goal of raising $120 million over six years to improve state and federal government capacity. Led by former public officials from both parties, the fund will invest in talent, technology, and operational models to accelerate digital transformation and service delivery. This fund reflects a growing recognition that effective governance depends on institutional capacity, not just policy. By focusing on the &#8220;plumbing&#8221;&#8212;talent, tools, and delivery models&#8212;it offers a pragmatic, cross-partisan path to rebuilding trust in government performance amid political disruption.</p><p><strong><a href="https://www.thesoftwarereport.com/microsoft-expands-judson-althoffs-role-as-ceo-of-commercial-business-in-major-organizational-shift/">Microsoft Restructures to Focus on AI and Public Sector</a></strong></p><p><strong><a href="https://www.linkedin.com/company/microsoft/">Microsoft</a></strong> has expanded <strong><a href="https://www.linkedin.com/in/judsonalthoff/">Judson Althoff</a></strong>&#8217;s role as CEO of Commercial Business in a major reorganization aimed at better serving commercial and public sector clients through AI integration and workforce transformation. This move signals Microsoft&#8217;s intent to deepen its role as a strategic partner to governments navigating AI adoption. 
Public leaders should note how tech firms are aligning their leadership and services to meet the operational and workforce needs of the public sector.</p><p><strong><a href="https://www.citybiz.co/article/765334/qa-with-jon-gacek-gm-of-veritone-public-sector/?abkw=citybiznewyork">Veritone Exec Discusses AI Use in Public Sector</a></strong></p><p>Jon Gacek, General Manager of <strong><a href="https://www.linkedin.com/company/veritone-inc-/">Veritone</a></strong>&#8217;s Public Sector division, outlines how the company&#8217;s aiWARE platform supports public agencies with AI-powered tools for transcription, redaction, and evidence management. This interview highlights the growing role of AI in operational tasks like document processing and digital evidence handling&#8212;areas where efficiency gains can free up staff for more complex work. But as adoption grows, so does the need for clear governance and transparency in how these tools are deployed.</p><p><strong><a href="https://defensescoop.com/2025/10/29/lockheed-martin-google-generative-ai-gemini/">Lockheed Martin Adopts Google&#8217;s Generative AI Tools</a></strong></p><p><strong><a href="https://www.linkedin.com/company/lockheed-martin/">Lockheed Martin</a></strong> is partnering with Google Public Sector to integrate generative AI tools, including the Gemini model, into its internal workflows and operations. This collaboration signals growing acceptance of generative AI in high-stakes, regulated environments. 
For public agencies, it raises important questions about vendor partnerships, data governance, and the operational readiness of AI tools in mission-critical contexts.</p><p><strong><a href="https://www.linkedin.com/posts/karen-dahut-24135811_today-is-the-day-were-here-at-the-google-activity-7389381702785220609-KBv5">Google Event Highlights AI Use in Public Sector</a></strong></p><p><strong><a href="https://www.linkedin.com/in/karen-dahut-24135811/">Karen Dahut</a></strong> shared insights from a Google Public Sector event, noting that many public sector organizations have deployed more than 10 AI agents in their operations. The rapid adoption of AI agents across public agencies signals a shift from experimentation to operational integration. Leaders should now focus on governance frameworks to ensure these tools enhance service delivery without compromising accountability.</p><p><strong><a href="https://kpmg.com/xx/en/our-insights/transformation/global-customer-experience-excellence.html">Public Sector Lags in Global Customer Experience Rankings</a></strong></p><p><strong><a href="https://www.linkedin.com/company/kpmg/">KPMG</a></strong>&#8217;s 2025&#8211;2026 Global Customer Experience Excellence report finds that public sector organizations score 9.4% below the global average in delivering customer experience, particularly in areas enhanced by proactive and predictive AI technologies. This gap underscores the urgency for public institutions to modernize service delivery using AI not just for efficiency, but to meet rising citizen expectations. 
Trust and legitimacy increasingly hinge on how well governments can match the responsiveness of the private sector.</p><p><strong><a href="https://publicsectornetwork.com/insight/?__hstc=179202683.2f3f33a24b44870ec4a577029c49e44b.1728000000073.1728000000074.1728000000075.1&amp;__hssc=179202683.1.1728000000076&amp;__hsfp=868907044&amp;page=46">AI Streamlines Public Sector Case Management</a></strong></p><p>A new report from <strong><a href="https://www.linkedin.com/company/public-sector-network/">Public Sector Network</a></strong> highlights how AI-powered workflows are improving case management by increasing speed, accuracy, and efficiency in public service delivery. This is a practical example of AI&#8217;s value in reducing administrative burden and improving responsiveness in government services. As agencies face growing caseloads and limited staff, smart automation can help maintain service quality without sacrificing accountability.</p>]]></content:encoded></item><item><title><![CDATA[October 26, 2025]]></title><description><![CDATA[Originally posted on LinkedIn]]></description><link>https://brief.dylanhayden.com/p/october-26-2025</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/october-26-2025</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Mon, 27 Oct 2025 00:41:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jKZq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, I was invited to participate in the inaugural OpenAI Forum Higher Education Guild at <strong><a href="https://www.linkedin.com/company/openai/">OpenAI</a></strong> in San Francisco. 
I got to share how I&#8217;m integrating AI into Public Administration education through my <em>AI for Public Sector Leadership</em> course at the <strong><a href="https://www.linkedin.com/school/fels-institute/">Fels Institute of Government at the University of Pennsylvania</a></strong>.</p><p>The most energizing part of attending was getting to know new colleagues from various disciplines. From neuroscience, to the arts, to humanities and social science, to leadership initiatives, there was nothing artificial about the intelligent applications of AI by many of the people in the room. I&#8217;m looking forward to finding new opportunities to partner with <strong><a href="https://www.linkedin.com/in/tinaaustin/">Tina Austin</a></strong>, <strong><a href="https://www.linkedin.com/in/kate-elkins/">Katherine Elkins</a></strong>, <strong><a href="https://www.linkedin.com/in/drsdgrady/">Siobahn Grady, Ph.D.</a></strong>, <strong><a href="https://www.linkedin.com/in/jen-garcia-ai-adoption-educator/">Jen Garc&#237;a</a></strong>, <strong><a href="https://www.linkedin.com/in/daniel-albert-93015087/">Daniel Albert</a></strong>, and so many others.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Special thanks to <strong><a href="https://www.linkedin.com/in/natalie-cone-pmp-2364b7154/">Natalie Cone PMP</a></strong>, <strong><a href="https://www.linkedin.com/in/alexnawar/">Alex Nawar</a></strong>, and <strong><a href="https://www.linkedin.com/in/janejkim/">Jane Kim</a></strong> from OpenAI for organizing such an inspiring event!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jKZq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jKZq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 424w, https://substackcdn.com/image/fetch/$s_!jKZq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 848w, https://substackcdn.com/image/fetch/$s_!jKZq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!jKZq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jKZq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!jKZq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 424w, https://substackcdn.com/image/fetch/$s_!jKZq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 848w, https://substackcdn.com/image/fetch/$s_!jKZq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!jKZq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F295131e6-ff3c-45ec-ba8d-239d24b1b9aa_1488x992.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">OpenAI Forum Higher Education Guild</figcaption></figure></div><h2><strong>Federal</strong></h2><p><strong><a href="https://fedscoop.com/social-media-ai-surveillance-unions-state-dhs-lawsuit/">Unions Sue State, DHS Over AI Social Media Surveillance</a></strong></p><p>Three major labor unions have filed a lawsuit against the Departments of 
State and Homeland Security, alleging that AI-powered surveillance tools are being used to monitor and suppress the online speech of noncitizens and university-affiliated visa holders. The suit claims the program violates First Amendment rights and has chilled lawful organizing and expression. This case raises urgent questions about how AI is being deployed in federal surveillance and the potential for overreach. Public agencies must tread carefully to balance national security with constitutional protections, especially when automated tools risk amplifying bias and suppressing civic participation.</p><p><strong><a href="https://fedscoop.com/energy-rfp-oak-ridge-ai-data-centers/">DOE Seeks Partners for AI Data Centers at Oak Ridge</a></strong></p><p>The Department of Energy has issued a request for proposals to build and operate AI-focused data centers and energy infrastructure at Oak Ridge National Laboratory. The RFP encourages bids from experienced private firms and consortia, and acknowledges potential environmental and community impacts. This initiative reflects the federal government&#8217;s growing commitment to AI infrastructure, but also highlights the tension between innovation and sustainability. Public leaders should watch how DOE balances technological ambition with environmental stewardship and local accountability.</p><p><strong><a href="https://www.csis.org/blogs/innovation-lightbulb-federal-rd-funding-matters-us-ai-leadership">Federal R&amp;D Key to Sustaining U.S. AI Leadership</a></strong></p><p>A <strong><a href="https://www.linkedin.com/company/csis/">Center for Strategic and International Studies (CSIS)</a></strong> blog post argues that continued federal investment in research and development is essential for maintaining U.S. competitiveness in artificial intelligence, emphasizing that public and private R&amp;D efforts reinforce each other. 
This is a timely reminder that public funding doesn&#8217;t just fill gaps&#8212;it sets the foundation for long-term innovation ecosystems. For public leaders, sustained R&amp;D investment is not optional if we want AI systems that reflect democratic values and serve public needs.</p><p><strong><a href="http://www.fcw.com/ideas/2025/10/what-federal-buyers-need-succeed-ai-enabled-procurement/408978/?oref=ng-homepage-river">What Federal Buyers Need for AI-Driven Procurement</a></strong></p><p>A recent <strong><a href="https://www.linkedin.com/company/fcw/">FCW</a></strong> article outlines the skills, tools, and governance structures federal procurement officials need to effectively adopt AI-enabled systems, particularly as generative AI evolves toward more autonomous &#8216;agentic&#8217; models. As AI tools become more autonomous, procurement professionals must evolve from compliance enforcers to strategic stewards of risk and innovation. This shift demands not just technical literacy, but also institutional support for responsible experimentation.</p><h2><strong>State</strong></h2><p><strong><a href="https://www.route-fifty.com/artificial-intelligence/2025/10/generative-ais-state-government-use-ticks/408972/?oref=rf-homepage-river">States Find Over 100 Uses for Generative AI</a></strong></p><p>State governments have identified more than 100 applications for generative AI, ranging from document drafting to language translation, according to reporting from the National Association of State Chief Information Officers&#8217; annual conference in Denver. This growing list of use cases reflects a shift from experimentation to operational integration. 
The challenge now is ensuring these tools align with public values as adoption scales across state agencies.</p><p><strong><a href="https://statescoop.com/public-sector-gen-ai-new-jersey-guide/">New Jersey Shares Guide for Public-Sector GenAI Projects</a></strong></p><p>New Jersey has published a generative AI guide aimed at helping other states and agencies avoid common pitfalls when developing AI tools for government use. This kind of knowledge-sharing is exactly what the public sector needs. It&#8217;s a reminder that intergovernmental collaboration is a powerful tool for institutional learning.</p><p><strong><a href="https://statescoop.com/nevadas-big-cyberattack-spurs-two-new-projects/">Nevada Launches Cybersecurity Projects After Major Attack</a></strong></p><p>Following a significant cyberattack, Nevada&#8217;s CIO Timothy Galluzi secured $300,000 in state funding to initiate two new cybersecurity initiatives aimed at strengthening the state&#8217;s digital defenses. This is a timely example of how crisis can accelerate investment in digital resilience. State leaders should view cybersecurity not just as an IT issue, but as a core component of public trust and service continuity.</p><p><strong><a href="https://statescoop.com/maryland-vdp-md-isac-expansion/">Maryland Launches Statewide Cyber Vulnerability Program</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/state-of-maryland/">State of Maryland</a></strong> has established a Vulnerability Disclosure Program (VDP) and is mandating participation in its Information Sharing and Analysis Center (MD-ISAC) for all state and local government entities. This is a smart move toward institutionalizing cybersecurity accountability across all levels of state government. 
By formalizing vulnerability reporting and centralizing threat intelligence, Maryland is setting a practical example of how to build cyber resilience into public infrastructure.</p><h2><strong>Local</strong></h2><p><strong><a href="https://www.route-fifty.com/digital-government/2025/10/improving-communications-ahead-next-wildfire-emergency/408780/">Sonoma County Upgrades Wildfire Response with AI Tools</a></strong></p><p><strong><a href="https://www.linkedin.com/company/county-of-sonoma/">County of Sonoma</a></strong> officials are using AI and geospatial technology to improve emergency communication and evacuation planning ahead of future wildfires. Lessons from the 2017 Tubbs Fire have led to investments in platforms that integrate data from multiple sources and support real-time coordination across agencies. This case highlights how local governments can evolve from past crises by integrating AI into daily operations and emergency planning. The emphasis on interagency coordination and keeping humans in the loop reflects a mature approach to technology adoption in public safety.</p><p><strong><a href="https://statescoop.com/stretched-thin-san-jose-calif-reaches-for-generative-ai/">San Jose Turns to AI to Ease Staff Workload</a></strong></p><p><strong><a href="https://www.linkedin.com/company/cityofsanjose/">City of San Jos&#233;</a></strong> is seeking generative AI tools that would let city employees build custom digital assistants to automate routine tasks and reduce burnout. This is a pragmatic move by a city under pressure to use AI not for flash, but to support an overstretched workforce. 
It&#8217;s a reminder that the most meaningful public sector AI applications often start with internal operations, not citizen-facing services.</p><h2><strong>International</strong></h2><p><strong><a href="https://www.startupdaily.net/topic/politics-news-analysis/openai-just-won-a-2nd-australian-government-contract-after-being-the-only-company-invited-to-bid/">OpenAI Wins Second Exclusive Contract with Australian Government</a></strong></p><p>OpenAI secured a second contract with the <strong><a href="https://www.linkedin.com/company/australiangovernment/">Australian Government</a></strong> after being the sole company invited to bid, raising questions about procurement transparency and vendor competition. Single-vendor deals can expedite adoption but risk undermining public trust and market fairness. Governments need clear, accountable frameworks for AI procurement that balance innovation with open competition.</p><p><strong><a href="https://castlegarsource.com/2025/10/22/almost-half-of-employed-canadian-job-seekers-fear-their-job-will-be-eliminated-due-to-ai/">Nearly Half of Canadian Workers Fear AI Job Loss</a></strong></p><p>A recent survey found that 46% of employed Canadian job seekers are concerned that artificial intelligence could eliminate their current roles. Despite this anxiety, many respondents support using generative AI tools during the job search process. This data highlights a growing tension in the workforce: AI is seen both as a threat to job security and a tool for career advancement. 
Public sector leaders should take note that addressing these fears through transparent workforce planning and upskilling initiatives will be critical to maintaining trust and stability.</p><p><strong><a href="https://blogs.timesofisrael.com/from-innovation-to-control-ais-impact-in-a-nationalist-era/">AI Innovation Risks Being Co-opted by Nationalist Agendas</a></strong></p><p>A blog post by Vinay Lohar explores how artificial intelligence, once a symbol of global innovation, is increasingly being shaped by nationalist policies and governance failures, particularly in developing nations. The piece argues that wealthier countries with strong nationalist leaders may attract global talent while limiting open collaboration. This perspective underscores a growing tension: as AI becomes more strategic, governments face pressure to balance openness with control. For public leaders, the challenge is to foster innovation ecosystems that are both globally connected and locally accountable.</p><h2><strong>Education</strong></h2><p><strong><a href="https://statescoop.com/ny-school-district-ai-powered-classroom-surveillance/">AI Surveillance in NY Classrooms Raises Privacy Concerns</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/plainedge-union-free-school-district/">PLAINEDGE UNION FREE SCHOOL DISTRICT</a></strong> deployed the AI-powered XSponse surveillance system in classrooms without public notice, prompting criticism from the ACLU over student privacy and transparency. This case underscores the growing tension between safety technologies and civil liberties in schools. 
Public institutions must prioritize transparency and community trust when introducing AI tools, especially those that monitor vulnerable populations like students.</p><p><strong><a href="https://hls.harvard.edu/events/public-interest-tech-law-and-policy-a-viable-and-growing-career-path/">Public Interest Tech Law Emerges as Career Path</a></strong></p><p>A <strong><a href="https://www.linkedin.com/school/harvard-law-school/">Harvard Law School</a></strong> event highlights the growing need for legal professionals focused on public interest technology, particularly in response to AI and data governance challenges. As governments grapple with AI&#8217;s societal impacts, there&#8217;s a clear need for legal minds who understand both technology and public values. This signals a shift in legal education toward roles that bridge law, policy, and digital ethics in service of the public good.</p><p><strong><a href="https://www.route-fifty.com/cybersecurity/2025/10/new-rhode-island-cyber-range-prepares-students-real-world-danger-lurking-cyberspace/408850/">Rhode Island College Opens Cybersecurity Training Center</a></strong></p><p>Rhode Island College has launched a new cyber range to simulate real-world cyberattacks and train students in high-pressure incident response. The facility uses IBM&#8217;s Cloud Range platform and is part of the college&#8217;s Institute for Cybersecurity &amp; Emerging Technologies. This investment reflects a growing recognition that cybersecurity readiness must start at the local and institutional level. 
By preparing students with hands-on, scenario-based training, Rhode Island is building the kind of workforce resilience that public agencies increasingly depend on in the face of escalating digital threats.</p><h2><strong>Public Sector</strong></h2><p><strong><a href="https://www.route-fifty.com/artificial-intelligence/2025/10/report-how-protect-public-sector-workers-against-ais-rise-government/408987/?oref=rf-homepage-river">New Report Urges Protections for Public Workers Amid AI Adoption</a></strong></p><p>A recent report outlines strategies for governments and labor unions to collaborate on responsible AI implementation, emphasizing worker protections, retraining, and transparency in deployment decisions. This report is a timely reminder that AI adoption in government isn&#8217;t just a tech issue; it&#8217;s a workforce issue. Public leaders need to prioritize inclusive planning and labor engagement to ensure AI enhances, rather than erodes, public service jobs.</p><p><strong><a href="https://www.linkedin.com/posts/mibin-boban_thinkai-publicsector-qualityengineering-activity-7386754136840511489-6COm">Panel Discusses AI Evaluation in Public Sector</a></strong></p><p>A recent panel hosted by Think Digital Partners brought together experts to discuss how governments can evaluate and monitor AI systems used in public services. As public agencies adopt AI, the conversation is rightly shifting toward oversight and quality assurance. Panels like this help surface practical approaches to responsible deployment, something every government leader should be thinking about now, not later.</p><p><strong><a href="https://www.fticonsulting.com/insights/articles/how-ai-drive-business-transformation-utilities-energy-companies">How AI Is Reshaping Utilities and Energy Operations</a></strong></p><p>A recent article from FTI Consulting outlines how utilities and energy companies are leveraging AI to streamline operations, improve customer service, and enhance grid reliability. 
Case examples include Entergy&#8217;s deployment of AI tools to optimize service delivery and reduce operational costs. While the focus is on private utilities, the lessons are highly relevant for public energy providers and regulators. AI&#8217;s role in infrastructure resilience and service efficiency should be on the radar of any public leader overseeing critical systems.</p><p><strong><a href="https://www.jdsupra.com/legalnews/ai-chatbots-at-the-crossroads-4732395/">AI Chatbots Face Growing Legal and Compliance Scrutiny</a></strong></p><p>As generative AI chatbots become more common, legal experts warn of increasing regulatory and compliance risks, particularly around data privacy, consumer protection, and misinformation. State and local governments are among those navigating how to deploy these tools responsibly. Public agencies experimenting with AI chatbots must now weigh not just technical feasibility but also legal exposure. This is a reminder that innovation in government must be accompanied by rigorous oversight and clear accountability frameworks.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://brief.dylanhayden.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">The Public AI Brief is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[October 17, 2025]]></title><description><![CDATA[Originally posted on LinkedIn]]></description><link>https://brief.dylanhayden.com/p/october-17-2025</link><guid isPermaLink="false">https://brief.dylanhayden.com/p/october-17-2025</guid><dc:creator><![CDATA[Dylan Hayden]]></dc:creator><pubDate>Sat, 18 Oct 2025 01:06:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0ec35e11-21b0-4405-a974-491b0a0a717f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Federal</strong></h2><p><strong><a href="https://fedscoop.com/why-agencies-are-embracing-a-private-cloud-model-for-mission-success/">Federal Agencies Turn to Private Cloud for Better Control and AI Use</a></strong></p><p>Federal agencies are shifting toward hybrid and private cloud models to better balance innovation, security, and mission outcomes. Leaders say this approach gives them more control over data and improves the effectiveness of AI tools in areas like defense, cybersecurity, and public services. This article highlights how agencies are moving from &#8220;cloud-first&#8221; to &#8220;cloud-smart&#8221; strategies, using hybrid and private clouds to pair AI innovation with stronger control over mission-critical data. 
It reflects a maturing federal mindset that treats cloud not as a destination but as an operational model for accountability, security, and agility.</p><p><strong><a href="https://fedscoop.com/ai-data-centers-canceled-energy-projects-colorado-democrats/">Colorado Lawmakers Push Back on Energy Project Cuts Amid AI Power Demands</a></strong></p><p><strong><a href="https://www.linkedin.com/company/state-of-colorado/">State of Colorado</a></strong> lawmakers are criticizing the <strong><a href="https://www.linkedin.com/company/energy/">U.S. Department of Energy (DOE)</a></strong> for canceling over $7 billion in energy project funding, arguing the move undermines efforts to manage rising electricity demand from AI data centers. They say the cancellations threaten jobs, increase utility costs, and hinder progress on affordable energy solutions. This marks a growing tension between scaling AI infrastructure and energy policy: as demand for power-hungry AI data centers grows, the U.S. 
Department of Energy canceled billions in clean-energy project funding.</p><p><strong><a href="https://markets.financialcontent.com/wral/article/tokenring-2025-10-15-fhwa-embraces-ai-aurigo-masterworks-selected-to-revolutionize-federal-infrastructure-planning">Federal Highway Administration Adopts AI Tool for Infrastructure Planning</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/federal-highway-administration/">Federal Highway Administration</a></strong> has chosen Aurigo Masterworks, an AI-powered platform, to help modernize how it plans and manages infrastructure projects. This move could influence how other government agencies use AI in public works; the announcement claims it &#8220;signifies a pivotal moment where AI is no longer confined to experimental labs or consumer applications but is actively deployed to enhance the efficiency and resilience of national assets.&#8221; We&#8217;ll see what comes of that.</p><h2><strong>State</strong></h2><p><strong><a href="https://www.route-fifty.com/cybersecurity/2025/10/preparing-state-cios-future-cyber-and-ai-defense/408705/">States Face New Cyber and AI Security Mandates Under Federal Orders</a></strong></p><p>New federal directives are pushing state governments to strengthen their cybersecurity and AI defenses. To stay eligible for funding, states must clarify responsibilities, prioritize critical infrastructure, and integrate AI protections into their broader security plans. 
State CIOs and CISOs are being urged to treat generative AI systems as critical infrastructure assets: mapping out where AI is embedded, applying the same risk assessments and safeguards used for cyber systems, and integrating AI into enterprise security and grant strategies.</p><p><strong><a href="https://statescoop.com/nascio-2025-annual-survey/">State IT Leaders Embrace Fast-Paced Tech Changes, Including AI</a></strong></p><p>A new survey from the <strong><a href="https://www.linkedin.com/company/nascio/">National Association of State Chief Information Officers (NASCIO)</a></strong> shows that state IT leaders are rapidly adopting new technologies, including artificial intelligence, to improve government services and operations. State CIOs are embracing emerging technologies like AI while also facing tightening budgets and increased turnover in the role; over half reported budget increases, yet many expect future fiscal pressures.</p><p><strong><a href="https://statescoop.com/nascio-2025-state-technology-innovator-award/">State Tech Leaders Recognized for Innovation in AI and Public Services</a></strong></p><p>Three state technology leaders received the NASCIO State Technology Innovator Award for their efforts in improving government services through application development, citizen engagement, and the use of AI. 
Congratulations to: <strong><a href="https://www.linkedin.com/in/keith-perry-innovation/">Keith Perry, MBA</a></strong>, <strong><a href="https://www.linkedin.com/company/georgia-technology-authority/">Georgia Technology Authority (GTA)</a></strong>; <strong><a href="https://www.linkedin.com/in/bryannapardoe/">Bryanna Pardoe</a></strong>, Commonwealth Office of Digital Experience (CODE PA); and <strong><a href="https://www.linkedin.com/in/josiah-raiche-a504a523a/">Josiah Raiche</a></strong>, <strong><a href="https://www.linkedin.com/company/vermont-agency-of-digital-services/">Vermont Agency of Digital Services</a></strong></p><p><strong><a href="https://statescoop.com/texas-expands-digital-assistant-services-to-include-boat-and-drivers-licenses/">Texas Adds Boat and Driver&#8217;s License Services to Online Assistant</a></strong></p><p>Texas has expanded its digital assistant services to let residents renew boat registrations and upgrade driver&#8217;s licenses online, making it easier to access these services without visiting an office. Tackling friction points at the motor vehicle department is typically a public win.</p><p><strong><a href="https://statescoop.com/accessibility-nascio-state-priority-2025/">State Tech Leaders Prioritize Accessibility Alongside AI</a></strong></p><p>State technology leaders are making digital accessibility a key focus alongside AI, as they prepare to meet a compliance deadline in April 2025. This shift was highlighted at the NASCIO annual conference. 
The need to address the human side of AI is more important than ever.</p><p><strong><a href="https://statescoop.com/newsom-vetoes-ai-safety-bill-aimed-at-companion-chatbots/">Governor Newsom Blocks Bill to Limit AI Chatbots for Minors</a></strong></p><p><strong><a href="https://www.linkedin.com/company/state-of-california/">State of California</a></strong> Governor <strong><a href="https://www.linkedin.com/in/gavinnewsom/">Gavin Newsom</a></strong> rejected a bill that would have limited how minors interact with AI-powered companion chatbots, citing concerns about the bill&#8217;s approach. He argued AB&#8239;1064 could amount to a near-total ban on minors using companion AI chatbots, a good example of how balancing innovation and protection plays out in real time.</p><h2><strong>Local</strong></h2><p><strong><a href="https://www.route-fifty.com/artificial-intelligence/2025/10/los-alamos-and-university-michigan-want-build-national-security-data-center-ypsilanti-residents-and-local-officials-see-few-benefits/408680/?oref=rf-homepage-river">Ypsilanti Residents Question National Security Data Center Project</a></strong></p><p><strong><a href="https://www.linkedin.com/company/los-alamos-national-laboratory/">Los Alamos National Laboratory</a></strong> and the <strong><a href="https://www.linkedin.com/school/university-of-michigan/">University of Michigan</a></strong> plan to build a national security data center in the <strong><a href="https://www.linkedin.com/company/city-of-ypsilanti/">City of Ypsilanti</a></strong>, but local officials and residents are concerned about the project&#8217;s benefits and impact on the community. The Ypsilanti controversy highlights a growing tension in local governance: the national push for AI infrastructure often collides with community concerns about transparency, environmental impact, and equitable development. 
Public administrators should see this as a case study in how &#8220;innovation zones&#8221; can fail without local trust and inclusive planning.</p><p><strong><a href="https://siliconangle.com/2025/10/15/city-kyle-pioneering-agentic-ai-public-sector-dreamforce/">City of Kyle Uses AI to Improve Public Services and Engagement</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/cityofkyletx/">City of Kyle, TX</a></strong> is using a new form of AI to improve public services and increase community involvement as part of a broader digital transformation effort. The city is treating agentic AI not as a flashy tech experiment but as a strategic tool to manage rapid growth, unify silos, and deliver 311 services more efficiently.</p><p><strong><a href="https://siliconprairienews.com/2025/10/oma-x-ai-brings-omaha-together-to-learn-build-and-lead-with-artificial-intelligence/">Omaha Launches Citywide AI Initiative to Boost Learning and Innovation</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/city-of-omaha/">City of Omaha</a></strong> launched &#8216;OMA x AI&#8217;, a citywide initiative to educate residents and city workers about artificial intelligence and explore how it can improve local government services. This is a good example of what happens when local government, academia, and business sectors align around AI for workforce development and civic innovation.</p><p><strong><a href="https://www.govtech.com/artificial-intelligence/campbell-county-va-rejects-rezoning-to-draw-data-centers">Campbell County Votes Against Rezoning Land for Data Centers</a></strong></p><p><strong><a href="https://www.linkedin.com/company/ccvadper/">Campbell County, Virginia</a></strong>, has decided not to rezone 57 acres of land that could have been used to attract data centers, keeping existing land protections in place. 
As seen in <strong><a href="https://www.linkedin.com/company/city-of-memphis/">City of Memphis</a></strong>, <strong><a href="https://www.linkedin.com/company/city-of-ypsilanti/">City of Ypsilanti</a></strong>, and now places like <strong><a href="https://www.linkedin.com/company/village-of-caledonia/">Village of Caledonia</a></strong>, <strong><a href="https://www.linkedin.com/company/city-of-tucson/">City of Tucson</a></strong>, and <strong><a href="https://www.linkedin.com/in/jerome-township-a57172227/">Jerome Township</a></strong>, there&#8217;s a growing tension between the race to expand AI infrastructure and local demands for transparency, environmental protection, and community benefit. These conflicts are becoming a defining test of how public institutions balance innovation with trust.</p><p><strong><a href="https://www.govtech.com/gov-experience/hartford-conn-integrates-ai-for-translation-services">Hartford Adds AI Translation to Public Meetings for Better Access</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/city-of-hartford/">City of Hartford</a></strong>, Connecticut is using AI-powered translation tools at public meetings to help residents who speak different languages participate more easily. The effort is part of a partnership with <strong><a href="https://www.linkedin.com/showcase/google-public-sector/">Google Public Sector</a></strong> to improve access to local government. As a former linguist, this makes my heart sing. 
It&#8217;s a reminder that some of the most meaningful AI applications in government aren&#8217;t flashy; they quietly expand who gets to participate and be heard.</p><h2><strong>International</strong></h2><p><strong><a href="https://cepa.org/article/europes-troubled-bet-on-ai-factories/">Europe Pushes AI Use in Industry and Government Amid Setbacks</a></strong></p><p>The <strong><a href="https://www.linkedin.com/company/european-commission/">European Commission</a></strong> has launched a new strategy to boost the use of AI in major industries and public services, but challenges remain in funding and implementation. It reminds me of all those states and cities that promised to be the &#8220;next Silicon Valley for [insert industry here].&#8221; Ambition alone isn&#8217;t enough; without clear demand, skilled talent, and a supportive policy environment, even the best-funded AI hubs can struggle to take root.</p><p><strong><a href="https://www.prnewswire.com/ae/news-releases/opentext-partners-with-core42-to-advance-united-arab-emirates-digital-transformation-through-ai-innovations-302585424.html">OpenText and Core42 Team Up to Boost UAE&#8217;s Public Sector with AI</a></strong></p><p><strong><a href="https://www.linkedin.com/company/opentext/">OpenText</a></strong> is partnering with <strong><a href="https://www.linkedin.com/company/core42ai/">Core42</a></strong> to support the United Arab Emirates&#8217; public sector by improving AI, cloud, and automation systems as part of the country&#8217;s digital transformation efforts. This kind of public-private collaboration shows how governments can pair global expertise with local infrastructure to accelerate digital transformation on their own terms.</p><p><strong><a href="https://www.thenationalnews.com/future/technology/2025/10/15/microsoft-brings-ai-data-processing-to-uae-to-boost-compliance/">Microsoft to Process AI Data in UAE to Support Compliance</a></strong></p><p><strong><a 
href="https://www.linkedin.com/company/microsoft/">Microsoft</a></strong> will begin processing AI data locally in the UAE starting in 2026 to help government and regulated industries meet data compliance rules. As the UAE strengthens its foothold in a region that is rapidly positioning itself as a digital and regulatory leader, I see this giving Microsoft an advantage over competitors that still rely on offshore or multi-region processing.</p><p><strong><a href="https://aimagazine.com/news/servicenow-nvidia-driving-ai-transformation-in-uk">ServiceNow and Nvidia Expand AI Infrastructure in the UK</a></strong></p><p><strong><a href="https://www.linkedin.com/company/servicenow/">ServiceNow</a></strong> is partnering with <strong><a href="https://www.linkedin.com/company/nvidia/">NVIDIA</a></strong> to bring advanced AI infrastructure to data centers in the UK, aiming to support local innovation and digital services. This expansion shows how AI transformation depends as much on where data is processed as on the technology itself. Governments want AI capability that fits within national rules and values, not just imported innovation.</p><h2><strong>Research</strong></h2><p><strong><a href="https://www.cmu.edu/news/stories/archives/2025/october/amazon-and-carnegie-mellon-university-launch-strategic-ai-innovation-hub">Amazon and Carnegie Mellon Team Up on Responsible AI Research</a></strong></p><p><strong><a href="https://www.linkedin.com/company/amazon/">Amazon</a></strong> and <strong><a href="https://www.linkedin.com/school/carnegie-mellon-university/">Carnegie Mellon University</a></strong> have partnered to create an AI Innovation Hub focused on developing responsible AI solutions and addressing public policy challenges. I think we&#8217;re likely to see an academic arms race as universities compete to establish AI innovation hubs of their own. 
As with past &#8220;hub&#8221; initiatives, the differentiator tends to be whether the university builds lasting public partnerships rather than chasing corporate alignment alone.</p><p><strong><a href="https://aeon.co/essays/generative-ai-has-access-to-a-small-slice-of-human-knowledge">Study Finds AI Systems Miss Much of Human Knowledge</a></strong></p><p>A new study finds that generative AI systems often overlook large parts of human knowledge, especially information outside dominant languages and cultures. Researchers say these systems tend to reinforce common patterns, leaving out less-represented perspectives. As I teach my graduate students at the <strong><a href="https://www.linkedin.com/school/fels-institute/">Fels Institute of Government at the University of Pennsylvania</a></strong>, this is exactly why <em>domain expertise is critical</em>. AI can surface information quickly, but without human context and subject knowledge, it can miss the nuance, judgment, and depth that real understanding requires.</p><h2><strong>Public Sector</strong></h2><p><strong><a href="https://fedscoop.com/artificial-intelligence-agency-pilots-process-intelligence/">Why Government AI Projects Often Fail &#8212; and How to Fix Them</a></strong></p><p>Government agencies are gaining easier access to AI tools, but many pilot projects fail due to unclear goals and poor data. Experts suggest using Process Intelligence to better understand operations before applying AI, helping ensure real improvements and taxpayer value. 
Is &#8220;Process Intelligence&#8221; the new Systems Thinking?</p><p><strong><a href="https://www.snowflake.com/en/blog/data-engineering-lakehouse-open-formats/">Snowflake Offers Tools to Help Public Sector Get AI-Ready</a></strong></p><p><strong><a href="https://www.linkedin.com/company/snowflake-computing/">Snowflake</a></strong> is promoting tools to help public sector organizations prepare their data systems for AI use, focusing on open formats and integration with AI services. If your data lakes use closed or proprietary formats, you may struggle to integrate across silos, maintain governance, or scale analytics pipelines for generative-AI tasks. If you don&#8217;t know your data, you don&#8217;t know AI.</p><p><strong><a href="https://www.cdomagazine.tech/community/leading-with-growth-mindset-inside-3rd-annual-cdo-magazine-global-data-leadership-summit-2025">Public Sector AI Leader Honored at Global Data Summit</a></strong></p><p>At the 3rd Annual <strong><a href="https://www.linkedin.com/company/cdo-magazine/">CDO Magazine</a></strong> Global Data Leadership Summit, a public sector AI leader was recognized for her efforts in advancing data and AI use in government work. Here, a growth mindset isn&#8217;t just an inspirational slogan; it seems to be the framing that data leaders are using to bridge governance, ethics, talent, and AI imperatives at scale.</p><p><strong><a href="https://www.ntu.org/foundation/detail/why-the-united-states-needs-better-designed-ai-sandboxes">Report Calls for Improved AI Testing Programs in U.S. Government</a></strong></p><p>A new report argues that the U.S. needs better-designed AI regulatory sandboxes to help government agencies and companies test new technologies safely and effectively. These programs can support innovation while managing risks. 
Something I learned early on from my kids: sometimes we all just need more time in the sandbox.</p><p><strong><a href="https://www.executivebiz.com/articles/tensor-carahsoft-partner-predictive-ai-sarahai">Tensor and Carahsoft Team Up to Offer Predictive AI to Local Governments and Researchers</a></strong></p><p><strong><a href="https://www.linkedin.com/company/tensor-hq/">Tensor</a></strong> and <strong><a href="https://www.linkedin.com/company/carahsoft/">Carahsoft</a></strong> have partnered to provide local governments, research institutions, and businesses with access to predictive AI tools designed to help anticipate outcomes rather than generate new content. The bet here is that the frontier in AI for government is shifting from generative to predictive, anticipating risks and operational needs rather than just generating content.</p><p><strong><a href="https://www.theregister.com/2025/10/15/uk_gov_ai_savings/">UK Lawmakers Question AI Cost-Saving Claims in Public Services</a></strong></p><p>UK lawmakers are questioning the government&#8217;s claims that artificial intelligence will lead to major cost savings in public services like the NHS and local councils. Experts told MPs that the expected savings may be overstated and hard to measure. Good reminder that ambitious savings claims like the &#163;45 billion AI productivity goal by the <strong><a href="https://www.linkedin.com/company/scitechgovuk/">Department for Science, Innovation and Technology</a></strong> need to be treated as hypotheses, not guarantees. 
As public leaders, we must keep pushing for transparent assumptions, clear timeframes, and measurable metrics rather than letting big headline numbers stand unchecked.</p>]]></content:encoded></item></channel></rss>