Federal AI Use Surges, but Bottlenecks Threaten Progress Amid Public Skepticism: Brookings



In short

  • Federal AI use has grown rapidly, but adoption remains concentrated among a few large agencies.
  • Key barriers include a shortage of specialized AI talent, a risk-averse organizational culture, and procurement regulations inconsistent with rapidly evolving AI systems.
  • Public trust is a critical issue, with only 17% of Americans believing that AI will benefit the country, making transparency essential to building trust.

The use of artificial intelligence in the United States government has grown dramatically in recent years, but major obstacles, from talent shortages to public skepticism, are slowing the integration of the technology into public services, according to a new report from the Brookings Institution.

Wednesday’s report draws on AI use case data from 2023 to 2025, government job data, Office of Management and Budget memoranda, and interviews with current and former technology experts at eight agencies.

The numbers tell a striking story. In 2025, 41 agencies reported more than 3,600 AI use cases, 69% above the number reported in 2024 and five times the number reported in 2023. These programs span a wide range of government services: more than half of the Social Security Administration's reported use cases support customer assistance and benefits processing, while more than half of the Department of Justice's relate to law enforcement.

Growth is not evenly distributed, however. Over the past three years, the five largest agencies accounted for half of all AI use cases, and large agencies contributed 76% of the 2025 total. Small agencies are faring far worse: the 11 small agencies that reported in 2025 together listed only 60 use cases, just 2% of the total.

The report identifies a number of barriers to adoption. One of the biggest is a shortage of specialized talent. Of the more than 56,000 technology jobs posted by the government since 2016, just over 1,600, fewer than 3%, mention AI.

A late Biden-era hiring push aimed to close this gap, but workforce reductions in early 2025 could undermine those efforts: nearly 25% of AI job listings were posted in 2024 or later, meaning many of the new hires are recent arrivals and among the easiest to dismiss.

Beyond the workforce, the report points to a persistent culture of risk aversion within the public sector. About 60% of all AI use cases are in the pilot or pre-deployment phase, meaning the federal AI landscape is still developing rapidly, a stage that demands dedicated time for learning and experimentation that many agencies struggle to carve out. The report also notes that the Trump administration's direct linkage of AI deployment to workforce reductions through the Department of Government Efficiency (DOGE) may reinforce that wariness.

Accountability gaps are another concern. More than 85% of the AI use cases reported in 2025 lack the required information about risk mitigation measures, despite OMB requirements that they disclose it.

Public sentiment poses yet another problem. According to a recent Pew Research Center survey, nearly half of Americans now say they are more concerned than excited about the growing use of AI, up from 37% four years ago, and only 17% of Americans believe AI will benefit the US over the next two decades.

The report warns that the stakes are high. Public trust in the federal government remains near historic lows, with recent data showing just 16% of Americans say they trust Washington to do the right thing most or nearly all of the time. Against that backdrop, the authors argue that poorly managed AI deployments could be deeply damaging, but that well-designed programs focused on tangible improvements to public services can help rebuild trust in public institutions.

To get there, Brookings recommends expanding AI literacy training across agencies, revising procurement rules designed for static software, promoting transparency around AI's greatest risks, and prioritizing use cases that deliver clear, visible benefits to the public.
