On AI, the government gets ready to throw its weight around

A $2.6 billion sandbox, AI transparency and new government officers took center stage, while a tech CEO drew the attention downstairs.

OpenAI CEO Sam Altman’s first-ever Senate appearance on Tuesday sparked a frenzy on Capitol Hill. But the more important action on artificial intelligence may have come just up the stairs in the same Senate office building, at a lower-profile hearing on the federal government’s own use of AI.

With Congress largely stalled on any serious plans to regulate AI across the tech industry, legislation applied specifically to federal agencies may be Washington’s best chance to put its imprint on the fast-moving technology.

The tech lobby remains largely torn on how (or whether) to regulate AI — and lawmakers, fearful of setting rules that could delay new products or kneecap U.S. tech companies on the global stage, are hesitant to pass industry-wide bills. But it’s often far easier for Congress to pass laws that rein in federal agencies. Even if those rules don’t technically apply to the private sector, they set norms and standards that often trickle out into the broader economy.

While Altman testified, the Senate Committee on Homeland Security and Governmental Affairs quietly held its own hearing on how agencies should be using AI. Convened by committee chair Gary Peters (D-Mich.), the hearing brought together current and former government officials, academics and civil society groups to discuss a bevy of ideas for how the federal government should channel its immense budget toward incorporating AI systems while guarding against unfairness and violations of privacy.

Those ideas included supercharging the federal AI workforce, shining a light on the federal use of automated systems, investing in public-facing computing infrastructure and steering the government’s billions of dollars in tech purchases toward responsible AI tools.

Russell Wald, director of policy for Stanford University’s Institute for Human-Centered Artificial Intelligence and a frequent adviser to national policymakers on AI, called it the real hearing to watch.

“In my view, the most substantive stuff actually really happened in here,” said Wald, who spoke with POLITICO on the sidelines of Tuesday’s hearing. “What the government sets actually sends a powerful message to the rest of what happens in this particular space.”

As Peters and other lawmakers heard a wide range of recommendations on Tuesday, a few themes came up repeatedly. Lynne Parker, former assistant director for AI at the White House Office of Science and Technology Policy, suggested each agency should tap one official to be a “chief AI officer.” Multiple panelists and lawmakers called boosting AI literacy a crucial first step toward new AI rules — and on Monday, Peters partnered with Sen. Mike Braun (R-Ind.) on a bill that would create an AI “training program” for federal supervisors and management officials.

There was also significant emphasis on standing up a National AI Research Resource. The Biden administration envisions NAIRR as a sandbox for AI researchers who can’t afford the massive computing infrastructure used by OpenAI and other private-sector players.

Through an initial $2.6 billion investment over six years, it would give AI researchers access to powerful computing capabilities in exchange for their agreement to follow a set of government-approved norms. But Congress still needs to sign off on the plan.

Other panelists said it was crucial that citizens understand when and how automated tools are being used to determine their eligibility for government services. “One theme that certainly comes up over and over again is transparency,” said Peters, who fretted that “black box systems” in use by government agencies could spark distrust and lead to biased outcomes.

But even though it’s a widely shared goal, AI transparency is tough to pull off in practice — it’s often technically difficult even for computer scientists to explain how an algorithm works, for example. Asked how he thinks Congress could solve the transparency problem, Peters suggested this hearing would only be the beginning.

“There's a lot of noise about this issue, everybody’s talking about it,” the senator said.

When asked about the AI frenzy that’s overtaken some of his colleagues, Peters said it was important to be “thoughtful and deliberative, and take our time and not rush to any conclusions.” He said that’s especially key if Congress wants to avoid squashing innovation while it reins in harmful uses of the technology.

“Striking that balance is not an easy thing to do, and that's why I want to take some time,” Peters said.