Opinion | A Deceptively Radical Proposal to Govern AI

AI is transforming society. We should have a say.

AI has burst into the public consciousness, capturing our collective attention more quickly and completely than any technology in memory. More than a hundred million people have already connected with ChatGPT to ask, “What can it do?” But the power and complexity of AI have quickly led to weightier questions like “What should it do?” and, just as important, “Who gets to decide?”

The answer to the last question must be: All of us.

Information technology already profoundly shapes our world and daily lives, and its impact is projected to grow exponentially. For any democracy to work, when something has this much power over us, we must have a say in how it works. But the traditional ways our society aligns private companies’ behavior with public needs — regulation and the market — are not keeping up. And both the public and the companies are asking for new solutions.

Based on my experience working both in big tech and for the U.S. government, I propose a new way forward to preserve the innovation engine while advancing shared values: Put a Citizen Representative in the room where AI happens.

This week, Sam Altman, the chief executive of OpenAI, testified before the Senate and called for a new federal agency to regulate AI. This is not the first time he’s asked for public participation to achieve a widely shared goal: a balance between sustained innovation and common values. A new agency could help, though federal rulemaking processes will struggle to keep up with the pace of technology. More pointedly, however, a proposal for a new regulatory agency continues the same old adversarial framework in which companies are cast as troublemakers that need to be policed by the government. What I am proposing is a way to respect the pace and promise of innovation, while at the same time bringing the rich and diverse voices of the public into the room.

Imagine if Congress passed legislation requiring every company that releases an AI product with at least 10 million users to have an embedded Citizen Representative in the company.

This Citizen Representative, chosen by the public, would be paid for by the company, join the company’s senior leadership team, and have a budget proportional to the size of the company, including staff embedded in product teams. By law, and with enforcement by regulators, the Citizen Representative and their staff would have access to internal product development and would participate in the company’s critical decisions. Within limits needed to protect intellectual property and privacy, the Citizen Representative could communicate with the public about the company’s considerations and choices and solicit public input. The company’s leaders wouldn’t have to take the Citizen Representative’s recommendations, but they would make that choice in the open.

The Citizen Representative’s influence would take multiple forms. The first is the chance for a productive partnership that sets a positive agenda prioritizing the public’s values. The decisions facing tech companies are hard and only getting harder, and public input will ensure that companies don’t have to make these decisions alone. Indeed, the Citizen Representative can give a company cover for doing the right thing (particularly since there will also be Citizen Representatives at every competitor). This kind of “additional” voice in the room builds on the precedent of product teams that embed legal counsel to advise on compliance. In a mature corporate culture, a member of the Citizen Representative’s team would similarly become a trusted partner helping the team do its best work. Besides, there’s always the threat of more onerous regulation, and working closely with the Citizen Representative would help keep it at bay.

Another form of influence is transparency and dialogue. Technology companies are under constant scrutiny, with fewer and fewer observers willing to take companies’ stated missions and principles at face value. The presence of Citizen Representatives can help companies identify and more quickly address public concerns, especially if the people chosen for the role have deep experience applying social values to technology. If need be, the Citizen Representative could use their high-profile platform to include the public and policymakers in shaping the company’s decision-making.

The real power of the Citizen Representative, however, is more than the specific actions they take or input they provide. It’s the positive impact on a company’s culture. Eventually, after the Citizen Representative and their team have made their presence at the company felt, the question of how a particular choice — to release a product, to add a feature, to invest in better safeguards — aligns with public values will become a more transparent, structured part of every company decision.

Citizen Representatives may seem like a modest solution, in that this proposal leaves the decisions in the hands of the same private actors who make them today. But this solution would radically change the public’s relationship with technology. Today, decisions about AI and other critical technologies are opaque to the public. Citizen Representatives would give both companies and the public greater insight into one another’s ambitions.

In the context of today’s technology, traditional bureaucracies would be too crude an instrument to advance both sustained innovation and shared values. Instead, today’s AI requires a nimbler kind of oversight, the “soft power” of representatives whose participation and perception make it possible for them to deliver agile, independent advice. Just as central banks rely on coordination, relationships and shared knowledge to regulate the highly complex financial sector, Citizen Representatives can best influence the highly complex tech sector through daily participation, persuasion and transparency.

Eventually, Citizen Representatives could work well in other industries with major impacts on the public’s welfare. But AI makes sense as a place to introduce them, given the nascency and importance of the space and the industry’s openness to public input.

AI carries the potential for both triumph and tragedy. As its impact on our lives continues to grow, we the people can best advance our shared values, including a commitment to innovation, if we’re there every day — attuned, alert and active — in the room where it happens.