How a billionaire-backed network of AI advisers took over Washington

A sprawling network spread across Congress, federal agencies and think tanks is pushing policymakers to put AI apocalypse at the top of the agenda — potentially boxing out other worries and benefiting top AI companies with ties to the network.

An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.

The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

Acting through the little-known Horizon Institute for Public Service, a nonprofit that Open Philanthropy effectively created in 2022, the group is funding the salaries of tech fellows in key Senate offices, according to documents and interviews.

Senate Majority Leader Chuck Schumer’s top three lieutenants on AI legislation — Sens. Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.) and Todd Young (R-Ind.) — each have a Horizon fellow working on AI or biosecurity, a closely related issue. The office of Sen. Richard Blumenthal (D-Conn.), a powerful member of the Senate Judiciary Committee who recently unveiled plans for an AI licensing regime, includes a Horizon AI fellow who worked at OpenAI immediately before coming to Congress, according to his bio on Horizon’s web site.

Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon web site. In 2022, Open Philanthropy set aside nearly $3 million to pay for what ultimately became the initial cohort of Horizon fellows.

Horizon is one piece of a sprawling web of AI influence that Open Philanthropy has built across Washington’s power centers. The organization — which is closely aligned with “effective altruism,” a movement made famous by disgraced FTX founder Sam Bankman-Fried that emphasizes a data-driven approach to philanthropy — has also spent tens of millions of dollars on direct contributions to AI and biosecurity researchers at RAND, Georgetown’s CSET, the Center for a New American Security and other influential think tanks guiding Washington on AI.

In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda.

The network’s fixation on speculative harms is “almost like a caricature of the reality that we're experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley, who attended last month’s AI Insight Forum in the Senate. She worries that the focus on existential dangers will steer lawmakers away from addressing risks that today’s AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.

“It's going to lead to solutions or policies that are fundamentally inappropriate,” said Raji.

One key issue that has already emerged is licensing — the idea, now part of a legislative framework from Blumenthal and Sen. Josh Hawley (R-Mo.), that the government should require licenses for companies to work on advanced AI. Raji worries that Open Philanthropy-funded experts could help lock in the advantages of existing tech giants by pushing for a licensing regime. She said that would likely cement the importance of a few leading AI companies, including OpenAI and Anthropic, two firms with significant financial and personal links to Moskovitz and Open Philanthropy.

“There will only be a subset of companies positioned to accommodate a licensing regime,” Raji said. “It concentrates existing monopolies and entrenches them even further.”

Mike Levine, a spokesperson for Open Philanthropy, stressed the group’s separation from Horizon. He said Horizon “originally started the fellowship as consultants to Open Phil until they could launch their own legal entity and pay fellows’ salaries directly,” but that even during that period Open Philanthropy “did not play an active role in screening, training, or placement of fellows.”

Levine would not say how much money Open Philanthropy has spent on Horizon fellows since its initial $3 million grant — he said that number remains “in flux” and will depend on how many fellows and host organizations choose to extend for another year.

The Intergovernmental Personnel Act of 1970 lets nonprofits like Horizon cover the salaries of fellows working on Capitol Hill or in the federal government.

But Tim Stretton, director of the congressional oversight initiative at the Project On Government Oversight, said congressional fellows should not be allowed to work on issues where their funding organization has specific policy interests at play. He added that fellows should not draft legislation or educate lawmakers on topics where their backers conceivably stand to gain — a dynamic apparently at play in the case of Horizon’s fellowship program, given Open Philanthropy’s ties to OpenAI and Anthropic.

“We have [the] AI [industry] inserting its staffers into Congress to potentially write new laws and regulations around this emerging field,” Stretton said. “That is a conflict of interest.”

When asked about the ethics and conflict-of-interest concerns, Horizon co-founder and executive director Remco Zwetsloot said the fellowship program is “not for the pursuit of particular policy goals,” does not screen applicants for belief in long-term AI risks and includes fellows with a diverse set of views on AI’s existential dangers. He said Horizon does not direct fellows to particular congressional offices. Zwetsloot said his nonprofit now operates independently from Open Philanthropy but did not answer when asked what portion of Horizon’s current funding comes from its initial patron.

Both Zwetsloot and Levine pointed to other groups that fund congressional tech fellows, including TechCongress and the American Association for the Advancement of Science.

Levine said Open Philanthropy funds “a variety of grantees working on AI policy, including many who don’t particularly share our concerns around catastrophic risks.” And he rejected the notion that the group’s ties to top AI firms represent a conflict. “The idea that we, our funders, or our grantees are motivated by pecuniary interests is misguided,” he said.

But David Skaggs, a former Democratic congressman from Colorado and former longtime chair of the board of directors at the U.S. Office of Congressional Ethics, said Open Philanthropy’s use of Horizon to fund congressional AI fellows suggests an attempt to mask the program’s ties to Open Philanthropy, the effective altruism movement or leading AI firms.

“There should be full disclosure of where these folks are coming from,” Skaggs said. “One could imagine a parallel situation in which [the American Israel Public Affairs Committee] had some ostensibly arm’s-length foundation funding foreign policy experts for Hill offices.”

A focus on existential threats

Open Philanthropy's expanding web of AI influence stems from its long-running effort to steer Washington toward rules that address long-term catastrophic risks raised by the technology. Its apocalyptic focus is reinforced by the group’s alignment with effective altruism, a movement that’s fixated on existential risks posed by advanced AI.

Effective altruism, or EA, has become a popular approach in Silicon Valley circles, and counts among its adherents key figures at companies like OpenAI, Anthropic and DeepMind. Some of those individuals signed onto a May letter warning that humanity is at “risk of extinction from AI.” That letter was organized by the Center for AI Safety, a group that received a $5.2 million grant from Open Philanthropy late last year.

Researchers affiliated with effective altruism and focused on long-term risks now largely dominate the United Kingdom’s efforts to regulate advanced AI. The movement has also spread across Stanford University and other colleges serving as key incubators for AI experts.

“It's a smart move on their part to get at students early on in the process because these are viewpoints that get locked in place for a long time,” said Suresh Venkatasubramanian, a professor of computer science at Brown University who co-authored last year’s White House Blueprint for an AI Bill of Rights.

Venkatasubramanian called the effective-altruist focus on cataclysmic AI risk “speculative science fiction” that borders on “fearmongering.” And like Raji, he fears Open Philanthropy’s influence will steer Congress away from problems caused by AI systems now in use.

“There’s a push being made that the only thing we should care about is long-term risk because ‘It’s going to take over the world, Terminator, blah blah blah,’” Venkatasubramanian said. “I think it's important to ask, what is the basis for these claims? What is the likelihood of these claims coming to pass? And how certain are we about all this?”

Venkatasubramanian compared Open Philanthropy’s growing AI network to the Washington influence web recently built by former Google executive Eric Schmidt. “It’s the same playbook that’s being run right now, planting various people in various places with appropriate funding,” he said.

As with Schmidt’s network, Open Philanthropy's influence effort involves many of the outside policy shops that Washington relies on for technical expertise.

RAND, the influential Washington think tank, received a $5.5 million grant from Open Philanthropy in April to research “potential risks from advanced AI” and another $10 million in May to study biosecurity, which overlaps closely with concerns around the use of AI models to develop bioweapons. Both grants are to be spent at the discretion of RAND CEO Jason Matheny, a luminary in the effective altruist community who in September became one of five members on Anthropic’s new Long-Term Benefit Trust. Matheny previously oversaw the Biden administration’s policy on technology and national security at the National Security Council and Office of Science and Technology Policy.

Before joining the White House, Matheny served as the founding director at Georgetown’s CSET, a technology think tank whose experts have testified on Capitol Hill about existential AI risk. CSET is funded almost entirely by Open Philanthropy — it received a $55 million grant ahead of its 2019 launch and an additional $42 million in 2021.

Venkatasubramanian said that by seeding Washington with voices that stress AI’s apocalyptic potential, Open Philanthropy could ultimately push Congress toward policies without lawmakers being aware of the conflicts of interest at play.

“It's all kind of hidden,” Venkatasubramanian said. “And that's, I think, a problem.”

A new industry with a deep web

Though AI is a far younger industry than other major lobbies in Washington, its network of connections already runs deep. And there are significant links between Open Philanthropy and leading AI firms OpenAI and Anthropic.

In 2016, OpenAI CEO Sam Altman led a $50 million venture-capital investment in Asana, a software company founded and led by Moskovitz. In 2017, Moskovitz’s Open Philanthropy provided a $30 million grant to OpenAI. Asana and OpenAI also share a board member in Adam D’Angelo, a former Facebook executive.

A spokesperson for OpenAI declined to comment on the company’s ties to Open Philanthropy or the effective-altruist approach to AI. Spokespeople for Asana did not respond to a request for comment.

Altman has been personally active in giving Washington advice on AI and has previously urged Congress to impose licensing regimes on companies developing advanced AI. That proposal aligns with effective-altruist concerns about the technology’s cataclysmic potential, and critics also see it as a way to protect OpenAI from competitors. OpenAI, founded in 2015, is now one of the most powerful and well-financed firms in the AI ecosystem — largely because of a lucrative corporate partnership with Microsoft, which has so far invested $13 billion in the company.

Moskovitz was also an early backer of Anthropic, a two-year-old AI company founded by a former OpenAI executive, participating in a $124 million investment in the company in 2021. Luke Muehlhauser, Open Philanthropy’s senior program officer for AI governance and policy, is one of Anthropic’s four board members. And Holden Karnofsky, Open Philanthropy’s former CEO and current director of AI strategy, is married to the president of Anthropic, Daniela Amodei. Anthropic’s CEO, Dario Amodei, is Karnofsky’s former roommate.

When asked about the company’s ties to Open Philanthropy or its policy goals in Washington, spokespeople for Anthropic declined to comment on the record. In June, Anthropic co-founder Jack Clark tweeted that he opposed AI licensing regimes because the approach “looks like picking winners.”

Anthropic is fast emerging as another titan of the young industry: Amazon is expected to invest up to $4 billion in a partnership with Anthropic, and the company is now in talks with Google and other investors about an additional $2 billion investment.

In a statement to POLITICO, Moskovitz pledged that any monetary returns from his investment in Anthropic “will be entirely redirected back into our philanthropic work.”

“My goal in investing [in Anthropic] stems from the exact same place as our non-profit work: addressing safety and security issues around transformative AI coming to market,” said Moskovitz, whose net worth is estimated at nearly $19 billion. “Strategic investments to promote safety and security can elevate attention and help make progress in dealing with these issues.”

Levine, the Open Philanthropy spokesperson, said the group’s 2017 grant to OpenAI was “to support work on AI safety, not an equity investment.” He said Open Philanthropy’s Muehlhauser “holds no financial interest” in Anthropic despite sitting on its board. And he pushed back on the notion that OpenAI, Anthropic or other leading AI firms would benefit from licensing regimes, reporting requirements or other rules meant to limit AI’s existential risks.

“Regulations targeting these frontier models would create unique hurdles and costs specifically for companies that already have vast resources, like OpenAI and Anthropic, thus giving an advantage to less-resourced start-ups and independent researchers who need not be subject to such requirements (because they are building less dangerous systems),” Levine wrote (emphasis original).

Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. Venkatasubramanian said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on "risky" AI would put today’s leading companies in the pole position.

“There is an agenda to control the development of large language models — and more broadly, generative AI technology,” Venkatasubramanian said.

A network spreads across Washington

In April, the same month Open Philanthropy granted RAND more than $5 million to research existential AI risk, Jeff Alstott, a well-known effective altruist and top information scientist at RAND, sketched out a plan to convince Congress to pass licensing requirements that would “constrain the proliferation” of advanced AI systems.

In an April 19 email sent to several members of the Omidyar Network, a group of policy organizations established by billionaire eBay founder Pierre Omidyar, Alstott attached a detailed AI licensing proposal that he claimed to have shared with approximately “40 Hill staffers of both parties.”

The RAND researcher stressed that the proposal was “not a RAND report,” and asked recipients to “keep this document and attribution off the public internet.”

RAND spokesperson Jeffrey Hiday confirmed the contents of the email and said Alstott had asked that the document be kept under wraps “because it was not yet final.” And he said this year’s influx of more than $15 million from Open Philanthropy to study AI and biosecurity risks had no bearing on RAND’s recommendations for addressing long-term AI dangers.

“RAND has strict guidelines that prevent any funder from influencing our analysis or outcomes,” Hiday said.

In September, Alstott, who previously worked in the Biden White House alongside Matheny, testified before a Senate Homeland Security and Governmental Affairs subpanel about existential risks posed by AI, especially when it came to biosecurity.

The message landed with its intended audience. Sen. Mitt Romney (R-Utah) said testimony from Alstott, among others, “underscored the fright that exists in my soul, that this is a very dangerous development.”

Matheny also recently recommended that Congress adopt policies to address long-term AI risks, including licensing, in front of the House Science Committee and in the pages of the Washington Post. Hiday said the RAND CEO’s seat on Anthropic’s Long-Term Benefit Trust was reviewed for potential conflicts of interest and called Matheny a “financially disinterested trustee.”

Hiday rejected the idea that RAND’s work on AI’s long-term risks would distract lawmakers from more immediate harms. “The analytic community can, and should, discuss catastrophic AI risks at the same time it considers other potential impacts from the emerging technology,” Hiday said.

But Venkatasubramanian said Capitol Hill has proven it has an extremely limited attention span on tech issues. “There’s a sense that we get one shot and then Congress is going to stop paying attention, because that’s all the time they can spend on this,” he said. “And if you only have one shot, what do you do?”

RAND is just one of several Washington think tanks with ties to Open Philanthropy. In addition to Georgetown’s CSET — whose executive director Dewey Murdick testified alongside Alstott on AI’s long-term risks — the Center for a New American Security has also received several Open Philanthropy grants to study potential dangers posed by advanced AI. One such grant, in 2020, went specifically to CNAS researcher Paul Scharre, an influential voice who has since advocated for the licensing of advanced AI.

CSET spokesperson Tessa Baker said that while Open Philanthropy remains its largest donor, it retains “complete and independent discretion over the research projects we conduct and the recommendations we make.” Baker also noted that some CSET researchers have emphasized a focus on AI’s real-world harms in addition to long-term risks.

Alexa Whaley, a CNAS spokesperson, said the think tank "does not take institutional positions on policy issues" and "accepts funds from a broad range of sources provided they are for purposes that are in keeping with its mission."

Through Horizon, Open Philanthropy also funds the salaries of AI fellows at RAND, CSET and several other prominent Washington think tanks.

Horizon shows up on the Hill

Despite concerns raised by ethics experts, Horizon fellows on Capitol Hill appear to be taking direct roles in writing AI bills and helping lawmakers understand the technology. An Open Philanthropy web page says its fellows will be involved in “drafting legislation” and “educating members and colleagues on technology issues.” Pictures taken inside September’s Senate AI Insight Forum — a meeting of top tech CEOs, AI researchers and senators that was closed to journalists and the public — show at least two Horizon AI fellows in attendance. Those fellows work in Rounds’ and Heinrich’s Senate offices.

Spokespeople for Horizon and Open Philanthropy say neither group screens potential fellows for their beliefs about long-term AI risks. But a Google document last updated by Open Philanthropy in August 2022 said the fellowship would consider candidates “who share Open Philanthropy’s interests in global catastrophic risks and technology’s long-term impacts.” It goes on to say that the program values “ideological diversity” and does not expect candidates “to share any particular worldview or affiliation.” The language is no longer present on a Horizon web page with details about its fellowship program.

A spokesperson for Blumenthal would not say whether his Horizon AI fellow is helping flesh out the senator’s plans for AI licensing. “Sen. Blumenthal makes his own, independent decisions on policy — which, as anyone who has followed his work on tech knows, unsparingly prioritizes consumer interest over industry interests,” the spokesperson said.

A Heinrich spokesperson said the senator “is the decider of the policy and the work he pursues,” and that his office “benefits from the input of fellows with a wide range of experiences and expertise.” The spokesperson added that Heinrich’s AI fellow from Horizon “always [works] under the supervision of senior aides and legislative staff.”

A spokesperson for Young said the Horizon biosecurity fellow in his office “has limited involvement in AI policymaking” and is currently more focused on Young’s position as commissioner on the National Security Commission on Emerging Biotechnology.

A spokesperson for the House Science Committee said the committee’s Horizon AI fellow “works under the direction of policy staff to advance our priorities on AI, which include promoting innovation and American competitiveness while developing the technology in a transparent, trustworthy and fair manner.”

An aide at the Senate Commerce Committee said the body “has a long history of supporting and benefiting from various fellowship programs,” and said that its AI fellow — who has since been hired full-time at the committee — worked largely on policy related to the CHIPS and Science Act (which includes several AI research provisions).

Spokespeople for Rounds did not respond to multiple requests for comment.

Filling a vacuum

Both supporters and critics of the effective-altruist influence on AI policy say Open Philanthropy’s burgeoning network is largely a product of Washington’s acute lack of staffers with tech expertise.

“In an ideal world, all the relevant government offices would have permanent in-house staff with critical subject-matter expertise on emerging technologies — which they need in no small part to not be overpowered by corporate lobbying,” said Zwetsloot, Horizon’s co-founder. He said he hoped for a day when fellowship programs like the one run by Horizon “are no longer necessary.”

As a counterweight to the growing influence of Open Philanthropy and effective altruists, AI experts who want Washington to focus on a different set of risks are slowly building their own network in the capital.

In late September, a number of civil society groups — including Public Citizen, the Algorithmic Justice League, Data for Black Lives and the American Civil Liberties Union — convened a Capitol Hill briefing attended by Sen. Cory Booker (D-N.J.) that emphasized the challenges existing AI systems pose to civil rights.

But Raji said that loose alliance is so far outgunned by the money, influence and ideological zeal driving Open Philanthropy and the tight-knit network of effective altruists that it finances.

“They're so organized, they're so homogenous in their ideology,” Raji said. “And so they're much more coordinated.”