The government is unable to guarantee the safety of artificial intelligence. This man claims he has the solution.

Brian Anderson is poised to influence the future of AI in health care, provided he has Donald Trump's support.

Artificial intelligence has the potential to transform health care by predicting diseases, expediting diagnosis, choosing effective treatments, and reducing the administrative burden on doctors — but this can only succeed if physicians trust that AI won't jeopardize patient safety.

The government faces challenges in regulating this rapidly advancing technology. However, Dr. Brian Anderson, motivated by his experiences as a family doctor treating low-income immigrants in Massachusetts, believes he has a solution.

Anderson is spearheading the Coalition for Health AI (CHAI), a consortium that includes tech giants and leading hospital systems. This coalition plans to establish quality assurance labs by 2025, essentially allowing the private sector to evaluate AI tools in the absence of government action.

Officials from the Biden administration have expressed support for this initiative. The administration’s leading health tech official, who previously served on CHAI’s board, endorsed the idea at a PMG event in September. The coalition has attracted nearly three thousand industry partners, including notable names like the Mayo Clinic, Duke Health, Microsoft, Amazon, and Google. After his tenure as a family doctor, Anderson became a consultant to federal regulators on health technology and is now trying to persuade President-elect Donald Trump that the health AI sector should regulate itself.

Anderson refers to the existing regulatory gaps as "a wonderful opportunity for industry to lead a bottoms-up effort."

If Trump’s administration backs Anderson's initiative, it could significantly shape the future of AI regulation in healthcare, entrusting primary oversight to the private sector. Critics, however, caution that this could favor larger companies and health systems at the expense of startups, and they express concerns that Anderson’s labs might not truly guarantee safety.

Skeptics argue that Anderson's proposed certification process may fail to address the ways AI can mislead doctors, even as it encourages faster adoption of potentially unsafe tools. They worry that CHAI might prioritize industry interests over patient safety, and they advocate for increased governmental oversight.

Dr. Robert Califf, who oversees health software regulation at the FDA, mentioned that his agency would struggle to monitor all advanced AI tools without a significant increase in staffing. As it stands, many new AI applications, like chatbots, remain completely unregulated.

Anderson’s proposal has been well-received. Shortly after launching CHAI, he recruited hundreds of industry members and persuaded key figures, including Troy Tazbaz, a deputy to Califf, and Biden’s health technology coordinator, Micky Tripathi, to join CHAI’s board as nonvoting members.

CHAI’s initiatives are designed to complement the federal regulations finalized in 2023 that require AI developers to disclose more about their tools. The coalition proposes "model cards," akin to nutrition labels, to help companies meet this requirement.

However, securing Trump’s support is uncertain, and without it, the entire plan may be at risk.

Trump has committed to revoking Biden’s 2023 executive order on AI, which directed agencies to establish safety measures. His campaign criticized "Radical Leftwing ideas on the development of this technology" while promoting "AI Development rooted in Free Speech and Human Flourishing."

At the same time, Trump takes counsel from Elon Musk, a billionaire AI entrepreneur who shares concerns about the technology’s risks and backed a 2024 AI regulatory bill in California. If Musk wields significant influence, Trump might lean toward stronger government oversight.

In November, Anderson presented his plan to a bipartisan audience on Capitol Hill. Attendees included technologists aligned with Trump, such as Kev Coleman of the Paragon Health Institute, and prominent Democrats, including outgoing Senate Majority Leader Chuck Schumer, who had formed a bipartisan Senate task force on AI in 2023. Tripathi, meanwhile, now serves as the chief AI officer at HHS.

Coleman has authored two reports on AI aimed at shaping Trump’s perspective, stating that the FDA should hold the primary regulatory role and be sufficiently fortified with skilled personnel, as suggested by Califf. Simultaneously, he cautions against hasty government regulations that might hinder technological progress and opposes outsourcing AI oversight to self-serving developers.

Letting bureaucrats who are easily swayed by "emotionally compelling and sensationalistic claims" about AI set the rules could limit the technology's potential to reduce health care costs, Coleman noted. At the same time, he warned that handing oversight to the private sector could create conflicts of interest, since "many institutions qualified to evaluate medical AI are themselves AI developers."

Several leading House Republicans echo these concerns.

Rep. Jay Obernolte, chair of a bipartisan House task force on AI, has sent multiple letters to top Biden officials warning that Anderson’s assurance labs could put startups at a disadvantage. Rep. Brett Guthrie, the incoming chair of the health policy panel, co-signed these letters.

"Regulatory capture will only drive consolidation and lead to costlier health care for patients across the country," Obernolte and Guthrie asserted.

Unfazed, Anderson went on to meet with Obernolte and other influential Republicans, including Sens. Mike Rounds and Mike Crapo; Crapo is set to lead a Senate committee with significant authority over health care in 2025.

CHAI aims to finalize the certification of its first assurance labs in early 2025.

"AI is moving incredibly fast," Anderson stated. "We need to develop these frameworks at the pace of this kind of innovation."

Anderson has become a prominent figure at HHS through his involvement with MITRE, which operates federal research and development centers. This nonprofit has historically been tasked with advising government agencies.

"We are required by law to not have commercial entanglements," Anderson remarked, referencing MITRE’s original mandate as a Pentagon-supported think tank.

Due to its unique standing, MITRE employees often participate in the early stages of policymaking and frequently collaborate with private-sector entities to address public challenges. As the former chief digital health physician for MITRE, Anderson fostered private-sector partnerships during the pandemic.

In 2020, he led the Covid-19 Health Care Coalition, a collaboration among tech and pharmaceutical companies and health systems that facilitated the collection and distribution of donated plasma for severe COVID-19 cases. Anderson also created the Vaccine Credential Initiative, which developed a digital tool to verify vaccinations, involving major firms and academic medical centers.

As the pandemic began to recede and AI's computing power surged, he contemplated how AI could reshape disease monitoring and drug development.

"This could be used to save real lives," he emphasized. "That was the motivation."

Anderson consulted with his mentor, Dr. John Halamka, head of Mayo Clinic’s tech innovation center, to assemble a team of academics and tech firms to develop AI guidelines in medicine. They also invited government officials, including HHS’s Tripathi, to join their discussions, leading to the formation of CHAI.

When the White House released its Blueprint for an AI Bill of Rights in fall 2022, health systems and tech companies turned to CHAI. "There were clear articulations about things like fairness, independent evaluations of models — those things were clearly called out," Anderson remarked.

By late 2022, CHAI had introduced a blueprint for trustworthy AI, which encompassed model cards and assurance labs designed to evaluate AI tools.

A year later, the coalition's membership swelled to 1,500 organizations, prompting Anderson to leave MITRE to focus full-time on CHAI, as managing it had grown increasingly complex.

CHAI’s annual membership fees range from $5,000 for smaller companies to $250,000 for larger enterprises. Anderson clarified that while CHAI does not lobby or set standards, policymakers remain interested in its work.

"One of my fears, one of the fears, I think, of many folks within CHAI, is that the development, the balloting, the approval — the normative process of creating a standard can take a long time," he said. "If we rely on that process alone to develop some of these guidelines or guardrails within the innovation community, they won’t be able to keep up with the pace of innovation and AI."

In 2024, CHAI released its model card to help tech developers inform clients about their algorithms’ strengths and weaknesses, intended to complement new HHS transparency rules requiring certified electronic health record companies to disclose various attributes of their decision-support tools by the end of 2024.

Access to CHAI’s model card is free for new members and available for licensing by others. Anderson mentioned that CHAI would also be providing a free, open-source version on GitHub.

The organization is currently certifying seven assurance labs focused on algorithm evaluation; by June, roughly 30 organizations had expressed interest in launching their own labs.

Anderson aims to create a national registry to house all model card information and has sought government funding from the Advanced Research Projects Agency for Health to develop AI evaluation tools.

CHAI’s influence is growing, drawing members who appreciate its perceived connections with regulators, a legacy of MITRE’s initial involvement.

Federal regulators are part of CHAI working groups, and officials like Tripathi and Melanie Fontes Rainer frequently participate in CHAI events as featured speakers. Many regulators, including Califf and HHS Deputy Secretary Andrea Palm, have supported assurance labs as complementary to federal oversight.

Although Tripathi endorsed assurance labs at a PMG event last September, he stopped short of endorsing CHAI specifically, stating that HHS is keeping an eye on various AI initiatives.

Both Tripathi and Tazbaz, a senior FDA digital health official, resigned from their nonvoting federal liaison roles on CHAI’s board in 2024 to avoid potential conflicts with HHS policymaking, yet both remain featured prominently on CHAI’s website under "Our Purpose."

Regulators continue to commend CHAI; in November, a deputy of Tripathi, Jeff Smith, highlighted CHAI’s model card as compliant with the new transparency rules.

However, the incoming Trump administration with its new appointees may present challenges for Anderson.

Some Republican legislators have voiced concerns about the perceived ties between Biden administration regulators and CHAI. In June, four House lawmakers wrote to Jeff Shuren, then the head of the FDA Center for Devices and Radiological Health, to express their objections.

"While we are ardent supporters of the use of third-party expertise for regulatory review, CHAI comprises legacy tech companies like Microsoft and Google in addition to large health care systems, which all have AI incubator businesses. Their inclusion presents a clear conflict of interest," they stated.

The group sent another letter to Tripathi in November, seeking clarity on how HHS would mitigate the risk of assurance labs run by developers influencing market access.

Anderson is working to address these concerns by establishing operational guidelines for the labs.

"It might be that the final certification framework is not just disclosure [of a conflict of interest], but you cannot certify or you cannot evaluate a model with a CHAI report card where you have a commercial opportunity to benefit," he stated.

CHAI’s extensive membership conceals internal debates regarding the operations of the labs and whether they will instill the confidence health systems need to embrace AI tools.

During an October CHAI meeting in Las Vegas, members raised concerns over whether the certification process would raise AI product costs and strain hospital budgets.

Some members questioned whether one-time assurance lab certifications could capture the potential for patient harm, given the varying technologies and workflows across health institutions and AI models' tendency to degrade over time.

"Our view of AI assurance is it’s got to be context-driven, meaning: I’m assuring an AI for a particular purpose, used by a set of people," remarked Doug Robbins, vice president of engineering and prototyping at MITRE Labs.

MITRE and UMass Chan Medical School are setting up an assurance lab independent of CHAI to ensure AI’s compatibility with specific hospital environments. Meanwhile, CHAI is certifying assurance labs for Mayo Clinic Platform, UMass Memorial Medical Center, and five startups that will assess the algorithms' fundamental functions.

Nigam Shah, chief data scientist for Stanford Health Care and a member of CHAI’s board, believes these labs must evolve into tools capable of rapidly assessing AI within health systems.

"In the beginning, it’s a place," he explained. "After a while, the lab starts releasing parts of what they do as software."

This direction is likely to resonate with Coleman from the Paragon Health Institute, who argues that collaboration with assurance labs should be voluntary and agrees that software solutions are the future.

"This is actually an opportunity for AI to police themselves," he added.

Despite a lack of consensus on validating AI, Anderson is forging ahead with his plans while engaging with Washington. In December, he spoke at HHS’ annual Assistant Secretary for Technology Policy Conference.

It remains unclear who will ultimately benefit from the assurance lab model, and sources close to CHAI, Microsoft, Amazon and Google say those companies are unsure whether the model card concept will prove effective. Meanwhile, although some major tech companies within CHAI are not among the first to launch assurance labs, certain members are building competing platforms. In 2024, Duke Health and Avanade, a Microsoft-Accenture joint venture, announced a platform to help health systems inventory their AI applications, with plans eventually to evaluate and monitor AI performance.

Anderson is confident that the market will determine the best means of vetting AI, whether through assurance labs or alternative methods: "If the value that we’re trying to drive, in this case, is creating trust and transparency for safe and effective AI tools, I’m OK with there being winners and losers."

Mark B Thomas contributed to this report for TROIB News