Why Elon Musk Reacted Strongly to Trump's AI Strategy

A feel-good photo opportunity has sparked a billionaire rivalry. Here's what the underlying issues are.

At the White House on Tuesday, SoftBank CEO Masayoshi Son expressed strong optimism, predicting that “artificial superintelligence” will usher in America’s “golden age.” President Donald Trump smiled as Son, Sam Altman of OpenAI, and Larry Ellison from Oracle revealed a $500 billion investment aimed at harnessing the capabilities of advanced AI.

However, the positivity dissipated quickly.

On Wednesday, Elon Musk, a close adviser to Trump and a competitor in the AI field who was notably absent from the press event, launched a tirade of online mockery. “They don’t actually have the money,” he posted on X.

Musk particularly targeted Altman, with whom he co-founded OpenAI and whom he is currently suing. He shared an image of a crack pipe, implying that Altman and his team were being reckless. After several hours of Musk’s barbs, Altman retaliated, stating, “I realize what is great for the country isn’t always what’s optimal for your companies, but in your new role I hope you’ll mostly put 🇺🇸 first.”

This incident exposed tensions that run far deeper than a simple photo op. Clearly, the stakes extend well beyond a few minutes in front of the new president.

The announcement aligns with Trump’s commitment to enhancing American tech capabilities and establishing dominance over China. It unveiled a new partnership between SoftBank, OpenAI, Oracle, and MGX to build AI data centers worth up to $500 billion over the next four years. Dubbed “Stargate,” the initiative aims to dramatically expand current computing capacity and accelerate the development of advanced AI systems, with Ellison asserting the potential for instant, tailor-made cancer vaccines.

However, Stargate quickly transitioned from a political triumph for Trump to a rather humorous portrayal of the public disputes among billionaires. Altman and Musk, once friends, are now rivals in the high-stakes competition to develop cutting-edge AI technologies. Each has, at times, served as a prominent figure in Washington's conversation about the future of humanity.

Ultimately, what they are truly vying for is the claim to the most sophisticated AI systems — those said to approach “artificial general intelligence,” a concept describing AI capable of matching or exceeding human abilities.

To delve deeper into this intense rivalry and its broader implications, DFD consulted Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and the author of “Co-Intelligence: Living and Working With AI.”

Mollick shared a note of skepticism regarding the project on X, questioning the ultimate goals behind this competition. “For those convinced they are making AGI soon,” he asked, “what does daily life look like 5-10 years later?”

In an interview with DFD, he explored the motivations of each party involved in this substantial investment and the implications for America's standing in the AI sector.

Additionally, Mollick addressed Musk's skepticism toward this specific project, the role of China in the ambiguous race toward AGI, and the beliefs within the AI community about creating a “machine god” through initiatives like Stargate. An edited and condensed version of their discussion follows:

This Stargate project seems to meet numerous objectives for the involved parties. Does it exhibit a “too good to be true” aspect?

Part of this involves a unique R&D investment into something that is vaguely defined but could genuinely transform society and the economy. Everyone wants their stake in this emerging world. Yet, it’s challenging to see how it checks every box, especially in the absence of a policy framework. The investment is one facet, but what about the geopolitical context? It aims to accelerate U.S. company growth, but with such a multifaceted approach, it remains uncertain if it effectively addresses all existing challenges. Questions linger about the push for open versus closed models and the government’s role in influencing these dynamics. The rescinded Biden AI executive order has left a void in national direction.

What are the risks of investing so heavily when “AGI” lacks a concrete definition?

AGI is vaguely defined as an entity that outperforms human experts at all tasks or the average individual at most tasks, but clarity is lacking. The narrative from AI companies increasingly suggests that this is a step towards ASI, a super-intelligent machine, akin to a machine god. The future vision behind this initiative remains uncertain. While those working in these labs have sincere beliefs in the potential of AGI, perspectives vary significantly.

Is China approaching AGI from a parallel perspective?

It’s perplexing to navigate the strategies across the board. China has released impressive models like DeepSeek and has been publishing significant research. The rapid developments in the field can be attributed to entities like Meta making their LLaMA models open-source. The geopolitical implications surrounding AGI remain elusive since the definition itself is not clear. Is AGI the ultimate goal, or will it simply contribute to an ongoing evolution of smarter technologies over the next few years?

Some have likened Stargate to the Manhattan Project or the Apollo Program, yet the lack of a definitive goal seems to weaken those comparisons. Is there a more suitable analogy?

That’s hard to pinpoint since, in many respects, this initiative isn’t truly a research project. Those historical programs were comparably significant in funding; however, they peaked at about 0.5 percent of U.S. GDP. In fact, Meta spent more last year on H100 chips than the inflation-adjusted cost of the entire Manhattan Project. Therefore, equating funding amounts with the scale of ambition can be misleading.

What complicates matters is that it’s not evident this funding is aimed at groundbreaking research; it leans more toward facility construction — effectively an infrastructure project. Larger infrastructure does typically correlate with larger models, but that raises the question of whether scale alone is being pursued wisely.

We are faced with various competing labs at a similar developmental stage. Does this project give OpenAI an edge? How does that affect Google, X, and other firms scaling their work?

It’s challenging to draw comparisons because this isn’t an overarching national effort. The endpoint remains ambiguous, and it’s uncertain whether the funding favors fundamental research or merely the scaling of commercial projects, with the participating companies profiting at every juncture.

Why do you believe Musk is particularly critical of this project?

Certainly, there is a history of tension between Musk and OpenAI. However, it's also evident that Musk is currently leading in scaling efforts. From what I've heard, he has managed to get more chips operational faster than any of his competitors, with Grok models showing rapid growth. His critiques of OpenAI could stem from his desire to dominate this race, but once again, the ultimate goal of the race appears undefined.

What’s particularly striking is the absence of a vision for the future. While many tout the potential for supercharged scientific research, this reality entails reevaluating how we conduct science to leverage these models effectively. That process warrants significant consideration of associated social elements.

Aarav Patel for TROIB News