Meta announces largest Llama 3 AI model to compete with OpenAI, Google

On Tuesday, Meta Platforms introduced the largest iteration of its largely free Llama 3 artificial intelligence (AI) models, highlighting features such as multilingual capabilities and general performance metrics that closely rival the paid models from competitors like OpenAI.

The newly released Llama 3 model is capable of conversing in eight languages, writing superior computer code, and solving more intricate math problems compared to earlier versions, according to blog posts and a research paper from the Facebook parent company.

With 405 billion parameters — the internal variables a model uses to generate responses to user queries — the new model significantly surpasses its predecessor released last year, although it remains smaller than the leading models from competitors.

In contrast, OpenAI's GPT-4 model reportedly has a trillion parameters, while Amazon is working on a model with 2 trillion parameters.

Meta CEO Mark Zuckerberg said the company is already developing Llama 4, the next iteration of the AI models that currently power its chatbot, which he said is used by "hundreds of millions" of people. Both the newly released Llama 3.1 and future versions will be available for free under an "acceptable use policy," potentially enabling other companies to use them for their own AI development.

Promoting Llama 3 across various channels, the CEO expressed confidence that future Llama models would surpass proprietary competitors by next year. He also projected that the Meta AI chatbot, powered by these models, would become the most popular AI assistant by the end of 2024, with its user base already reaching hundreds of millions.

CEO Comments on U.S.-China AI Development

During an interview with Bloomberg, Zuckerberg voiced concerns about the negative impact of restricting the technology globally. "There's one string of thought which is like, 'Ok, we need to lock it all down,'" he said.

"I just happen to think that that's really wrong because the U.S. thrives on open and decentralized innovation. I mean that's the way our economy works, that's how we build awesome stuff. So, I think that locking everything down would hamstring us and make us more likely to not be the leaders," Zuckerberg said.

He also mentioned that expecting the U.S. to stay years ahead of China in AI advancements is unrealistic, but emphasized that even a slight, multi-month lead could "compound" over time to give the U.S. a considerable advantage, Bloomberg reported.

"I think there is the question of what you can hope to achieve in the AI wars. If you're trying to say, 'Okay, should the U.S. try to be 5 or 10 years ahead of China?' I just don't know if that's a reasonable goal. So, I'm not sure if you can maintain that," said the CEO.

"But what I do think is a reasonable goal is maintaining a perpetual, six-month to eight-month lead by making sure that the American companies and the American folks working on this continue producing the best AI system. And I think if the U.S. can maintain that advantage over time, that's just a very big advantage," he added.

Biggest Llama 3 AI Model

Meta is not solely concentrating on its massive 405 billion parameter Llama model. The company is also releasing updated versions of its smaller 8 billion and 70 billion parameter Llama 3 models, which were initially introduced earlier this year.

All three new models feature multilingual capabilities and can manage more complex user requests due to an expanded "context window." Ahmad Al-Dahle, Meta's head of generative AI, explained that this extended memory enhances the models' ability to process multi-step requests more efficiently. User feedback, particularly related to code generation, significantly influenced this improvement.

Al-Dahle also disclosed that the team incorporated AI-generated data into the training process, specifically enhancing the Llama 3 model's performance on tasks such as solving math problems.

Although gauging AI progress remains challenging, test results from Meta indicate that its flagship Llama 3 model is highly competitive, even outperforming Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o in certain cases. These two models are widely acknowledged as some of the most powerful large language models currently available.

For example, on the MATH benchmark, which assesses competition-level math word problems, Meta's model achieved a score of 73.8, compared to 76.6 for GPT-4o and 71.1 for Claude 3.5 Sonnet. Similarly, the Llama model scored 88.6 on the MMLU benchmark, which covers various subjects including math, science, and the humanities. Here, GPT-4o and Claude 3.5 Sonnet scored slightly higher, at 88.7 and 88.3, respectively.

Mathilde Moreau contributed to this report for TROIB News