Meta unveils new AI technology for creating videos with audio

On Friday, Facebook parent company Meta unveiled a new artificial intelligence model named Movie Gen, which generates realistic video and audio clips from user prompts. According to the company, the model can rival tools from prominent media generation startups such as OpenAI and ElevenLabs.

Meta provided samples displaying Movie Gen's capabilities, which included videos of animals engaging in activities like swimming and surfing, as well as clips that featured real photographs of individuals depicted in actions such as painting on a canvas.

Movie Gen's capabilities extend beyond video creation: it can generate background music and sound effects synchronized with the visual content, and it offers video editing features. The videos it produces can run up to 16 seconds, while audio clips can last up to 45 seconds. Meta presented blind-test data showing that Movie Gen compares favorably with offerings from startups including Runway, OpenAI, ElevenLabs, and Kling.

The announcement comes at a time when Hollywood is exploring the potential of generative AI video technology, following Microsoft-backed OpenAI's demonstration in February of its product Sora, which can generate feature film-like videos from text prompts.

Technologists in the entertainment sector are keen to use such tools to enhance and accelerate the filmmaking process. However, concerns persist about systems that may have been trained on copyrighted material without authorization.

Lawmakers are also voicing concerns about the implications of AI-generated fakes, or deepfakes, being deployed in elections across various countries, including the U.S., Pakistan, India, and Indonesia.

Meta representatives said the company does not plan to release Movie Gen for open use by developers, as it has done with its Llama series of large language models. The company said it weighs the risks of each model on a case-by-case basis, and it declined to elaborate on its assessment of Movie Gen specifically.

Instead, Meta said it is working with the entertainment community and other content creators to explore applications for Movie Gen, with plans to integrate it into the company's own products by next year.

In a blog post and accompanying research paper, Meta disclosed that a mix of licensed and publicly available datasets were utilized in the development of Movie Gen.

This year, OpenAI has also been in discussions with Hollywood executives and agents regarding potential collaborations involving Sora; however, no agreements have surfaced so far. Tensions surrounding the company's approach heightened in May when actress Scarlett Johansson accused OpenAI of using an imitation of her voice without consent for its chatbot.

In September, Lionsgate Entertainment, known for franchises like "The Hunger Games" and "Twilight," revealed it had granted AI startup Runway access to its extensive film and television library, enabling the training of an AI model. In exchange, the studio mentioned that it and its filmmakers could leverage the model to enhance their projects.

Jessica Kline contributed to this report for TROIB News