
Meta Releases New AI Research Models

The new AI models from Meta can generate both text and images and detect AI-generated speech within larger audio snippets.

Meta’s Fundamental AI Research (FAIR) team has released several new AI models to accelerate future research, facilitate innovation and allow others to apply AI at scale.

The five models released for public use include a mixed image-and-text generation model, a text-to-music generation model, a multi-token prediction model, and a technique for detecting AI-generated speech.

Meta’s Chameleon models, which can process and generate both images and text simultaneously, will be available under a research-only license. In addition, the team is releasing a geographic disparities evaluation code to help the community improve diversity across their text-to-image generative models.

The FAIR Multi-Token Prediction approach enables the training of large language models (LLMs) to predict multiple future words at once instead of one at a time. The pre-trained models using this approach for code completion are also released under a non-commercial, research-only license.

Another component of the release is JASCO, a generative AI model that can accept various inputs, such as chords or beats, for greater control over generated music. This text-to-music model allows symbolic and audio inputs to be combined in the same context.

FAIR also presented AudioSeal, an audio watermarking technique designed for localized detection of AI-generated speech. The tool speeds up detection and can pinpoint AI-generated segments within a longer audio clip. Unlike the other models in this release, AudioSeal is available under a commercial license.

Meta’s Fundamental AI Research (FAIR) team was created eleven years ago. Last year, FAIR released Llama, an open, pre-trained large language model. It was followed by the open-source Llama 2 and the latest iteration, Llama 3, available for research and commercial use. Meta uses Llama 3 to run its own in-app virtual assistant, Meta AI.

It was earlier reported that Meta Platforms postponed the launch of its artificial intelligence models in Europe.

Nina Bobro

Nina is passionate about financial technologies and environmental issues, reporting on the industry news and the most exciting projects that build their offerings around the intersection of fintech and sustainability.