Meta’s Llama 3.2 large language models are now generally available on Amazon Bedrock and Amazon SageMaker, and can also be run on Amazon Elastic Compute Cloud (Amazon EC2) instances powered by AWS Trainium and AWS Inferentia.
According to Amazon, this availability gives AWS customers more options for building, deploying, and scaling generative artificial intelligence applications. The announcement came from the e-commerce giant, which is also one of the largest players in the global technology sector and operates one of the largest delivery networks in the United States.
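For readers who want a sense of what calling these models on Amazon Bedrock looks like, here is a minimal sketch using the boto3 Bedrock Converse API. The model ID, region, and prompt are illustrative assumptions; check the Bedrock console for the exact identifiers enabled in your account.

```python
import boto3

# Hypothetical model ID for a Llama 3.2 text model -- verify the exact
# identifier (and your region's availability) in the Bedrock console.
MODEL_ID = "meta.llama3-2-3b-instruct-v1:0"

# Bedrock runtime client; region_name is an assumption for this sketch.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Send a single-turn text prompt through the Converse API.
response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Summarize Llama 3.2 in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

# Print the model's reply text.
print(response["output"]["message"]["content"][0]["text"])
```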
The Llama 3.2 collection builds on the success of previous Llama models. Meta is offering new, updated, and highly differentiated models, including small and medium-sized vision LLMs that support image reasoning, as well as lightweight, text-only models optimized for on-device use cases.
The new Llama models are designed to be more affordable and efficient, with a focus on responsible innovation and safety.
The collection includes Llama 3.2 11B Vision and Llama 3.2 90B Vision, Meta’s first multimodal vision models. It also provides Llama 3.2 1B and Llama 3.2 3B, which are optimized for edge and mobile devices, as well as Llama Guard 3 11B Vision, which is designed for content safety classification.
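As a rough illustration of the image-reasoning capability of the vision models, the following sketch sends a local image to a Llama 3.2 vision model through the same Converse API. Again, the model ID and file name are assumptions, not confirmed identifiers.

```python
import boto3

# Hypothetical model ID for Llama 3.2 11B Vision -- confirm in the Bedrock console.
MODEL_ID = "meta.llama3-2-11b-instruct-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Any local PNG works here; "chart.png" is just a placeholder for this sketch.
with open("chart.png", "rb") as f:
    image_bytes = f.read()

# Combine an image block and a text block in one user turn.
response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
                {"text": "Describe the key trend shown in this chart."},
            ],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```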
Meta stated that Llama 3.2 has been evaluated on over 150 benchmark datasets, demonstrating competitive performance with leading foundation models. The models support a 128K-token context length and multilingual dialogue in eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.