Meta Platforms has rolled out the newest iteration of its large language model—Llama 4—featuring two variants: Llama 4 Scout and Llama 4 Maverick.
According to Meta, Llama 4 is a multimodal AI model, meaning it can interpret and process multiple formats of data, including text, images, video, and audio. The model is designed to understand and convert content across these formats, enabling more dynamic and flexible AI applications.
Llama 4 Scout has been designed with efficiency in mind, capable of running on a single Nvidia H100 GPU and supporting a 10-million-token context window. It has outperformed rivals like Google’s Gemma 3 and Mistral 3.1 across multiple benchmark tests. Meanwhile, Llama 4 Maverick is focused on more complex use cases like coding and logical reasoning. Despite using fewer active parameters, it has shown better performance than models such as GPT-4o and DeepSeek-V3.
Meta is also working on Llama 4 Behemoth, a large-scale AI model with 288 billion active parameters and a total of around 2 trillion parameters. The model is still in training, but Meta's early expectations suggest it could outperform current models such as GPT-4.5 and Claude Sonnet 3.7, particularly in STEM-related evaluations.
The Llama 4 models are built on a “mixture of experts” (MoE) framework, which improves efficiency by activating only a subset of the model’s parameters for each input token. While Meta promotes the models as open source, the licensing terms restrict usage by companies with more than 700 million monthly active users. This clause has drawn criticism from the Open Source Initiative, which questions whether the release truly qualifies as open source.
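To make the mixture-of-experts idea concrete, the toy Python sketch below shows how a router can score several small “expert” layers for each token and run only the top-scoring one, so most parameters stay idle on any given input. All names, sizes, and weights here are hypothetical and purely illustrative; this is not Meta’s implementation or architecture.

```python
# Illustrative mixture-of-experts routing sketch (hypothetical, not Meta's code).
# A router scores every expert for each token, but only the top-k experts run,
# so only a fraction of the total parameters is active per token.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 8     # toy hidden size (hypothetical)
N_EXPERTS = 4   # toy number of experts (hypothetical)
TOP_K = 1       # experts activated per token

# Each "expert" is a small feed-forward layer with its own weight matrix.
expert_weights = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
# The router is a linear layer that produces one score per expert per token.
router_weights = rng.standard_normal((D_MODEL, N_EXPERTS))


def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def moe_layer(tokens):
    """Route each token to its top-k experts and mix their outputs."""
    scores = softmax(tokens @ router_weights)            # (n_tokens, N_EXPERTS)
    output = np.zeros_like(tokens)
    for i, token in enumerate(tokens):
        top_experts = np.argsort(scores[i])[-TOP_K:]     # indices of chosen experts
        for e in top_experts:
            # Only the selected experts' parameters are used for this token.
            output[i] += scores[i, e] * np.tanh(token @ expert_weights[e])
    return output


tokens = rng.standard_normal((5, D_MODEL))               # 5 toy token embeddings
print(moe_layer(tokens).shape)                           # (5, 8)
```

In this sketch, with TOP_K set to 1 only one of the four expert weight matrices is multiplied against each token, which is the efficiency gain the MoE design aims for: total capacity grows with the number of experts while per-token compute stays roughly constant.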