Keep Your AI Options Open

Meta AI has just released Llama 3, an open source large language model that matches the performance of cutting-edge proprietary models.

Nick Bild
Llama 3 has arrived (📷: Meta AI)

The ongoing battle between the open source machine learning movement and the closed-source behemoths has reached a fever pitch with the release of Meta AI’s most recent large language model (LLM), Llama 3. This is the latest in the Llama line of models, which seeks to match, or improve upon, the performance of the best proprietary LLMs presently available. But unlike those proprietary models, Llama 3 is freely available for anyone to use, experiment with, learn from, and enhance.

As of today, two pretrained Llama 3 models are available, in either 8 billion or 70 billion parameter varieties. These modestly sized models are far smaller than the massive closed-source models that frequently have hundreds of billions to a trillion parameters. This is important because it means that nearly anyone can run these models on their own, with reasonably good performance, even without specialized hardware — let alone a massive data center and a multimillion-dollar operating budget. But even for large, well-funded organizations, Llama 3 can still offer tremendous savings in terms of the necessary computational resources and energy consumption.
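As a rough back-of-the-envelope illustration (these figures are not from Meta's announcement, and they ignore activation memory and the KV cache), the memory needed just to hold a model's weights scales with the parameter count times the bytes used per parameter:

```python
# Rough weight-memory estimate: parameters x bytes per parameter.
# Ignores activations, KV cache, and framework overhead.
GIB = 1024 ** 3

def weight_memory_gib(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / GIB

for params, label in [(8e9, "Llama 3 8B"), (70e9, "Llama 3 70B")]:
    fp16 = weight_memory_gib(params, 2)    # 16-bit weights
    int4 = weight_memory_gib(params, 0.5)  # 4-bit quantized weights
    print(f"{label}: ~{fp16:.0f} GiB at fp16, ~{int4:.0f} GiB at 4-bit")
```

That roughly 15 GiB footprint for the 8B model at 16-bit precision (around 4 GiB when quantized to 4 bits) is what puts it within reach of a single consumer GPU or a well-equipped laptop, whereas the 70B model still wants a serious workstation or a small cluster.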

These efficiency gains are already being put to use through a recently formed partnership between Meta and Qualcomm. Llama 3 has been optimized for execution on Snapdragon processor-based hardware platforms like smartphones, PCs, VR/AR headsets, vehicles, and more. This on-device execution will enable real-time applications and also limit the privacy concerns associated with using generative AI.

Of course, efficiency and cost savings are of limited value if the models do not perform well. In the case of Llama 3, however, it appears that the model can hold its own quite well against the competition, despite its small size. In a battery of standard benchmarks, Llama 3 was shown to consistently match or outperform competing models like Gemini Pro 1.5 and Claude 3 Sonnet.

Anyone who has read more than a few AI research papers knows that benchmarking can involve a good deal of cherry-picking, so Meta AI also ran some human evaluations to assess the real-world performance of Llama 3. In the course of these experiments, it was found that Llama 3 significantly outperformed models like GPT-3.5 and Mistral Medium in tasks like open question answering, reasoning, rewriting, and summarization. It was noted that the team responsible for building the model did not have access to the data that the model was evaluated on, so these results appear to reflect genuine capability, and not simply overfitting of the model to the evaluation data.

The gains in performance over Llama 2 were achieved, in part, with a new tokenizer that has a vocabulary of 128,000 tokens and encodes language more efficiently. Grouped query attention was also implemented to improve inference efficiency. Furthermore, to pour knowledge into the model, it was trained on a massive dataset of publicly available data, consisting of over 15 trillion tokens. To prepare for eventual support of multilingual use cases, 5 percent of the training data was sourced from non-English languages.
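To give a rough sense of the grouped query attention idea, the sketch below shows the general technique rather than Meta's actual implementation, and the head counts are made up for clarity: a small number of key/value heads is shared across groups of query heads, which shrinks the key/value cache that must be kept around during generation.

```python
import torch
import torch.nn.functional as F

# Minimal grouped query attention sketch (illustrative only, not Meta's code).
# Hypothetical sizes: 8 query heads share 2 key/value heads (4 queries per group).
batch, seq_len, head_dim = 1, 16, 64
n_q_heads, n_kv_heads = 8, 2
group = n_q_heads // n_kv_heads

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Each key/value head is repeated so that a whole group of query heads attends to it.
k = k.repeat_interleave(group, dim=1)  # -> (batch, n_q_heads, seq_len, head_dim)
v = v.repeat_interleave(group, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
out = F.softmax(scores, dim=-1) @ v    # (batch, n_q_heads, seq_len, head_dim)
print(out.shape)
```

The efficiency win comes from only having to store and stream the two key/value heads during inference, rather than one per query head, which matters most when generating long sequences.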

To support the open source community, Meta AI promises that Llama 3 models will soon be available on AWS, Databricks, Google Cloud, Hugging Face, Kaggle, and other platforms. But if you want to try out Llama 3 today, it is available for free in the Meta AI assistant.
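Once the weights land on Hugging Face, running the 8B instruct model locally could look something like the sketch below. The repository name and the gated-access step are assumptions based on how earlier Llama releases were distributed, so treat this as a sketch rather than official instructions.

```python
# Illustrative sketch only: assumes the weights are published on Hugging Face
# under a gated "meta-llama" repository and that license access has been granted.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed repository name
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain in one sentence why open source LLMs matter."
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```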

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.