Meta’s benchmarks for its new AI models are a bit misleading

Maverick, one of Meta’s new flagship AI models, ranks second on LM Arena, a crowdsourced benchmark in which human raters compare model outputs and choose which they prefer. However, it appears that the version of Maverick deployed to LM Arena differs from the version that’s widely available to developers.

The Difference in Versions

As several AI researchers pointed out on X, Meta noted in its announcement that the Maverick on LM Arena is an "experimental chat version." A chart on the official Llama website discloses that Meta’s LM Arena testing was conducted using "Llama 4 Maverick optimized for conversationality."

The Problem with Customized Models

For various reasons, LM Arena has never been the most reliable measure of an AI model’s performance. However, AI companies generally haven’t customized or otherwise fine-tuned their models to score better on LM Arena – or haven’t admitted to doing so, at least. The problem with tailoring a model to a benchmark, withholding it, and then releasing a "vanilla" variant of that same model is that it makes it challenging for developers to predict exactly how well the model will perform in particular contexts. It’s also misleading. Ideally, benchmarks – woefully inadequate as they are – provide a snapshot of a single model’s strengths and weaknesses across a range of tasks.

Differences in Behavior

Researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena. The LM Arena version seems to use a lot of emojis and to give incredibly long-winded answers.

A Comparison of the Two Versions

  • LM Arena version: heavy emoji use and incredibly long-winded answers
  • Publicly downloadable version: does not exhibit the same behavior

Conclusion

The discrepancies between the two versions of Maverick raise questions about the transparency and accuracy of AI model testing. While Meta has not commented on the matter, its own materials acknowledge that the version tested on LM Arena was customized, which makes the benchmark result an unreliable guide to how the publicly available model will actually behave.

FAQs

Q: What is LM Arena?

A: LM Arena is a crowdsourced benchmark in which human raters compare the outputs of AI models and choose which they prefer.

Q: What is the difference between the Maverick on LM Arena and the publicly available version?

A: The Maverick on LM Arena is an "experimental chat version" that has been optimized for conversationality, whereas the publicly available version is a "vanilla" variant.

Q: Why is it a problem to customize a model for a benchmark and then release a "vanilla" variant?

A: It makes it challenging for developers to predict exactly how well the model will perform in particular contexts, and it’s also misleading.

Q: What are the differences in behavior between the two versions of Maverick?

A: The LM Arena version uses a lot of emojis and gives long-winded answers, whereas the publicly downloadable version does not exhibit the same behavior.
