Scaling Laws Drive Smarter AI

Just as there are widely understood empirical laws of nature — for example, what goes up must come down, or every action has an equal and opposite reaction — the field of AI was long defined by a single idea: that more compute, more training data and more parameters make a better AI model.

However, AI has since grown to need three distinct laws that describe how applying compute resources in different ways impacts model performance. Together, these AI scaling laws — pretraining scaling, post-training scaling and test-time scaling, also called long thinking — reflect how the field has evolved with techniques to use additional compute in a wide variety of increasingly complex AI use cases.

The recent rise of test-time scaling — applying more compute at inference time to improve accuracy — has enabled AI reasoning models, a new class of large language models (LLMs) that perform multiple inference passes to work through complex problems while describing the steps required to solve a task. Test-time scaling demands substantial computational resources to support AI reasoning, which will drive further demand for accelerated computing.

What Is Pretraining Scaling?

Pretraining scaling is the original law of AI development. It demonstrated that by increasing training dataset size, model parameter count and computational resources, developers could expect predictable improvements in model intelligence and accuracy.

Each of these three elements — data, model size, compute — is interrelated. Per the pretraining scaling law, outlined in this research paper, when larger models are fed with more data, the overall performance of the models improves. To make this feasible, developers must scale up their compute — creating the need for powerful accelerated computing resources to run those larger training workloads.
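The relationship can be made concrete with a power-law loss estimate. The sketch below uses the Chinchilla-style form L(N, D) = E + A/N^α + B/D^β; the constants are illustrative values from one published fit, and real coefficients would come from fitting your own training runs:

```python
def predicted_loss(params: float, tokens: float) -> float:
    """Chinchilla-style estimate: L(N, D) = E + A/N^alpha + B/D^beta.

    Constants here are illustrative, taken from one published fit;
    they are not universal and depend on architecture and data.
    """
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / params**alpha + B / tokens**beta

# Scaling up parameters and training tokens predicts lower loss —
# at the cost of far more training compute.
small = predicted_loss(1e9, 2e10)     # ~1B params, 20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params, 1.4T tokens
```

The irreducible term E is the floor no amount of scaling crosses; the other two terms shrink as model size and dataset size grow, which is why data, parameters and compute must be scaled together.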

This principle of pretraining scaling led to large models that achieved groundbreaking capabilities. It also spurred major innovations in model architecture, including the rise of billion- and trillion-parameter transformer models, mixture of experts models and new distributed training techniques — all demanding significant compute.

What Is Post-Training Scaling?

Post-training is the process of refining a model to make it more accurate and relevant for a specific use case. This can be done using various techniques such as fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.
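One of these techniques, quantization, can be sketched in a few lines: float weights are mapped to 8-bit integers to shrink a model's memory footprint. This is a minimal pure-Python illustration of the idea, not a production quantizer:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of float weights to int8 range."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero case
    q = [round(w / scale) for w in weights]            # ints in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.96]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)  # close to the originals, small rounding error
```

Each weight now needs one byte instead of four, trading a small, bounded rounding error for a 4x reduction in storage and memory bandwidth.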

Pretraining a large foundation model isn’t for everyone — it takes significant investment, skilled experts and datasets. But once an organization pretrains and releases a model, they lower the barrier to AI adoption by enabling others to use their pretrained model as a foundation to adapt for their own applications.

What Is Test-Time Scaling?

Test-time scaling, also known as long thinking, takes place during inference. Unlike traditional AI models, which rapidly generate a one-shot answer to a user prompt, models using this technique allocate extra computational effort during inference, allowing them to reason through multiple potential responses before arriving at the best answer.

This AI reasoning process can take multiple minutes, or even hours — and can easily require over 100x compute for challenging queries compared to a single inference pass on a traditional LLM.
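One simple form of this idea is self-consistency: sample several candidate answers and keep the most common one, so that more inference passes buy more reliability. In the sketch below, `sample_answer` is a hypothetical stand-in for a stochastic LLM call, not a real API:

```python
import random
from collections import Counter

def sample_answer(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for a stochastic LLM call:
    # returns the right answer 70% of the time, noise otherwise.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 9))

def self_consistency(prompt: str, n_samples: int, seed: int = 0) -> str:
    """Spend n_samples inference passes, then return the majority answer."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(prompt, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# More samples -> more compute at inference time -> a more reliable answer.
answer = self_consistency("What is 6 * 7?", n_samples=25)
```

A sampler that is right only 70% of the time on a single pass becomes far more dependable once 25 votes are tallied — which is exactly the compute-for-accuracy trade that test-time scaling makes.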

How Test-Time Scaling Enables AI Reasoning

The rise of test-time compute unlocks the ability for AI to offer well-reasoned, helpful and more accurate responses to complex, open-ended user queries. These capabilities will be critical for the detailed, multistep reasoning tasks expected of autonomous agentic AI and physical AI applications.

In healthcare, models could use test-time scaling to analyze vast amounts of data and infer how a disease will progress, as well as predict potential complications that could stem from new treatments based on the chemical structure of a drug molecule. Or, a model could comb through a database of clinical trials to suggest options that match an individual’s disease profile, sharing its reasoning process about the pros and cons of different studies.

Conclusion

Just as there are widely understood empirical laws of nature, the field of AI has evolved to need three distinct laws that describe how applying compute resources in different ways impacts model performance. By understanding and applying these laws, developers can unlock the potential of AI to drive innovation and improve lives.

FAQs

Q: What is pretraining scaling?
A: Pretraining scaling is the original law of AI development, which demonstrates that increasing training dataset size, model parameter count and computational resources can lead to predictable improvements in model intelligence and accuracy.

Q: What is post-training scaling?
A: Post-training scaling is the process of refining a model to make it more accurate and relevant for a specific use case, using techniques such as fine-tuning, pruning, quantization, distillation, reinforcement learning and synthetic data augmentation.

Q: What is test-time scaling?
A: Test-time scaling, also known as long thinking, is the process of allocating extra computational effort during inference to reason through multiple potential responses before arriving at the best answer.

Q: How does test-time scaling enable AI reasoning?
A: Test-time scaling enables AI to offer well-reasoned, helpful and more accurate responses to complex, open-ended user queries, which will be critical for the detailed, multistep reasoning tasks expected of autonomous agentic AI and physical AI applications.
