Ilya Sutskever Predicts End of AI Pre-training

AI Pioneer Warns of End of Pre-Training, Predicts Shift to "Agent" AI Models

Ilya Sutskever, the co-founder and former chief scientist of OpenAI, made a rare public appearance at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, where he shared his views on the future of artificial intelligence. Sutskever, known for his work on large language models, warned that the era of pre-training is coming to an end and that AI systems will need to adapt to new approaches.

The End of Pre-Training

During his talk, Sutskever stated that "pre-training as we know it will unquestionably end." This refers to the initial phase of AI model development, where a large language model learns patterns from vast amounts of unlabeled data, typically text from the internet, books, and other sources.

Peak Data and the Fossil Fuel Analogy

Sutskever compared human-generated data to a finite resource like fossil fuels: the internet holds only so much human-generated content, and the industry is nearing the limit of new data to train on. He argued that this dynamic will eventually force a shift away from how models are trained today, ushering in an era in which AI systems must make the most of the data that already exists.

The Future of AI: "Agent" Models

Sutskever predicted that next-generation models will be "agentic in a real way," meaning they will be able to perform tasks, make decisions, and interact with software on their own. These agent models will be able to reason, working out problems step by step rather than simply pattern-matching against what they have seen before. He also noted that such systems will be more unpredictable, much as advanced chess engines make moves the best human players cannot anticipate.

Scaling and Evolutionary Biology

Sutskever drew a comparison between the scaling of AI systems and evolutionary biology, citing research on the relationship between brain and body mass across species. He noted that while most mammals follow one scaling pattern, hominids (human ancestors) show a distinctly different slope in their brain-to-body mass ratio on logarithmic scales. He suggested that AI might similarly discover new approaches to scaling beyond how pre-training works today.
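The scaling relationship Sutskever referenced is an allometric law: brain mass grows roughly as a power of body mass, which appears as a straight line on log-log axes, and its slope is the scaling exponent. The sketch below fits that slope with ordinary least squares; the data points are purely illustrative placeholders, not real measurements.

```python
import math

# Hypothetical (body_mass_kg, brain_mass_g) pairs, purely illustrative.
# A power law brain = c * body^k becomes a straight line of slope k
# when both axes are logarithmic.
samples = [(0.02, 0.4), (1.0, 8.0), (60.0, 200.0), (3000.0, 4500.0)]

def log_log_slope(points):
    """Least-squares slope of log(brain mass) against log(body mass)."""
    xs = [math.log(body) for body, _ in points]
    ys = [math.log(brain) for _, brain in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = log_log_slope(samples)
print(f"fitted scaling exponent k ≈ {slope:.2f}")
```

On this toy data the fitted exponent comes out below 1, consistent with the general mammalian pattern in which brain mass grows more slowly than body mass; Sutskever's point was that hominids sit on a line with a distinctly different slope, suggesting that more than one scaling regime is possible.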

Incentivizing AI Development

During the Q&A session, an audience member asked how researchers can create the right incentive mechanisms for humanity to develop AI that grants it "the freedoms that we have as Homo sapiens." Sutskever responded that he does not feel confident answering such questions, since doing so would require a "top-down government structure." He also mentioned the possibility of cryptocurrency, which drew chuckles from the audience.

Conclusion

Ilya Sutskever’s predictions and insights on the future of AI offer a glimpse into a new era of AI development, where systems will need to adapt to new approaches and become more complex and unpredictable. As the field continues to evolve, it will be crucial to address the ethical and societal implications of these advancements.

FAQs

Q: What is the future of AI development?
A: According to Ilya Sutskever, the future of AI development will involve the use of "agent" models that can perform tasks, make decisions, and interact with software on their own.

Q: What is the significance of the end of pre-training?
A: The end of pre-training will mark a shift away from the initial phase of AI model development, where a large language model learns patterns from vast amounts of unlabeled data. This will lead to new approaches to AI development and the need for more sophisticated systems.

Q: How can we ensure the development of AI that is beneficial to humanity?
A: Sutskever suggests that creating the right incentive mechanisms will be crucial, but he is not confident in his ability to comment on this topic, as it would require a "top-down government structure."
