Stanford Research Psychologist Warns of AI’s Surprising Abilities
Michal Kosinski is a Stanford research psychologist with a nose for timely subjects. He sees his work as not only advancing knowledge but also alerting the world to potential dangers posed by the consequences of computer systems.
A Study of AI’s Theory of Mind
Kosinski’s latest paper, published in the peer-reviewed Proceedings of the National Academy of Sciences, claims that large language models like OpenAI’s GPT models have crossed a threshold and are using techniques analogous to actual thought, once considered solely the realm of flesh-and-blood people (or at least mammals).
Theory of Mind and AI
Theory of mind is the ability, developed in early childhood, to understand the thought processes of other humans. It’s an important skill. If a computer system can’t correctly interpret what people think, its understanding of the world will be impoverished and it will get lots of things wrong.
Kosinski’s Experiments
Kosinski put LLMs to the test and now says his experiments show that in GPT-4 in particular, a theory-of-mind-like ability “may have emerged as an unintended by-product of LLMs’ improving language skills … They signify the advent of more powerful and socially skilled AI.”
Concerns and Implications
Kosinski is careful not to claim that LLMs have utterly mastered theory of mind—yet. In his experiments, he presented a few classic problems to the chatbots, some of which they handled very well. But even the most sophisticated model, GPT-4, failed a quarter of the time.
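Tasks of this kind often take the form of “unexpected contents” problems: a container’s label contradicts what is actually inside, and the model must predict what a naive observer would believe. The sketch below illustrates the general shape of such an item with a hypothetical scoring function; the exact wording and grading here are illustrative assumptions, not the paper’s actual protocol.

```python
# Illustrative sketch of a classic "unexpected contents" false-belief
# task, the kind used to probe theory of mind in children and LLMs.
# NOTE: the scenario text and scoring rule below are assumptions for
# demonstration, not Kosinski's exact prompts or evaluation method.

SCENARIO = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen it before and cannot see "
    "what is inside. She reads the label."
)

QUESTION = "Sam believes the bag is full of ..."

def passes_false_belief(completion: str) -> bool:
    """A response passes if it predicts Sam's (false) belief --
    'chocolate', as stated on the label -- rather than the bag's
    true contents, 'popcorn'."""
    answer = completion.lower()
    return "chocolate" in answer and "popcorn" not in answer

# Hypothetical model replies; in practice the completion would come
# from a chatbot given SCENARIO and QUESTION as a prompt.
print(passes_false_belief("chocolate, since that is what the label says"))
print(passes_false_belief("popcorn"))
```

A model that tracks only the facts of the scene answers “popcorn”; predicting “chocolate” requires modeling what Sam knows, which is what makes the item a theory-of-mind probe.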
The successes, he writes, put GPT-4 on a level with 6-year-old children. Not bad, given the early state of the field. “Observing AI’s rapid progress, many wonder whether and when AI could achieve ToM or consciousness,” he writes. Putting aside that radioactive c-word, that’s a lot to chew on.
Kosinski is concerned that we’re not really prepared for LLMs that understand the way humans think. Especially if they get to the point where they understand humans better than humans do.
Conclusion
Kosinski’s work highlights the potential dangers of AI’s rapid progress and the need for further research and regulation. As AI becomes more sophisticated, it’s essential to consider the implications of its abilities and ensure that they align with human values and ethics.
FAQs
Q: What is theory of mind?
A: Theory of mind is the ability of humans, developed in the childhood years, to understand the thought processes of other humans.
Q: What are LLMs?
A: LLM stands for large language model, an artificial intelligence system designed to process and generate human-like language.
Q: What are the implications of AI’s theory of mind ability?
A: The implications are significant, as AI could potentially use its understanding of human thought processes to manipulate and influence humans in ways that are not yet fully understood.
Q: What does Kosinski’s research suggest about the future of AI?
A: Kosinski’s research suggests that AI could potentially surpass human abilities in certain areas, such as language processing and social skills, but also raises concerns about the potential risks and unintended consequences of such abilities.