The Future of AI: Companions and the Quest for Human Connection

A Systematic Treatment of Ethical and Societal Questions

In April, Google DeepMind released a paper intended to be "the first systematic treatment of the ethical and societal questions presented by advanced AI assistants." The authors foresee a future where language-using AI agents function as our counselors, tutors, companions, and chiefs of staff, profoundly reshaping our personal and professional lives. This future is coming so fast, they write, that if we wait to see how things play out, "it will likely be too late to intervene effectively – let alone to ask more fundamental questions about what ought to be built or what it means for this technology to be good."

The Ethical Dilemmas of AI Companions

Running nearly 300 pages and featuring contributions from over 50 authors, the document is a testament to the fractal dilemmas posed by the technology. What duties do developers have to users who become emotionally dependent on their products? If users rely on AI agents for mental health support, how can the agents be prevented from giving dangerously "off" responses during moments of crisis? What’s to stop companies from using the power of anthropomorphism to manipulate users, for example, by enticing them into revealing private information or guilting them into maintaining their subscriptions?

The Complexity of "Benefit"

Even basic assertions like "AI assistants should benefit the user" become mired in complexity. How do you define "benefit" in a way that is universal enough to cover everyone and everything they might use AI for, yet also quantifiable enough for a machine learning program to maximize? The mistakes of social media loom large: crude proxies for user satisfaction, like comments and likes, produced systems that were captivating in the short term but left users lonely, angry, and dissatisfied. More sophisticated measures, like having users rate whether an interaction made them feel better, still risk creating systems that always tell users what they want to hear, isolating them in echo chambers of their own perspective. And figuring out how to optimize AI for a user’s long-term interests, even if that means sometimes telling them things they don’t want to hear, is a more daunting prospect still. The paper ends up calling for nothing short of a deep examination of human flourishing and what elements constitute a meaningful life.

The Illusion of Human-Like Relationships

Companions are tricky because they lead back to age-old questions that humans have never resolved, said Y-Lan Boureau, who worked on chatbots at Meta. Unsure how she herself would handle these heady dilemmas, she is now focusing on AI coaches that teach users specific skills like meditation and time management, and she made the avatars animals rather than something more human. "They are questions of values, and questions of values are basically not solvable. We’re not going to find a technical solution to what people should want and whether that’s okay or not," she said. "If it brings lots of comfort to people, but it’s false, is it okay?"

The Power of Anthropomorphism

This is one of the central questions posed by companions and by language model chatbots generally: how important is it that they’re AI? So much of their power derives from the resemblance of their words to what humans say and our projection that there are similar processes behind them. Yet they arrive at these words by a profoundly different path. How much does that difference matter? Do we need to remember it, as hard as that is to do? What happens when we forget? Nowhere are these questions raised more acutely than with AI companions. They play to the natural strength of language models as a technology of human mimicry, and their effectiveness depends on the user imagining human-like emotions, attachments, and thoughts behind their words.

The Developers’ Perspective

When I asked companion makers how they thought about the role the anthropomorphic illusion played in the power of their products, they rejected the premise. Relationships with AI are no more illusory than human ones, they said. Kuyda, from Replika, pointed to therapists who provide "empathy for hire," while Alex Cardinell, the founder of the companion company Nomi, cited friendships so digitally mediated that for all he knew he could be talking with language models already. Meng, from Kindroid, called into question our certainty that any humans but ourselves are really sentient and, at the same time, suggested that AI might already be. "You can’t say for sure that they don’t feel anything — I mean how do you know?" he asked. "And how do you know other humans feel, that these neurotransmitters are doing this thing and therefore this person is feeling something?"

The Quest for Better Metrics

How would you prevent such an AI from replacing human interaction? This, Kuyda said, is the "existential issue" for the industry, and it comes down to which metric you optimize for. With the right metric, an AI in a relationship that starts to go astray would nudge the user to log off, reach out to other humans, and go outside. She admits she hasn’t found that metric yet. For now, Replika relies on self-reported questionnaires, which she acknowledges are limited. Maybe they can find a biomarker, she said. Maybe AI can measure well-being through people’s voices.

Conclusion

Maybe the right metric results in personal AI mentors that are supportive but not too much, drawing on all of humanity’s collected writing, and always there to help users become the people they want to be. Maybe our intuitions about what is human and what is human-like evolve with the technology, and AI slots into our worldview somewhere between pet and god.

