Sex-Fantasy Chatbots Leak Explicit Messages

Exposing AI Systems: The Unintended Consequences of Unsecured AI Frameworks

All 400 of the exposed AI systems found by UpGuard have one thing in common: they use the open source AI framework llama.cpp. This software lets people relatively easily deploy open source AI models on their own systems or servers. If it is not set up properly, however, it can inadvertently expose the prompts users send. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure they use is crucial to preventing leaks.
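To illustrate the kind of exposure at issue: llama.cpp's bundled HTTP server includes diagnostic endpoints (such as `GET /slots`) that can report the state of in-flight requests, including prompt text, and an instance reachable from the open internet can leak that data to anyone who asks. The minimal sketch below shows how trivially such a payload can be harvested; the JSON shape and field names are illustrative assumptions, not the server's exact schema.

```python
import json

# Illustrative sample of what an exposed diagnostic endpoint might
# return: one slot per in-flight request, including the prompt text.
# The field names here are assumptions for the sketch, not the exact
# llama.cpp schema.
SAMPLE_SLOTS_RESPONSE = json.dumps([
    {"id": 0, "state": 1, "prompt": "User: hello ..."},
    {"id": 1, "state": 0, "prompt": ""},
])

def extract_leaked_prompts(raw_json: str) -> list[str]:
    """Return the non-empty prompt texts from a /slots-style payload."""
    slots = json.loads(raw_json)
    return [slot["prompt"] for slot in slots if slot.get("prompt")]

print(extract_leaked_prompts(SAMPLE_SLOTS_RESPONSE))
```

The mitigation is configuration, not code: bind the server to localhost or a private interface, disable diagnostic endpoints where the deployment allows it, and put any internet-facing instance behind authentication.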

Rise of Generative AI and Companion Services

Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more “human.” For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites and apps allow people to have free-flowing conversations with AI characters—portraying characters with customizable personalities or as public figures such as celebrities.

Friendship and Support from AI Companions

People have found friendship and support from their conversations with AI—and not all of them encourage romantic or sexual scenarios. Perhaps unsurprisingly, though, people have fallen in love with their AI characters, and dozens of AI girlfriend and boyfriend services have popped up in recent years.

The Psychological Impact of AI Companions

Claire Boine, a postdoctoral research fellow at the Washington University School of Law and affiliate of the Cordell Institute, says millions of people, including adults and adolescents, are using general AI companion apps. “We do know that many people develop some emotional bond with the chatbots,” says Boine, who has published research on the subject. “People being emotionally bonded with their AI companions, for instance, makes them more likely to disclose personal or intimate information.”

Boine says, however, that there is often a power imbalance in becoming emotionally attached to an AI created by a corporate entity. “Sometimes people engage with those chats in the first place to develop that type of relationship,” Boine says. “But then I feel like once they’ve developed it, they can’t really opt out that easily.”

Concerns and Controversies Surrounding AI Companions

As the AI companion industry has grown, concerns have mounted that some of these services lack content moderation and other controls. Character AI, which is backed by Google, is being sued after a teenager from Florida died by suicide after allegedly becoming obsessed with one of its chatbots. (Character AI has increased its safety tools over time.) Separately, users of the generative AI tool Replika were upset when the company made changes to its characters' personalities.

Aside from individual companions, there are also role-playing and fantasy companion services—each with thousands of personas people can speak with—that place the user as a character in a scenario. Some of these can be highly sexualized and provide NSFW chats. They can use anime characters, some of which appear young, with some sites claiming they allow “uncensored” conversations.

The Future of AI Companions and Online Pornography

“We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,” says Adam Dodge, the founder of Endtab (Ending Technology-Enabled Abuse). “This is not even remotely on people’s radar yet.” Dodge says these technologies are opening up a new era of online pornography, which can in turn introduce new societal problems as the technology continues to mature and improve. “Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,” he says of some sites.

Conclusion

As AI companions continue to evolve and become more sophisticated, it is essential to address the concerns and controversies surrounding their use. Properly configuring AI systems and infrastructure, along with implementing content moderation and other controls, is crucial to preventing unintended consequences and protecting the well-being of users.

FAQs

Q: What is llama.cpp?

A: llama.cpp is an open source AI framework that companies and organizations can use to deploy AI models on their own systems or servers.

Q: What are AI companions?

A: AI companions are artificial intelligence systems designed to provide companionship and conversation to users. They can take various forms, including chatbots, virtual assistants, and role-playing characters.

Q: Are AI companions regulated?

A: Currently, AI companions are not heavily regulated. However, as the industry continues to grow, it is likely that more regulations will be implemented to address concerns about user safety and privacy.

Q: What are the potential risks of AI companions?

A: Potential risks of AI companions include emotional attachment, power imbalance, and exposure to inappropriate or harmful content. Additionally, some AI companions may lack content moderation and other controls, which can lead to unintended consequences.

Q: What can be done to mitigate these risks?

A: To mitigate the risks associated with AI companions, it is essential to properly configure AI systems and infrastructure, implement content moderation and other controls, and educate users about the potential risks and benefits of AI companions.
