You shouldn't trust any answers a chatbot sends you. And you probably shouldn't trust it with your personal information either. That's especially true for "AI girlfriends" or "AI boyfriends," according to new research.
An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people's data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to set weak passwords; and lack transparency about their ownership and the AI models that power them.
Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people's privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people's chat messages could be abused by hackers.
Many "AI girlfriend" or romantic chatbot services look similar. They often feature AI-generated images of women, which can be sexualized or sit alongside provocative messages. Mozilla's researchers looked at a variety of chatbots, including large and small apps, some of which purport to be "girlfriends." Others offer people support through friendship or intimacy, or allow role-playing and other fantasies.
"These apps are designed to collect a ton of personal information," says Jen Caltrider, the project lead for Mozilla's Privacy Not Included team, which conducted the analysis. "They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing." For instance, screenshots from the EVA AI chatbot show text saying "I love it when you send me your photos and voice," and asking whether someone is "ready to share all your secrets and desires."
Caltrider says these apps and websites have multiple issues. Many may not be clear about what data they share with third parties, where they are based, or who creates them. Some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.
Take Romantic AI, a service that allows you to "create your own AI girlfriend." Promotional images on its homepage depict a chatbot sending a message saying, "Just bought new lingerie. Wanna see it?" The app's privacy documents, according to the Mozilla analysis, say it won't sell people's data. However, when the researchers tested the app, they found it "sent out 24,354 ad trackers within one minute of use." Romantic AI, like most of the companies highlighted in Mozilla's research, did not respond to WIRED's request for comment. Other apps monitored had hundreds of trackers.
In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. "The legal documentation was vague, hard to understand, not very specific, kind of boilerplate stuff," Caltrider says, adding that this may erode people's trust in the companies.