That perfect online girlfriend flirting with you might not be who you think. Everyone feels lonely at times and wants a companion who truly understands them. Then one day you stumble upon the “AI girlfriend,” who seems to offer all the affection you could want. But she is no substitute for a human partner, and she may break your heart in her own way: behind these AI-generated girlfriends lie massive privacy vulnerabilities that could affect millions of ‘boyfriends.’
Researchers from the Mozilla Foundation dissected these popular “AI girlfriend” chatbots, which have been downloaded over 100 million times. They analysed 11 romantic companion chatbots on Android and, surprise, surprise, found major privacy and security woes (yes, AI girlfriends can also gossip).
“These apps are designed to collect a ton of personal information,” said Jen Caltrider, the project lead on Mozilla’s study. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.”
Once they have your data, it is no longer private, even if the apps claim to keep it to themselves. The researchers say these apps collect data systematically, have weak security controls that expose sensitive messages, and could be tracking your activity. No one truly knows who is on the other side of that chat screen.
“These apps push intimacy, but give no clarity on how they use and secure highly personal information,” added Caltrider. “Consumers are falling for glossy AI profiles while opening themselves to privacy invasions.”
The apps provide little clarity on what user data they share and sell. According to Wired, the apps studied allow weak one-character passwords and contain hundreds of hidden trackers sending data to companies such as Google and Facebook, as well as to firms in Russia and China. “The legal documentation was vague, hard to understand, not very specific – kind of boilerplate stuff,” said Caltrider.
Some apps like Mimico do not disclose ownership or location, listing only generic contact emails. “These were very small app developers that were nameless, faceless, placeless,” Caltrider told Wired.
Of particular concern, apps like CrushOn.AI openly state in their privacy policies that they may collect highly sensitive information about users’ sexual health, medications, and gender transition status and care. Other apps reference or condone fantasies around underage relationships, abuse, and dangerous behaviour.
While the app markets mental health benefits, the legal disclaimers in Romantic AI’s terms and conditions state, “Romantic AI MAKES NO CLAIMS, REPRESENTATIONS, WARRANTIES, OR GUARANTEES THAT THE SERVICE PROVIDE A THERAPEUTIC, MEDICAL, OR OTHER PROFESSIONAL HELP.”
Chris Gilliard, a professor at Macomb Community College who studies discrimination in AI systems, told Wired, “They’re essentially trying to negate any sense of responsibility while simultaneously making these grandiose claims about providing intimacy and care.”
As these AI chatbots gain popularity, privacy advocates advise caution before trusting them. A chatbot may market itself as a personalised romantic partner, but check its data policies, security controls, and transparency around its AI models first.
When an AI seems a little too perfect at connecting with you, it may be mining your vulnerabilities rather than caring for your heart. “You shouldn’t have to pay for cool new technologies with your safety or your privacy,” the Mozilla report aptly concludes.