MyLovely.ai Data Breach Exposes Thousands of Private User Conversations
A major vulnerability in a prominent companion AI platform has led to the public exposure of sensitive prompts and account identifiers, highlighting the privacy risks of generative AI data retention.
SAN FRANCISCO — Security researchers have confirmed a significant data breach at MyLovely.ai, a popular "AI companion" platform. The incident has resulted in the exposure of over 70,000 private conversation logs, many of which contain highly personal and NSFW (Not Safe For Work) interactions. The leaked dataset reportedly links specific prompts to individual user accounts, creating a serious risk of de-anonymization and targeted extortion.
Initial analysis suggests the breach occurred due to an improperly secured database that allowed unauthorized access to the platform’s backend. Beyond conversation history, the exposed data includes user email addresses, account creation dates, and metadata that could be used to identify users’ geographic locations. MyLovely.ai has since secured the vulnerability, but the leaked information has already begun circulating on specialized data-leak forums.
| Who is affected | |
|---|---|
| **Platform Users** — Individuals who engaged in private chats now face potential public exposure and extortion. | **AI Service Providers** — Platforms managing sensitive user prompts face increased regulatory scrutiny over data retention. |
| **Enterprise IT Teams** — Organizations must monitor for corporate email addresses used on compromised "Shadow AI" sites. | **Data Privacy Regulators** — Authorities are assessing potential GDPR and CCPA violations related to the mishandling of personal logs. |
The threat of de-anonymization and extortion
The primary concern for victims of the MyLovely.ai breach is the lack of anonymity in the leaked logs. While many users believe their interactions with AI are ephemeral or private, the storage of these logs in plaintext — associated with email addresses — makes them a goldmine for "doxing" campaigns. Threat actors often use such datasets to cross-reference leaked emails with social media profiles, leading to instances of "sextortion" where users are threatened with the release of their private prompts to family or employers.
Security firms monitoring the leak have noted that the data was accessible for an extended period before being remediated. This exposure window allowed multiple scrapers to index the database. Even as the platform moves toward better encryption, the permanence of the existing leak underscores the "data is a liability" principle, particularly when dealing with generative AI inputs that are uniquely identifiable to a user's personality or personal life.
Broader implications for AI data governance
The incident at MyLovely.ai serves as a warning for the wider AI industry. Unlike passwords or credit card numbers, which can be reset or reissued after a breach, leaked conversation history cannot be revoked. The breach also highlights the lack of standardized security frameworks for "companion" or "entertainment" AI, which often operate with less oversight than enterprise-grade AI assistants.
As more users integrate AI into their daily lives for companionship, therapy-adjacent talk, or creative writing, the volume of sensitive "unstructured data" being held by these companies is growing exponentially. Experts suggest that the next phase of AI security will require mandatory end-to-end encryption for prompt history and a "zero-retention" policy by default, ensuring that even if a server is compromised, the content of the user's thoughts remains private.
The CyberSignal analysis
Signal 01 — Prompt history as the new "PII"
This breach demonstrates that AI prompts should be classified as highly sensitive Personally Identifiable Information (PII). Because prompts often contain unique cadence, personal anecdotes, and specific desires, they are functionally as identifiable as a biometric marker. Organizations must stop viewing LLM logs as "disposable" and start treating them as high-value targets for exfiltration.
Signal 02 — The rise of "Shadow AI" in the workplace
Practitioners should check their logs for corporate domains used on consumer-grade AI sites. The MyLovely.ai breach likely includes corporate email addresses from employees who used work credentials for personal accounts. This creates a bridge for threat actors to pivot from a personal leak to a corporate Account Takeover (ATO) or spear-phishing campaign using the leaked personal context as leverage.
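The corporate-exposure check described above can be sketched in a few lines. This is a minimal illustration, not a reconstruction of the actual leak: the CSV layout, the `email` column name, and the domain list are all assumptions you would adapt to the real dump format.

```python
# Flag corporate email addresses appearing in a breach dump.
# Assumes a CSV with an "email" column; the file layout and the
# domain list are illustrative placeholders, not details from
# the real MyLovely.ai leak.
import csv

CORPORATE_DOMAINS = {"example.com", "corp.example.net"}  # your org's domains

def find_corporate_accounts(dump_path):
    """Return breach-dump emails that use a corporate domain."""
    hits = []
    with open(dump_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            email = row.get("email", "").strip().lower()
            domain = email.rpartition("@")[2]
            if domain in CORPORATE_DOMAINS:
                hits.append(email)
    return hits
```

Any matches warrant a forced password reset and heightened spear-phishing monitoring for the affected employees.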
Signal 03 — Retention is a security failure
The fact that 70,000 logs were available to be leaked suggests a failure in data lifecycle management. In the AI sector, "keeping everything" to train future models is a common but dangerous practice. Security-first platforms must move toward decentralized storage or local-only processing to eliminate the centralized database risk that led to this event.
What to do this week
- Scan for "Shadow AI" registrations. Use your email security gateway or CASB to search for outbound registrations to MyLovely.ai and similar companion platforms using company email addresses. Force password resets for any matches.
- Update your AI Acceptable Use Policy (AUP). Explicitly define which AI platforms are authorized for work use and prohibit the use of personal or sensitive corporate data in any consumer-grade AI prompt field.
- Monitor extortion-related phishing. Brief your executive protection teams on the possibility of spear-phishing attempts that leverage leaked "companion" AI logs. Attackers may use the sensitive nature of these logs to bypass traditional social engineering defenses.
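As a starting point for the first action item, a minimal egress-log scan might look like the following. The log format (a whitespace-separated email and destination domain per line) and the watchlist of companion-AI domains are assumptions for illustration; adapt the parsing to your gateway or CASB's actual export format.

```python
# Scan gateway/proxy log lines for corporate accounts contacting
# consumer "companion AI" domains. The whitespace-separated
# "email destination_domain" line format is an assumed example.
SHADOW_AI_DOMAINS = {"mylovely.ai"}  # extend with similar platforms

def shadow_ai_hits(log_lines, corporate_domain="example.com"):
    """Yield (email, destination) pairs where a corporate address
    contacted a watched companion-AI domain."""
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        email, dest = parts[0].lower(), parts[1].lower()
        if email.endswith("@" + corporate_domain) and dest in SHADOW_AI_DOMAINS:
            yield email, dest
```

Running this against historical egress logs gives a quick shortlist of accounts to target for the password resets recommended above.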
Sources
| Type | Source |
|---|---|
| Reporting | Malwarebytes Labs |
| Reporting | Help Net Security |
| Reporting | CyberNews |
| Reporting | HookPhish |