"Public by Intent": Lovable AI Faces Scrutiny Over Massive Project Exposure


A reported Broken Object Level Authorization (BOLA) flaw on the Lovable AI platform allegedly exposed thousands of private coding projects, prompts, and credentials, sparking a debate over "vibe coding" security standards.

Stockholm, Sweden: Lovable, the rapidly growing AI "app builder" platform, has found itself at the center of a major security controversy. Independent researchers reported an API vulnerability that allegedly allowed anyone to access private user projects, AI prompts, and even hard-coded credentials. While the company initially denied a "mass data breach," claiming many public settings were intentional, it has since admitted to an error in chat visibility settings and issued a public apology.

The incident highlights the growing pains of the "vibe coding" movement, where rapid AI-assisted development often outpaces traditional security guardrails.

Post-Mortem: Lovable Project Visibility Issue

  • API Vulnerability: Broken Object Level Authorization (BOLA) allowed unauthorized access to project metadata via ID manipulation.
  • Exposed Assets: AI chat history, system prompts, generated source code, and potentially embedded environment variables.
  • Current Status: The reported "Chat Visibility" flaw has been patched, and the platform's security documentation has been updated to reflect strict privacy defaults.

The Mechanism: BOLA and "Public-by-Default" Logic

The core of the exposure stems from a Broken Object Level Authorization (BOLA) vulnerability within Lovable’s API. Researchers found that by manipulating project IDs in API requests, they could retrieve data from projects that users believed were private.
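To make the failure mode concrete, here is a minimal sketch of a BOLA flaw in the style described above. The endpoint logic, project data, and user names are invented for illustration; the point is that the vulnerable path looks up an object by ID without ever checking who is asking.

```python
# Hypothetical project store; all names and data are illustrative.
PROJECTS = {
    101: {"owner": "alice", "private": True, "prompt": "Build a CRM"},
    102: {"owner": "bob", "private": False, "prompt": "Landing page"},
}

def get_project_vulnerable(project_id, requesting_user):
    # BOLA: serves any project that exists, ignoring ownership and
    # privacy flags. Attackers need only enumerate plausible IDs.
    return PROJECTS.get(project_id)

def get_project_fixed(project_id, requesting_user):
    # Object-level authorization: verify the caller may see this
    # specific object before returning it.
    project = PROJECTS.get(project_id)
    if project is None:
        return None
    if project["private"] and project["owner"] != requesting_user:
        return None  # deny instead of leaking the foreign object
    return project

# An unrelated user can read alice's private project via the
# vulnerable path, but not via the fixed one.
assert get_project_vulnerable(101, "mallory") is not None
assert get_project_fixed(101, "mallory") is None
```

The fix is per-object, not per-endpoint: authenticating the caller is not enough, because the check must compare the caller against the ownership of the specific record being fetched.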

According to reports from The Register and Sifted, the exposed data included:

  • AI Prompts & Logic: The specific instructions users gave to the AI to build their applications.
  • Sensitive Assets: Source code, environment variables, and in some cases, API keys hard-coded into generated apps.
  • User Identity: Metadata linking specific projects to high-profile corporate accounts.
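The hard-coded credentials in that list are the most durable damage: anyone who read the generated source also read the keys. A minimal sketch of the anti-pattern versus the environment-variable alternative (the key value and variable name are illustrative, not from the exposed projects):

```python
import os

# Anti-pattern reportedly found in exposed apps: a secret embedded
# directly in source code, visible to anyone who can read the project.
API_KEY = "sk-example-not-a-real-key"

# Safer: load the secret from the environment at runtime, so the
# source code can leak without leaking the credential.
api_key = os.environ.get("SERVICE_API_KEY")  # hypothetical variable name
```

Keys that shipped inside exposed projects must be treated as compromised and rotated; moving to environment variables only protects secrets going forward.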

The Denial and The Correction

Lovable's initial response on X (formerly Twitter) and via The Economic Times was a firm denial of a breach, stating that "public projects are a feature, not a bug." However, as Business Insider and CyberNews highlighted, the distinction between "intentionally public" and "accidentally exposed" was blurred by the API's failure to enforce strict permissions.

Following a wave of criticism from the security community on Reddit and Hacker News, Lovable CEO Anton Osika admitted that a misconfiguration in the "Chat Visibility" settings allowed internal project discussions to be viewed without authorization. The company has since patched the flaw and updated its security documentation to clarify project privacy tiers.


The CyberSignal Analysis

Signal 01 — The "Vibe Coding" Security Debt

This incident is a definitive signal for vulnerability management. The "vibe coding" trend prioritizes the speed of "idea to app," often bypassing the Security Development Lifecycle (SDL). The signal for 2026 is that AI-generated code is only as secure as the platform hosting it. Enterprises using these builders must treat them as high-risk third-party dependencies. To understand how to manage these rapid-deployment risks, see our deep dive on supply chain attacks.

Signal 02 — API Authorization is the New Perimeter

This is a high-fidelity signal for application security. As we move toward API-first AI platforms, BOLA (formerly known as IDOR) remains the top API threat, ranked #1 in the OWASP API Security Top 10. The signal is that "functional correctness" in an AI app builder does not equal "security correctness." Security teams must implement automated API security testing that specifically looks for authorization bypasses in AI-orchestrated environments.
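One way to automate such a check is a simple probe: request a set of known resource IDs with a token that should own none of them, and flag any that come back successfully. The `fetch` callable and stub API below are hypothetical stand-ins for a real API client and seeded test fixtures:

```python
def bola_probe(fetch, victim_ids, attacker_token):
    """Return the IDs an attacker token can read but should not.

    `fetch(resource_id, token)` is assumed to return an
    (http_status, body) pair; the attacker_token must not own
    any of the victim_ids.
    """
    leaked = []
    for resource_id in victim_ids:
        status, _body = fetch(resource_id, attacker_token)
        if status == 200:  # authorization bypass: foreign object served
            leaked.append(resource_id)
    return leaked

# Stub fixtures for illustration: each resource has one owning token.
OWNERS = {1: "owner-1", 2: "owner-2"}

def healthy_fetch(resource_id, token):
    # Enforces object-level authorization.
    if OWNERS.get(resource_id) == token:
        return 200, {"id": resource_id}
    return 403, None

def broken_fetch(resource_id, token):
    # Simulates the flaw: serves any existing object, any token.
    if resource_id in OWNERS:
        return 200, {"id": resource_id}
    return 404, None

assert bola_probe(healthy_fetch, [1, 2], "attacker") == []
assert bola_probe(broken_fetch, [1, 2], "attacker") == [1, 2]
```

Wired into CI against a staging environment with seeded multi-tenant data, a probe like this turns object-level authorization from a code-review hope into a regression test.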

Signal 03 — The Transparency Trap

The "Public by Intent" defense used by Lovable is a signal for data governance. There is a widening gap between what a platform considers "public" and what a corporate user expects to be "private." The signal for 2026 is that "Privacy by Design" must be the default for AI B2B tools; any "Public" feature must require a conscious, multi-step opt-in to avoid catastrophic metadata leaks.


Sources

  • Investigative News: The Register, "Denial & Analysis"
  • Business Press: Sifted, "Start-up Scrutiny"
  • Cybersecurity: CyberNews, "Official Apology"
