WIRED Found 380,000 Vibe-Coded Apps on the Open Web — and 5,000 of Them Were Leaking Corporate Data

[Illustration: minimalist white line art on maroon of an open vault door spilling files onto a globe grid, with an unlocked padlock above.]

WIRED's Andy Greenberg published an investigation on May 7, 2026, reporting that Israeli cybersecurity firm RedAccess found roughly 380,000 publicly accessible web apps built with AI "vibe-coding" platforms (Lovable, Base44, Replit, and Netlify). Of those, around 5,000 exposed sensitive corporate data; a separate count identified about 5,000 apps with little to no authentication, roughly 40 percent of which leaked sensitive data. RedAccess CEO Dor Zvi told Axios his team found the apps while researching "shadow AI" usage for customers; Axios independently verified examples, including a shipping company's vessel-schedule app and a UK clinical-trial tracker. The findings continue a pattern first made visible by the historical CVE-2025-48757 case (170+ Lovable apps missing Supabase Row Level Security), and the systemic root cause is the same: AI code generators that ship insecure database-access defaults.

On May 7, 2026, WIRED published Andy Greenberg's investigation "Thousands of Vibe-Coded Apps Expose Corporate and Personal Data on the Open Web," anchored on research from Israeli cybersecurity firm RedAccess. The numbers are the headline: RedAccess identified approximately 380,000 publicly accessible apps built on Lovable, Base44, Replit, and Netlify, including roughly 5,000 that contained sensitive corporate data. A separate count found about 5,000 web apps with little or no authentication; approximately 40 percent of those exposed sensitive data. Axios's parallel coverage of the same RedAccess research independently verified multiple exposed apps, including one for a shipping company detailing which vessels were expected at which ports, and an internal application for a health company tracking active clinical trials in the UK. RedAccess CEO Dor Zvi told Axios his team found the apps while researching "shadow AI" — unauthorized employee use of AI tools — for customers. The exposure is, in many cases, a default-settings problem: privacy settings on some vibe-coding platforms make apps publicly accessible unless the creator manually changes them, and the apps are indexed by Google.

The single most consequential element is the structural root cause. The same architectural pattern — AI code generators producing client-side applications backed by Supabase or Firebase, with insecure-by-default access controls — has now produced two separate quantified disclosures: WIRED's 380,000-asset RedAccess survey announced May 7, 2026, and the historical CVE-2025-48757 disclosure from researcher Matt Palmer in May 2025, which documented 170+ Lovable applications with missing Supabase Row Level Security policies. The CVE — which received a CVSS 9.3 critical rating — was the first public quantification of the problem; WIRED and RedAccess have now made it visible at orders-of-magnitude greater scale. The intervening 12 months produced platform statements, security scanners, and changes to defaults at Lovable and other vendors, but the underlying generation pattern persists. AI code generators are still shipping apps whose builders do not realize that "the database is unlocked" and "the database is locked" look identical from inside the AI builder. The vendors dispute RedAccess's methodology and disclosure timeline; the empirical exposure has been independently verified by multiple journalists.

WIRED / RedAccess Vibe-Coded Apps Profile
  • Publication: WIRED (Andy Greenberg), May 7, 2026; parallel coverage in Axios (Sara Fischer)
  • Primary research: RedAccess (Israeli cybersecurity firm); CEO Dor Zvi; research originated as part of "shadow AI" customer engagements
  • Platforms surveyed: Lovable, Base44 (owned by Wix), Replit, Netlify
  • Total exposed assets: Approximately 380,000 publicly accessible vibe-coded apps identified by RedAccess
  • Sensitive data exposure: ~5,000 apps containing sensitive corporate data; ~5,000 web apps with little/no authentication, ~40% of which exposed sensitive data
  • Independently verified examples: Per Axios's verification, a shipping company's vessel-schedule app; an internal health-company UK clinical-trial tracker; phishing sites built on Lovable that the company is now reviewing and removing
  • Default-settings issue: Privacy settings on some vibe-coding tools default to public unless the creator manually changes them; apps are indexed by Google and similar search engines
  • Vendor responses (per Axios): Replit CEO Amjad Masad claimed RedAccess gave 24 hours' notice and no impacted-user list. Lovable spokesperson Samyutha Reddy: investigating; the report had no URLs or technical specifics. Wix (owner of Base44) spokesperson Blake Brodie: RedAccess withheld URLs; two allegedly exposed apps were "deliberately set to public by their owners." Netlify did not respond.
  • Historical precedent: CVE-2025-48757 (CVSS 9.3), disclosed May 29, 2025 by researcher Matt Palmer after a 45-day disclosure window; 170+ Lovable production apps with missing or insufficient Supabase Row Level Security policies; 303 vulnerable endpoints across 10.3% of analyzed Lovable projects
  • CVE root cause: Lovable's AI generated Supabase database schemas without enabling Row Level Security; the client-side public anon key gave full read/write access to unprotected tables; first identified by Palmer on March 20, 2025 on Linkable (linkable.site, now offline)
  • Lovable post-CVE response: Per Palmer's published statement, Lovable confirmed receipt but never responded; it later introduced a "security scanner" that checks only for the existence of any RLS policy, not its correctness
  • Common architectural pattern: Most vibe-coded apps are client-side React/TypeScript with a Supabase Postgres backend; a client-side public anon key is normal, and security depends entirely on RLS policy correctness; AI generators frequently skip or misconfigure RLS

How Vibe-Coded Apps Leak Data

The architectural pattern that produces these exposures is consistent across the major vibe-coding platforms. A non-developer describes an application — "build me a customer feedback tool that stores responses in a database" — and the AI generates a React or TypeScript front end backed by a Supabase Postgres database (or, less commonly, Firebase). Supabase's design intentionally exposes a public API key called the anon key in the client-side code; that is normal and supported. What makes the architecture safe in expert hands is Row Level Security: declarative database policies that restrict which rows can be read, modified, or deleted depending on the requesting user's authentication state. With proper RLS, the public anon key gives only the access the policies allow. Without RLS, the public anon key gives full read and write access to every table in the database. The AI generator's job, when producing this kind of app, is to generate both the schema and the RLS policies that match the application logic. Across multiple independent investigations, that second step is where AI generators fail.
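To make the distinction concrete, here is a minimal sketch of what correct RLS looks like in Supabase's flavor of Postgres. The table and column names (`public.feedback`, `owner_id`) are hypothetical, chosen to match the customer-feedback example above; `auth.uid()` is Supabase's built-in function returning the requesting user's ID.

```sql
-- Hypothetical schema for the feedback example; names are illustrative.
-- Enabling RLS alone denies all access through the anon key...
alter table public.feedback enable row level security;

-- ...so policies must then grant exactly the intended access.
-- Owners can read only their own rows:
create policy "owners read own rows"
  on public.feedback for select
  using (auth.uid() = owner_id);

-- Authenticated users can insert only rows they own:
create policy "users insert own rows"
  on public.feedback for insert
  with check (auth.uid() = owner_id);
```

The second step — writing policies that encode application-specific ownership logic rather than a blanket grant — is precisely the step the investigations describe AI generators skipping or getting wrong.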

The failure mode is the most dangerous kind: it does not break anything visible. The application works perfectly for the user who built it. They prompt for a customer-feedback form; they get a customer-feedback form; they submit a test entry; it appears in the database. They publish the app. What they do not see is that an attacker who opens browser developer tools, copies the anon key, and queries the database directly — using the documented Supabase SDK — can read every other entry, modify any entry, or delete the entire table. The RLS policies that would prevent this either don't exist or only check for the simplest case ("is the user logged in?") rather than enforcing application-specific access logic ("is this user the owner of this row?"). The user has no way to know this from the app builder UI; the AI does not warn them; the platform's automated security scanner, where one exists, only checks whether some RLS policy is enabled, not whether it matches the application's actual access requirements. CyberSignal's application security coverage tracks the broader pattern of AI-generated code introducing systemic vulnerability classes.
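The direct-query path described above requires nothing beyond what the published app already hands every visitor. A sketch of the request shape, with placeholders left unfilled — the project reference and anon key come straight out of the app's JavaScript bundle, and `feedback` is the same hypothetical table name used earlier:

```shell
# Sketch only: <project-ref> and <anon-key> are extracted from the
# published app's client-side bundle; 'feedback' is a hypothetical table.
curl "https://<project-ref>.supabase.co/rest/v1/feedback?select=*" \
  -H "apikey: <anon-key>" \
  -H "Authorization: Bearer <anon-key>"
# With RLS missing on the table, this returns every row as JSON.
```

This is Supabase's documented REST interface working as designed; the vulnerability is entirely in the absent or incorrect policies behind it.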

The CVE-2025-48757 Precedent — and Why It Did Not Fix the Problem

Researcher Matt Palmer's CVE-2025-48757 disclosure in May 2025 was the first public quantification of this exact pattern at scale. Palmer initially identified the vulnerability on March 20, 2025 while examining Linkable (linkable.site, now offline), a Lovable-built site for generating LinkedIn-derived websites. He found that modifying a single network request granted access to the project's entire users table. He flagged this on Lovable's Twitter account; Lovable initially denied the issue, then deleted their tweets and the site. Palmer then conducted a broader scan of Lovable-generated production applications and found 170+ apps with fully accessible databases, with 303 vulnerable endpoints across roughly 10.3 percent of analyzed Lovable projects. Exposed data included emails, addresses, API keys, and financial records. The CVE — assigned a CVSS score of 9.3 — was published May 29, 2025 after Palmer's 45-day disclosure window expired. Per Palmer's published statement: Lovable confirmed receipt but never responded with meaningful remediation or user notification.

Lovable's subsequent response — introducing a "security scanner" that checks for the existence of any RLS policy on a table — illustrates why CVE-2025-48757 did not fix the problem. The scanner verifies that some RLS policy is enabled. It does not verify that the policy is correct. The two are very different. A table can have an RLS policy that effectively grants public read access (a permissive policy that always returns true) while satisfying the scanner's existence check. The scanner provides false reassurance: developers who run it and see "all tables have RLS enabled" believe their app is secure, when in fact the policies may not match the application's actual access logic. Palmer's published statement notes this directly: the scanner "merely checks for the existence of any RLS policy, not its correctness or alignment with application logic. This provides a false sense of security, failing to detect the misconfigurations that expose data." That failure mode is part of what RedAccess's broader survey now captures at scale.
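The existence-versus-correctness gap can be modeled in a few lines. This is an illustrative simulation, not Lovable's actual scanner: policies are treated as row predicates, and the point is that a permissive policy passes an existence check while restricting nothing.

```python
# Illustrative model only (not Lovable's scanner): RLS policies as row
# predicates, showing why "a policy exists" and "the policy is correct"
# are different checks.
ROWS = [
    {"id": 1, "owner": "alice", "feedback": "internal pricing notes"},
    {"id": 2, "owner": "bob", "feedback": "bug report"},
]

def permissive_policy(row, requesting_user):
    # Analogue of `USING (true)`: satisfies an existence check,
    # but grants every visitor access to every row.
    return True

def owner_policy(row, requesting_user):
    # Analogue of `USING (owner = auth.uid())`: application-specific logic.
    return row["owner"] == requesting_user

def visible_rows(policy, requesting_user):
    # Rows the database would return to this user under the given policy.
    return [r for r in ROWS if policy(r, requesting_user)]

def existence_check(policy):
    # All an existence-only scanner verifies: *some* policy is attached.
    return policy is not None
```

Both policies pass `existence_check`, but `visible_rows(permissive_policy, "mallory")` returns every row while `visible_rows(owner_policy, "mallory")` returns nothing — the scanner's green light says nothing about which of the two was generated.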

What the Vendor Responses Tell Defenders

Per Axios's coverage of the RedAccess research, the four named vendors had different and informative responses. Replit's CEO Amjad Masad publicly claimed via X that RedAccess gave the company only 24 hours before going to the press and did not share a list of impacted users. Lovable spokesperson Samyutha Reddy said the company is still investigating but that RedAccess's report "did not include any URLs or technical specifics that would allow us to verify, investigate or act on the findings described," while also noting Lovable has started reviewing and removing phishing sites RedAccess identified. Wix spokesperson Blake Brodie (Wix owns Base44) told Axios that RedAccess "deliberately withheld the URLs that would have allowed us to identify and examine the applications in question," and that two of the allegedly exposed applications were "deliberately set to public by their owners." Netlify did not respond.

Two readings of the vendor pushback are reasonable. First, RedAccess's disclosure timeline appears compressed compared to typical responsible-disclosure windows, and vendor frustration about not receiving impacted-user lists is a legitimate concern. Second, however, the vendor responses do not dispute the core RedAccess and WIRED finding: that hundreds of thousands of apps built on these platforms are publicly accessible by default, that thousands contain sensitive corporate data, and that Axios was able to independently verify multiple specific exposed apps. The disagreement is largely procedural rather than substantive. From a CISO's standpoint, the operational implication is the same regardless of which side is right about disclosure timing: if your organization has employees building apps on Lovable, Base44, Replit, or Netlify, those apps may be exposing your organization's data right now, and the vendors are not yet positioned to give you a confident answer about whether they are.

Defender Actions for Organizations Whose Staff Are Vibe-Coding

  • Survey vibe-coding tool usage in your organization. Issue a confidential employee survey or audit cloud-platform billing records (Lovable, Base44, Replit, Bolt.new, Cursor, v0, Netlify charges to corporate credit cards or expensed personal cards). The 380,000-app figure indicates broad usage across the working population; assume your organization is part of it.
  • Establish an approved-platform list and require security review before any vibe-coded app handles customer or corporate data. Pick one or two platforms that meet your security baseline; document configuration requirements; communicate this to engineering and business teams. The base rate of critical vulnerabilities (10.3 percent of Lovable apps in Palmer's CVE survey, and a similar order of magnitude in RedAccess's findings) means review is non-negotiable for anything handling sensitive data.
  • Verify Supabase Row Level Security on every table that holds user or corporate data — and verify the policies are correct, not just that they exist. The default scanner check on most platforms verifies existence only. Test the unhappy path: send unauthenticated requests to authenticated endpoints, send authenticated requests with manipulated user IDs, attempt sequential-ID enumeration on resource-fetching endpoints, attempt direct database queries using the public anon key. If any of those return data the user shouldn't have access to, the RLS policies are insufficient regardless of whether the platform's scanner reports green.
  • Audit for hardcoded API keys in client-side bundles. Independent scans of vibe-coded apps have found OpenAI sk-proj keys, Anthropic API keys, and other production credentials shipped in /assets/index-*.js bundles. Use git-secrets, gitleaks, or platform-specific scanners. The base rate is high enough — independent surveys have reported roughly 1 in 15 Bolt.host apps and 1 in 4 Vercel apps with hardcoded credentials — that this is worth a one-time scan even if you have no specific reason to believe your apps are affected.
  • For AppSec leaders: brief the CISO and CIO that AI-generated code is now a recurring vulnerability source with measurable failure rates. Your application security program needs a vibe-coding-specific module: pre-deployment scanning, post-deployment monitoring, and developer education on the specific failure modes (RLS, hardcoded keys, agentic autonomy without bounds). Treat this as a category, not as one-off incidents.
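The hardcoded-key audit above can start with a one-time regex sweep of client-side bundles. A minimal sketch follows; the two patterns are illustrative only, and production scanners such as gitleaks maintain far larger, vetted rule sets. The function and variable names here are hypothetical.

```python
import re

# Illustrative patterns only; real scanners (gitleaks, git-secrets)
# maintain curated, regularly updated rule sets.
KEY_PATTERNS = {
    "anthropic_key": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "openai_project_key": re.compile(r"sk-proj-[A-Za-z0-9_-]{20,}"),
}

def scan_bundle(text):
    """Return the names of key patterns found in a JS bundle's text."""
    return sorted(name for name, pat in KEY_PATTERNS.items()
                  if pat.search(text))

# Hypothetical bundle contents for demonstration:
clean = 'fetch("/api/chat", {method: "POST"})'
leaky = 'const OPENAI_KEY = "sk-proj-' + "A" * 24 + '";'
```

Running `scan_bundle` over every `/assets/index-*.js` file downloaded from an app's public URL flags the obvious cases; anything it finds should be rotated immediately, since the bundle is by definition public.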

The CyberSignal Analysis

Signal 01 — RLS misconfiguration is now an enterprise-class vulnerability category

Across two independent quantified disclosures separated by 12 months — Palmer's CVE-2025-48757 with 170+ Lovable apps in May 2025, RedAccess's WIRED-anchored survey with ~5,000 apps containing sensitive data in May 2026 — the dominant root cause is missing or insufficient Supabase Row Level Security policies. This is no longer a vendor-specific issue or a single-incident pattern; it is a systemic vulnerability category with a well-defined failure mode (AI generates client-side React with Supabase backend; AI fails to generate correct RLS; public anon key in client gives full database access). For application security leaders, this means "check Supabase RLS policies on every table holding sensitive data" should now sit on the standard AppSec checklist alongside SQL injection, XSS, and SSRF — not as an emerging threat but as a baseline configuration audit. The defensive controls are well-understood: enable RLS, write policies that match application-specific access logic (not just "is logged in"), and verify behavior with adversarial testing. The challenge is process — getting these checks into the build pipeline for apps that originated outside the engineering organization's normal SDLC.

Signal 02 — Shadow AI IT is the dominant deployment surface

RedAccess's CEO told Axios his team found these apps while researching shadow AI usage for customers — meaning the 380,000-app figure was an incidental finding from a different research program. That framing matters because it tells us where these exposures are coming from. They are not from sanctioned engineering teams running formal SDLCs. They are from sales operations, marketing teams, customer success groups, business analysts, and product managers using vibe-coding platforms to build internal tools, prototypes, customer-facing demos, and side projects without involving IT or AppSec. The shadow-AI-IT pattern was always going to grow given the time-saving promise of these platforms; the data exposures are the empirical evidence that it has grown faster than enterprise governance has adapted. CISOs whose policies treat AI coding tools as analogous to GitHub Copilot — a developer-productivity tool used by people who already understand security implications — are mismatched against the actual usage profile. The policy framework needs to address non-developer use cases explicitly: who can build apps, what data they can include, what review they need before publication, and what continuous monitoring applies after.

Signal 03 — Default-public is the new default-insecure

WIRED and RedAccess specifically note that some vibe-coding platforms ship with privacy settings defaulted to public — meaning apps are accessible to anyone, indexed by Google, and discoverable through normal web searches unless the creator manually changes them. This is a deliberate product decision (the platforms want apps to be shareable by default; default-private would create friction in the typical "build something, send a link to a friend" flow). It is also why the exposure surface is so large: most users do not change defaults, most users do not realize their app's data is publicly accessible by virtue of being publicly visible, and most users do not have the security-architecture mental model to translate "my app is public" into "my Supabase database is queryable by any visitor with the public anon key." The closest historical analogy is the AWS S3 bucket misconfiguration era, which produced years of incidents and regulatory fines until AWS finally changed bucket defaults to private. Vibe-coding platforms are early in that arc. The defensive response for organizations is not to wait for vendors to change defaults — that may take years — but to make "is this app public" and "are RLS policies correct" mandatory pre-publication checks as part of the AppSec program. The rest of 2026 will produce more of these exposures; the question for CISOs is whether their organizations are part of the count.


Sources

  • Primary: WIRED (Andy Greenberg), "Thousands of Vibe-Coded Apps Expose Corporate and Personal Data on the Open Web"
  • Reporting: Axios, "AI Vibe-Coding Apps Leak Sensitive Data" (parallel coverage of the RedAccess research with vendor responses)
  • Primary (historical): Matt Palmer, Statement on CVE-2025-48757 (the original Lovable RLS disclosure)
  • Primary (CVE): Matt Palmer, CVE-2025-48757 Technical Details (Supabase RLS Misconfiguration on Lovable Projects)
  • Analysis: Security Online, "CVE-2025-48757: Lovable's Row-Level Security Breakdown"
  • Analysis: OODA Loop, "AI Vibe-Coding Apps Leak Sensitive Data" (analysis of RedAccess findings)
