Pushpaganda: How AI-Generated Bait is Poisoning Google Discover Feeds


A sophisticated ad-fraud and scareware campaign dubbed "Pushpaganda" is successfully exploiting Google Discover’s recommendation algorithm, using AI-generated fake news to trick millions into enabling malicious push notifications.

MOUNTAIN VIEW, UNITED STATES — Security researchers have uncovered a novel exploitation of the Google Discover ecosystem, where threat actors are using "hyper-personalized" AI content to bypass traditional quality filters. Dubbed "Pushpaganda" by experts at NetManageIT and The Hacker News, the scheme weaponizes Google’s own recommendation engine to deliver malicious notifications directly to Android and iOS devices.

Unlike traditional phishing, which relies on email or SMS, Pushpaganda lives entirely within the trusted "Discover" feed on mobile browsers and home screens. By flooding the platform with AI-generated articles on trending topics, scammers are finding "the perfect home" for ad fraud and scareware.

Pushpaganda Incident Timeline

  • Late 2025 (Regional Testing): Early variants detected in the Indian financial sector, using AI-generated tax and investment bait.
  • February 2026 (Global Expansion): The campaign shifts to US and UK markets, incorporating "deepfake" celebrity news to bypass Google’s E-E-A-T filters.
  • April 7-14, 2026 (Peak Activity): Human Security observes a peak of 240 million ad bid requests across 113 malicious domains in a single week.
  • April 17, 2026 (Platform Mitigation): Google confirms the deployment of algorithmic fixes to Discover and Search to purge Pushpaganda AI clusters.

The Mechanism: Exploiting the Feed for Notification Access

The attack relies on a two-stage psychological trick: the "Bait" and the "Hook." The goal is not just a single click, but the permanent permission to send push notifications.

  • Discovery (the Bait): AI-generated news articles designed to rank highly in Google Discover’s recommendation algorithm.
  • Conversion (the Hook): Deceptive browser prompts requiring an "Allow" click to "verify identity" or access content.
  • Exploitation: Persistent delivery of ad fraud, scareware alerts, and credential phishing via push notifications.

According to technical breakdowns from TechRadar and SC Media, the operation follows a distinct pattern:

  • The AI Bait: Attackers use Large Language Models (LLMs) to generate thousands of "news" articles about celebrities, financial market shifts, or urgent security warnings. Because this content is optimized for Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) signals, it is frequently promoted to the top of users' Discover feeds.
  • The Permission Prompt: Once a user clicks, they are redirected to a low-quality site that immediately triggers a browser prompt: "Click 'Allow' to verify you are not a robot" or "Allow notifications to read the full story."
  • The Pushpaganda Flood: Once "Allow" is clicked, the user's device is flooded with a constant stream of malicious notifications. These range from "System Virus Detected" scareware to fraudulent financial "opportunities" and aggressive ad-tracking scripts.
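The payloads described above tend to reuse a narrow set of scare and lure phrases, which makes simple text heuristics a useful first-pass filter. The sketch below is purely illustrative: the keyword list is our own assumption, not the researchers' actual indicators of compromise.

```javascript
// Hypothetical heuristic: flag push-notification payloads that match
// common Pushpaganda-style scareware and lure phrases. The pattern list
// below is illustrative only, not drawn from published indicators.
const SCARE_PATTERNS = [
  /virus detected/i,
  /your (phone|device) is infected/i,
  /verify (you are not a robot|your identity)/i,
  /claim your (prize|reward)/i,
  /urgent.*(invest|account)/i,
];

function looksLikePushScam(title, body) {
  const text = `${title} ${body}`;
  return SCARE_PATTERNS.some((re) => re.test(text));
}

console.log(looksLikePushScam("System Virus Detected", "Tap to clean now")); // true
console.log(looksLikePushScam("Weather update", "Rain expected tomorrow")); // false
```

In practice, a filter like this would sit in a monitoring pipeline or browser extension rather than replace revoking the notification permission itself, which is the only durable fix.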

PYMNTS reports that the campaign is particularly effective because it hijacks the user's habitual trust in the Google app ecosystem, leading to significantly higher "permission grant" rates than traditional malicious websites.

A Regional Focus: Targeting Financial Sectors

While the campaign is global, researchers at FyntraLink have noted a significant spike in Pushpaganda articles tailored to the Saudi Arabian and wider Middle Eastern financial sectors. Scammers are using AI to translate bait into fluent local dialects, making the fake news indistinguishable from legitimate local reporting.

While Google has confirmed that mitigations were deployed, it has not detailed the specific changes to the Discover algorithm, and security leaders warn that the "arms race" between AI-generated spam and algorithmic filters is reaching a breaking point.


The CyberSignal Analysis

Signal 01 — The Algorithmic Attack Surface

This incident is a definitive signal for social engineering. Threat actors have realized they no longer need to find vulnerabilities in code if they can manipulate the algorithms that curate our reality. The rapid scaling of this campaign — peaking at 240 million bid requests in early April — proves that "algorithmic reputation" is a security asset that can be poisoned.
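To put the reported peak in perspective, a back-of-envelope calculation from the published figures (240 million bid requests across 113 domains in one week) gives the average load per domain; the per-domain arithmetic is our own, not a figure from the researchers.

```javascript
// Back-of-envelope scale check using the figures reported for the
// April 7-14 peak: 240 million ad bid requests, 113 domains, 7 days.
const bidRequests = 240_000_000;
const domains = 113;
const days = 7;

const perDomainPerDay = bidRequests / domains / days;
console.log(Math.round(perDomainPerDay)); // ≈ 303,413 bid requests per domain per day
```

Roughly 300,000 bid requests per domain per day is content-farm throughput that no human editorial operation could sustain, which is the point of the "industrial scale" argument below.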

Signal 02 — The Rise of Generative Ad Fraud

This is a high-fidelity signal for Shadow AI. Much like the SystemBC botnet integration by "The Gentlemen", the Pushpaganda campaign uses AI to achieve a level of industrial scale that was previously impossible. When an attacker can generate content faster than a platform can moderate it, they don't need a technical exploit; they simply overwhelm the system with volume.

Signal 03 — Managing Digital Notifications

The "Hook" of this attack is the browser's push notification system. To help your team identify and disable these malicious permissions, see our guide on the most common cybersecurity threats for organizations in 2026, which includes a section on browser hardening and "Notification Fatigue."
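For organizations managing Chrome fleets, notification prompts can also be suppressed centrally. The fragment below is a minimal sketch assuming Chrome's enterprise policy mechanism, using the `DefaultNotificationsSetting` policy (2 = block) with an allowlist via `NotificationsAllowedForUrls`; the intranet URL is a placeholder.

```json
{
  "DefaultNotificationsSetting": 2,
  "NotificationsAllowedForUrls": [
    "https://intranet.example.com"
  ]
}
```

Blocking by default and allowlisting known-good origins removes the "Allow" prompt entirely, which defeats the Hook stage regardless of how convincing the Bait is.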


Sources

  • Threat Intel: The Hacker News, "AI-Driven Pushpaganda"
  • Strategic Analysis: TechRadar, "Algorithmic Social Engineering"
  • Regional Brief: FyntraLink, "Saudi Financial Sector Targets"
  • Financial Fraud: PYMNTS, "The 'Perfect Home' for Fake News"
