Categories: Cyber Security News

AI Chat App Data Breach Exposes 300 Million Messages from 25 Million Users

In a major privacy blunder, the popular mobile app “Chat & Ask AI” exposed 300 million private messages from 25 million users.

Available on Google Play and Apple’s App Store, this app lets people chat with AI models like ChatGPT, Claude, and Gemini.

An independent security researcher, known as Harry, uncovered the flaw and shared details with 404 Media.

The problem stemmed from a basic setup error, not a hacker attack. The app uses Google Firebase, a cloud service for storing app data.

Firebase databases start secure, but developers must set “rules” to control access. Here, those rules were left wide open, like leaving your front door unlocked.

Anyone with a simple Firebase login could act as an “authenticated” user and read the entire backend database.
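The bar for "authenticated" is low: Firebase's Identity Toolkit REST API lets any client create an anonymous session using only the app's public API key, which ships inside the app binary, in projects where anonymous sign-in is enabled. A minimal sketch of how such a request is built (the API key below is a placeholder):

```python
import json
import urllib.request

SIGNUP_URL = "https://identitytoolkit.googleapis.com/v1/accounts:signUp?key={api_key}"

def build_anonymous_signup(api_key: str) -> urllib.request.Request:
    # POSTing with no email or password yields an anonymous user whose
    # idToken satisfies a rule like "request.auth != null".
    return urllib.request.Request(
        SIGNUP_URL.format(api_key=api_key),
        data=json.dumps({"returnSecureToken": True}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_anonymous_signup("PLACEHOLDER_API_KEY")
```

In other words, "authenticated users only" offers no real protection when anyone can mint an anonymous account in one request.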

Harry accessed chat histories for millions of users. Exposed data included timestamps, user settings, chosen AI models, and custom chatbot names.

Sampling 60,000 users and 1 million messages confirmed the breach hit at least half of the app’s claimed 50 million users. No passwords or financial info leaked, but the messages were deeply personal.

Users treat these AI bots like trusted friends, sharing secrets freely. Leaked logs showed shocking queries: how to write suicide notes, painless self-harm methods, recipes for methamphetamine, and hacking tips.

The incident highlights the risks of "wrapper" apps: thin interfaces that resell AI models from providers such as OpenAI or Google without matching those providers' security practices.

Firebase works like this: it stores app data in real-time databases, Cloud Firestore, or Cloud Storage. Developers control access by writing security rules in a simple declarative language. A typical locked-down rule set, which at least requires an authenticated user, looks like:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.auth != null;
    }
  }
}

In "Chat & Ask AI," the rules instead allowed public reads (allow read: if true;), so a quick curl command or Firebase SDK call could dump the data. Harry reported the issue to the developers, who later secured the database.
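For comparison, the vulnerable configuration likely resembled the following (a reconstruction based on the reported behavior, not the app's actual rules file):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // Public read access: any client, authenticated or not,
      // can enumerate and dump every document.
      allow read: if true;
    }
  }
}
```

A single character of difference in one rule condition is all that separates a locked database from a public one.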

This isn’t rare. Firebase misconfigurations have leaked data from apps like Fortnite trackers before. Wrapper apps rush to market, skipping audits.

Lessons for Developers

  • Start Firebase databases in locked (production) mode and test rules before release.
  • Use tools like Firebase Security Rules Simulator.
  • Run regular scans with services like CloudQuery or Firebase’s audit logs.
  • Encrypt sensitive data at rest.
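One way to self-audit the core failure here is to probe your own Firestore backend with an unauthenticated read against the public Firestore REST endpoint. A minimal sketch (project ID and collection name are placeholders; run this only against a project you own):

```python
import urllib.error
import urllib.request

FIRESTORE_REST = ("https://firestore.googleapis.com/v1/projects/{project}"
                  "/databases/(default)/documents/{collection}")

def read_is_exposed(status_code: int) -> bool:
    # 200 on an unauthenticated request means the rules allow public
    # reads; 401/403 means the database is locked down as intended.
    return status_code == 200

def check_anonymous_read(project: str, collection: str) -> bool:
    # Issue an unauthenticated GET against your OWN project only.
    url = FIRESTORE_REST.format(project=project, collection=collection)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return read_is_exposed(resp.status)
    except urllib.error.HTTPError as err:
        return read_is_exposed(err.code)

# Example (placeholder project ID):
# if check_anonymous_read("my-project-id", "messages"):
#     print("WARNING: anonymous reads are allowed")
```

A check like this belongs in CI alongside rules unit tests, so a rule change that opens public reads fails a build instead of leaking data.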

Lessons for Users

  • Avoid sharing secrets with third-party AI apps.
  • Check app privacy policies and reviews.
  • Use official apps from OpenAI or Google for sensitive chats.

This breach is a warning for the AI boom: convenience cannot trump security. Developers must prioritize secure configurations, and users should think twice before confiding in third-party apps.


