Chat & Ask AI, available on Google Play and Apple’s App Store, lets people chat with AI models like ChatGPT, Claude, and Gemini.
An independent security researcher, known as Harry, uncovered the flaw and shared details with 404 Media.
The problem stemmed from a basic setup error, not a hacker attack. The app uses Google Firebase, a cloud service for storing app data.
Firebase databases start secure, but developers must set “rules” to control access. Here, those rules were left wide open, like leaving your front door unlocked.
Anyone with a simple Firebase login could act as an “authenticated” user and read the entire backend database.
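Obtaining such a login is trivial. As a hedged sketch, assuming the project had anonymous sign-in enabled (the API key below is a placeholder; every Firebase app ships its public web API key inside the client), Firebase’s documented Auth REST endpoint hands out a valid credential with an empty request:

```
# Anonymous sign-up via the Firebase Auth REST API.
# YOUR_WEB_API_KEY is a placeholder for the app's public client key.
curl -s -X POST \
  'https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=YOUR_WEB_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"returnSecureToken": true}'
# The JSON response includes an "idToken": a valid authenticated
# credential, even though no email or password was ever supplied.
```

Any rule that checks only “is there an authenticated user?” is satisfied by that token.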
Harry accessed chat histories for millions of users. Exposed data included timestamps, user settings, chosen AI models, and custom chatbot names.
Sampling 60,000 users and 1 million messages confirmed the breach hit at least half of the app’s claimed 50 million users. No passwords or financial info leaked, but the messages were deeply personal.
Users treat these AI bots like trusted friends, sharing secrets freely. Leaked logs showed shocking queries: how to write suicide notes, painless self-harm methods, recipes for methamphetamine, and hacking tips.
This highlights the risk of “wrapper” apps: simple interfaces that resell AI technology from OpenAI or Google without matching those companies’ security practices.
Firebase works like this: It stores data in real-time databases or Cloud Storage. Default rules require authentication and restrict reads/writes. Developers write rules in a simple language, like:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.auth != null;
    }
  }
}
In “Chat & Ask AI,” the rules allowed public reads: allow read: if true;. A quick curl command or the Firebase SDK could dump the data. Harry reported the flaw to the developers, who later secured the database.
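For comparison, a minimal locked-down ruleset (a sketch, assuming chats are stored under a per-user document path; the path names are illustrative) ties access to the owner’s UID rather than to mere authentication:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the signed-in owner of a chat document may read or write it.
    match /users/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

With rules like these, an anonymous token still counts as “authenticated,” but it can only ever see its own empty slice of the database.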
This isn’t rare. Firebase misconfigurations have leaked data from apps like Fortnite trackers before. Wrapper apps rush to market, skipping audits.
Lessons for Users
This breach is a warning for the AI boom: convenience can’t trump security. Developers must prioritize secure configurations, and users should think twice before confiding in these apps.
The post AI Chat App Data Breach Exposes 300 Million Messages from 25 Million Users appeared first on Cyber Security News.