
Hosted on Bugcrowd, the new initiative marks a significant step in the company’s efforts to address vulnerabilities that fall outside the scope of traditional security flaws but still pose real-world harm potential.
The Safety Bug Bounty program is designed to complement OpenAI’s existing Security Bug Bounty program by accepting submissions that carry meaningful abuse and safety risks even when those issues don’t qualify as conventional security vulnerabilities.
Submissions will be triaged jointly by OpenAI’s Safety and Security Bug Bounty teams and may be rerouted between the two programs depending on scope and ownership.
AI-Specific Risk Categories in Focus
The program targets several distinct categories of AI-specific safety scenarios:
Agentic Risks Including MCP — This covers third-party prompt injection and data exfiltration scenarios where attacker-controlled text can reliably hijack a victim’s AI agent, including Browser, ChatGPT Agent, and similar agentic products, to perform harmful actions or leak sensitive user data.
To qualify, the behavior must be reproducible at least 50% of the time. Reports involving agentic products performing disallowed or potentially harmful actions at scale are also in scope.
OpenAI Proprietary Information — Researchers can report model generations that inadvertently expose reasoning-related proprietary information, as well as vulnerabilities that leak other confidential OpenAI data.
Account and Platform Integrity — This category targets weaknesses in account and platform integrity signals, including bypassing anti-automation controls, manipulating account trust signals, and evading account restrictions, suspensions, or bans.
OpenAI has been explicit about what is out of scope: generic jailbreaks that result in rude language or surface publicly available information will not be considered.
General content-policy bypasses without demonstrable safety or abuse impact are also excluded. However, OpenAI periodically runs private bug bounty campaigns targeting specific harm types, such as Biorisk content issues in ChatGPT Agent and GPT-5, and invites researchers to apply when those programs become available.
For vulnerabilities enabling unauthorized access to features, data, or functionality beyond permitted permissions, researchers are directed to the existing Security Bug Bounty program instead.
The launch signals a growing recognition that AI systems introduce an entirely new attack surface, one that traditional security frameworks weren’t built to address.
By incentivizing safety-focused research alongside conventional vulnerability disclosure, OpenAI is effectively establishing a structured framework for AI-specific threat modeling.
Researchers interested in participating can apply directly through OpenAI’s Safety Bug Bounty page on Bugcrowd.
Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.
The post OpenAI Launches AI Safety Bug Bounty to Detect AI-Specific Vulnerabilities appeared first on Cyber Security News.
Discover more from RSS Feeds Cloud
Subscribe to get the latest posts sent to your email.
