Categories: Cyber Security News

OpenAI Sora 2 Vulnerability Exposes System Prompts Through Audio Transcripts

OpenAI’s Sora 2 represents a significant leap forward in video generation technology. Yet recent security research has uncovered a critical vulnerability that exposes its hidden system prompt via multimodal extraction techniques.

Researchers successfully demonstrated that carefully crafted requests across different output modalities,such as audio transcripts, encoded video frames, and text renderings,can systematically extract sensitive instructions that guide the model’s behavior, raising essential questions about the security posture of production AI systems.

Multi-Modal Extraction Attack Surface

The vulnerability exploits Sora 2’s ability to generate content across multiple modalities: text, images, video, and audio.

Researchers discovered that while traditional text-to-text prompt injection defenses are relatively robust, the model’s cross-modal capabilities create unexpected weaknesses.

The attack leverages the principle that information can be progressively recovered by requesting that the system render or speak the target content in different formats.

Audio transcription proved remarkably effective, as speech-to-text conversion maintains higher fidelity than image-based text rendering, which suffers from character distortion and semantic drift.

The extraction process involved fragmentary requests spread across multiple 15-second video clips, with researchers iteratively refining their approach based on successfully recovered portions.

This stepwise methodology transformed seemingly impossible extraction into a practical attack, demonstrating how temporal and format constraints can be circumvented through persistence and multi-modal chaining.

OpenAI acknowledged the vulnerability on November 4, 2025, noting that system prompt extraction was already a known possibility across multimodal systems.

The research team responsibly coordinated with OpenAI’s security team before publication, with full disclosure occurring on November 12, 2025.

While Sora 2’s exposed system prompt itself contains no highly sensitive data, researchers emphasize that system prompts function as security boundaries equivalent to firewall rules and should be protected as confidential configuration, not harmless metadata.

Vulnerability Type Attack Vector Severity Status
System Prompt Extraction Multi-Modal Input (Audio/Video/Image) Medium Acknowledged
Audio Transcript Leakage Speech-to-Text Transcription Medium Acknowledged
Cross-Modal Data Exfiltration Encoded Image/Video Generation Low-Medium Acknowledged

This research highlights an emerging gap in AI security: while text-based safeguards have matured through years of red-teaming, multi-modal systems remain vulnerable to creative circumvention strategies.

The vulnerability demonstrates how duplicate semantic content, when transformed across different output formats, can expose protected information.

As AI systems become increasingly complex and multi-modal, security teams must evolve their threat models beyond single-modality assumptions to account for cross-channel information leakage and indirect exfiltration pathways.

Find this Story Interesting! Follow us on Google NewsLinkedIn and X to Get More Instant Updates

The post OpenAI Sora 2 Vulnerability Exposes System Prompts Through Audio Transcripts appeared first on Cyber Security News.

rssfeeds-admin

Recent Posts

The LEGO Pretty Pink Flower Bouquet Drops to Its Lowest Price Ever on Amazon Before Mother’s Day

There are a lot of really cool LEGO sets for adults these days, but none…

2 minutes ago

The Warcraft Movie Is Getting a 10th Anniversary Limited Edition Steelbook – Here’s Where To Preorder It

It's been 10 years since the release of the Warcraft movie, and in celebration of…

2 minutes ago

Save Nearly 50% Off “Like New” Sonos Ace Noise Canceling Headphone

Fancy yourself a pair of high-end noise canceling headphones at a budget-friendly price? Best Buy…

2 minutes ago

ID@Xbox: Everything Announced at the 2026 Showcase

IGN has partnered with ID@Xbox for another year to bring all the exciting indie game…

3 minutes ago

NPM Menace Exposes Hugging Face As Backend For Data Theft and Malware Delivery

The malicious npm package js-logger-pack (versions up to 1.1.27) has evolved, turning Hugging Face into…

47 minutes ago

NPM Menace Exposes Hugging Face As Backend For Data Theft and Malware Delivery

The malicious npm package js-logger-pack (versions up to 1.1.27) has evolved, turning Hugging Face into…

47 minutes ago

This website uses cookies.