The initiative, announced by Microsoft Security Response, aims to strengthen security in enterprise AI by incentivizing ethical hackers to uncover potential weaknesses before malicious actors can exploit them.
The program leverages Microsoft’s newly developed Vulnerability Severity Classification for AI Systems, which categorizes AI-specific security risks into three primary vulnerability types:
This category addresses vulnerabilities that could be exploited to manipulate a model’s response to individual inference requests without modifying the model itself. Key vulnerability types include:
Prompt Injection: Attacks where injected instructions cause the model to generate unintended output, potentially allowing attackers to exfiltrate user data or perform privileged actions.
Critical severity prompt injections requiring no user interaction can earn the highest bounties.
Input Perturbation: Vulnerabilities where attackers perturb valid inputs to produce incorrect outputs, also known as model evasion or adversarial examples.
These vulnerabilities target the training phase of AI systems, including:
Model Poisoning: Attacks where the model architecture, training code, hyperparameters, or training data are tampered with.
Data Poisoning: When attackers add poisoned data records to datasets used to train or fine-tune models, potentially introducing backdoors that can be triggered by specific inputs.
Inferential Information Disclosure
This category encompasses vulnerabilities that could expose sensitive information about the model’s training data, architecture, or weights:
Bounty awards range from $500 to $30,000, with the highest rewards reserved for critical severity vulnerabilities accompanied by high-quality reports.
The program specifically targets AI integrations in PowerApps, model-driven applications, Dataverse, AI Builder, and Microsoft Copilot Studio.
The severity classification system considers both the vulnerability type and the security impact, with the highest rewards for vulnerabilities that could allow attackers to exfiltrate another user’s data or perform privileged actions without user interaction.
Security researchers interested in participating can begin by signing up for free trials of Dynamics 365 or Power Platform services.
Microsoft provides detailed documentation for each product to assist researchers in understanding the systems they’re testing.
Microsoft’s Security Response team announced, “Your research could help us strengthen the security of enterprise AI. “
The program forms part of Microsoft’s broader security initiative, which includes bounty programs for various Microsoft products and services.
All submissions are reviewed for bounty eligibility, and researchers are recognized even when they don’t qualify for monetary rewards but lead to security improvements.
Through this initiative, Microsoft continues to emphasize collaborative security efforts as AI integration deepens across its enterprise solutions.
Malware Trends Report Based on 15000 SOC Teams Incidents, Q1 2025 out!-> Get Your Free Copy
The post Microsoft to Offer Rewards Up to $30,000 for AI Vulnerabilities appeared first on Cyber Security News.
According to Reuters, Meta is looking to offset spending on AI and data centers with…
Hulu has decided to scrap Buffy the Vampire Slayer: New Sunnydale, its planned continuation series…
Jostling a folded piece of paper, holding it marooned in the air, selectman Beth Blair…
Boscawen voters cruised through a speedy town meeting Friday night, one with so little controversy…
Happy Saturday, all! This week, we found a number of deals that should help you…
Though it was weird to see the Golden Globes partner with Polymarket for its most…
This website uses cookies.