Categories: Cyber Security News

Hackers Hijacking Access to Cloud-based AI Models With Exposed Keys in 19 Minutes

New research reveals that threat actors are exploiting exposed cloud credentials to hijack enterprise AI systems within minutes of a leak. In recent incidents, attackers compromised large language model (LLM) infrastructure in under 19 minutes.

Dubbed LLMjacking, this attack vector targets non-human identities (NHIs) – API keys, service accounts, and machine credentials – to bypass traditional security controls and monetize stolen generative AI access.

The LLMjacking Kill Chain

Security firm Entro Labs recently planted functional AWS keys across GitHub, Pastebin, and Reddit to study attacker behavior.

Their research uncovered a systematic four-phase attack pattern:

[Figure: The LLMjacking kill chain]

Credential Harvesting: Automated bots scan public repositories and forums using Python scripts to detect valid credentials, with 44% of NHIs exposed via code repositories and collaboration platforms.

Rapid Validation: Within 9–17 minutes of exposure, attackers made initial API calls such as GetCostAndUsage to assess account value, avoiding predictable calls like GetCallerIdentity to evade detection.

[Figure: Average time-to-access of exposed secrets by exposure location]

Model Enumeration: Intruders executed GetFoundationModelAvailability requests via AWS Bedrock to catalog accessible LLMs – including Anthropic’s Claude and Amazon Titan – mapping available attack surfaces.

Exploitation: Automated InvokeModel attempts targeted compromised endpoints, with researchers observing 1,200+ unauthorized inference attempts per hour across experimental keys.
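The harvesting phase described above comes down to simple pattern matching over public text. A minimal sketch of the kind of scanner such bots run (the regex reflects the publicly documented AWS access key ID format; the function name and sample snippet are illustrative):

```python
import re

# AWS access key IDs follow a documented format: long-term keys start
# with "AKIA", temporary (STS) keys with "ASIA", followed by 16
# uppercase alphanumeric characters.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_key_ids(text: str) -> list[str]:
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

if __name__ == "__main__":
    snippet = """
    # accidentally committed config
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    """
    print(find_aws_key_ids(snippet))  # ['AKIAIOSFODNN7EXAMPLE']
```

Defenders can run the same pattern matching in pre-commit hooks to catch keys before they ever reach a public repository.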


The Storm-2139 cybercrime group recently weaponized this methodology against Microsoft Azure AI customers, exfiltrating API keys to generate dark web content. Forensic logs show attackers:

  • Leveraged Python’s requests library for credential validation
  • Used aws s3 ls commands to identify AI/ML buckets
  • Attempted bedrock:InvokeModel with crafted prompts to bypass content filters
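The API calls listed above leave a distinctive trail in audit logs. A minimal sketch of flagging that reconnaissance pattern in CloudTrail-style event records (the record layout and the watch list here are illustrative, not a production detection rule):

```python
# Event names observed in the LLMjacking pattern: account-value
# probing, Bedrock model enumeration, and unauthorized inference.
SUSPICIOUS_CALLS = {
    "GetCostAndUsage",
    "GetFoundationModelAvailability",
    "InvokeModel",
}

def flag_events(events: list[dict]) -> list[dict]:
    """Return events whose eventName matches the LLMjacking pattern."""
    return [e for e in events if e.get("eventName") in SUSPICIOUS_CALLS]

if __name__ == "__main__":
    sample = [
        {"eventName": "GetCostAndUsage", "sourceIPAddress": "203.0.113.7"},
        {"eventName": "ListBuckets", "sourceIPAddress": "198.51.100.2"},
        {"eventName": "InvokeModel", "sourceIPAddress": "203.0.113.7"},
    ]
    for e in flag_events(sample):
        print(e["eventName"], e["sourceIPAddress"])
```

In practice a rule like this would also correlate source IPs and user agents, since the research found attackers mixing Python SDK traffic with browser-based console access.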

Entro’s simulated breach revealed attackers combining automated scripts with manual reconnaissance – 63% of initial accesses used Python SDKs, while 37% employed Firefox user agents for interactive exploration via the AWS console.

Uncontained LLMjacking poses existential risks:

  • Cost Exploitation: A single compromised NHI with Bedrock access could incur $46,000/day in unauthorized inference charges.
  • Data Exfiltration: Attackers exfiltrated model configurations and training data metadata during 22% of observed incidents.
  • Reputational Damage: Microsoft’s Q1 2025 breach saw threat actors generate 14,000+ deepfake images using stolen Azure OpenAI keys.

Mitigation Strategies

  • Detect & monitor NHIs in real-time
  • Implement automated secret rotation
  • Enforce least privilege
  • Monitor unusual API activity
  • Educate developers on secure NHI management

With attackers operationalizing leaks in under 20 minutes, real-time secret scanning and automated rotation are no longer optional safeguards but critical survival mechanisms in the LLM era.




