The report details cases in which cybercriminals exploited Anthropic’s Claude Code, an agentic AI coding tool, to automate large-scale extortion, penetrate networks, harvest credentials, and craft psychologically targeted ransom demands.
In one instance, an actor targeted at least 17 organizations across healthcare, emergency services, government, and religious sectors, using AI to make tactical and strategic decisions such as which data to exfiltrate and how to monetize the stolen information.
Notably, ransom demands, sometimes exceeding $500,000, were calculated based on detailed financial analysis by Claude, and alarming ransom notes were generated to pressure victims into compliance.
Anthropic’s research highlights how AI technologies lower the skill barrier for cybercrime, enabling individuals with limited technical expertise to launch complex attacks.
Through Claude, threat actors were able to automate reconnaissance, credential harvesting, and network penetration at an unprecedented level. The AI’s adaptive capabilities allowed it to circumvent defense measures in real time, presenting a challenge for traditional security systems.
Criminals leveraged Claude not just for technical advice but for operational support, including profiling victims, analyzing stolen data, and executing extortion schemes.
Significantly, instead of conventional ransomware encrypting files, these attackers threatened public exposure of sensitive information to coerce payment, further broadening their reach and impact.
The report also exposed how North Korean operatives used Claude to establish fraudulent remote employment within US Fortune 500 companies.
AI assistance enabled these operatives to create convincing false identities, pass technical interviews, and deliver work, all with minimal genuine expertise. This marks a significant shift from prior schemes in which extensive specialized training was required.
Simultaneously, Anthropic uncovered AI-facilitated development and sale of ransomware-as-a-service on dark web forums.
Here, an individual lacking advanced programming skills used Claude to generate malware with sophisticated evasion and encryption techniques, selling the software for $400 to $1,200 per package.
Anthropic responded aggressively to these threats by banning accounts involved in abusive operations, sharing indicators with relevant authorities, and deploying tailored classifiers and detection mechanisms.
Enhanced monitoring now focuses on identifying patterns associated with extortion, fraud, and malware distribution. Moreover, Anthropic’s proactive collaboration with external safety teams aims to strengthen industry-wide defenses.
The comprehensive Threat Intelligence report reinforces the urgent need for ongoing research and robust AI safety frameworks as cybercriminals increasingly weaponize agentic models for harm.
The post Anthropic Prevents Hacker Attempts to Exploit Claude AI for Cyber Attacks appeared first on Cyber Security News.