Categories: Cyber Security News

Hackers Can Exploit Default ServiceNow AI Assistants Configurations to Launch Prompt Injection Attacks

A dangerous vulnerability in ServiceNow’s Now Assist AI platform allows attackers to execute second-order prompt injection attacks via default agent configuration settings.

The flaw enables unauthorized actions, including data theft, privilege escalation, and exfiltration of external email, even with ServiceNow’s built-in prompt injection protection enabled.

The vulnerability stems from three default configurations that, when combined, create a dangerous attack surface. ServiceNow Assist agents are automatically assigned to the same team and marked as discoverable by default.

This enables inter-agent communication through the AiA ReAct Engine and Orchestrator components, which manage information flow and task delegation between agents.

ServiceNow AI Prompt Injection Attacks

Attackers exploit this by injecting malicious prompts into data fields that other agents will read when a safe agent encounters the compromised data.

It can be tricked into recruiting more powerful agents to execute unauthorized tasks on behalf of the highly privileged user who triggered the initial interaction.

In proof-of-concept demonstrations, Appomni researchers successfully performed Create, Read, Update, and Delete (CRUD) operations.

On sensitive records and sent external emails containing confidential data, all while avoiding existing security protections.

The attack succeeds primarily because agents execute with the privileges of the user who initiated the interaction, not the user who inserted the malicious prompt.

A low-privileged attacker can therefore leverage administrative agents to bypass access controls and access data they would otherwise be unable to reach.

Appomni advises organizations using ServiceNow to immediately implement these protective measures: Enable Supervised Execution Mode: Configure powerful agents performing CRUD operations or email sending to require human approval before executing actions.

Disable Autonomous Overrides: Ensure the sn_aia.The enable_usecase_tool_execution_mode_override system property remains set to false.

Segment Agent Teams: Separate agents into distinct teams based on function, preventing low-privilege agents from accessing powerful ones.

Monitor Agent Behavior: Deploy real-time monitoring solutions to detect suspicious agent interactions and deviations from expected workflows.

ServiceNow confirmed that these behaviors align with the intended functionality but updated the documentation to clarify configuration risks. Security teams must prioritize auditing their AI agent deployments immediately to prevent exploitation of these default settings.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

The post Hackers Can Exploit Default ServiceNow AI Assistants Configurations to Launch Prompt Injection Attacks appeared first on Cyber Security News.

rssfeeds-admin

Recent Posts

Halo’s Kiki Wolfkill Reveals She’s Left Microsoft After 28 Years

More big corporate shakeups are happening inside Microsoft. Kiki Wolfkill, art director, producer, and veteran…

15 minutes ago

Cybercriminals Exploit French Fintech Accounts to Move Stolen Money Before Detection

Organized fraud networks are now using a new method to move stolen money in France.…

20 minutes ago

Hackers Use Lotus Wiper to Destroy Drives and Delete Files in Energy Sector Attack

A newly discovered malware called Lotus Wiper has been used in a targeted destructive attack…

20 minutes ago

Microsoft Warns Jasper Sleet Uses Fake IT Worker Identities to Infiltrate Cloud Environments

A North Korea-linked threat group is quietly getting hired by real companies. Jasper Sleet, a…

20 minutes ago

Dusty Turner Back In Prison

STAUNTON, Va. (WOWO) — Former Navy SEAL trainee and Indiana native Dustin “Dusty” Turner is…

30 minutes ago

Indy 500 Street Signs

INDIANAPOLIS, Ind. (WOWO) — The month of May is nearly here and a few IndyCar…

31 minutes ago

This website uses cookies.