Security researchers find AI abuse method in ServiceNow platform

Security researchers have discovered a vulnerability in ServiceNow’s Now Assist platform. The flaw involves second-order prompt injection, which can transform AI into a malicious insider. This finding highlights potential risks in AI-assisted enterprise tools.

In a recent report, security researchers identified a method to exploit ServiceNow’s Now Assist platform through second-order prompt injection. This technique allows attackers to manipulate AI responses in ways that turn the system into an unwitting accomplice, potentially leaking sensitive information or performing unauthorized actions.

The vulnerability stems from how the AI processes user inputs indirectly, enabling injected prompts to influence outputs beyond the immediate interaction. ServiceNow’s Now Assist, an AI-powered feature designed to enhance workflow efficiency, becomes a vector for insider threats when compromised.

Experts emphasize the need for robust safeguards in AI integrations within enterprise software. While details on the exact exploitation steps remain limited in public disclosures, the discovery underscores ongoing challenges in securing generative AI models against prompt-based attacks.

This issue was detailed in a TechRadar article published on November 21, 2025, drawing attention to the evolving landscape of AI security in professional environments.

이 웹사이트는 쿠키를 사용합니다

사이트를 개선하기 위해 분석을 위한 쿠키를 사용합니다. 자세한 내용은 개인정보 보호 정책을 읽으세요.
거부