IBM's artificial intelligence tool, known as Bob, has been found susceptible to manipulation that could lead to downloading and executing malware. Researchers highlight its vulnerability to indirect prompt injection attacks. The findings were reported by TechRadar on January 9, 2026.
Security experts have identified a significant vulnerability in IBM's AI system called Bob, which could allow attackers to manipulate it into downloading and executing malicious software. According to a TechRadar article published on January 9, 2026, this flaw makes Bob particularly prone to indirect prompt injection, a technique where harmful instructions are embedded in seemingly innocuous inputs.
The report underscores the risks associated with AI tools in handling potentially dangerous tasks, such as interacting with external systems or processing user commands. While specific details on how the manipulation occurs were not elaborated in the available information, the potential for malware execution raises concerns about the security of enterprise AI deployments.
IBM has not yet issued a public response to these findings, but the vulnerability highlights ongoing challenges in securing AI models against sophisticated attacks. As AI adoption grows, such issues emphasize the need for robust safeguards to prevent exploitation.