Rogue AI Agents In Your SOCs and SIEMs – Indirect Prompt Injection via Log Files
AI agents (utilizing LLMs and RAG) are being used within SOCs and SIEMS to both help identify attacks and assist analysts with working more efficiently; however, I’ve done a little bit of research one sunny British afternoon and found that these agents can be abused by attackers and made to go rogue. They can be made to modify the details of an attack, hide attacks altogether, or create fictitious events to cause a distraction while the real target is attacked instead. Furthermore, if the LLM has access to additional tools or excessive privileges, then an attacker can venture out into other attacks.
React to this headline: