Securing AI systems against evasion, poisoning, and abuse
Adversaries can intentionally mislead or “poison” AI systems, causing them to malfunction, and developers have yet to find an infallible defense against such attacks. In their latest publication, NIST researchers and their partners catalog these AI and machine learning vulnerabilities.

[Figure: Taxonomy of attacks on generative AI systems]

Understanding potential attacks on AI systems

The publication, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2),” is a key component of NIST’s broader initiative to …
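Of the attack classes named in the headline, evasion is the one most often demonstrated in practice: the attacker perturbs an input at inference time so a trained model misclassifies it. As an illustration only (not taken from the NIST publication), here is a minimal sketch of the fast gradient sign method (FGSM) against a PyTorch image classifier; the model, input, and epsilon value are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, label, epsilon=0.03):
    """Minimal FGSM sketch: perturb input x so the classifier errs.

    Assumptions (illustrative, not from NIST.AI.100-2): `model` is a
    torch.nn.Module returning logits, `x` is an input tensor scaled to
    [0, 1], and `label` holds the true class index.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # then clamp back into the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A defender evaluating mitigations of the kind the taxonomy surveys would, among other things, measure how far a model's accuracy drops on inputs perturbed this way.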