Securing AI systems against evasion, poisoning, and abuse
2024-01-09 at 06:32, by Mirko Zorz

Adversaries can intentionally mislead or "poison" AI systems, causing them to malfunction, and developers have yet to find an infallible defense. In their latest publication, NIST researchers and their partners highlight these AI and machine learning vulnerabilities. Taxonomy of […]