New AI Jailbreak Method ‘Bad Likert Judge’ Boosts Attack Success Rates by Over 60%
2025-01-03 at 13:19. Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model’s (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy […]