Cato Networks has disclosed a new LLM jailbreak technique that relies on creating a fictional world to bypass a model's security controls.
