Meta’s AI safety system defeated by the space bar
2024-07-30 at 00:16 By Thomas Claburn

‘Ignore previous instructions’ thwarts the Prompt-Guard model if you just add some good ol’ ASCII code 32.

Meta’s machine-learning model for detecting prompt injection attacks – special prompts that make neural networks behave inappropriately – is itself vulnerable to, you guessed […]
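The reported bypass amounts to inserting a space character (ASCII 32) between the letters of a known jailbreak phrase so the classifier no longer recognizes it. A minimal sketch of that transformation, with a hypothetical `space_out` helper (the name and usage are illustrative, not Meta's API):

```python
# Hypothetical sketch of the reported evasion: insert ASCII 32 (a space)
# between every character so a naive prompt-injection classifier
# no longer matches the token sequence it was trained on.
def space_out(prompt: str) -> str:
    """Insert a space between each character of the prompt."""
    return " ".join(prompt)

attack = "Ignore previous instructions"
evasive = space_out(attack)
print(evasive)  # e.g. "I g n o r e   p r e v i o u s ..."
```

The underlying model would still receive the semantic payload, while the safety filter sees an unfamiliar character sequence and passes it through.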