Just 250 malicious training documents can poison a 13B-parameter model – that’s roughly 0.00016% of its training data

Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …