System prompt engineering turns benign AI assistants into ‘investigator’ and ‘detective’ roles that bypass privacy guardrails

A team of boffins is warning that AI chatbots built on large language models (LLMs) can be turned into malicious agents that autonomously harvest users’ personal data, even by attackers with “minimal technical expertise”, thanks to “system prompt” customization tools from OpenAI and others.…