OpenAI Bans Accounts Using ChatGPT To Edit Surveillance Tool Code
OpenAI has confirmed it recently identified and banned a group of accounts using ChatGPT to write sales pitches and debug code for a suspected social media surveillance tool.
These accounts were involved in two campaigns, “Peer Review” and “Sponsored Discontent,” which the company believes likely originated in China.
According to OpenAI, the now-banned accounts involved in the “Peer Review” campaign used or attempted to use its models to promote and improve an AI assistant that, the operators claimed, could collect real-time data and reports about anti-China protests in the US, UK, and other Western countries. This information was then supposedly sent to Chinese authorities.
Accounts involved in the “Sponsored Discontent” campaign used ChatGPT to generate English-language comments and Spanish-language news articles, displaying behavior typical of “spamouflage” operations. These materials, heavily laced with anti-American rhetoric, were likely aimed at stoking discontent in Latin American countries, particularly Peru, Mexico, and Ecuador.
“The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom,” said OpenAI.
“Our policies prohibit the use of AI for communications surveillance or unauthorized monitoring of individuals. This includes activities by or on behalf of governments and authoritarian regimes that seek to suppress personal freedoms and rights,” the company said in the report.
OpenAI said the accounts involved in the campaigns also used other AI tools to help write their code, including Llama, Meta’s open-source model.
Meta responded that, if its model was used, it was only one of many tools available, and that other AI models, including ones developed in China, might also have been involved. OpenAI added that it cannot confirm whether the code was ever put to use.
“This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or US-based AI for non-democratic purposes, according to the materials they were generating themselves,” Ben Nimmo, OpenAI’s principal investigator on the company’s intelligence and investigations team, said in a statement.