OpenAI takes down covert operations tied to China and other countries

NPR


According to OpenAI researchers, Chinese propagandists are using ChatGPT to write comments and posts on social media platforms, and even to produce performance reviews documenting that work for their employers.

“What we’re seeing from China is a growing range of covert operations using a growing range of tactics,” Benjamin Nimmo, principal investigator on OpenAI’s intelligence and investigations team, told reporters on a call about the company’s latest threat report.

OpenAI said that in the past three months it disrupted ten operations that were using its AI tools in malicious ways, and banned the accounts associated with them. Four of the operations likely originated in China, the company said.

The China-linked operations targeted many countries and topics, including a strategy game. Some combined elements of influence operations, social engineering, and surveillance, and, according to Nimmo, they worked across a variety of platforms and websites.

One Chinese operation, which OpenAI dubbed “Sneer Review,” used ChatGPT to generate brief comments in English, Chinese, and Urdu that were posted across TikTok, X, Reddit, Facebook, and other websites. Topics included the Trump administration’s dismantling of the U.S. Agency for International Development, with posts both praising and criticizing the move. The operation also attacked a Taiwanese game in which players work to defeat the Chinese Communist Party.

In many instances, the operation generated both a post and the comments replying to it, behavior that, according to OpenAI’s report, “appeared designed to create a false impression of organic engagement.” After using ChatGPT to produce critical comments about the Taiwanese game, the operation generated a lengthy article claiming the game had received widespread backlash.

The actors behind Sneer Review also used OpenAI’s tools to create “a performance review describing, in detail, the steps taken to establish and run the operation,” OpenAI said. “The social media behaviors we observed across the network closely mirrored the procedures described in this review.”

Another operation OpenAI tied to China focused on gathering intelligence while posing as journalists and geopolitical analysts. It used ChatGPT to write posts and biographies for accounts on X, translate emails and messages from Chinese to English, and analyze data. That included “correspondence addressed to a US Senator regarding the nomination of an Administration official,” OpenAI said, though the company could not independently verify whether the letter was sent.

“They also created what appeared to be marketing materials using our models,” Nimmo said. In those materials, according to OpenAI’s report, the operation claimed to carry out “fake social media campaigns and social engineering designed to recruit intelligence sources,” in line with its online activity.

OpenAI’s February threat report had named a Chinese surveillance operation that claimed to monitor social media “to feed real-time reports about protests in the West to the Chinese security services.” That operation used OpenAI’s tools to debug code and to write descriptions and sales pitches for its social media monitoring tool.

In its new report released Wednesday, OpenAI said it also disrupted a deceptive employment campaign resembling operations linked to North Korea, a recruitment scam connected to Cambodia, a spam operation attributed to a commercial marketing company in the Philippines, and covert influence operations likely based in Russia and Iran.

“It is worth acknowledging the sheer range and variety of tactics and platforms that these operations use, all of them put together,” Nimmo said. But he added that the operations were largely disrupted in their early stages and did not reach many real people.

“We didn’t generally see these operations getting more engagement because of their use of AI,” Nimmo said. “For these operations, better tools don’t necessarily mean better outcomes.”
