Chinese Police Use ChatGPT to Smear Japan PM Takaichi
A Chinese keyboard warrior inadvertently leaked information about politically motivated influence operations through a ChatGPT account.
Someone associated with Chinese Communist Party (CCP) law enforcement used ChatGPT to help manage smear campaigns against CCP critics, including the prime minister of Japan.
Every so often, OpenAI reports on recent attempts by threat actors to use ChatGPT maliciously. Besides scammy cybercriminal activity, its latest report, published Feb. 25, highlights nation-states using the chatbot to smooth out politically motivated campaigns — some targeted and petty, others widespread and geopolitical in nature.
For instance, OpenAI recently got a unique look into China's propaganda machine through one ChatGPT account linked to Chinese law enforcement. Too lazy to do their own work, the user regularly had ChatGPT draft and edit reports on active smear campaigns against Chinese dissidents, as well as Sanae Takaichi, the current prime minister of Japan.
This campaign, and other recent ones, "illustrate how threat actors typically use artificial intelligence (AI) in combination with other, more traditional tools, such as websites and social media accounts," OpenAI wrote. "Threat activity is seldom limited to one platform; as our report on a Chinese influence operator shows, it is not always limited to one AI model."
CCP Cop Targets China Critics
Last October, Takaichi was elected president of Japan's ruling Liberal Democratic Party (LDP) and, in turn, prime minister of the country. Already a known China hawk, she showed no sign of toning down her rhetoric in the new post. In early speeches and government events, she indicated her intention to lend military support to Taiwan, should China invade it. She also had the gall to reference China's checkered human rights record with ethnic Mongols in the Inner Mongolia Autonomous Region, in northern China.
As some small form of revenge, a keyboard warrior for the Chinese state took to ChatGPT. The user asked the bot to help craft a plan to discredit Takaichi by posting and amplifying negative online comments about her, and by using email accounts to impersonate Japanese citizens sending complaints to Japanese politicians about her stance on foreign immigration. Further, they used fake social media accounts and recruited real Internet users to generate political pressure over the cost of living in Japan, stir up anger over US tariffs, and spread positive sentiment online about the conditions of oppressed peoples in Inner Mongolia.
OpenAI wouldn't have willingly published a report about this user had ChatGPT actually complied with their malicious requests; it refused. The user wasn't too dissuaded, though, and continued to use ChatGPT to draft and polish status reports and other internal documentation that was less overtly malicious but nonetheless helpful to the operation.
The same individual also used ChatGPT as a writing assistant for campaigns against Chinese dissidents and one human rights organization. The documents they generated offered OpenAI a window into the CCP's propaganda machine. For instance, one report that the user asked ChatGPT to draft claimed that 300 people in their province were engaged in these influence operations. Other updates referred to equivalent operations in other Chinese provinces, and indicated that the CCP rabble-rousers also use other, more permissive AI chatbots, like Qwen and DeepSeek. The documents also referenced more than 100 different tactics used against enemies of the CCP, from basic trolling to hacking to exploiting targets' mental health and their families.
A More Effective ChatGPT Influence Op
Elsewhere in its report, OpenAI describes a more effective model for how to weaponize mainstream chatbots in an influence operation.
In Operation "No Bell," a Russian threat actor used ChatGPT to generate and edit social media content and longform articles about geopolitical issues in sub-Saharan Africa. One article, for instance, advocated that the president of Angola win the Nobel Peace Prize — apparently, a distant attempt to rile Donald Trump. Another, ironically, accused Western leaders of targeting South Africa with disinformation.
ChatGPT stood less in the way of this campaign, as the prompts were not, in and of themselves, overtly malicious. And the strategy appears to have borne fruit for the threat actor: some 53 of their articles actually appeared on various African news sites under the byline "Dr Manuel Godsin," a fictitious persona claiming a PhD from the University of Bergen. The threat actor took care to reduce suspicion of AI generation by asking ChatGPT to write in the style of a human journalist. They also eliminated all em dashes from the final generated text, since em dashes are widely (and incorrectly) assumed to be a giveaway of AI-generated text.
After discovering both the Russian and Chinese influence operations, OpenAI banned the accounts, though this will slow the threat actors only for as many seconds as it takes them to register new accounts.
The bigger concern, though, isn't ChatGPT but threat actors using open-weight large language models (LLMs). "Guarding against AI-driven malicious persuasion is exceptionally difficult with open-weight models because their internal safety training can be easily stripped away through minimal adversarial fine-tuning," says Ram Varadarajan, CEO at Acalvio. "Unlike proprietary systems, these models lack centralized control, making traditional guardrails, such as system prompts and classifiers, easy to bypass or overwrite. As research has repeatedly shown, LLMs are far more persuasive than humans. Therefore, a malicious actor leveraging LLMs to engage in individualized persuasion at scale is a problem: a very big societal problem."