Friday, June 6, 2025

More Chinese groups using ChatGPT for covert operations: OpenAI

Threat actors exploiting AI’s power for influence campaigns, disinformation, and cyber operations, OpenAI says


OpenAI, a leading artificial intelligence research organisation, has reported a growing number of Chinese groups using its AI technologies for covert operations.

The revelation, detailed in a report released on Thursday, underscores the growing sophistication and complexity of malicious actors employing generative AI tools such as ChatGPT for politically and geopolitically motivated activities.

Since the emergence of ChatGPT in late 2022, concerns have been mounting regarding the potential misuse of generative AI technologies, which are capable of swiftly producing highly realistic human-like text, imagery, and audio.

These capabilities, while transformative for many legitimate applications, have also attracted attention from various threat actors aiming to exploit AI’s power for influence campaigns, disinformation, and cyber operations.

OpenAI actively monitors its platform for such abuses and routinely publishes reports outlining detected malicious activities, ranging from malware creation to the dissemination of fake content on digital platforms.

False allegations

The report noted that although the scope and tactics of these operations have expanded over time, their scale remains relatively modest and the audiences they reach are generally limited.

Specific examples highlighted by OpenAI include the banning of ChatGPT accounts that generated politically sensitive social media posts related to China.

These posts included criticism of a Taiwan-centered video game, false allegations against a Pakistani activist, and contentious material associated with the closure of USAID operations.

Furthermore, some posts openly criticised economic policies such as US President Donald Trump's tariffs, reflecting attempts to stoke public discontent through platforms like X (formerly Twitter).

Content generation

In addition to content generation, China-linked threat actors have been found leveraging AI assistance across various phases of cyber operations. These activities included open-source intelligence gathering, modifying scripts, troubleshooting system configurations, and developing tools for password brute forcing and social media automation.

Particularly notable is an influence operation originating in China that employed AI not only to generate polarised textual content on divisive US political topics but also to create AI-generated profile images, thereby enhancing the perceived authenticity of the accounts involved.

The Chinese government had not responded to OpenAI's findings as of the report's release.

Meanwhile, OpenAI continues to strengthen its position in the technology sector, recently announcing a $40 billion funding round that values the company at $300 billion and underscores its pivotal role in shaping the future of artificial intelligence.

