Tuesday, September 17, 2024

OpenAI bans fake Iran accounts generating false news content

Storm-2035 uses AI-powered tools to generate false narratives about the US elections and other global events

The revelation that an Iranian influence operation dubbed “Storm-2035” used AI-powered tools such as ChatGPT to generate false narratives about the US elections and other global events underscores the growing challenge of combating disinformation in the digital age.

The case highlights a concerning trend: the weaponisation of artificial intelligence (AI) for malicious purposes. Iranian actors, seeking to manipulate public opinion and sway political discourse, leveraged ChatGPT’s capabilities to produce seemingly credible content designed to sow discord and undermine trust.

The strategy involved fabricating news articles and social media posts, and even rewriting existing comments, aiming to exploit the very mechanisms that govern online information dissemination.

While the scale and effectiveness of Storm-2035 remain under scrutiny, the very fact that such an operation was attempted raises serious concerns about the potential for AI-powered disinformation to escalate in the future.

Taking proactive steps

The incident exemplifies how readily available and powerful AI tools can be co-opted for nefarious purposes, even if the initial impact appears limited.

However, the response from OpenAI, the company behind ChatGPT, provides a glimmer of hope. By taking proactive steps to identify and ban accounts linked to Storm-2035, OpenAI demonstrates a commitment to safeguarding its technology from misuse.

The action signals a growing recognition within the AI community of the crucial responsibility to combat the weaponisation of these powerful tools.

Yet, the fight against AI-powered disinformation requires a multi-pronged approach. While companies like OpenAI can play a vital role in detecting and mitigating malicious activity, it is crucial to also foster greater public awareness and media literacy.

“Over the past several months, we have seen the emergence of significant influence activity by Iranian actors. Iranian cyber-enabled influence operations have been a consistent feature of at least the last three US election cycles,” Microsoft said in its August 9 report.

The San Francisco-based AI company said it had banned several accounts linked to the campaign from its online services. The Iranian effort, OpenAI added, did not seem to reach a sizeable audience.

Nefarious campaigns

“The operation doesn’t appear to have benefited from meaningfully increased audience engagement because of the use of AI,” said Ben Nimmo, a principal investigator at OpenAI who has spent years tracking covert influence campaigns, including during his time at Meta.

“We did not see signs that it was getting substantial engagement from real people at all.”

In May, Microsoft-backed OpenAI said it had disrupted five other deceptive influence operations attempting to use AI-generated content to “manipulate public opinion or influence political outcomes.”

Those nefarious campaigns were said to have involved threat actors from Russia, China, Iran, and Israel.

Individuals must be empowered to discern credible information from fabricated content, particularly in the digital realm where information flows freely and rapidly.

Moreover, the incident necessitates a broader conversation about the ethical implications of AI development. As AI technologies become increasingly sophisticated and accessible, it is paramount that developers, policymakers, and researchers work collaboratively to establish ethical guidelines and safeguards to prevent misuse.

