Thursday, September 19, 2024

It’s 2023, do you know where your truth is?

Must Read

  • The greatest challenge that humanity will face over the next decade is the ability to tell fact from fiction, reality from fantasy and information from disinformation; everything else is predicated on that.
  • It is inevitable that states, activists, and advanced threat actors will also leverage the power of AI to turbocharge disinformation campaigns.
  • Security leaders should initiate conversations across IT, OT, PR, Marketing and other internal teams to make sure they know how to collaborate effectively when disinformation is discovered.

Sometimes, when you tell the truth, it is hard to be believed. That may be why Large Language Models like ChatGPT play so fast and loose with it.

We have awoken in the world of Generative Adversarial Networks (GANs), Large Language Models, and scientific crises of confidence (the proposed six-month moratorium on training new LLMs), almost as if we have no idea how we got here, or what the implications may be.

The central objective in a GAN learning model is one of manufacturing credibility. The “generator” learns to generate credible data; the “discriminator” attempts to distinguish the fake from the real. Truthfulness and accuracy are second-order considerations, if they figure at all.
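
To make that dynamic concrete, here is a minimal training-loop sketch in Python (assuming PyTorch; the toy one-dimensional dataset, network sizes and hyperparameters are purely illustrative, not any production system). Notice that neither objective ever measures truth, only whether the discriminator can be fooled.

```python
# Minimal, illustrative GAN training loop (assumes PyTorch is installed).
# The generator learns to produce data the discriminator accepts as real;
# the discriminator learns to separate generated samples from genuine ones.
# Toy task: imitating samples drawn from a 1-D Gaussian.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # the "genuine" distribution
noise = lambda n: torch.randn(n, 8)                    # random input to the generator

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    real = real_data(64)
    fake = generator(noise(64)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: make the discriminator output 1, i.e. "credible".
    g_opt.zero_grad()
    fake = generator(noise(64))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

The generator’s loss rewards only one thing: passing the discriminator’s credibility test. That is the whole point of the architecture, and it is exactly why the output can be convincing without being true.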

Rik Ferguson, VP of Security Intelligence at Forescout

In addition, as the public becomes more aware of the prevalence and possibilities of AI, it will become steadily easier to dismiss the truth as fake, a shift that runs with the grain of current social trends of scepticism and the dismissal of “experts”.

In a paper entitled “Deep Fakes: A Looming Challenge for Privacy, Democracy and National Security”, Robert Chesney and Danielle Citron refer to this phenomenon as the “liar’s dividend”.

Greatest challenge

The greatest challenge that humanity will face over the next decade is the ability to tell fact from fiction, reality from fantasy and information from disinformation; everything else is predicated on that.

Information and the ways in which it is delivered, whether through social networks, social engineering, fake news, or more obvious propaganda, could just as easily be our downfall as our saviour.

Sam Altman, the CEO of OpenAI, has been outspoken about the inherent risks in the sudden rise of AI, most recently calling for an “IAEA for superintelligence”, an international authority empowered to inspect systems, require audits and test compliance.

Regulatory and legislative efforts, focused primarily on data privacy and security, algorithmic transparency, accountability and permitted use cases, are already well underway in the European Union, Canada and the United States, and have to a certain extent already passed into law in China, although that regulation will not apply to the Chinese government.

Who will win the AI race?

Both China and Russia have made no secret of their desire to “win the AI race” with current and pledged investments ranging from hundreds of millions to billions of dollars in AI research and development.

While companies like OpenAI, IBM and Apple might be top of mind when asked to name the major players in artificial intelligence, we should not forget that for every Amazon there’s an Alibaba, for every Microsoft a Baidu, and for every Google a Yandex.

Many of the innovations in the global AI space share similar aims, methodologies, and training sets, but not all motivations are created equal. In February 2023, a Belarusian hacker group called “Cyberpartisans” shared more than two terabytes of data leaked from Roskomnadzor, Russia’s media regulator.

This leak clearly demonstrates the extent to which AI is already being used to monitor, censor and shape public opinion and repress freedom of expression in Russia.

AI development has been on a relatively slow burn since 1951, when Marvin Minsky built the first randomly wired neural network learning machine (SNARC). Over the past 20 years, Machine Learning has seen constant innovation in cybersecurity, initially for detecting spam and classifying websites and later for the detection of exploits, malware and suspicious activity.

Recent innovations in AI have been focused particularly in the areas of Generative Adversarial Networks (GAN) and Natural Language Processing/Generation (NLP/NLG), meaning that AI can now synthesise faces, voices, moving images and text.

Through these media it can also create “knowledge”, emulate character traits, and even create physical objects through recently released text to 3D print generators.

Positive potential

All of this technology, aside from its positive potential, will also hugely benefit the propagandist and the conspiracy theorist. At its most benign, it will be used to fuel doubt and destroy credibility; at its worst, it will be used to create, sustain and amplify an entirely false image of reality.

It is an image with an explicitly malicious agenda. Cybercriminals are already taking advantage of the abundance of, and ease of access to, these technologies to enable non-consensual sexual fakes, fraud and even kidnapping scams.

It is inevitable that states, activists, and advanced threat actors will also leverage the power of AI to turbocharge disinformation campaigns.

Imagine an exponential increase in the volume and quality of fake content, the creation and automation of armies of AI-driven digital personae replete with rich and innocent backstories to disseminate and amplify it, and predictive analytics to identify the most effective points of social leverage to exploit to create division and unrest.

The ability to spot and deter AI-powered disinformation campaigns requires active critical-thinking skills from security teams, beyond the purely technical skills used to monitor networks and analyse collected data.

Disinformation operates in both a technical and a psychological way, which is why security leaders need to incorporate the following into their risk management programs:

Harness the power of AI

Investigate how your own defenses could benefit from the data collection, aggregation and mining possibilities offered by AI. Just as a would-be attacker begins with reconnaissance, so too can the defender.

Ongoing monitoring of the information space surrounding your organisation and industry could serve as a highly effective early warning system.
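
As a purely illustrative sketch of such an early-warning monitor, the following Python snippet (assuming the feedparser library; the feed URLs and watch terms are hypothetical placeholders) scans public feeds for mentions of an organisation and flags items for analyst review. In practice, the flagged items would feed richer AI-driven classification and alerting rather than simple keyword matching.

```python
# Illustrative early-warning monitor: scan public RSS/news feeds for mentions
# of the organisation and flag matching items for analyst review.
# The feed URLs and watch terms below are hypothetical placeholders.
import feedparser

FEEDS = [
    "https://example.com/industry-news.rss",
    "https://example.com/security-headlines.rss",
]
WATCH_TERMS = ["examplecorp", "example corp", "examplecorp breach"]

def scan_feeds():
    alerts = []
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            # Combine title and summary, then check for any watched term.
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            if any(term in text for term in WATCH_TERMS):
                alerts.append({"feed": url,
                               "title": entry.get("title", ""),
                               "link": entry.get("link", "")})
    return alerts

if __name__ == "__main__":
    for alert in scan_feeds():
        print(f"[REVIEW] {alert['title']} -> {alert['link']}")
```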

Empower employee mindsets

Most employees should be aware of the processes and regulations they need to follow, but attackers like to use social engineering, pretexting and “positional authority” to persuade them to operate outside their normal constraints.

Because employees generally want to do what is best for their company and to please their bosses at the same time, being asked to do something questionable can put them in a genuine bind.

Rather than rewarding successful shortcuts, security leaders and executives need to create a mindset of accountability in their employees, one that questions unclear data or instructions and acts as the first line of defense against disinformation.

Employees need to have the power and confidence to say “no” to anyone asking them to go outside the process, without fear of repercussion, even if they are talking to the CEO.

False news scenario

Part of disinformation’s effectiveness comes from its “shock factor.” The (false) news can be so critical, and the danger can seem so imminent, that it causes people to react in uncoordinated ways unless they have prepared for that exact situation in advance.

This is where it can be incredibly helpful to do “pre-bunking” of the type of disinformation your company would most likely be targeted with.

This will psychologically pre-position your employees to expect certain anomalies and to be mentally prepared to take the appropriate next steps once they have determined whether the threat is real or fake.

Coordinate incident response plans

Cyberattacks and breaches are already chaotic enough to analyse and mitigate. Uncoordinated efforts to respond to active threats, on top of that chaos, can leave one’s head spinning and result in mistakes or gaps in security responses.

Before letting it reach that point, security leaders should initiate conversations across IT, OT, PR, Marketing and other internal teams to make sure they know how to collaborate effectively when disinformation is discovered.

A simple example of this could be incorporating disinformation exercises into tabletop discussions or periodic team training sessions.

  • Rik Ferguson is the Vice President of Security Intelligence at Forescout.
