Thursday, November 28, 2024

Regulations to rev up responsible AI initiatives globally

Responsible AI is just three years from reaching early majority adoption due to accelerated AI adoption

Must Read

  • Responsible AI operationalises organisational responsibilities and practices that ensure positive and accountable AI development and utilisation.
  • More regulated industries, such as financial services, healthcare, technology and government, will remain the early adopters of responsible AI.
  • The development and use of responsible AI will be crucial not only for developers of AI products and services, but also for organisations that use AI tools.
  • Failure to comply will expose organisations to public ethical scrutiny, leading to significant financial, reputational and legal risks.
  • By 2026, Gartner predicts 50 per cent of governments worldwide will enforce use of responsible AI through regulations, policies and the need for data privacy.

Responsible AI is just three years from reaching early majority adoption due to accelerated AI adoption, particularly GenAI, and growing attention to associated regulatory implications, an industry expert said.

Anushree Verma, Director Analyst at Gartner.

“Responsible AI will impact virtually all applications of AI across industries. In the near term, more regulated industries, such as financial services, healthcare, technology and government, will remain the early adopters of responsible AI,” Anushree Verma, Director Analyst at Gartner, said.

However, she said that responsible AI will also play an important role in “less-regulated industries” by helping to build consumer trust, foster adoption, and mitigate financial and legal risks.

Erecting geographic borders

By 2026, Gartner predicts 50 per cent of governments worldwide will enforce use of responsible AI through regulations, policies and the need for data privacy.

Asked what organisations can do to implement responsible AI, Verma said that responsible AI regulations will erect geographic borders in the digital world and create a web of competing regulations from different governments seeking to protect their nations and populations from unethical or otherwise undesirable applications of AI and GenAI.

“This will constrain IT leaders’ ability to make full use of foreign AI and GenAI products throughout their organisations. These regulations will require AI developers to focus more on AI ethics, transparency and privacy through responsible AI usage across organisations.”

How to future-proof GenAI projects

Responsible AI, she said, is an umbrella term for making appropriate business and ethical choices when adopting AI in an organisation’s context.

Examples include being transparent about the use of AI, mitigating bias in algorithms, securing models against subversion and abuse, protecting the privacy of customer information, and ensuring regulatory compliance.

“Responsible AI operationalises organisational responsibilities and practices that ensure positive and accountable AI development and utilisation.

“The development and use of responsible AI will be crucial not only for developers of AI products and services, but also for organisations that use AI tools. Failure to comply will expose organisations to public ethical scrutiny, leading to significant financial, reputational and legal risks.”

Verma outlined several actions organisations can consider when it comes to future-proofing their GenAI projects.

  • Monitor and incorporate the evolving compliance requirements of responsible AI from different governments by developing a framework that maps the organisation’s GenAI portfolio of products and services to the different nations’ AI regulatory requirements.
  • Understand, implement and utilise responsible AI practices contextualised to the organisation. This can be done by determining a curriculum for responsible AI and then establishing a structured approach to educate and create visibility across the organisation, engage stakeholders and identify the appropriate use cases and solutions for implementation.
  • Operationalise AI trust, risk and security management (AI TRiSM) in user-centric solutions by integrating responsible AI to accelerate adoption and improve user experience.
  • Ensure service provider accountability for responsible AI governance by enforcing contractual obligations, and mitigate the impact of risks arising from unethical or noncompliant behaviours or outcomes caused by uncontrolled and unexplainable biases in AI solutions.
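The first action above, mapping an organisation's GenAI portfolio against per-country regulatory requirements, can be sketched as a simple data structure plus a gap check. This is a minimal, hypothetical illustration: the jurisdiction names, requirement labels and the `compliance_gaps` helper are assumptions for the example, not part of Gartner's framework.

```python
# Illustrative compliance-mapping sketch (all names and requirements
# are hypothetical, not drawn from any actual regulation).

# Each jurisdiction maps to the set of responsible-AI controls it requires.
JURISDICTION_REQUIREMENTS = {
    "EU": {"transparency_report", "bias_audit", "privacy_impact_assessment"},
    "US": {"bias_audit"},
    "UAE": {"privacy_impact_assessment"},
}

# Each GenAI product records where it operates and which controls it has.
PRODUCT_PORTFOLIO = {
    "chat-assistant": {
        "jurisdictions": ["EU", "US"],
        "controls": {"bias_audit", "transparency_report"},
    },
    "doc-summariser": {
        "jurisdictions": ["UAE"],
        "controls": set(),
    },
}

def compliance_gaps(portfolio, requirements):
    """Return, per product, the required controls not yet implemented."""
    gaps = {}
    for product, info in portfolio.items():
        required = set()
        for jurisdiction in info["jurisdictions"]:
            # Union of every requirement from every market the product serves.
            required |= requirements.get(jurisdiction, set())
        missing = required - info["controls"]
        if missing:
            gaps[product] = missing
    return gaps

print(compliance_gaps(PRODUCT_PORTFOLIO, JURISDICTION_REQUIREMENTS))
```

Keeping the requirement sets per jurisdiction means that when a regulation changes, only one entry is updated and every affected product's gap list follows automatically.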
