Monday, April 29, 2024

Regulations to rev up responsible AI initiatives globally

Responsible AI is just three years from reaching early majority adoption due to accelerated AI adoption

Must Read

  • Responsible AI operationalises organisational responsibilities and practices that ensure positive and accountable AI development and utilisation.
  • More regulated industries, such as financial services, healthcare, technology and government, will remain the early adopters of responsible AI.
  • The development and use of responsible AI will be crucial not only for developers of AI products and services, but also for organisations that use AI tools.
  • Failure to comply will expose organisations to public ethical scrutiny, leading to significant financial, reputational and legal risks.
  • By 2026, Gartner predicts 50 per cent of governments worldwide will enforce the use of responsible AI through regulations, policies and data privacy requirements.

Responsible AI is just three years from reaching early majority adoption due to accelerated AI adoption, particularly GenAI, and growing attention to associated regulatory implications, an industry expert said.


“Responsible AI will impact virtually all applications of AI across industries. In the near term, more regulated industries, such as financial services, healthcare, technology and government, will remain the early adopters of responsible AI,” Anushree Verma, Director Analyst at Gartner, said.

However, she said that responsible AI will also play an important role in “less-regulated industries” by helping to build consumer trust, foster adoption and mitigate financial and legal risks.

Erecting geographic borders

By 2026, Gartner predicts 50 per cent of governments worldwide will enforce the use of responsible AI through regulations, policies and data privacy requirements.

Asked what organisations can do to implement responsible AI, Verma said that responsible AI regulations will erect geographic borders in the digital world, creating a web of competing regulations from different governments to protect nations and their populations from unethical or otherwise undesirable applications of AI and GenAI.

“This will constrain IT leaders’ ability to make the most of foreign AI and GenAI products throughout their organisations. These regulations will require AI developers to focus more on AI ethics, transparency and privacy through responsible AI usage across organisations.”

How to future-proof GenAI projects

Responsible AI, she said, is an umbrella term for the many aspects of making appropriate business and ethical choices when adopting AI in an organisation’s context.

Examples include being transparent about the use of AI, mitigating bias in algorithms, securing models against subversion and abuse, protecting the privacy of customer information, and ensuring regulatory compliance.

“Responsible AI operationalises organisational responsibilities and practices that ensure positive and accountable AI development and utilisation.

“The development and use of responsible AI will be crucial not only for developers of AI products and services, but also for organisations that use AI tools. Failure to comply will expose organisations to public ethical scrutiny, leading to significant financial, reputational and legal risks.”

Verma outlined several actions organisations can consider when it comes to future-proofing their GenAI projects.

  • Monitor and incorporate the evolving compliance requirements of responsible AI from different governments by developing a framework that maps the organisation’s GenAI portfolio of products and services to each nation’s AI regulatory requirements (see the sketch after this list).
  • Understand, implement and utilise responsible AI practices contextualised to the organisation. This can be done by defining a responsible AI curriculum and then establishing a structured approach to educate and create visibility across the organisation, engage stakeholders and identify the appropriate use cases and solutions for implementation.
  • Operationalise AI trust, risk and security management (AI TRiSM) in user-centric solutions by integrating responsible AI to accelerate adoption and improve user experience.
  • Ensure service providers are held accountable for responsible AI governance by enforcing contractual obligations, mitigating the risks that arise from unethical or noncompliant behaviour and from uncontrolled, unexplainable bias in AI solutions.
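To make the first of these actions concrete, the sketch below shows, in Python, one way such a portfolio-to-regulation mapping framework could be structured. The product names, jurisdiction codes and requirement labels are illustrative assumptions for the sketch, not Gartner guidance or actual regulatory text.

```python
# A minimal sketch of a framework that maps a GenAI portfolio to
# per-jurisdiction regulatory requirements and flags compliance gaps.
# All names and requirement labels below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class GenAIProduct:
    name: str
    jurisdictions: set[str]                          # markets where the product is offered
    controls: set[str] = field(default_factory=set)  # responsible-AI controls in place


# Hypothetical requirements per jurisdiction; a real framework would be
# maintained against each government's actual regulations and policies.
REGULATORY_REQUIREMENTS: dict[str, set[str]] = {
    "EU": {"transparency-disclosure", "bias-testing", "data-privacy"},
    "US": {"data-privacy"},
    "IN": {"data-privacy", "data-residency"},
}


def compliance_gaps(product: GenAIProduct) -> dict[str, set[str]]:
    """Return, per jurisdiction, the requirements the product does not yet meet."""
    gaps: dict[str, set[str]] = {}
    for region in product.jurisdictions:
        missing = REGULATORY_REQUIREMENTS.get(region, set()) - product.controls
        if missing:
            gaps[region] = missing
    return gaps


if __name__ == "__main__":
    chatbot = GenAIProduct(
        name="support-chatbot",
        jurisdictions={"EU", "IN"},
        controls={"data-privacy", "transparency-disclosure"},
    )
    # Prints e.g. {'EU': {'bias-testing'}, 'IN': {'data-residency'}}
    print(compliance_gaps(chatbot))
```

Kept as structured data rather than prose, such a map can be re-checked automatically as each government’s requirements evolve.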
