Monday, May 6, 2024

Researchers enhance the confidence levels of AI systems

The study by Bar-Ilan University's Gonda Multidisciplinary Brain Research Centre sets a new standard for AI performance and safety

Must Read

  • A new measure for distinguishing between high- and low-confidence AI decision-making can significantly boost the safety and reliability of autonomous vehicles and other applications.

Can deep learning architectures achieve greatly above-average confidence for a significant portion of inputs while maintaining overall average confidence?

Findings by a new Bar-Ilan University study provide an emphatic “YES” to this question, marking a significant leap forward in AI’s ability to discern and respond to varying levels of confidence in classification tasks.

By leveraging insights into the confidence levels of deep architectures, the research team has opened new avenues for real-world applications, ranging from autonomous vehicles to healthcare.

The study was published by a team of researchers led by Prof. Ido Kanter of Bar-Ilan University's Department of Physics and the Gonda (Goldschmied) Multidisciplinary Brain Research Centre.

“Understanding the confidence levels of AI systems allows us to develop applications that prioritise safety and reliability,” Ella Koresh, an undergraduate student and a contributor to the research, said.

Confidence gap

The typical aim of a classification task is to maximise the accuracy of the predicted label for a given input. This accuracy increases with confidence, defined as the maximal value of the output units; when accuracy equals confidence, the system is said to be calibrated.
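The idea can be sketched as follows: a classifier's raw outputs are converted into probabilities, and the largest one serves as the confidence of the prediction. This is a minimal illustration, not the study's actual implementation; the logit values are made up.

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs (logits) into probabilities that sum to 1."""
    e = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical raw outputs of a 4-class classifier for one input
logits = np.array([2.0, 0.5, 0.1, -1.0])
probs = softmax(logits)

confidence = probs.max()          # confidence = maximal output value
prediction = int(probs.argmax())  # predicted class label
```

Calibration then means that, among all inputs assigned a given confidence (say 0.9), roughly that fraction (90%) are classified correctly.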

In the study, several methods are proposed to enhance the accuracy of inputs with similar confidence, extending significantly beyond calibration. Using the first gap — the difference between the maximal and second-maximal output values — the accuracy of inputs with similar confidence is enhanced.

Extending the confidence, or the confidence gap, to its minimal value across a set of augmented versions of the same input further enhances the accuracy of inputs with similar confidence.
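The two quantities described above can be sketched in a few lines: the first gap is the difference between the two largest output values, and the augmentation step takes the minimum of the confidence (or gap) over several transformed copies of the same input. The outputs below are invented for illustration; the real study would obtain them from a trained network evaluated on actual augmentations.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def confidence_and_gap(probs):
    """Return (confidence, first gap) for one probability vector."""
    top2 = np.sort(probs)[-2:]      # two largest output values, ascending
    confidence = top2[1]
    gap = top2[1] - top2[0]         # first gap: maximal minus second-maximal
    return confidence, gap

# Hypothetical network outputs for one input and two augmented versions of it
outputs = [softmax(np.array(l)) for l in
           ([3.0, 1.0, 0.2], [2.5, 1.4, 0.1], [2.8, 0.9, 0.4])]

confs = [confidence_and_gap(p)[0] for p in outputs]
gaps = [confidence_and_gap(p)[1] for p in outputs]

min_confidence = min(confs)  # minimal confidence across augmentations
min_gap = min(gaps)          # minimal gap across augmentations
```

Taking the minimum is a conservative choice: an input is treated as high-confidence only if the network remains confident on every augmented view of it.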

For instance, in the context of autonomous vehicles, when confidence in identifying a road sign is exceptionally high, Koresh said the system can autonomously make decisions.

However, in scenarios where confidence levels are lower, she said the system prompts for human intervention, ensuring cautious and informed decision-making.
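The decision logic Koresh describes amounts to a simple confidence-thresholded router. The threshold value below is purely illustrative — the study does not specify one — and in practice it would be tuned to the safety requirements of the application.

```python
def route_decision(confidence, threshold=0.95):
    """Act autonomously only when confidence clears a safety threshold.

    The 0.95 default is an assumed, illustrative value, not taken
    from the study.
    """
    if confidence >= threshold:
        return "autonomous"   # e.g. the vehicle acts on the road sign itself
    return "human_review"     # low confidence: defer to a human operator

print(route_decision(0.99))  # -> autonomous
print(route_decision(0.60))  # -> human_review
```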

Enhancing the confidence levels of AI systems holds profound implications across diverse domains, from AI-based writing and image classification to critical decision-making processes in healthcare and autonomous vehicles.

By enabling AI systems to make more nuanced and reliable decisions when faced with uncertainty, this research sets a new standard for AI performance and safety.

