Monday, December 23, 2024

Researchers uncover a scaling law for object recognition in AI

Reducing the latency of AI systems holds profound implications for real-time decision-making processes

  • The scaling law describes how a network's identification error rate grows with the number of objects it must recognise
  • The research provides a crucial foundation for the development of more robust and responsive AI technologies.

The real world presents a myriad of challenges for artificial intelligence (AI) systems, and one of the most pressing is the need for machines to recognise, and quickly learn, objects they have not encountered before.

An AI system that is robust to changes and can quickly adapt to a dynamic reality would be invaluable, whether it’s a robot recognising new products at a grocery store or a self-driving car interacting with novel road signs or objects in its environment.

Researchers at Bar-Ilan University have made a significant breakthrough in understanding how artificial neural networks handle an increasing number of categories for identification.

They have uncovered a new universal scaling law, which describes how the identification error rate of such networks grows with the number of objects they are required to recognise.

Importantly, this scaling law was found to govern both shallow and deep neural network architectures, indicating that shallow networks, which more closely resemble the brain, can mimic the functionality of deeper ones.

The finding suggests that a shallow, wide architecture can perform just as well as a deep, narrow one, much like a wide, low-rise building can house the same number of inhabitants as a narrow skyscraper.
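To make the building analogy concrete, the snippet below is a minimal sketch, not the architectures used in the study: it builds one shallow, wide classifier and one deep, narrow classifier to a roughly matched parameter budget. The input size, layer widths and class count are illustrative assumptions.

```python
# A minimal sketch (not the architectures from the study): a shallow, wide
# classifier and a deep, narrow one built to a roughly matched parameter
# budget. Input size, widths and class count are illustrative assumptions.
import torch.nn as nn

num_classes = 100        # hypothetical number of recognisable objects
input_dim = 3 * 32 * 32  # e.g. a flattened 32x32 RGB image

shallow_wide = nn.Sequential(          # one wide hidden layer
    nn.Flatten(),
    nn.Linear(input_dim, 2048), nn.ReLU(),
    nn.Linear(2048, num_classes),
)

deep_narrow = nn.Sequential(           # four narrower hidden layers
    nn.Flatten(),
    nn.Linear(input_dim, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, num_classes),
)

count = lambda model: sum(p.numel() for p in model.parameters())
print(f"shallow + wide parameters: {count(shallow_wide):,}")
print(f"deep + narrow parameters:  {count(deep_narrow):,}")
```

As written, the two models land at roughly 6.5 million and 6.4 million parameters respectively: the same overall "floor space", spread across one storey or four.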

Practical implications

Ella Koresh, an undergraduate student and a key contributor to the research, highlights the practical implications of this discovery.

“This is a significant advancement because one of the most critical aspects of deep learning is latency – the time it takes for the network to process and identify an object. As networks become deeper, latency increases, leading to delays in the model’s response, while shallow brain-inspired networks have lower latency and faster response.”
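One rough way to check the latency point is simply to time a forward pass. The sketch below uses toy fully connected models on the CPU; all sizes are illustrative assumptions, and real latency depends heavily on hardware and architecture, so it shows the measurement, not a benchmark of the networks studied in the paper.

```python
# A rough latency check on toy models: time a single forward pass through a
# shallow (one hidden layer) and a deep (eight hidden layers) fully connected
# network. All sizes are illustrative assumptions; this is CPU wall-clock
# timing only.
import time
import torch
import torch.nn as nn

def make_mlp(input_dim, width, depth, num_classes):
    """Fully connected classifier with `depth` hidden layers of size `width`."""
    layers = [nn.Flatten(), nn.Linear(input_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, num_classes))
    return nn.Sequential(*layers)

input_dim, num_classes = 3 * 32 * 32, 100          # illustrative sizes
shallow = make_mlp(input_dim, width=2048, depth=1, num_classes=num_classes)
deep = make_mlp(input_dim, width=1024, depth=8, num_classes=num_classes)

x = torch.randn(1, input_dim)                      # one "image" to classify

def mean_latency(model, x, repeats=100):
    """Average seconds per forward pass, after a warm-up call."""
    with torch.no_grad():
        model(x)                                   # warm-up
        start = time.perf_counter()
        for _ in range(repeats):
            model(x)
    return (time.perf_counter() - start) / repeats

print(f"shallow, 1 hidden layer: {mean_latency(shallow, x) * 1e3:.3f} ms")
print(f"deep, 8 hidden layers:   {mean_latency(deep, x) * 1e3:.3f} ms")
```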

Reducing the latency of AI systems holds profound implications for real-time decision-making processes. In scenarios where the number of labels is dynamic, such as in self-driving cars or robots navigating novel environments, the scaling law introduced in this research becomes vital.

By understanding the relationship between the number of recognisable objects and the identification error rate, developers can design more efficient and responsive AI systems capable of adapting to changing conditions.
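The article does not spell out the law's exact functional form, so the sketch below simply assumes a power-law relationship, error(K) ≈ a·K^b, as a stand-in. It fits the two constants to a handful of placeholder measurements in log-log space and extrapolates to a larger label count; the numbers are invented for illustration and are not results from the paper.

```python
# The article does not give the law's exact functional form, so this sketch
# assumes a power law, error(K) ~ a * K**b, as a stand-in. It fits the two
# constants to a few placeholder measurements (invented for illustration,
# not results from the paper) and extrapolates to a larger label count.
import numpy as np

# (number of classes K, measured identification error rate): placeholders
measurements = np.array([
    (10, 0.02),
    (50, 0.06),
    (100, 0.09),
    (500, 0.20),
])
K, err = measurements[:, 0], measurements[:, 1]

# A power law is a straight line in log-log space: log(err) = log(a) + b*log(K).
b, log_a = np.polyfit(np.log(K), np.log(err), deg=1)
a = np.exp(log_a)

def predicted_error(num_classes):
    """Extrapolate the fitted curve to a label count not yet measured."""
    return a * num_classes ** b

print(f"fitted exponent b = {b:.2f}")
print(f"predicted error rate at K = 1000: {predicted_error(1000):.1%}")
```

A developer planning to grow a system's label set could use a fit of this kind to anticipate the error rate, and the extra capacity needed, before retraining.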

The study, published in the journal Physica A, was led by Prof. Ido Kanter from Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Centre.
