Sunday, April 27, 2025

People prefer AI over humans in redistributive decisions

Study shows public may be more willing to accept AI decision-makers even in areas with significant moral implications

  • Study challenges the conventional notion that human decision-makers are favoured in decisions involving a ‘moral’ component, such as fairness
  • Public may increasingly support algorithmic decision-makers if the technology can demonstrate consistency and adhere to established fairness principles

As technology becomes increasingly integrated into various aspects of public and private decision-making, understanding public perception and ensuring the transparency and accountability of algorithms is crucial for their acceptance and effectiveness.

A study conducted by researchers from the University of Portsmouth and the Max Planck Institute for Innovation and Competition has revealed a surprising finding: people prefer Artificial Intelligence (AI) over humans when it comes to redistributive decisions.

The study utilised an online decision experiment to examine the preference for human or AI decision-makers in a scenario where the earnings of two people could be redistributed.

Importance of algorithm transparency

Contrary to previous findings, more than 60 per cent of the participants chose AI over a human to make the decision that would determine their earnings. The preference for algorithmic decision-making was observed regardless of the potential for discrimination.

However, despite the preference for algorithms, participants were less satisfied with the decisions made by AI and found them less fair than those made by humans.

Participants’ subjective ratings of the decisions were primarily driven by their own material interests and fairness ideals. While they were willing to tolerate reasonable deviations from their ideals, they reacted strongly and negatively to redistribution decisions that were not consistent with established fairness principles.

Dr. Wolfgang Luhan, the corresponding author of the study and an Associate Professor of Behavioural Economics at the University of Portsmouth, emphasised the importance of algorithm transparency and accountability, especially in moral decision-making contexts.

“Many companies and public bodies are already using AI for various decisions, and the public may increasingly support algorithmic decision-makers if the technology can demonstrate consistency and adhere to established fairness principles.”

The findings of this study challenge the conventional notion that human decision-makers are favoured in decisions involving a ‘moral’ component, such as fairness.

Instead, they suggest that with improvements in algorithm consistency and the ability to explain how decisions are made, the public may be more willing to accept algorithmic decision-makers even in areas with significant moral implications.

This could potentially lead to improved acceptance of policies and managerial choices, such as pay rises or bonus payments.
