From tailored Netflix recommendations to personalized Facebook feeds, artificial intelligence (AI) adeptly serves content that matches our preferences and past behaviors. But while a restaurant tip or two is handy, how comfortable would you be if AI algorithms were in charge of choosing your medical specialist or your company's next hire?
Now, a new study from the University of South Australia shows that most people are more likely to trust AI in situations where the stakes are low, such as music suggestions, but less likely to trust AI in high-stakes situations, such as medical decisions.
However, those with poor statistical literacy or little familiarity with AI were just as likely to trust algorithms for trivial choices as they were for critical decisions.
The study is published in the journal Frontiers in Artificial Intelligence.
Assessing responses from nearly 2,000 participants across 20 countries, researchers found that statistical literacy affects trust differently. People who understand that AI algorithms work through pattern-based predictions (but also carry risks and biases) were more skeptical of AI in high-stakes situations, but less so in low-stakes situations.
They also found that older people and men were generally more cautious of algorithms, as were people in highly industrialized nations like Japan, the US, and the UK.
Understanding how and when people trust AI-algorithms is essential, particularly as society continues to introduce and adopt machine-learning technologies. AI adoption rates have increased dramatically, with 72% of organizations now using AI in their business.
Lead author and human and artificial cognition expert Dr. Fernando Marmolejo-Ramos says the speed at which smart technologies are being used to outsource decisions is outpacing our understanding of how to successfully integrate them into society.
“Algorithms are becoming increasingly influential in our lives, impacting everything from minor choices about music or food, to major decisions about finances, health care, and even justice,” Dr. Marmolejo-Ramos says.
“But the use of algorithms to help make decisions implies that there should be some confidence in their reliability. That’s why it’s so important to understand what influences people’s trust in algorithmic decision-making. Our research found that in low-stakes scenarios, such as restaurant recommendations or music selection, people with higher levels of statistical literacy were more likely to trust algorithms.
“Yet, when the stakes were high, for things like health or employment, the opposite was true; those with better statistical understanding were less likely to place their faith in algorithms.”
UniSA’s Dr. Florence Gabriel says there should be a concentrated effort to promote statistical and AI literacy among the general population so that people can better judge when to trust algorithmic decisions.
“An AI-generated algorithm is only as good as the data and coding that it’s based on,” Dr. Gabriel says. “We only need to look at the recent banning of DeepSeek to grasp how algorithms can produce biased or risky outputs depending on the content they were built upon.
“On the flip side, when an algorithm has been developed through a trusted and transparent source, such as the custom-built EdChat chatbot for South Australian schools, it’s more easily trusted. Learning these distinctions is important. People need to know more about how algorithms work, and we need to find ways to deliver this in clear, simple ways that are relevant to the user’s needs and concerns.
“People care about what the algorithm does and how it affects them. We need clear, jargon-free explanations that align with the user’s concerns and context. That way we can help people to responsibly engage with AI.”
More information:
Fernando Marmolejo-Ramos et al, Factors influencing trust in algorithmic decision-making: an indirect scenario-based experiment, Frontiers in Artificial Intelligence (2025). DOI: 10.3389/frai.2024.1465605
Citation:
How far would you trust AI to make important decisions? Research suggests statistical literacy shapes trust (2025, February 17), retrieved 18 February 2025