Machine learning algorithms significantly outperform human judgment in detecting lying during high-stakes strategic interactions, according to new research from the University of California San Diego’s Rady School of Management.
The study could have major implications for curbing the spread of misinformation, as machine learning could bolster efforts to reduce false or misleading content on major platforms like YouTube, TikTok and Instagram.
The study, to be published in Management Science and available as a working paper, focused on participants’ ability to detect lying on the popular British TV show “Golden Balls,” which aired from 2007 to 2010. It finds that while humans struggle to predict when contestants are lying, algorithms do so far more reliably.
“We find that there are certain ‘tells’ when a person is being deceptive,” said Marta Serra-Garcia, lead author of the study and associate professor of behavioral economics at the UC San Diego Rady School of Management.
“For example, if someone is happier, they are telling the truth and there are other visual, verbal, vocal cues that we as humans all share when we are being honest and telling the truth. Algorithms work better at uncovering these correlations.”
The algorithms used in the research correctly predicted contestant behavior 74% of the time, compared with the 51%–53% accuracy achieved by the more than 600 humans who participated in the study.
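The article does not describe the authors' actual model or features, but purely as an illustration of the approach, the sketch below trains a simple classifier on synthetic stand-in features and compares its cross-validated accuracy with the human baseline reported in the study. Every feature, number and modeling choice here is a hypothetical placeholder except the 51%–53% human figure.

```python
# Illustrative sketch only -- NOT the authors' pipeline. The features,
# data, and model are hypothetical stand-ins for whatever the paper used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_videos = 200

# Hypothetical per-video cues a model might extract
# (e.g., smile intensity, speech rate, pitch variance).
X = rng.normal(size=(n_videos, 3))

# Hypothetical ground truth: 1 = contestant lied, 0 = told the truth,
# made weakly correlated with the cues so there is something to learn.
y = (X @ np.array([0.8, -0.5, 0.3])
     + rng.normal(scale=1.0, size=n_videos) > 0).astype(int)

model = LogisticRegression()
algo_accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

print(f"algorithm (synthetic data): {algo_accuracy:.0%}")
print("human baseline (reported):  ~51-53%")
```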
In addition to comparing machine learning and human abilities to detect deception, the study also tested how algorithms could be leveraged to help people better tell apart those who lie and those who tell the truth.
In one experiment, two different groups of study participants watched the same set of “Golden Balls” episodes. One group had the videos flagged by machine learning before they viewed them. The flags indicated that the algorithm predicted the contestant was most likely lying.
The other group watched the same videos and were told only after viewing that the algorithm had flagged them for deception. Participants were much more likely to trust the machine learning's insights, and were better at predicting lying, if they saw the flag before watching the video.
“Timing is crucial when it comes to the adoption of algorithmic advice,” said Serra-Garcia. “Our findings show that participants are far more likely to rely on algorithmic insights when these are presented early in the decision-making process. This has particular importance for online platforms like YouTube and TikTok, which can use algorithms to flag potentially deceptive content.”
Co-author Uri Gneezy, professor of behavioral economics at the Rady School added, “Our study suggests that these online platforms could improve the effectiveness of their flagging systems by presenting algorithmic warnings before users engage with the content, rather than after, which could lead to misinformation spreading less rapidly.”
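As a concrete, entirely hypothetical sketch of that recommendation, a platform could check a model's output and surface the warning before playback rather than after; the Video type, deception_score field and threshold below are illustrative assumptions, not any platform's real interface.

```python
# Hypothetical sketch of "warn before engagement" -- not a real platform API.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    deception_score: float  # assumed model output in [0, 1]

FLAG_THRESHOLD = 0.7  # assumed cutoff for surfacing a warning

def present(video: Video) -> None:
    # Timing matters: the study found viewers weigh the flag far more
    # heavily when it precedes the content than when it follows it.
    if video.deception_score >= FLAG_THRESHOLD:
        print("[WARNING] This video may contain deceptive claims.")
    print(f"Now playing: {video.title}")

present(Video(title="Split or steal?", deception_score=0.82))
```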
Some of these platforms already use algorithms to detect suspicious content, but in many cases a video must first be reported by a user and then investigated by staff, who can flag the content or take it down. These processes can be drawn out, as employees at tech companies like TikTok become overburdened with investigations.
The authors conclude, “Our study shows how technology can enhance human decision making and it’s an example of how humans can interact with AI when AI can be helpful. We hope the findings can help organizations and platforms better design and deploy machine learning tools, especially in situations where accurate decision-making is critical.”
More information: Improving Human Deception Detection Using Algorithmic Feedback, Management Science (forthcoming; available as a working paper).
Provided by University of California – San Diego.
Citation: How AI can help stop the spread of misinformation (2024, September 17), retrieved 19 September 2024 from https://techxplore.com/news/2024-09-ai-misinformation.html