Have you ever wondered how AI chatbots make decisions? This question arose in a classroom at the Cornell Tech campus and prompted a new study from the Cornell SC Johnson College of Business that examines how decision-making processes differ between humans and artificial intelligence.
In the working paper, “Do AI Chatbots Provide an Outside View?” Stephen Shu, professor of practice at the Charles H. Dyson School of Applied Economics and Management, part of the SC Johnson College, and his co-authors explore the decision-making characteristics of AI chatbots.
The research is published in the SSRN Electronic Journal.
“Surprisingly, our study revealed that AI chatbots, despite their computational prowess, exhibit decision-making patterns that are neither purely human nor entirely rational,” Shu said. “They possess what we term as an ‘inside view’ akin to humans, characterized by falling prey to cognitive biases such as the conjunction fallacy, overconfidence, and confirmation biases.”
The conjunction fallacy is a common reasoning error in which people judge a specific scenario to be more probable than a more general one that contains it. Confirmation bias is the tendency to favor information that supports what one already believes.
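In formal terms, the conjunction fallacy violates a basic law of probability: a conjunction can never be more probable than either of its parts. The classic "Linda problem" from Tversky and Kahneman illustrates this (a stock example, not necessarily one of the paper's test items):

```latex
P(A \cap B) \le P(A),
\quad \text{e.g. } P(\text{bank teller} \wedge \text{feminist}) \le P(\text{bank teller})
```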
AI chatbots also offer what Shu termed an "outside view," complementing human decision-making in certain respects. They excel at considering base rates, are less susceptible to biases stemming from limited memory recall, and show insensitivity to the availability and endowment effect biases. For example, whereas humans tend to value an item more once they own it (the endowment effect), AI chatbots do not appear to exhibit this bias.
The study covered several AI platforms, including ChatGPT, Google Bard, Bing Chat AI, ChatGLM Pro, and Ernie Bot. The researchers evaluated their decision-making against 17 principles drawn from behavioral economics, clarifying where AI judgment aligns with, and diverges from, human judgment.
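The paper's exact elicitation protocol is not reproduced here, but probes of this kind are straightforward to automate. Below is a minimal, hypothetical sketch using the OpenAI Python client to pose one classic base-rate item (the "cab problem"); the model name, prompt wording, and single-question design are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: posing a classic base-rate problem to a chatbot.
# Not the study's actual protocol; the model name and prompt wording are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Tversky & Kahneman's "cab problem": the Bayesian answer is about 41%,
# but people famously neglect the 15% base rate and answer near 80%.
PROMPT = (
    "85% of a city's cabs are Green and 15% are Blue. A cab was involved "
    "in a hit-and-run at night. A witness says the cab was Blue, and "
    "witnesses are correct 80% of the time. What is the probability the "
    "cab was Blue? Reply with a single percentage."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; the study tested ChatGPT, Bard, etc.
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)
```

Running such a probe many times per platform, across items keyed to each of the 17 principles, would yield the kind of bias profile the study reports.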
One of the most intriguing findings was that AI chatbot decision-making doesn't mirror human behavior as closely as the researchers expected. Despite being trained on vast datasets, AI chatbots exhibit decision-making tendencies that sometimes defy both typical human behavior and rational logic. For example, whereas humans often become risk-seeking when facing losses (e.g., taking a gamble in hopes of avoiding a loss), AI chatbots more often seek certainty in the loss domain, accepting a sure loss rather than gambling to escape it.
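To see why this is a matter of risk attitude rather than arithmetic, consider a sure-loss-versus-gamble item in the Kahneman-Tversky style (illustrative numbers, not necessarily the paper's stimuli): the two options have identical expected value, so a systematic preference either way reveals risk-seeking or risk-aversion.

```python
# Expected-value check: a sure loss of $750 vs. a 75% chance of losing
# $1,000 (25% chance of losing nothing). Illustrative numbers only.
sure_loss = -750
gamble_ev = 0.75 * -1000 + 0.25 * 0   # = -750
assert sure_loss == gamble_ev
# Because the options are equal in expectation, humans who take the gamble
# are risk-seeking in losses; a chatbot taking the sure loss is risk-averse.
```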
For business professionals leveraging AI chatbots, awareness of these decision-making dynamics is paramount. While AI can offer valuable insights and assistance, it's essential to maintain a level of skepticism. Recognizing situations where AI provides an "inside view" can help mitigate the risks of overconfidence and confirmation biases. Conversely, embracing the "outside view" capabilities of AI can enhance decision-making by drawing on its strengths in considering base rates and avoiding biases rooted in human memory limitations.
As AI continues to permeate various aspects of society, understanding its decision-making dynamics becomes increasingly crucial. Shu’s research sheds light on AI’s capabilities, limitations, and potential to complement human decision-making processes.
Business managers who rely on AI or advocate for its use may find this research of interest, as may media outlets covering technology and behavioral science, such as Harvard Business Review (HBR).
“Exploring the unknown territory of AI decision-making has brought together diverse perspectives, paving the way for a deeper understanding of this rapidly evolving technology and its implications for society,” Shu said. “As we continue on this journey, we aim to foster responsible and informed usage of AI, ensuring that it serves as a tool for progress and empowerment in the hands of decision-makers.”
Shu’s co-authors include Sreyoshi Das, assistant professor of practice in the Cornell Ann S. Bowers College of Computing and Information Science, and independent researchers Daniela Hernandez Correal, Omar Fayaz, Junhui Lei, Prem Kumar Mullai Manavalan, Xinguo Peng, Nicholas Sakaguchi, and Sneha Suresh.
More information: Stephen Shu et al., "Do AI Chatbots Provide an Outside View?", SSRN Electronic Journal (2024). DOI: 10.2139/ssrn.4874756
Cornell University. "AI chatbots exhibit unique decision-making biases, study finds." TechXplore, 30 July 2024. https://techxplore.com/news/2024-07-decision-mystery-ai-chatbots.html