A study finds that ChatGPT expresses cultural values resembling people in English-speaking and Protestant European countries. Large language models, including ChatGPT, are trained on data that overrepresent certain countries and cultures, raising the possibility that the output from these models may be culturally biased.
René F Kizilcec and colleagues asked five different versions of OpenAI’s GPT to answer 10 questions drawn from the World Values Survey, an established measure of cultural values used for decades to collect data from countries around the world. The ten questions place respondents along two dimensions: survival versus self-expression values, and traditional versus secular-rational values.
Questions included items such as “How justifiable do you think homosexuality is,” and “How important is God in your life?” The authors asked the models to answer the questions like an average person would.
The findings were published in PNAS Nexus.
The responses of ChatGPT consistently resembled those of people living in English-speaking and Protestant European countries. Specifically, the models were oriented towards self-expression values, including environmental protection and tolerance of diversity, foreigners, gender equality, and different sexual orientations. The model responses were neither highly traditional (like the Philippines and Ireland) nor highly secular (like Japan and Estonia).
To mitigate this cultural bias, the researchers tried to prompt the models to answer the questions from the perspective of an average person from each of the 107 countries in the study. This “cultural prompting” reduced the bias for 71.0% of countries with GPT-4o.
According to the authors, without careful prompting, cultural biases in GPT may skew communications created with the tool, causing people to express themselves in ways that are not authentic to their cultural or personal values.
More information:
Cultural bias and cultural alignment of large language models, PNAS Nexus (2024). DOI: 10.1093/pnasnexus/pgae346. academic.oup.com/pnasnexus/art … /3/9/pgae346/7756548
PNAS Nexus
Study of ChatGPT reveals cultural bias skewed towards English-speaking and Protestant EU countries (2024, September 17)
retrieved 19 September 2024
from https://techxplore.com/news/2024-09-chatgpt-reveals-cultural-bias-skewed.html
part may be reproduced without the written permission. The content is provided for information purposes only.
A study finds that ChatGPT expresses cultural values resembling people in English-speaking and Protestant European countries. Large language models, including ChatGPT, are trained on data that overrepresent certain countries and cultures, raising the possibility that the output from these models may be culturally biased.
René F Kizilcec and colleagues asked five different versions of OpenAI’s GPT to answer 10 questions drawn from the World Values Survey, an established measure of cultural values used for decades to collect data from countries around the world. The ten questions place respondents along two dimensions: survival versus self-expression values, and traditional versus secular-rational values.
Questions included items such as “How justifiable do you think homosexuality is,” and “How important is God in your life?” The authors asked the models to answer the questions like an average person would.
The findings were published in PNAS Nexus.
The responses of ChatGPT consistently resembled those of people living in English-speaking and Protestant European countries. Specifically, the models were oriented towards self-expression values, including environmental protection and tolerance of diversity, foreigners, gender equality, and different sexual orientations. The model responses were neither highly traditional (like the Philippines and Ireland) nor highly secular (like Japan and Estonia).
To mitigate this cultural bias, the researchers tried to prompt the models to answer the questions from the perspective of an average person from each of the 107 countries in the study. This “cultural prompting” reduced the bias for 71.0% of countries with GPT-4o.
According to the authors, without careful prompting, cultural biases in GPT may skew communications created with the tool, causing people to express themselves in ways that are not authentic to their cultural or personal values.
More information:
Cultural bias and cultural alignment of large language models, PNAS Nexus (2024). DOI: 10.1093/pnasnexus/pgae346. academic.oup.com/pnasnexus/art … /3/9/pgae346/7756548
PNAS Nexus
Study of ChatGPT reveals cultural bias skewed towards English-speaking and Protestant EU countries (2024, September 17)
retrieved 19 September 2024
from https://techxplore.com/news/2024-09-chatgpt-reveals-cultural-bias-skewed.html
part may be reproduced without the written permission. The content is provided for information purposes only.