Large language models (LLMs) have developed rapidly in recent years and are becoming an integral part of our everyday lives through applications like ChatGPT. An article recently published in Nature Human Behaviour explains the opportunities and risks that arise from the use of LLMs for our ability to collectively deliberate, make decisions, and solve problems.
Led by researchers from Copenhagen Business School and the Max Planck Institute for Human Development in Berlin, the interdisciplinary team of 28 scientists provides recommendations for researchers and policymakers to ensure LLMs are developed to complement rather than detract from human collective intelligence.
What do you do if you don’t know a term like LLM? You probably quickly google it or ask your team. We use the knowledge of groups, known as collective intelligence, as a matter of course in everyday life.
By combining individual skills and knowledge, groups can achieve outcomes that exceed what any single member, even an expert, could accomplish alone. This collective intelligence drives the success of all kinds of groups, from small workplace teams to massive online communities like Wikipedia, and even societies at large.
LLMs are artificial intelligence (AI) systems that analyze and generate text using large datasets and deep learning techniques. The new article explains how LLMs can enhance collective intelligence and discusses their potential impact on teams and society.
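To make this concrete, here is a minimal sketch of how such a text-generating model is typically used in code. It assumes the open-source Hugging Face transformers library and the small GPT-2 model purely for illustration; the article itself does not prescribe any particular toolkit or model.

```python
# A minimal sketch of what an LLM does in practice: continue a text prompt.
# Assumes the Hugging Face `transformers` library and the small open "gpt2"
# model (illustrative choices, not taken from the article).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Collective intelligence is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```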
“As large language models increasingly shape the information and decision-making landscape, it’s crucial to strike a balance between harnessing their potential and safeguarding against risks. Our article details ways in which human collective intelligence can be enhanced by LLMs, and the various harms that are also possible,” says Ralph Hertwig, co-author of the article and Director at the Max Planck Institute for Human Development, Berlin.
Among the potential benefits identified by the researchers is that LLMs can significantly increase accessibility in collective processes. They break down barriers through translation services and writing assistance, for example, allowing people from different backgrounds to participate equally in discussions.
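As a rough illustration of the translation side of that accessibility benefit, the sketch below shows how a contribution written in German could be rendered into English before entering a discussion. It assumes the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-de-en model; the article does not endorse any particular translation service.

```python
# A minimal sketch of machine translation as an accessibility aid:
# a German-language contribution is translated so it can join an
# English-language discussion. Model choice is an assumption made
# here for illustration only.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

contribution = "Ich stimme dem Vorschlag zu, aber wir brauchen mehr Daten."
translated = translator(contribution)

print(translated[0]["translation_text"])
```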
Furthermore, LLMs can accelerate idea generation or support opinion-forming processes by, for example, bringing helpful information into discussions, summarizing different opinions, and finding consensus.
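One way to picture the opinion-forming support described above is automatic summarization of divergent viewpoints into a single digest a group can then discuss. The sketch below assumes the Hugging Face transformers library and the public facebook/bart-large-cnn summarization model; the article does not name a specific tool for this.

```python
# A minimal sketch of condensing several divergent opinions into one
# neutral digest. Library and model are illustrative assumptions,
# not tools prescribed by the article.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

opinions = [
    "We should adopt the new tool immediately; it saves hours every week.",
    "Adopting it now is risky because nobody has audited how it handles our data.",
    "A limited pilot with a few volunteers would let us test both claims.",
]

# Concatenate the individual contributions and let the model produce a
# short digest that surfaces the main points of agreement and disagreement.
digest = summarizer(" ".join(opinions), max_length=60, min_length=15)

print(digest[0]["summary_text"])
```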
Yet the use of LLMs also carries significant risks. For example, they could undermine people's motivation to contribute to collective knowledge commons like Wikipedia and Stack Overflow. If users increasingly rely on proprietary models, the openness and diversity of the knowledge landscape may be endangered. Another issue is the risk of false consensus and pluralistic ignorance, in which people mistakenly believe that most others accept a norm.
“Since LLMs learn from information available online, there is a risk that minority viewpoints are unrepresented in LLM-generated responses. This can create a false sense of agreement and marginalize some perspectives,” points out Jason Burton, lead author of the study and assistant professor at Copenhagen Business School and associate research scientist at the MPIB.
“The value of this article is that it demonstrates why we need to think proactively about how LLMs are changing the online information environment and, in turn, our collective intelligence—for better and worse,” summarizes co-author Joshua Becker, assistant professor at University College London.
The authors call for greater transparency in creating LLMs, including disclosure of training data sources, and suggest that LLM developers should be subject to external audits and monitoring. This would allow for a better understanding of how LLMs are actually being developed and help mitigate adverse developments.
In addition, the article offers compact information boxes on topics related to LLMs, including the role of collective intelligence in the training of LLMs. Here, the authors reflect on the role of humans in developing LLMs, including how to address goals such as diverse representation.
Two research-focused information boxes outline how LLMs can be used to simulate human collective intelligence and identify open research questions, such as how to avoid the homogenization of knowledge and how credit and accountability should be apportioned when collective outcomes are co-created with LLMs.
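The simulation idea can be illustrated in miniature: treat repeated, independently sampled LLM answers as a "crowd" and aggregate them, for instance by majority vote. The sketch below assumes the Hugging Face transformers library and the small GPT-2 model purely for illustration; actual studies would use far more capable models and richer aggregation rules than this.

```python
# A minimal sketch of simulating a "crowd" with one LLM: sample several
# independent answers to the same question and aggregate by majority vote.
# Library, model, and prompt are illustrative assumptions only.
from collections import Counter

from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)  # make the sampled "crowd" reproducible

question = "Answer yes or no: is Wikipedia an example of collective intelligence? Answer:"

votes = []
for _ in range(5):  # five independent "agents"
    answer = generator(
        question, max_new_tokens=3, do_sample=True, return_full_text=False
    )[0]["generated_text"].strip().lower()
    votes.append("yes" if answer.startswith("yes") else "no")

# Aggregate the individual answers exactly as a simple crowd would.
print(Counter(votes).most_common(1)[0])
```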
More information:
Jason W. Burton et al., How large language models can reshape collective intelligence, Nature Human Behaviour (2024). DOI: 10.1038/s41562-024-01959-9. www.nature.com/articles/s41562-024-01959-9
Max Planck Society
How can we make the best possible use of large language models for a smarter, more inclusive society? (2024, September 20). Retrieved 23 September 2024 from https://techxplore.com/news/2024-09-large-language-smarter-inclusive-society.html