Advances in artificial intelligence (AI) are disrupting many aspects of modern life, and the news industry is no exception. In a year with a record-breaking number of elections worldwide, there has been considerable soul-searching about the potential effect of so-called “deepfakes,” and other synthetic content, on democracies. There have also been further disruptions to the business models and trust underpinning independent journalism.
Most audiences are just starting to form opinions about AI and news, but in this year’s Digital News Report survey, which we produced at the University of Oxford’s Reuters Institute for the Study of Journalism, we included questions about the subject in 28 markets, backed up with in-depth interviews in the UK, US and Mexico.
Our findings reveal a high level of ambivalence about the use of these technologies. They also offer insights to publishers looking to implement the technologies without further eroding trust in news, which has fallen in many countries in recent years.
It is important to keep in mind that awareness of AI is still relatively low, with around half of our sample (49% globally and 56% in the UK) having read little or nothing about it. However, among those who are better informed, concerns about the accuracy of information and the potential for misinformation top the list.
Manipulated images and videos, for example around the war in Gaza, are increasingly common on social media and are already causing confusion. As one male participant said, “I have seen many examples before, and they can sometimes be very good. Thankfully, they are still pretty easy to detect but within five years they will be indistinguishable.”
Some participants felt widespread use of generative AI technologies—those that can produce content for users in text, images and video—would probably make identifying misinformation harder, which is especially worrying when it comes to important subjects, such as politics and elections.
Across 47 countries, 59% say they are worried about being able to tell what is real and fake on the internet, up three percentage points on last year. Others took a more optimistic view, noting that these technologies could be used to provide more relevant and useful content.
Use of AI by the news industry
The news industry is turning to AI for two reasons. First, publishers hope that automating behind-the-scenes processes such as transcription, copyediting and layout will reduce costs. Second, AI technologies could help personalize the content itself, making it more appealing for audiences.
In the last year, we have seen media companies deploy a range of AI tools, with varying degrees of human oversight, from AI-generated summaries and illustrations to stories written by AI and even AI-generated newsreaders.
How do audiences feel about all of this? Across 28 markets, our survey respondents were largely uncomfortable with news content created mostly by AI with only some human oversight. By contrast, there is less discomfort when AI is used to assist (human) journalists, for example, in transcribing interviews or summarizing materials for research.
Here, respondents are broadly more comfortable than uncomfortable. However, we see country-level differences, possibly linked to cues people are getting from the media. British press coverage of AI, for example, has been characterized as largely negative and sensationalist, while US media narratives are shaped by the leading role of US companies and the opportunities for jobs and growth.
Comfort with AI is also closely related to the importance and seriousness of the subject being discussed. People say they feel less comfortable with AI-generated news on topics such as politics and crime, and more comfortable with sports or entertainment news, subjects where mistakes tend to have less serious consequences.
“Chatbots really shouldn’t be used for more important news like war or politics as the potential misinformation could be the reason someone votes for a candidate over another one,” a 20-year-old man in the UK told us.
Our research also shows that people who tend to trust the news in general are more likely than those who don't to be comfortable with uses of AI where humans (journalists) remain in control. This is because those who trust the news also tend to have greater faith in publishers' ability to use AI responsibly.
Interviews we conducted show a similar pattern at the level of specific news outlets: People who trust specific news organizations, especially those they describe as the most reputable, also tend to be more comfortable with them using AI.
On the flip side, audiences who are already skeptical of or cynical about news organizations may see their trust further eroded by the adoption of these technologies.
As one woman from the US put it: “If any news organization was caught using fake images or videos in any way, it should be held accountable and I’d lose trust with them, even if they were being transparent that the content was created with AI.”
Carefully thinking about when disclosure is necessary and how to communicate it, especially in the early stages, when AI is still foreign to many people, will be a crucial element for maintaining trust. This is particularly so when AI is used to create new content with which audiences will come into direct contact. Our interviews tell us this is what audiences are most suspicious of.
Overall, we are still in the early stages of journalists' use of AI, which makes this a time of maximum risk for news organizations. Our data shows that audiences remain deeply ambivalent about the use of these technologies, which means publishers need to be extremely cautious about where and how they deploy them.
Wider concerns about synthetic content flooding online platforms mean trusted brands that use the technologies responsibly could be rewarded. But get things wrong and that trust could be easily lost.
This article is republished from The Conversation under a Creative Commons license. Read the original article.