Unlike “traditional” AI that relies on predetermined rules and patterns, generative AI is able to produce novel content—like text, video, images, and music. In a sense, it can think outside the box in a way older AI could not. Its implications are profound and sprawling, with the potential to reshape virtually every branch of society.
Conversations about AI often turn on whether it will be a net positive or negative for society, but our new research, conducted as part of an international team, suggests that acknowledging AI’s apparent paradoxes can help us develop a clearer picture of its risks and potential benefits. We focused on four major areas of society: information, work, education and health care.
Work
Digital technologies have a history of skewed benefits. More educated workers benefit while less-educated workers are displaced through automation—a trend known as “skill-biased technological change.”
By contrast, generative AI promises to enhance rather than replace human capabilities, potentially reversing this adverse trend. Studies have shown that AI tools like chat assistants and programming aids can significantly boost productivity and job satisfaction, especially for less-skilled workers.
Nonetheless, uneven access to AI technologies could worsen existing inequalities as those lacking necessary digital infrastructure or skills get left behind. For example, generative AI is unlikely to have much direct impact on the global south in the near future, due to insufficient investment in the prerequisite digital infrastructure and skills.
School
Generative AI can enhance personal support and adaptability in learning. Chatbot tutors, for instance, are set to transform educational settings by providing real-time, personalized instruction and support. This technology can realize the dream of dynamic, skill-adaptive teaching methods that directly respond to student needs without constant teacher intervention.
Yet, it must be carefully implemented to avoid perpetuating or introducing biases, not only in terms of the information that is fed into AIs but also how they are used. For instance, a study revealed that female students report using ChatGPT less frequently than their male counterparts. This disparity in technology usage could not only have immediate effects on academic achievement, but also contribute to a future gender gap in the workforce.
Health care
Generative AI could help doctors make better choices. But it could also drive them to make worse ones.
Generative AI could augment human capacities in the practice of medicine by guiding practitioners during diagnosis, screening, prognosis and triaging. It could reduce workloads, thereby making medical care more accessible and affordable. One study found that the integration of human and AI judgment led to superior performance compared to either alone, showing just how well humans and AI can work together.
That said, the diagnostic performance of some expert physicians may not be improved by AI. Another study focusing on radiology found that AI can in fact cause incorrect diagnoses in situations that otherwise would have been correctly assessed. This highlights the need for balanced integration that supplements rather than replaces humans.
Disinformation
Will generative AI exacerbate the spread of misinformation or reduce it? Generative AI promises personalized online content, potentially enhancing and customizing the user experience. It can also broaden access to content—for instance, through instant language translation or by making material easier for people with disabilities to use.
However, it also has the potential to be a powerful tool for “surveillance capitalism.” AI may collect massive amounts of personal data that can then be exploited for corporate gain, including by leveraging people’s biases or vulnerabilities.
We’re already seeing the spread of misinformation through advanced and personalized “deepfakes,” and we may soon see AI being used to micro-target voters with persuasive fake content that could significantly affect elections.
Yet, there is also hope that AI can help with these problems. One study found that entering into a dialogue with generative AI significantly reduces belief in conspiracy theories among those who hold them. The AI appears able to answer believers’ complex questions about potential conspiracies in a way that no human can.
How should governments respond?
When we looked at how the EU, UK and US were attempting to build regulatory frameworks around these issues, our main observation was that they are falling into the trap of overlooking the potential for AI to aggravate socioeconomic inequalities.
Policy-making should balance AI innovation with social equity and consumer protection. Future regulatory improvements should include creating equitable tax structures, empowering workers, giving consumers control over their information, supporting human-complementary AI research, and implementing robust measures against AI-generated misinformation.
We find ourselves at a critical historical crossroads, where today’s decisions will have global consequences for generations to come. It’s an exciting yet daunting moment to be alive, charged with heavy responsibilities. Each of us plays a crucial role as an architect of the future. We can all help steer the course towards the positive use of what could be humanity’s greatest innovation, or its worst.
This article is republished from The Conversation under a Creative Commons license. Read the original article.