AI can help humanitarians gain crucial insights to better monitor and anticipate risks, such as a conflict outbreak or escalation. But deploying systems in this context is not without risks for those affected, a new study warns.
Humanitarian organizations have been increasingly using digital technologies, and the COVID-19 pandemic has accelerated this trend.
AI-supported disaster mapping was used in Mozambique to speed up emergency response, and the World Bank rolled out AI systems to predict food crises across twenty-one countries.
But the study warns some uses of AI may expose people to additional harm and present significant risks for the protection of their rights.
The study, published in the “Research Handbook on Warfare and Artificial Intelligence,” is by Professor Ana Beduschi, from the University of Exeter Law School.
Professor Beduschi said, “AI technologies have the potential to further expand the toolkit of humanitarian missions in their preparedness, response, and recovery. But safeguards must be put in place to ensure that AI systems used to support the work of humanitarians are not transformed into tools of exclusion of populations in need of assistance. Safeguards concerning the respect and protection of data privacy should also be put in place.
“The humanitarian imperative of ‘do no harm’ should be paramount to all deployment of AI systems in situations of conflict and crisis.”
The study says humanitarian organizations designing AI systems should ensure data protection by design and by default, to minimize risks of harm—whether they are legally obliged to do so or not. They should also use data protection impact assessments (DPIAs) to understand the potential negative impacts of these technologies.
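To make "data protection by design and by default" concrete, the minimal Python sketch below shows one way an ingestion pipeline could minimize and pseudonymize data before anything is stored. It is purely illustrative: the field names, the salted-hash scheme, and the record structure are assumptions, not taken from the study.

```python
import hashlib
import os
from dataclasses import dataclass

# Illustrative only: field names and the pseudonymization scheme are
# assumptions, not drawn from the study.

# Salt kept outside the dataset, so pseudonyms cannot be reversed by
# anyone holding the data alone.
SALT = os.environ.get("PSEUDONYM_SALT", "change-me")

@dataclass(frozen=True)
class BeneficiaryRecord:
    """Minimized record: only the fields needed for aid delivery."""
    pseudonym: str       # stable pseudonym, never the person's name
    district: str        # coarse location, not GPS coordinates
    household_size: int

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:16]

def ingest(raw: dict) -> BeneficiaryRecord:
    """Default path drops everything except the minimized fields."""
    return BeneficiaryRecord(
        pseudonym=pseudonymize(raw["national_id"]),
        district=raw["district"],
        household_size=int(raw["household_size"]),
    )

record = ingest({"national_id": "A123456", "name": "Jane Doe",
                 "district": "Sofala", "household_size": 5,
                 "phone": "+258-000000"})
print(record)  # name and phone never enter storage
```

The point of the "by default" half of the principle is visible here: sensitive fields are excluded unless a deliberate design decision adds them, rather than collected unless someone remembers to strip them.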
Grievance mechanisms should also be established so that people can challenge decisions, whether automated or made by humans with the support of AI systems, that have adversely affected them.
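A grievance mechanism presupposes that decisions are recorded in a contestable form in the first place. The sketch below shows one hypothetical shape for such a record and an append-only log; the fields are assumptions about what an appeal might need, not the study's specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical decision log: the fields are assumptions about what a
# contestable record might need, not taken from the study.

@dataclass
class DecisionRecord:
    subject_pseudonym: str   # who the decision affects
    outcome: str             # e.g. "assistance_denied"
    automated: bool          # fully automated, or human with AI support
    model_version: str       # which system produced the recommendation
    rationale: str           # human-readable grounds for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class DecisionLog:
    """Append-only log so affected people can later challenge a decision."""

    def __init__(self):
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def find(self, subject_pseudonym: str) -> list[DecisionRecord]:
        """Retrieve every decision about one person, for review on appeal."""
        return [r for r in self._records
                if r.subject_pseudonym == subject_pseudonym]

log = DecisionLog()
log.record(DecisionRecord("9f2a...", "assistance_denied",
                          automated=False, model_version="risk-model-0.3",
                          rationale="Flagged as outside target district"))
print(log.find("9f2a..."))
```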
Professor Beduschi added, “AI systems can analyze large amounts of multidimensional data at increasingly fast speeds, identify patterns in the data, and predict future behavior. That can help organizations gain crucial insights to better monitor and anticipate risks, such as a conflict outbreak or escalation.
“Yet, deploying AI systems in the humanitarian context is not without risks for the affected populations. Issues include the poor quality of the data used to train AI algorithms, the existence of algorithmic bias, the lack of transparency about AI decision-making, and the pervading concerns about the respect and protection of data privacy.
“It is crucial that humanitarians abide by the humanitarian imperative of ‘do no harm’ when deciding whether to deploy AI to support their action. In many cases, the sensible solution would be not to rely on AI technologies, as these may cause additional harm to civilian populations.”
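To give a concrete sense of the capability Professor Beduschi describes, the sketch below trains a classifier on multidimensional indicators to score escalation risk. It is purely illustrative: the data is synthetic and the indicator columns and model choice are assumptions, not the systems discussed in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Purely illustrative: synthetic data and made-up indicator columns,
# not the World Bank systems or the study's subject matter.
rng = np.random.default_rng(0)
n = 500
# Four hypothetical indicators, e.g. food prices, displacement,
# rainfall anomaly, security incidents.
X = rng.normal(size=(n, 4))
# Synthetic "escalation" label loosely tied to two of the indicators.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Probabilities, not verdicts: given the data-quality and bias issues
# raised above, scores like these should inform human judgment, not
# replace it.
print("held-out accuracy:", model.score(X_test, y_test))
print("risk score for one region:", model.predict_proba(X_test[:1])[0, 1])
```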
More information: Research Handbook on Warfare and Artificial Intelligence
Provided by University of Exeter
Citation: AI can support humanitarian organizations in armed conflict or crisis, but they should understand potential risks (2024, July 9), retrieved 9 July 2024 from https://techxplore.com/news/2024-07-ai-humanitarian-armed-conflict-crisis.html