
Unmasking the Perils of AI in Humanitarian Peacebuilding: Confronting Power Imbalances, Algorithmic Bias, and the Erosion of Agency
Marine Ragnet and Aiza Shahid Qureshi
April 17, 2025
Keywords: Power Imbalances, Algorithmic Bias, Erosion of Agency
In the realm of humanitarian peacebuilding, the rapid integration of artificial intelligence (AI) technologies presents a complex web of challenges and ethical dilemmas that demand our urgent attention. As AI systems become increasingly entrenched in humanitarian decision-making processes, we must confront the stark power imbalances, algorithmic biases, and erosion of agency that arise from their deployment. Without robust participatory governance frameworks and ethical safeguards, AI applications in peacebuilding and humanitarian operations risk exacerbating existing inequalities, perpetuating historical injustices, and undermining the very populations these interventions seek to support.
The Specter of Power Imbalances and the Digital Divide
At the heart of the danger surrounding AI in humanitarian peacebuilding lies the specter of power imbalances, a risk exacerbated by the digital divide. In many conflict-affected regions, unequal access to AI technologies and uneven levels of data literacy create a context in which vulnerable communities are excluded from the development and governance of the AI systems used in the humanitarian operations meant to serve them. Biometric data collection in asylum and aid processes is one example of how people are increasingly powerless over how their information is collected and used by AI systems. These technologies determine, for example, who will receive aid or asylum support, despite evidence of racial bias leading to inaccurate identification and causing people entitled to aid to be denied support. These systems deepen the vulnerability of people already facing legal and economic insecurity, harming those whom humanitarian operations seek to support.
The uneven distribution of power, resources, and knowledge amplifies existing disparities, concentrating authority among those who control AI technologies while excluding the communities these systems affect. This narrows the space to dispute inequities in AI system applications, making it more difficult for beneficiaries to contest any harms or rights violations they experience.
The global landscape of AI development itself is a stark reflection of these imbalances, with certain nation-states and tech corporations wielding significantly greater resources and expertise than others. This disparity can translate into geopolitical asymmetries within humanitarian peacebuilding initiatives, where actors with greater access to AI capabilities hold undue influence over decision-making processes. The concentration of AI expertise and computational resources among a few major tech companies, primarily based in the Global North, similarly raises the specter of “techno-colonialism,” where powerful actors impose their values and interests on communities in the Global South without adequately considering local contexts, needs, and values.
Case Study: World Food Programme’s SCOPE System in Uganda
The World Food Programme’s (WFP) implementation of the SCOPE system in Uganda offers a concrete example of the complexities surrounding AI and biometric technologies in humanitarian aid. SCOPE is a digital beneficiary and transfer management platform that uses biometric data for identity verification in aid distribution.
In 2018, the WFP deployed SCOPE in Uganda’s Bidi Bidi refugee settlement, one of the world’s largest refugee camps, hosting over 270,000 South Sudanese refugees (Latonero, 2019). The system aimed to improve aid distribution efficiency and reduce fraud. However, its implementation revealed several critical issues:
- Data Privacy and Consent: Many refugees reported feeling compelled to provide their biometric data to receive food aid, raising questions about coercion and consent in situations of extreme vulnerability (Molnar, 2019).
- Exclusion and Algorithmic Bias: Technical issues, including difficulties in capturing fingerprints from elderly individuals or those with hands worn from manual labor, led to some refugees being denied food assistance (The Engine Room & Oxfam, 2018).
- Lack of Transparency: The system's operation was opaque to many beneficiaries, who often did not understand how their data was being used or stored (Responsible Data, 2019).
- Power Imbalances: The implementation highlighted the power asymmetry between aid organizations and recipients, with refugees having little say in how the technology was deployed or managed (Weitzberg et al., 2021).
- Data Sharing Concerns: Despite WFP’s assurances, refugees and advocacy groups expressed concerns about potential data sharing with the Ugandan government or others, which could put vulnerable populations at risk (Responsible Data, 2019).
AI and biometric systems, even when implemented with good intentions, can reinforce existing power imbalances, potentially exclude vulnerable individuals, and raise significant ethical concerns in humanitarian contexts. There is an urgent need for careful consideration of local contexts, robust ethical frameworks, and meaningful participation of affected communities in the design and deployment of such technologies.
The Algorithmization of Agency and the Perils of Automated Decision-Making
As AI systems become more deeply embedded in humanitarian peacebuilding efforts, the algorithmization of agency emerges as a profound threat to the autonomy and self-determination of affected communities. In this context, “algorithmization of agency” refers to the process by which algorithmic systems increasingly make decisions that were previously made by humans, particularly in areas that directly impact the lives and futures of vulnerable populations. This shift towards automated decision-making in humanitarian contexts can include:
- Aid distribution algorithms that determine resource allocation
- Predictive models for identifying at-risk individuals or communities
- Automated systems for processing asylum claims or refugee status determinations
- AI-driven conflict early warning systems that influence intervention strategies
When algorithms automate these decision-making processes, certain populations are marginalized from shaping the applications that determine their own futures. The opaque nature of many AI systems, often referred to as "black boxes," further compounds this erosion of agency, making it challenging for affected individuals and communities to understand, challenge, or contest the decisions that impact their lives (Pasquale, 2015).
Moreover, the increasing reliance on AI in humanitarian contexts raises urgent questions about informed consent, transparency, and accountability. In many cases, beneficiaries are not fully informed about how their data is being collected, used, or shared, making it difficult for them to exercise meaningful choice or control over their personal information. The lack of transparency surrounding data-sharing practices between humanitarian organizations, governments, and private tech companies only heightens concerns over the potential misuse and weaponization of sensitive data.
Confronting Algorithmic Bias and Technological Harms
Beyond the erosion of agency and participation, the use of AI in humanitarian peacebuilding raises a host of ethical challenges related to algorithmic bias and technological harm. AI systems are not neutral; they can reflect and amplify the biases of the societies in which they are developed, leading to discriminatory outcomes in aid distribution, resource allocation, and the identification of vulnerable groups. The use of biased facial recognition systems in refugee camps, for example, has led to the wrongful denial of assistance to individuals with darker skin tones, underscoring the severe consequences of deploying AI without adequate safeguards and oversight.
Furthermore, the collection and use of sensitive personal data in AI systems pose significant privacy risks, particularly in conflict-affected regions where data breaches or unauthorized access could have severe consequences for individual safety and security. The creation of "smart cards" for Rohingya refugees in Bangladesh, which collected biometric data that was later shared with the Myanmar government, illustrates the dangerous potential for AI and data to be weaponized against vulnerable populations.
Charting the Path Forward: Participatory Governance, Ethical Safeguards, and a Commitment to Justice
To navigate the complex challenges posed by AI in humanitarian peacebuilding, we must urgently develop participatory governance models that prioritize the voices, needs, and agency of conflict-affected communities. This requires a fundamental shift in the way we approach AI development and deployment, moving away from top-down, technocentric models and towards inclusive, community-driven approaches that limit harmful impacts on individuals and communities.
In Yemen, AI technologies have been used in peace negotiations to support the monitoring of the demands, alliances, and roles of the various parties to the conflict and to build a more accurate record of the dynamic conflict environment. These tools were developed to support mediation processes, using natural language processing models and machine learning systems for knowledge management, information extraction, and the monitoring and evaluation of conflict developments (Arana-Catania et al., 2022).
The researchers identified methods for using machine learning tools to assist in deliberations. For example, they generated a targeted set of interventions by using information extraction tools to categorize party statements and identify latent issues in party dialogues. They also used these systems to measure distances between party positions on specific issues, which were difficult to identify without NLP tools given the volume of data, frequently evolving positions, and the rapidly changing conflict environment (a simplified illustration of this idea follows below). Throughout the process, the researchers mitigated some of the risks of AI systems in peacebuilding by advocating that findings from each model be triangulated with other data sources, rather than drawing conclusions from an AI system in isolation. Substituting AI findings for collaboration with local actors and mediation-centered approaches to peacebuilding would impose a narrow, essentializing view on a dynamic, complex conflict. Indeed, the findings from the NLP and ML systems carried margins of error due to limited data sources and at times produced misleading results when measuring the distances between party positions. But leveraging AI systems as a tool for mediation, rather than as a shortcut to ending violence and promoting reconciliation, allows actors to develop a granular analysis of complex, protracted conflicts and the coalitions participating in them. In today's more protracted, asymmetric wars, AI systems can better equip peacebuilders to identify key conflict areas and the relationships between various mediation priorities to guide structured negotiations.
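To make the idea of "measuring distances between party positions" concrete, the toy sketch below embeds a handful of hypothetical party statements with TF-IDF and compares them per issue using cosine similarity. This is not the pipeline used in the Yemen project, which relied on purpose-built information extraction and machine learning models; the party names, statements, and issue labels here are invented purely for illustration.

```python
# Illustrative sketch only: shows the general idea of quantifying how far apart
# two parties' stated positions are on a given issue. A real mediation-support
# system would use stronger contextual embeddings and curated, verified data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical statements, already categorized by (party, issue)
statements = {
    ("Party A", "ceasefire"): "We will accept a ceasefire only after port access is restored.",
    ("Party B", "ceasefire"): "An immediate, unconditional ceasefire must precede any other talks.",
    ("Party A", "prisoner exchange"): "Prisoner releases should follow an agreed verification process.",
    ("Party B", "prisoner exchange"): "All detainees must be released at once, without preconditions.",
}

# Embed every statement as a TF-IDF vector
keys = list(statements)
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([statements[k] for k in keys])

# For each issue, report the distance (1 - cosine similarity) between the two parties
issues = {issue for _, issue in keys}
for issue in sorted(issues):
    idx = [i for i, (_, iss) in enumerate(keys) if iss == issue]
    sim = cosine_similarity(vectors[idx[0]], vectors[idx[1]])[0, 0]
    print(f"{issue}: distance = {1 - sim:.2f}")
```

Even in this toy form, the limitations the researchers flagged are visible: the output depends entirely on which statements are collected and how they are categorized, which is why such measures need to be triangulated with other sources rather than treated as ground truth.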
These natural language processing models were introduced to the mediators with further context on their uses and limitations in an iterative process, where prototypes of the project were refined and then presented to the mediation team for feedback. The development of these powerful, context-sensitive tools strengthened the mediation team’s understanding of aspects of the conflict, and the participatory approach allowed them to retain oversight over the peacebuilding process. Problems arise when mediators lack data literacy skills, a co-creation methodological framework for using these tools, and an understanding of the applications of AI systems.
Participatory AI governance strategies should encompass a range of concrete measures including the establishment of multi-stakeholder advisory councils, the implementation of AI impact assessments in the field, and the development of clear accountability mechanisms. Investing in digital literacy and capacity-building programs is also crucial to enable meaningful participation from diverse stakeholders, particularly those from marginalized groups.
Moreover, we must commit to a comprehensive research agenda that incorporates ethical considerations, human rights frameworks, and robust monitoring and evaluation methodologies. This includes conducting participatory landscape analyses to understand local contexts and concerns, critically examining existing literature across disciplines, and developing collaborative frameworks that prioritize community voices and risk mitigation strategies.
As we seek to harness the potential of AI for humanitarian peacebuilding, we must grapple with uncomfortable truths about its dual-use capabilities: this technology can perpetuate harm and inequality, and has done so, even when deployed with good intentions. The path forward demands a steadfast commitment to justice, equity, and accountability from all stakeholders: humanitarian organizations, technology companies, policymakers, and affected communities themselves.
The stakes could not be higher. The lives and futures of countless individuals in conflict-affected regions hang in the balance, as the power of AI is wielded in ways that can either entrench inequality and injustice or uplift and empower the most vulnerable. It is up to us to chart a course towards a more just, inclusive, and peaceful world — one in which the transformative potential of AI is harnessed in service of human dignity, self-determination, and the greater good.
The choice is ours. Will we rise to the challenge of building participatory AI governance that confronts power imbalances, challenges algorithmic bias, and safeguards the agency and rights of conflict-affected communities? Or will we stand by as these technologies become yet another tool of exclusion and oppression? The future of humanitarian peacebuilding — and the lives of countless people around the world — depends on our answer. Let us seize this moment to unmask the perils of AI and chart a path towards a brighter, more just future for all.
References:
- Arana-Catania, M., et al. (2022). Supporting Peace Negotiations in the Yemen War through Machine Learning. https://arxiv.org/pdf/2207.11528
- Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
- Latonero, M. (2019). Stop Surveillance Humanitarianism. The New York Times, July 11, 2019.
- Molnar, P. (2019). New technologies in migration: Human rights impacts. Forced Migration Review, (61), 7-9.
- Responsible Data. (2019). Refugee data collection and the humanitarian principles. https://responsibledata.io/refugee-data-collection-humanitarian-principles/
- The Engine Room & Oxfam. (2018). Biometrics in the Humanitarian Sector.
- Weitzberg, K., Cheesman, M., Martin, A., & Schoemaker, E. (2021). Between surveillance and recognition: Rethinking digital identity in aid. Big Data & Society, 8(1).