Designing and Deploying AI Tools to Support Humanitarian Practice:

A Practical Guide

Brought to you by UCL, Bold code, IRCAI, Kathmandu University, University of Cape Coast and Data Science Nigeria, with funding from the Foreign, Commonwealth & Development Office.

Artificial Intelligence (AI) has the potential to significantly enhance humanitarian responses across various domains, including disaster preparedness, resource allocation, and needs assessment.

While the technology offers transformative opportunities, it also raises ethical challenges, such as biased data leading to inequitable decisions.

Technology for humanitarian applications must uphold the principles of humanitarianism: Humanity, Impartiality, Neutrality and Independence.

In an era where data-driven tools promise to transform crisis response, the ECTO framework—Educate, Co-Create, Transfer, and Optimise—offers a holistic roadmap for humanitarians to integrate AI ethically and effectively.

Dive in to explore the distinctive role of each pillar...

1. EDUCATE

By investing in Educate, organisations equip staff with the literacy needed to spot biases, handle sensitive data responsibly, and advocate for ethical standards.

2. CO-CREATE

Co-Create ensures that local voices, frontline experts, and community members guide project design and implementation, preventing top-down technology imposition and fostering genuine trust.

3. TRANSFER

Through Transfer, the focus shifts to building robust local capacity and shared governance structures, ensuring that AI solutions thrive well beyond initial deployments.

4. OPTIMISE

Finally, Optimise weaves iterative audits, feedback loops, and transparent updates into the project’s DNA, keeping pace with evolving contexts and safeguarding beneficiary interests over the long term.

1. EDUCATE

1. Identify Capacity Gaps

To use AI effectively and responsibly in humanitarian settings, staff first need a clear understanding of what AI is, its limitations, and the ethical challenges it presents. It is not necessary to understand every technical detail—just as one doesn’t need to know how a car engine works to drive safely—but it is essential to know what it takes to develop and deploy AI tools fairly and responsibly.

Key Action: Identify Skill Gaps

  • Conduct a quick baseline survey or one-on-one interviews to gauge staff familiarity with AI.
  • Determine key requirements and appetite for learning.
  • Determine priority areas (e.g., data security, ethical review) where knowledge is lacking and most relevant to the organisation.

2. Understand Ethics

Humanitarian practitioners follow the Sphere Handbook and the Core Humanitarian Standard for guidance on accountability, protection, and community engagement. AI, on the other hand, is fundamentally about data and the algorithms that analyse that data for patterns and insights. Médecins Sans Frontières (MSF), for instance, has dealt with ethical challenges in conflict settings where sensitive patient geolocation data could compromise clinic locations if intercepted by armed groups. Recognising such ethical red flags is central to the education dimension of the framework.

Key Action: Link AI Ethics to Existing Codes and Standards

  • Organise short sessions mapping relevant AI principles (e.g., transparency, accountability) to established humanitarian guidelines (e.g., Sphere Handbook) specific to the project and context.
  • Provide simple guidance documents that show how AI ethics complement humanitarian principles.

3. Learn through scenarios

AI education in humanitarian contexts must be both practically applicable and ethically comprehensive. At a basic level, staff should understand how AI models work, what kinds of biases can creep into datasets, and why it is risky to over-rely on algorithmic predictions. They should also be able to spot potential dual-use scenarios, where humanitarian data might be co-opted for surveillance or conflict-related targeting.

Humanitarian organisations should develop decision-support checklists, covering everything from data security steps to community consent procedures. They can also maintain a library of case studies and worked examples to complement this education. Practical tools allow staff to transform theoretical knowledge into day-to-day practice, making them more confident in challenging or refining proposed AI interventions.

Key Action: Include Scenario-Based Exercises

  • Use simulations (e.g., an AI tool that predicts cholera outbreaks but also reveals sensitive population clusters) to show how mishandled data can lead to harm.
  • Discuss dual-use risks explicitly: how might a seemingly benign tool be repurposed in a conflict/emergency setting or misused?
  • Justify the use of AI for the given application.
  • Make a checklist of questions staff should ask before adopting AI (e.g., “Is informed consent feasible here?”); a minimal sketch of such a checklist follows this list.
  • Maintain a shared drive or internal wiki with case studies and best practices.
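
To make the checklist idea concrete, the sketch below shows one way a team might encode its pre-adoption questions as a small, shareable script rather than an ad-hoc document. The questions and function names are illustrative assumptions drawn from this guide, not a prescribed standard; adapt them to the project and context.

```python
# A minimal sketch of a pre-adoption checklist encoded as data, assuming the
# team wants a shared, versionable list. The questions below are illustrative
# examples drawn from this guide.
CHECKLIST = [
    "Is informed consent feasible here?",
    "Could outputs reveal sensitive population clusters if intercepted?",
    "Is AI justified for this application, or would a simpler method do?",
    "Could the tool be repurposed for surveillance or conflict-related targeting?",
]

def unresolved(answers):
    """Return the questions not yet answered 'yes'; an empty list means all cleared."""
    return [q for q in CHECKLIST if not answers.get(q, False)]

# Example: any open question blocks sign-off until it has been addressed.
for question in unresolved({"Is informed consent feasible here?": True}):
    print("Unresolved:", question)
```

Keeping such a checklist in the same shared drive or wiki as the case-study library makes it easy to update as new lessons emerge.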

4. Training Pathways

A few generalised workshops won’t be enough. Introductory courses can clarify AI fundamentals for field officers, while more advanced topics might suit regional data focal points. Ongoing support is equally important, as is cultural and linguistic relevance. Moreover, sustaining AI literacy requires it to become part of standard organisational policy. By giving AI training the same priority as security or safeguarding training, agencies signal that data responsibility is integral—not optional—in humanitarian settings.

Key Action: Build Progressive Training Paths

  • Offer three tiers: Basic (field teams), Intermediate (coordination), and Advanced (data specialists).
  • Schedule regular refreshers or informal “office hours” for real-time problem-solving.
  • Illustrate potential bias scenarios using culturally relevant examples (e.g., clan-based conflict, religious tensions).
  • Translate materials into local languages; provide offline modules where internet access is weak.
  • Develop an ethical review process against a set framework for AI projects. Mandate an ethics sign-off or certification process for any new AI pilot.

Education protects against the trap of “techno-solutionism,” where complex social and political challenges are hastily delegated to an algorithm. It also curbs underfunding: many organisations fail to allocate resources for thorough training, only to face larger problems when ill-informed staff deploy AI incorrectly. A well-structured education program brings these vulnerabilities to light early, allowing teams to anticipate misuse or bias and take preventive action.

2. CO-CREATE

1. Human Centred Design

AI tools and their outputs embed the assumptions, data, and design choices of their creators. Without local voices, biases and blind spots can go unnoticed, leading to tools that are inaccurate or even harmful to those they are intended to serve. Practitioners can draw on methods from human-centred design and value-sensitive design. These approaches emphasise iterative cycles of prototyping, user feedback, and redesign. By cycling through repeated testing and refinement, organisations minimise the risk of launching an AI tool that seems powerful on paper yet fails to address the community’s lived realities.

Key Action: Identify & Map Stakeholders

  • Include community representatives, frontline staff, local authorities, and especially marginalised subgroups (e.g., women, persons with disabilities, ethnic minorities).
  • Conduct field visits and contextual inquiries.
  • Document their potential roles, interests, and unique insights.
  • Host sessions where local stakeholders brainstorm AI use cases, share their concerns and prioritise community needs.
  • Use simple tools (sticky notes, posters) to collect feedback that can guide initial prototypes.
  • Map these individuals and organisations to understand connections and dynamics that affect the wider social landscape.

2. Continuous Feedback

Co-creation is an ongoing relationship, and feedback mechanisms should be built into project timelines and governance structures. One-off engagements risk leaving communities with half-finished solutions or “pilot projects” that vanish when donor funding ends. Involving communities in key decisions ensures that infrastructure constraints (e.g., low connectivity, language barriers) and local priorities guide technology adoption.

Key Action: Establish Regular Feedback Loops

  • Include multi-year strategies for joint monitoring, training updates, and iterative improvements. Seek donor support specifically earmarked for capacity-building and long-term local partnerships.
  • Clarify roles and responsibilities - who collects data, where it’s stored, who can delete or modify it, and how consent is obtained.
  • Create easy-to-use feedback channels (e.g., phone lines, instant messaging groups) for reporting issues or suggestions. Use existing frameworks (IASC Operational Guidance on Data Responsibility, DSEG) to ensure transparency and respect for local concerns.
  • Keep a living record of how and why tool selections are made, ensuring visibility for all partners. Translate key documents into local languages for accessible reference.
  • Encourage local AI experts and communities to take an active role in design, testing, and review.

3. Power Balance

Power imbalances can persist even when communities are formally invited to participate. If external actors retain control over data or final decision-making, local engagement remains superficial, and technology becomes a new form of colonialism. Drawing on guidelines such as the ICRC’s Handbook on Data Protection in Humanitarian Action ensures that vulnerable groups are not left powerless over the information collected about them. Effective co-creation sets up joint governance mechanisms where local voices hold real authority to shape or veto AI system changes.

Key Action: Create or Strengthen Local Oversight Committees

  • Include community members, local NGOs, and possibly municipal officials.
  • Give committees the authority to suggest significant modifications to data usage or model retraining.
  • Train external experts to listen actively and cede control where appropriate.
  • Ensure language interpretation is available, and that each stakeholder has equal space to speak.
  • If external consultants or agencies must leave, ensure local actors have the resources, contacts, and authority to manage the AI system independently.

Co-creation is not a cosmetic add-on; it’s a foundational approach and a humanitarian commitment to ensuring that tools are locally relevant, ethically grounded, and ultimately used safely through shared governance. By actively involving diverse stakeholders—especially marginalised groups—in everything from initial design to ongoing oversight, humanitarian organisations can build trust, reduce bias, and create solutions that truly serve local needs.

3. TRANSFER

1. Training

A well-designed AI tool will lose its relevance unless the people using it are trained to adapt, maintain, and troubleshoot it over time. In the humanitarian space, it’s all too common for external consultants or donor-funded projects to set up sophisticated systems, only to leave local teams with insufficient capacity or governance structures to manage them. Meaningful, lasting impact arises when local ownership is nurtured through sustained investment in training, mentorship, and authority-sharing, and through buy-in from local champions who adopt and adapt AI tools to local contexts for local benefit.

Key Action: Assess Local Capacity Early
  • Use a simple assessment tool to gauge technical skills, digital literacy, and leadership readiness among local staff, and pair it with training packages to close the gaps identified.
  • Identify institutional funding to support the transfer process.
  • Identify “champions” who can help lead capacity-building efforts and sustain momentum once external support ends.

2. Sustainability

Even the most advanced AI system is only as valuable as the team that maintains and refines it. In humanitarian settings, it’s common for donor-funded technology projects to be set up and then left in limbo once the project cycle closes. Transfer prevents this by embedding ongoing mentorship, incremental skill-building, and clear governance structures. Moreover, instead of launching a complex predictive model from the start, organisations might begin with simpler features—like data dashboards or alert systems—allowing local staff to gain confidence step by step. Each phase of system expansion can be accompanied by targeted training sessions and real-world practice, reinforcing new skills in context.

Key Action: Plan for Post-Deployment from Day One
  • Map out how and when external experts will gradually hand over key responsibilities (e.g., data cleaning, model retraining) to local teams.
  • Align this timeline with donor expectations, ensuring that capacity-building and exit strategies are officially recognised as deliverables.
  • Include a line of funding for local training and technology transfer.
  • Use each new feature rollout as a training milestone—staff gain familiarity with one module before tackling the next.
  • Schedule monthly one-on-one or small-group sessions to identify challenges, knowledge gaps, and opportunities for deeper training.
  • Encourage staff to share tips or shortcuts on what they’ve discovered.

3. Lean Mentorship

Classroom-based training has a limited impact if staff can’t apply what they learn in real-world scenarios. Transfer emphasises on-the-job mentorship, where data experts and local staff collaborate directly on AI tasks—cleaning data, interpreting outputs, and tweaking parameters. Peer-to-peer exchanges such as staff rotations can also accelerate learning.

Key Action: Establish a Mentoring Program
  • Pair each local staff member with an experienced “mentor” (internal or external) who offers daily or weekly guidance.
  • Define clear mentorship goals (e.g., by month’s end, staff can independently run data quality checks).
  • Organise site visits or online knowledge-sharing sessions where teams can discuss challenges and best practices.
  • Recognise success stories—like a local team that solved a tricky data pipeline issue—and invite them to present their methods.

4. Governance

Skill transfer alone doesn’t guarantee true local ownership if the power to make decisions remains with external stakeholders. Establishing a local governance committee that includes community representatives, municipal authorities, and frontline staff can ensure shared oversight. This committee can periodically review AI outputs, address concerns about bias or data misuse, and approve significant changes—like retraining models when contexts shift.

Key Action: Form a Multi-Stakeholder Board
  • Identify representatives from relevant groups (community leaders, local NGOs, government, etc.).
  • Give the board genuine authority to halt or modify AI deployments if they spot ethical or operational issues.
  • Document how decisions on data handling or new features will be made (e.g., majority vote, consensus).
  • Keep records of committee meetings and outcomes for accountability.

By transferring skills, decision-making power, and system adaptability to the people who know the local context best, organisations cultivate resilience, ownership, and long-term impact. Transfer cements the place of AI as a tool in local hands but requires additional funding and planning.

4. OPTIMISE

1. Improvement Protocols

In humanitarian AI, a “deploy and abandon” mindset can be particularly damaging: models degrade over time, data sources change, and previously unforeseen biases may emerge. Regular improvements throughout the lifetime of the project prevent reliance on outdated models, which can lead to incorrect targeting of aid, overlooked communities, and skewed decision-making.

Key Action: Schedule Regular “Health Checks”
  • Depending on programme length, set clear intervals (e.g., quarterly) to review AI performance, data validity, and user experiences.
  • Assign responsibilities for these audits so they don’t get neglected under emergency pressures.
  • Track metrics like prediction accuracy, user satisfaction, or coverage of different demographic groups; a simple monitoring sketch follows this list.
  • Investigate any sudden drops or disparities (e.g., a spike in complaints from a specific population).
  • Periodically simulate “what-if” situations (e.g., new conflict, sudden influx of refugees) to see if the AI system holds up or needs retraining.
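
The sketch below illustrates one way such a health check might look in practice: computing accuracy per demographic group from a batch of logged predictions and flagging large disparities for investigation. The file name, column names, and the five-percentage-point threshold are assumptions for illustration only, not part of any prescribed standard.

```python
# A minimal sketch of a per-group "health check", assuming a CSV of logged
# predictions with hypothetical columns: group, y_true, y_pred.
# File name, column names, and the 5-point threshold are illustrative assumptions.
import csv
from collections import defaultdict

def group_accuracy(rows):
    """Return prediction accuracy per demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        if row["y_true"] == row["y_pred"]:
            hits[row["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

with open("quarterly_predictions.csv", newline="") as f:
    scores = group_accuracy(csv.DictReader(f))

# Flag any group trailing the best-served group by more than 5 percentage
# points, so the audit team can investigate the disparity.
best = max(scores.values())
for group, acc in sorted(scores.items()):
    status = "REVIEW" if best - acc > 0.05 else "ok"
    print(f"{group}: accuracy={acc:.2%} [{status}]")
```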

2. Data Upgrade

Many humanitarian AI tools rely on datasets that quickly become obsolete. In general, humanitarian data can be quite poor in its disaggregation, regularity, accuracy, and reliability because of the difficulties of collecting data in crises. Routine data audits—carried out internally or by neutral third parties—help identify model drift, data biases, newly available data, or newly emerging vulnerabilities. Frontline staff and affected communities are often the first to notice if an AI tool starts producing problematic outputs. Easy-to-use feedback channels—in-app reporting features or dedicated email addresses—empower them to escalate issues quickly.

Key Action: Implement Version Control
  • Treat AI models like software releases: track changes over time, note when and why retraining occurred, and keep records of old versions for reference (a minimal sketch follows this list).
  • Document who authorised each change to maintain accountability.
  • Agree on specific triggers (e.g., model accuracy dipping below 80%, a new government policy affecting data use) that automatically prompt re-evaluation.
  • Especially where cultural or security constraints might deter open criticism, consider an anonymous system to gather honest feedback.
  • Publicise how feedback leads to real changes, building community trust.
  • Define a process for logging complaints, assigning them to relevant staff, and providing updates to the person or group that reported the issue.
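
As a concrete illustration, the sketch below shows how a team might keep a simple release log for its models and check an agreed metric trigger. The JSON file, field names, and the 80% accuracy floor are assumptions for illustration; the point is that every change is dated, explained, and attributed to an approver.

```python
# A minimal sketch of version control for model releases, assuming a JSON log
# of deployments. Field names, file paths, and the 80% accuracy floor are
# illustrative assumptions, not a prescribed format.
import json
from datetime import date

ACCURACY_FLOOR = 0.80  # agreed trigger: dipping below this prompts re-evaluation

def log_release(path, version, accuracy, reason, approved_by):
    """Append a release record so changes and their approvers stay traceable."""
    try:
        with open(path) as f:
            history = json.load(f)
    except FileNotFoundError:
        history = []
    history.append({
        "version": version,
        "date": date.today().isoformat(),
        "accuracy": accuracy,
        "reason": reason,            # why retraining or redeployment happened
        "approved_by": approved_by,  # who authorised the change
    })
    with open(path, "w") as f:
        json.dump(history, f, indent=2)

def needs_reevaluation(latest_accuracy):
    """Check the agreed metric trigger for re-evaluation."""
    return latest_accuracy < ACCURACY_FLOOR

log_release("model_releases.json", "v1.3", accuracy=0.84,
            reason="quarterly retrain on updated survey data",
            approved_by="local oversight committee")
print("Re-evaluation needed:", needs_reevaluation(0.84))
```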

3. Keep up with the Times

External factors—like new data protection laws or evolving conflict dynamics—can necessitate swift changes in AI protocols and programming. A well-structured Optimise phase ensures teams remain agile in responding to these shifts. This might include imposing stricter access controls, updating consent procedures, or even halting data collection temporarily to avoid ethical breaches.

Key Action: Maintain a Risk Register
  • Conduct regular contextual analysis to track changes in political, social, and economic dynamics. 
  • Update the register regularly with potential threats: political changes, data leaks, and regulatory updates (a simple register sketch follows this list).
  • Assign risk owners who monitor developments and propose countermeasures.
  • Schedule sessions with legal advisers or ethics committees whenever major operational or policy changes arise.
  • Align with frameworks like the IASC Operational Guidance on Data Responsibility to ensure compliance and best practice.
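
A risk register does not require specialised software. The sketch below shows one way to keep it as structured data so that every risk has a named owner and a scheduled review date; the fields and example entries are illustrative assumptions only.

```python
# A minimal sketch of a risk register as structured data, assuming each risk
# is tracked with an owner and a scheduled review date. Fields and entries
# are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    description: str
    category: str      # e.g., political, regulatory, data protection
    owner: str         # who monitors this risk and proposes countermeasures
    next_review: date

REGISTER = [
    Risk("New data protection law under discussion", "regulatory",
         "legal focal point", date(2025, 9, 1)),
    Risk("Sensitive geolocation data could be exposed in a breach",
         "data protection", "data protection officer", date(2025, 7, 15)),
]

def due_for_review(register, today=None):
    """List risks whose scheduled review date has arrived or passed."""
    today = today or date.today()
    return [r for r in register if r.next_review <= today]

for risk in due_for_review(REGISTER):
    print(f"Review due: {risk.description} (owner: {risk.owner})")
```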

4. Peer Learning

No organisation works alone in a humanitarian crisis. Sharing lessons—failures included—across agencies accelerates sector-wide learning. Platforms and networks such as ALNAP, the CDAC Network, and the HNPW conferences facilitate exchanges on best practices, newly discovered pitfalls, and emerging tools. This cross-pollination fuels collective improvement in how AI is applied to humanitarian challenges.

Key Action: Participate in Networking and Knowledge Sharing Events

  • Build new collaborations with peer organisations and networks to learn and share ways to use tools.
  • Produce joint “lessons learned” briefs to inform the wider humanitarian community.
  • Invite local and international partners to virtual roundtables or webinars to discuss successes and setbacks.
  • Capture actionable ideas and feed them back into your own Optimise processes.

Optimise is the final (and ongoing, funding-contingent) step that ensures humanitarian AI tools remain fit for purpose in unpredictable environments. Optimise keeps AI systems agile and community-focused, ensuring they continue to uphold humanitarian values—fairness, impartiality, and respect for human dignity—even as conditions on the ground change.

What next?

For more information and guidance on how to implement ECTO, read the full brief, Designing and Deploying AI Tools to Support Humanitarian Practice: A Practical Guide.

Acknowledgements

This project was commissioned and supported by the UK Humanitarian Innovation Hub (UKHIH) and funded by UK International Development. The UKHIH is a humanitarian initiative funded by the Foreign, Commonwealth and Development Office (FCDO). As a UK-based humanitarian initiative, hosted by Elrha, UKHIH leverages expertise from the UK and across the globe to improve international humanitarian action, connecting the people equipped to bring about systemic changes that will strengthen and support humanitarian response.
