Modern technologies have opened up possibilities to investigate the consequences of humanitarian crises faster. 510 Global, a data innovation team set up by the Netherlands Red Cross, has been looking into ways to use big data for humanitarian goals. It has developed a data-driven model to identify priority areas for humanitarian intervention after natural disasters more quickly. The effectiveness of this work was demonstrated by the actions taken during typhoon Haima.
This category 5 tropical superstorm hit the Philippines in October 2016, ravaging large parts of the country. Fourteen people died during the storm, and estimated damage to property was $76.9 million (Jalad, 2016). However, things could have been a lot worse. The typhoon had been anticipated for a while, and as soon as it hit the Philippines, humanitarian aid organizations received detailed information from 510 Global. Within hours they managed to map the areas hit by the typhoon, clearly showing the damage done and thereby helping the Philippine Red Cross to target priority zones (510 Global, 2016). Typhoons Haima (2016) and Nina (2016) in the Philippines (“Graduation Portal,” 2017) functioned as a test environment for 510 Global’s data model, and the outcomes showed that the data tool could potentially be used for different natural disasters in different countries. Applying such a data model in disaster-stricken areas would save time and money in the provision of aid.
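To illustrate what such a model produces, the priority-ranking step can be sketched minimally in Python. The scoring function, field names and weights below are hypothetical assumptions made for the sake of the example; they are not taken from 510 Global’s actual model.

```python
# Hypothetical sketch of a priority-ranking step: rank administrative areas
# by a simple composite of predicted damage and exposed population.
# All fields and weights are illustrative, not 510 Global's real method.

def rank_priority_areas(areas, top_n=3):
    """Return the top_n areas ordered by a composite priority score."""
    def score(area):
        # Weigh predicted damage (0-1) against the exposed population.
        return area["predicted_damage"] * area["population"]
    return sorted(areas, key=score, reverse=True)[:top_n]

areas = [
    {"name": "Area A", "predicted_damage": 0.9, "population": 10_000},
    {"name": "Area B", "predicted_damage": 0.4, "population": 50_000},
    {"name": "Area C", "predicted_damage": 0.8, "population": 5_000},
]

for area in rank_priority_areas(areas, top_n=2):
    print(area["name"])
```

Even a toy ranking like this shows why the output format matters: the list is only actionable if the receiving organization knows how the score was composed.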
However, for the model to be applied, additional research is needed into the decision-making process of humanitarian organizations. This is where this study comes in. Many studies have been done into the coordination structures and decision-making processes of corporate organizations, but these often did not include humanitarian organizations (HOs) or NGOs. A knowledge gap can be identified here when it comes to the way HOs organize their planning, decision-making and coordination processes. In this thesis, these processes will be researched, and historical cases in which data-driven decision making has been used will be analyzed. The main problem to focus on will be integrating a big-data model like that of 510 Global into a humanitarian organization.
It may be obvious that research into this topic carries high societal relevance, along with academic insights. Quoting 510 Global itself: “Smart use of big data will positively impact faster & more (cost) effective humanitarian aid” (510 Global, 2017). Finding a solution to the integration of a data model into HOs will not only create a valuable organizational resource, it will also have a direct impact on real-life humanitarian crises, with the potential of saving and improving the lives of many people. The next chapter presents a literature review of the state-of-the-art literature, followed by an introduction of the (sub-)research question(s). Research methods and tools will be discussed afterwards, and the proposal concludes with an outline of the expected time planning.
Literature research: four challenges for big data as a tool for humanitarian aid

Humanitarian big data strategies like the one introduced by 510 Global have a lot of potential, but they are not without challenges. Several obstacles stand in the way of the successful application of big data by humanitarian organizations. In the following literature research, existing studies are used to identify these obstacles and problems. This results in the identification of a knowledge gap in this area, from which the main research question for this thesis is derived.
Haak (2017) identifies three main challenges for the use of big data as a tool for humanitarian aid: firstly, the collection of reliable and representative data; secondly, the correct analysis and processing of the data; and thirdly, the successful integration of the data into organizational practices.
While the first two challenges mainly concern the domain of ICT, about which much has already been written (Raymond & Card, 2015), the third challenge has so far remained relatively untouched. For big data models to be of any use to the final humanitarian decision makers, it is essential that those models are integrated into the organizational structure (Vitoriano, Montero, & Ruan, 2013). A review of the available literature provided four reasons why this integration has not been successful so far. These reasons will be discussed in the following paragraphs.

1. Poor collaboration between parties involved

In the aftermath of large-scale disasters, many different actors are involved in providing aid to the stricken area and its inhabitants. Local actors (municipalities), national actors (governments, the army) and international actors (NGOs, the United Nations) all try to plan, manage and execute aid under conditions of high uncertainty and pressure (Ortuño et al., 2013; UN/ISDR & UN/OCHA, 2008; Van de Walle & Comes, 2014).
Historic cases have shown that collaboration between these actors during a disaster does not always go smoothly (Whipkey & Verity, 2015). Different actors bring their own information systems, logistics, management and decision processes, which prevents the parties from adopting a collaborative approach (Van de Walle & Comes, 2016). Cultural barriers between local organizations and foreign aid providers also impede progress (Twigg, 2015). Due to these collaboration issues, the provided big data cannot be used to its full potential.

2. Incompatible communicated data

The gathering of big data is a complex process that produces a highly technical model which may be incomprehensible to non-technicians. It is therefore crucial that the produced information is delivered to the humanitarian decision makers in the right format (Van de Walle & Comes, 2016). Unfortunately, this is not always the case.
Communication of data often happens top-down, on a technocratic and centralized level, without taking the decision makers’ needs into consideration (Cinnamon, Jones, & Adger, 2016; Van de Walle & Comes, 2014). Also, when multiple data providers are active simultaneously, they may send an overload of data. A continuous stream of surveys, questionnaires, reports and maps can create an “information disaster” (Van de Walle & Comes, 2014), wasting resources on information that cannot be fully processed (Read et al., 2016). Incompatibility of the data is therefore the second cause that prevents humanitarian groups from integrating the data into their organizations.

3. Lack of a general policy framework

A third reason why data has not yet been successfully integrated in practice is the lack of a general framework that states how big data should inform decision making and how and when big data should be used (Whipkey & Verity, 2015). Humanitarian organizations need a foundation that prescribes the policies, protocols and practices concerning the use of big data (Chan, Bateman, & Olafsson, 2016). Since humanitarian aid groups have a high fluctuation in staff members from crisis to crisis, practices cannot simply be remembered by staff members, but should be written down in a common toolkit that can be used by all organizations (Raymond & Al Achkar, 2016). Only with such a framework can the information shared by the data producers be used and integrated by the humanitarian actors.

4. Insufficient data responsibility

The final factor that should be taken into consideration regarding the unsuccessful integration of big data is the data responsibility of the humanitarian groups.
According to Al Achkar (2016), data responsibility encompasses the following: “the ability of organizations to be ready to responsibly and effectively deploy and manage data collection and analysis tools, techniques and strategies in a specific operational context before a disaster strikes”. In other words, the competencies and ethical standards to safely handle data, in order to achieve their mission: providing humanitarian aid (Chaumba & Van Geene, 2003). Many humanitarian organizations do not yet possess such data responsibility and will need training before they can integrate big data.
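The “information disaster” described under reason 2 can be made concrete with a minimal sketch: several providers report on the same areas, so a decision maker receives more records than distinct pieces of information. The deduplication step below is a hypothetical mitigation written for illustration; it is not a documented practice of any organization mentioned in this proposal.

```python
# Minimal illustration of data overload from multiple providers: keep only
# the most recent report per (area, topic) pair. Field names are invented
# for this example.

def consolidate_reports(reports):
    """Keep only the most recent report for each (area, topic) pair."""
    latest = {}
    for report in reports:
        key = (report["area"], report["topic"])
        if key not in latest or report["timestamp"] > latest[key]["timestamp"]:
            latest[key] = report
    return list(latest.values())

reports = [
    {"area": "Area A", "topic": "damage", "timestamp": 1, "source": "NGO 1"},
    {"area": "Area A", "topic": "damage", "timestamp": 3, "source": "NGO 2"},
    {"area": "Area B", "topic": "needs",  "timestamp": 2, "source": "UN"},
]

consolidated = consolidate_reports(reports)
print(len(reports), "incoming reports,", len(consolidated), "distinct items")
```

The point of the sketch is that even trivial consolidation requires agreement between providers on shared keys and formats, which is exactly what reasons 1 and 2 say is missing.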
Knowledge gap and research question: communication of disaster response data

The review of the available literature on this topic has made clear that the integration of big data into common practices is currently the greatest challenge for humanitarian aid providers. Four reasons have been found that address this challenge and explain why integration has so far been unsuccessful. A solution to this problem, however, has not yet been identified by any research. As Twigg (2015) describes it: “There is relatively little guidance on integration of data; much of what is available is limited to general principles, and examples of good or bad practice are rarely documented or shared.” As a result, a knowledge gap for this topic can be identified, namely, responding to each of those four reasons and finding solutions for them.
This opens up possibilities for more research. From data provider 510 Global’s point of view, the second reason, concerning data incompatibility, is the most interesting. This thesis will therefore focus on this issue. In conclusion, the following research question for the thesis proposal is formulated:

“How should the communication of disaster response data be changed so that it is compatible with the decision making structure of humanitarian organizations?”

Research approach and methods: case study & design

To answer the main research question, it is essential to obtain a detailed overview of the decision making processes inside humanitarian organizations. To create a valid and reliable model of the decision making structure, one can look into the practices of real-life cases.
Since the theoretical concept of humanitarian big data implementation does not yet work in the real world, a case study approach will be used to analyze real HOs (Yin, 2009). By studying several cases, common practices and requirements for big data output can be identified. In addition, a design-oriented approach will be adopted in a later stage of the thesis to create a design for the decision making model of HOs (Hevner et al., 2004). Combining the outcomes of these two approaches will fill the void in the functioning of the socio-technical system of humanitarian big data. By looking at the main research question at a lower level of abstraction, seven sub-research questions (SRQs) can be formulated. These questions will be answered over the course of seven phases. In this chapter, each of the phases will be discussed in order, describing the required data, research method, tools and deliverables in every phase.
The reader is recommended to follow the research flow diagram while reading, which can be found at the end of this chapter.

Phase 1: Preparation

The first phase encompasses the preparation of the thesis, in which the writer becomes familiar with the topic and its challenges. By performing a literature review, a knowledge gap is identified. The research problem is defined, as well as the main research question and its sub-research questions. The deliverable of this phase is the thesis proposal, which marks the start of the thesis project.

Phase 2: Desk research

The research starts by gathering more information on the topic, specifically in two areas. Firstly, data about historic cases of humanitarian organizations that used big data is needed to answer the first sub-research question.

SRQ 1.
In which historic cases has big data been used by humanitarian organizations?

By answering this question, historic cases are identified that can be used for the case study in the next phase. It is useful to make a distinction between cases that successfully used big data and cases that were unsuccessful. These cases serve as the deliverable and starting point for SRQ 3 and SRQ 4.
Secondly, information on the internal structure of humanitarian organizations needs to be found, so that the second sub-research question can be answered.

SRQ 2. How are humanitarian organizations internally organized?

Knowing how humanitarian organizations internally communicate and delegate work is valuable information and a requirement for designing a framework of their decision making, which will be done in phase 4. One cannot expect all HOs to be organized in the same way; it is therefore important to identify resemblances and differences.
The HOs can then be grouped based on their internal structure. The needed data can be acquired relatively easily through common desk research. By researching the literature, both sub-research questions can be answered. Literature repositories will serve as the main tool.

Phase 3: Case study

In the next phase, the first actual research approach is executed: the case study. Two types of methods will be used: real-life simulations and interviews. The simulations will take place in a semi-controlled environment, in which past cases of disaster management are re-enacted. Some kind of group decision room will be needed as a tool.
By observing the behavior of the team members of the humanitarian organization, their working practices can be identified. The practices of different case studies can be compared to find commonalities and define general practices. In unsuccessful cases of big data integration, one can observe why the big data was not used successfully. These observations will answer the third sub-research question.

SRQ 3. How did humanitarian organizations use big data in their decision making?

Interviews will give further insight into the practices of HOs, and will also provide qualitative data on the wishes and preferences of HOs concerning the provision of big data. This is needed for the next sub-research question.

SRQ 4.
What requirements do humanitarian organizations place on big data?

Interviews require a pre-defined list of questions and a place to meet the interviewee. The deliverable of this question, a set of requirements on big data, will be used in phase 5, when defining the recommendations and changes to data output.

Phase 4: Design

The fourth phase is the design phase, in which a framework of the decision making structure of humanitarian organizations is developed. Both information on the organizational structure and on decision making practices is needed. This is provided by the desk research done for SRQ 2 and the data from the case study in SRQ 3. Both deliverables function as input for the fifth sub-research question.

SRQ 5.
What does the decision making structure of humanitarian organizations look like?

Naturally, a design method is used here to design the framework. Simple modelling software can be used to create an overview of the steps and processes within the humanitarian organization that lead to its decision making. The deliverable of this phase will be a framework that shows the functions and place of disaster response data in the decision making structure. Since it is unlikely that every HO adopts the same decision making structure, designing a single framework would only address a few HOs. The outcome of this SRQ will therefore probably take the form of several frameworks that resemble each other, but differ based on the variances found in SRQ 2 and SRQ 3.

Phase 5: Data analysis

In the fifth phase, all gathered data comes together to finalize the research project.
The framework of the decision making structure and the set of requirements on big data are qualitatively analyzed and combined to answer the sixth sub-research question.

SRQ 6. How can the data provider improve its data output and communication methods?

The answer to this question will take the form of a report giving recommendations to the data provider (510 Global) on how to improve its data output and communication methods.
The report will function as a guideline the data provider can follow to decide on its behaviour and methods. Depending on which HO the data provider is dealing with, the report will suggest different approaches for data output and communication.

Phase 6: Validation

Before a final conclusion can be drawn, the designed framework will need to be validated. The framework should be applicable to a wide variety of humanitarian organizations. After completing the fifth phase, 510 Global is supposed to possess the means to work with any kind of decision making structure. But to be sure that this hypothesis holds, validation is necessary.
In other words:

SRQ 7. How can the solution be generally applied to all humanitarian organizations?

The suggested method to answer this final sub-research question is to again make use of the available case studies. This time the goal is not merely to observe the HOs, but to actively participate and take on the role of data provider. The guideline provided in the report, which was the result of SRQ 6, now needs to be followed in order to determine the communication methods and data output for each case.
If the designed framework and guideline are valid, the outcome of each case should be the successful integration of big data by the HO. This should also hold for the cases that were unsuccessful in the past. If the framework and guideline turn out to be invalid, one can define new practices and requirements that were overlooked in phase 3. As a result, this validation process functions as an iteration back to phase 3. A drawback of this method is that the scope of this thesis only includes time for one iteration. So in case the framework is still not valid after the second try, additional research is required to finalize the advice to 510 Global.
Phase 7: Finalizing

In the concluding phase, the aggregated answers to all sub-research questions will provide an answer to the main research question: “How should the communication of disaster response data be changed so that it is compatible with the decision making structure of humanitarian organizations?” Recommendations will be made to both the data provider and the HOs that will use the big data, based on the validated framework and guideline from SRQ 7. With this, the goal of the thesis has been reached: providing a tool to smoothen the communication of data and to help integrate the data into the decision making structure of HOs.

Research flow diagram

On the next page you can find the research flow diagram, which shows the seven successive phases with their sub-research questions (blue), deliverables (green) and research methods (orange).
Each answered sub-research question will provide a deliverable that will be the input for the next SRQ in a future phase. Each phase will be covered by a separate chapter in the thesis report. A global time planning of these phases and the execution of the thesis project can be found in the final chapter of this report.