A project to build a responsive web platform that helps disaster responders work together more effectively and donors allocate funding more strategically.
I worked closely with stakeholders from the beginning, mapping critical tasks, sketching, wireframing and designing the platform, and developing the front-end. I also acted as project manager.
A design document for the developers, high-resolution mockups of the platform, and ready-to-deploy front-end code.
An interactive, responsive searchable map and database of disaster management events in Asia Pacific.
A large amount of the work that goes into preparing for natural disasters involves training and team building. The most effective response teams are those whose members already know each other before a disaster strikes. Coordination across teams, sectors and countries is critical to responding to large-scale emergencies as effectively as possible.
The reality, however, is that the many different organisations involved in these activities - whether NGOs, militaries, or private companies - don't work well together. Furthermore, donor funding for training activities is often distributed emotionally rather than strategically: money is wasted on needless events while important training activities remain underfunded.
Could building a community of responders and donors through an online collaborative platform help disaster responders work more efficiently?
The Center for Excellence in Disaster Management and Humanitarian Assistance (CFE-DMHA - a unit of the U.S. Pacific Command) thought so. They hired mapping software company Ushahidi to create such a platform.
As a designer and project manager for Ushahidi, I led the project and designed the entire platform.
The goal was to create a crowdsourced central repository for all disaster management and first response training events in the world. The platform needed to collect data about disaster preparedness activities, visualise these events on a map and timeline, and allow for easy filtering and analysis.
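To make the data requirements concrete, here is a minimal sketch of what an event record and a date-range filter for the map and timeline might look like. The field names and the `filterByDateRange` helper are illustrative assumptions, not the actual Crowdmap data model:

```typescript
// Hypothetical event record for the repository; fields are assumptions
// based on the requirements (map position, timeline date, filterable metadata).
interface TrainingEvent {
  id: number;
  title: string;
  organisation: string;
  topic: string;          // e.g. "flood response"
  trainingType: string;   // e.g. "tabletop exercise"
  startDate: Date;
  lat: number;
  lon: number;
}

// Return events whose start date falls inside the selected range,
// ready to be plotted on the map and timeline.
function filterByDateRange(
  events: TrainingEvent[],
  from: Date,
  to: Date
): TrainingEvent[] {
  return events.filter(e => e.startDate >= from && e.startDate <= to);
}
```

The same shape extends naturally to the later filters (training type, topic) by adding further predicates over the metadata fields.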
I chose to build this project using an existing API as a base (Ushahidi’s Crowdmap API), and to divide it into two phases: a quick MVP proof-of-concept, and a second version that met the full proposal requirements. Doing so would rapidly get us to the point where we could show progress to stakeholders, and have them start populating the system with training events before we rolled out more filtering and analysis features.
I interviewed stakeholders to determine the minimum that we could build for phase 1 of the project:
I decided to forgo further user research at the start of the project in order to gain more development time, for a few reasons:
I chose to use an agile process of continually building and validating, adding more features and designing as we went along.
After going through several rounds of wireframes with the client, we had a solid enough basis to move forward.
The client was a research centre in the US military - the look and feel needed to be utilitarian, professional, and minimal. I had a good idea of what I wanted it to look like, and I created a quick moodboard to help consolidate the direction.
The MVP version we built was stripped down and only allowed users to filter events by date. The primary task for most users was to add events to the platform, so I designed the interface with that in mind, keeping the "Add an event" button the most prominent element at all times:
I purposefully designed the first version to be flexible - we knew we wanted to add many more components later (tags, a feed of events, more metadata) but weren't sure where. I kept things very simple so that my team would have as many options as possible later in the development and design process.
The final version of the platform included a more complete set of filters (allowing users to sort events by training type and topic), individual pages for organisations to help them feel ownership of the platform, and the ability for administrators to set organisations to have their events approved automatically (reducing the administrative overhead involved in manually checking and approving submissions):
After the MVP was built, I decided not to do usability testing for the app, since I thought it was simple enough to be self-explanatory. In hindsight this was a mistake: we were never quite certain whether the system was intuitive or whether users were able to complete their goals.
Part of the reason was the limited budget: the initial contract (which I didn't participate in creating) accounted for only very minimal UX activities. Still, I could have found the time to do this.
Next time, I would spend a few days at the start interviewing more users to gain a deeper understanding of the problem and of user goals.