When disaster strikes, speed is critical. The time it takes to properly assess damage in the wake of a major event can be the difference between life and death.
However, emergency responders must often navigate disruptions to local communication and transportation infrastructure, making accurate assessments dangerous, difficult and slow. And while satellite and aerial imagery offer less risky alternatives that cover more ground, analysts must still conduct manual, time-intensive image assessments.
The Defense Innovation Unit's xView2 Challenge seeks to automate post-disaster damage assessment. DIU is challenging machine learning experts to develop computer vision algorithms that will speed up analysis of satellite and aerial imagery by localizing and categorizing various types of building damage caused by natural disasters.
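To make the task concrete, below is a minimal sketch of the two-stage formulation the challenge implies: first localize building footprints in a pre-disaster image, then categorize per-building damage from the co-registered post-disaster view. The toy models, four damage levels and threshold are illustrative assumptions, not DIU's baseline.

```python
# A minimal two-stage sketch (illustrative, not the challenge baseline):
# stage 1 localizes building footprints; stage 2 grades damage per building.
import torch
import torch.nn as nn

class TinyLocalizer(nn.Module):
    """Toy fully convolutional net producing a per-pixel building logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.net(x)

class TinyDamageClassifier(nn.Module):
    """Toy CNN scoring a cropped building chip into damage levels."""
    def __init__(self, n_classes=4):  # assumed: no/minor/major/destroyed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

pre = torch.rand(1, 3, 256, 256)   # pre-disaster tile (random stand-in)
post = torch.rand(1, 3, 256, 256)  # co-registered post-disaster tile

mask = torch.sigmoid(TinyLocalizer()(pre)) > 0.5  # stage 1: footprint mask
chip = post[..., 64:128, 64:128]                  # crop one detected building
damage = TinyDamageClassifier()(chip).argmax(1)   # stage 2: damage level
print(mask.shape, damage)
```

Splitting localization from damage grading lets the pre-disaster image, where buildings are intact and easiest to delineate, drive the footprints, while the post-disaster image drives the damage call.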
The xView2 Challenge is DIU's second prize competition focused on furthering innovation in computer vision for humanitarian assistance and disaster relief efforts. This year's competition builds on the xView1 Challenge, which sought computer vision algorithms to locate and identify objects on the ground that are useful to first responders.
"DIU's goal in hosting this challenge is to enlist the global community of machine learning experts to tackle a critically hard problem: detecting key objects in overhead imagery in context and assessing damage in a disaster situation," said Mike Kaul, DIU AI portfolio director.
"We are always looking for ways to improve rapid damage assessment to ensure we and our partners deliver the right resources to the right places at the right time, and we are confident the DIU Challenge can contribute to that goal," said FEMA Regional Administrator Robert Fenton, a partner in the challenge.
DIU led a team of experts from academia and industry to create a new dataset, xBD, to enable building localization and damage assessment from pre- and post-disaster imagery. The dataset will provide the foundation for the challenge. While several open datasets for object detection from satellite imagery, such as SpaceNet and xView, already exist, each represents only a single snapshot in time and lacks information about the type and severity of damage following a disaster.
The largest and most diverse annotated building damage dataset to date, xBD allows ML/AI practitioners to build and test models that help automate building damage assessment. The open-source dataset, drawn from electro-optical imagery at 0.3 m resolution, will encompass 700,000 building annotations across 5,000 square kilometers of freely available imagery from 15 countries. Seven disaster types are included: wildfire, landslide, dam collapse, volcanic eruption, earthquake/tsunami, wind and flooding.
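As a rough illustration of how practitioners might work with the annotations, the sketch below tallies buildings per damage level from one label file. The schema assumed here (GeoJSON-like JSON with pixel-space polygons and a damage "subtype" property, keyed under "features"/"xy") is based on the public xBD release and the file name is hypothetical; adjust to match the data as downloaded.

```python
# A minimal sketch of counting per-building damage labels in one
# xBD-style annotation file. Field names are assumptions; verify
# against the actual release before relying on them.
import json
from collections import Counter

def damage_counts(label_path):
    """Count buildings per damage level in one post-disaster label file."""
    with open(label_path) as f:
        label = json.load(f)
    counts = Counter()
    for feature in label["features"]["xy"]:  # pixel-space building polygons
        counts[feature["properties"].get("subtype", "un-classified")] += 1
    return counts

# Hypothetical path; xBD pairs pre/post image tiles with JSON labels.
print(damage_counts("labels/hurricane-harvey_00000000_post_disaster.json"))
```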
There are three competition prize tracks for the xView2 Challenge:
1. Open source: Teams compete for leaderboard positions and awards for top scores. By releasing their models publicly under a permissive open-source license, teams also become eligible for an additional open-source award.
2. Nonexclusive government purpose rights: Teams grant government purpose rights to become eligible for awards for top scores on the leaderboard. Solutions can be used to help future disaster recovery efforts.
3. Evaluation only: Teams retain their intellectual property, granting DIU only the right to benchmark their solutions while competing for leaderboard position. Top teams in this category are still eligible for a special monetary prize pool for their submissions.
The best solutions in all three categories will be eligible for a share of a $150,000 prize purse. Top solvers will also be invited to present their work at the NeurIPS 2019 Workshop on AI for Humanitarian Assistance and Disaster Relief in December. Winners of any cash prize will also be eligible for follow-on work with the Defense Department. The competition starts this month and runs through November.
Findings will be applied in operational and academic use cases that include, but are not limited to, identifying obstructed roads and rerouting around them, force-of-nature identification, resource allocation decision-making, and object recognition and identification. Baseline models, developed jointly by DIU and Carnegie Mellon's Software Engineering Institute, will be publicly available as a starting point for the challenge. In addition to advancing the state of the art in damage assessment, the xBD dataset is envisioned to give researchers, companies and other groups the means and motive to develop algorithms that bring humanitarian assistance and disaster response into the age of AI.
The challenge's partners represent a first-of-its-kind coalition between the artificial intelligence and disaster response communities, including the NASA Earth Science Disasters Program, the Federal Emergency Management Agency's Region 9, the California Governor's Office of Emergency Services, Cal Fire, the California National Guard, DOD's Joint Artificial Intelligence Center, Carnegie Mellon's Software Engineering Institute, the United States Geological Survey, the National Geospatial-Intelligence Agency and the National Security Innovation Network.