Wildfires are an increasingly urgent problem. Every other month we hear about devastating fires, such as the recent ones in Australia and the Amazon, and the problem exists all over the world, including Ukraine, our home country. Hundreds of thousands of hectares of natural areas are destroyed by wildfires in Ukraine every year. This disaster causes enormous, sometimes irreversible, damage to nature: billions of insects, animals, and birds, as well as the seeds and roots of plants in the upper layers of the soil, die in the fire. Natural ecosystems take a long time to recover, and some of their components cannot recover without human assistance. The state does not keep a record of these fires or assess the damage they cause, which creates the illusion that the problem does not exist, so no one takes serious steps to resolve it. We decided to tackle this problem with a technological approach: many satellites orbit the Earth taking images of patches of ground, so we are building an AI application that can detect burned areas in such images in the blink of an eye.
It takes hours for an expert to analyze a single satellite image, and using people to monitor vast areas is extremely costly, yet a fire can start anywhere. AI, on the other hand, can detect burned areas in less than a second. The dataset we use was collected and labeled by scientists who study the impact of wildfires on the ecosystems of natural parks. It consists of 4-channel Planet and 13-channel Sentinel-2 images of the eastern part of Ukraine together with corresponding GeoJSON files. Each GeoJSON file contains polygon coordinates masking the burned areas, the unique id of the image on which the burned areas were first noticed, and the date. Each polygon is drawn manually, and FIRMS fire data is used to verify the polygons and make the labeling more accurate. Labeling is a painstaking process because burned areas are only slightly darker than the surrounding land: experts spend about 20 hours to label one 8000 by 8000 pixel image covering 25 square kilometers. Our application can reduce the experts' work to simply checking the network's results. Our pipeline consists of four parts:

  1. Preprocessing: generate masks from the GeoJSON files and crop the satellite images and generated masks into smaller tiles (256x256); a short sketch of this step follows the list.
  2. Training: model training, model selection, and prediction.
  3. Postprocessing: merge the predicted masks back into full-scene masks.
  4. Analysis: compare metrics and results, draw conclusions.
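
A minimal sketch of the preprocessing step, assuming the GeoTIFF/GeoJSON layout described above; the rasterio/geopandas tooling, file paths, and non-overlapping tiling are illustrative assumptions rather than the exact implementation:

```python
import geopandas as gpd
import numpy as np
import rasterio
from rasterio.features import rasterize

TILE = 256  # crop size used for training tiles


def make_mask(geojson_path, image_path):
    """Rasterize burned-area polygons into a binary mask aligned with the image grid."""
    polygons = gpd.read_file(geojson_path)
    with rasterio.open(image_path) as src:
        # Assumes the GeoJSON is in the image CRS; otherwise use polygons.to_crs(src.crs).
        mask = rasterize(
            [(geom, 1) for geom in polygons.geometry],
            out_shape=(src.height, src.width),
            transform=src.transform,
            fill=0,
            dtype="uint8",
        )
    return mask


def crop_tiles(image, mask, tile=TILE):
    """Split a (channels, H, W) image (e.g. from rasterio's src.read()) and its (H, W) mask
    into non-overlapping 256x256 tiles."""
    _, h, w = image.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield image[:, y:y + tile, x:x + tile], mask[y:y + tile, x:x + tile]
```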

We used U-Net (ResNet-34 backbone) as our benchmark model and obtained good results. We also experimented with GAN architectures (SeGAN).
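
As a rough illustration of this benchmark setup, here is a sketch using the segmentation_models_pytorch package; the loss, optimizer, learning rate, and pretrained encoder weights are assumptions for illustration, and in_channels=4 corresponds to the Planet imagery (13 for Sentinel-2):

```python
import segmentation_models_pytorch as smp
import torch

# U-Net with a ResNet-34 encoder producing a single-channel "burned area" mask.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",  # assumption: pretrained encoder, adapted to 4 input channels
    in_channels=4,               # 4 for Planet, 13 for Sentinel-2
    classes=1,
    activation=None,             # raw logits; the loss applies the sigmoid
)

loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)


def train_step(images, masks):
    """One optimization step on a batch of (B, 4, 256, 256) image tiles
    and their (B, 1, 256, 256) binary masks."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```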

We train models that take multichannel satellite images as input and return a mask representing the burned area in that image. The uniqueness of our approach is its ability to work with images taken by different satellites carrying different sensors, and therefore with a variable number of channels.
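
At inference time the full scene is processed tile by tile and the predicted 256x256 masks are merged back into one full-size mask (step 3 of the pipeline). A hedged sketch, where the non-overlapping tiling and the 0.5 threshold are illustrative assumptions:

```python
import numpy as np
import torch


def predict_scene(model, image, tile=256, threshold=0.5, device="cpu"):
    """Predict tile by tile and stitch the binary masks back into a full-size (H, W) mask.

    `image` is a (channels, H, W) numpy array, e.g. returned by rasterio's src.read().
    """
    model.eval()
    _, h, w = image.shape
    full_mask = np.zeros((h, w), dtype=np.uint8)
    with torch.no_grad():
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                patch = image[:, y:y + tile, x:x + tile]
                batch = torch.from_numpy(patch).unsqueeze(0).float().to(device)
                prob = torch.sigmoid(model(batch))[0, 0].cpu().numpy()
                full_mask[y:y + tile, x:x + tile] = (prob > threshold).astype(np.uint8)
    return full_mask
```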

Challenges. Preprocessing satellite images is challenging because of their specialized formats and high resolution. Another challenge we ran into is that satellite images taken in different seasons look different, so the network has to be robust to seasonal variation.

What's next

We are applying the same pipeline to other problems, such as segmenting oil spills on the sea surface. There is huge potential in analyzing satellite images for use cases across many industries. We used the same pipeline to segment house rooftops for the Disaster Risk Management Open Cities Challenge (we are currently in the top 50 of more than 1000 contestants); this can be used to check the state of a house or estimate damage after an earthquake or another natural disaster.
