
Multispectral Remote Sensing and Deep Learning for Wildfire Detection

Time: Mon 2021-06-14 10.00

Location: Video link https://kth-se.zoom.us/j/63625581961, Stockholm (English). If you do not have access to a computer or are unfamiliar with using one, contact Yifang Ban at yifang@kth.se for technical assistance.

Subject area: Geodesy and Geoinformatics, Geoinformatics

Doctoral student: Xikun Hu, Geoinformatics, KTH Royal Institute of Technology

Opponent: Professor Ioannis Gitas, Laboratory of Forest Management and Remote Sensing, School of Forestry and Natural Environment, Aristotle University of Thessaloniki

Supervisor: Professor Yifang Ban, Geoinformatics


Abstract

Remote sensing data has great potential for wildfire detection and monitoring, offering enhanced spatial resolution and temporal coverage. Earth Observation satellites have been employed to systematically monitor fire activity over large regions in two ways: (i) detecting the location of actively burning spots during the fire event, and (ii) mapping the spatial extent of burn scars during or after the event. Active fire detection plays an important role in wildfire early warning systems. The open access to Sentinel-2 multispectral data at 20 m resolution offers an opportunity to evaluate its complementary role to the coarse hotspot indications provided by MODIS-like polar-orbiting and GOES-like geostationary systems. In addition, accurate and timely mapping of burned areas is needed for damage assessment. Recent advances in deep learning (DL) provide researchers with automatic, accurate, and bias-free options for large-scale burned area mapping using uni-temporal multispectral imagery. The objective of this thesis is therefore to evaluate multispectral remote sensing data (in particular Sentinel-2) for wildfire detection, including active fire detection using a multi-criteria approach and burned area detection using DL models.

For active fire detection, a multi-criteria approach based on the reflectance of bands B4, B11, and B12 of Sentinel-2 MSI data is developed for several representative fire-prone biomes to extract unambiguous active fire pixels. The adaptive thresholds for each biome are statistically determined from 11 million Sentinel-2 observation samples acquired during the summer of 2019 (June to September) across 14 regions or countries. The primary criterion is derived from the 3-sigma prediction interval of an ordinary least squares (OLS) regression of the observation samples for each biome. More specific criteria based on B11 and B12 are further introduced to reduce omission errors (OE) and commission errors (CE).
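As a rough illustration of how such an adaptive primary criterion could be derived, the sketch below fits an OLS line to one biome's observation samples and flags pixels above a 3-sigma prediction band. The regressed band pair (B12 against B11) and the simplified interval are assumptions for illustration, not the thesis' exact formulation.

```python
import numpy as np

def fit_primary_criterion(b11_samples, b12_samples, k=3.0):
    """Fit an OLS line b12 ~ a + b * b11 to one biome's observation samples
    and return a test that flags pixels above a k-sigma prediction band.

    Illustrative sketch: the regressed variables and the exact prediction
    interval used in the thesis may differ.
    """
    X = np.column_stack([np.ones_like(b11_samples), b11_samples])
    coef, *_ = np.linalg.lstsq(X, b12_samples, rcond=None)  # OLS coefficients
    sigma = (b12_samples - X @ coef).std(ddof=2)            # residual standard deviation

    def primary_criterion(b11, b12):
        # Fire candidates sit well above the B12 value predicted from B11.
        predicted = coef[0] + coef[1] * b11
        return b12 > predicted + k * sigma

    return primary_criterion
```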

Using the primary criterion, the multi-criteria approach proves effective for detecting cool smoldering fires in study areas covered by tropical and subtropical grasslands, savannas, and shrublands. At the same time, additional criteria that threshold the reflectance of B11 and B12 effectively decrease the CE caused by extremely bright flames around the hot cores at testing sites with Mediterranean forests, woodlands, and scrub. A further criterion based on the reflectance ratio between B12 and B11 avoids CE caused by hot soil pixels at sites with tropical and subtropical moist broadleaf forests. Overall, the validation performance over testing patches shows that CE and OE can be kept at a low level (0.14 and 0.04, respectively) as an acceptable trade-off. This multi-criteria algorithm is suitable for rapid active fire detection based on uni-temporal imagery, without the need for multi-temporal data. Medium-resolution multispectral data can thus complement coarse-resolution imagery, as it can detect small burning areas and locate active fires more accurately.
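A hedged sketch of how the individual tests could be combined into a per-pixel decision follows. The threshold values, the direction of the inequalities, and the way the criteria are combined are placeholders, since the abstract does not specify them.

```python
import numpy as np

def classify_active_fire(b11, b12, primary_criterion,
                         t11=0.8, t12=0.8, ratio_min=1.2):
    """Combine the regression-based primary criterion with additional
    B11/B12 reflectance checks and a B12/B11 ratio check.

    t11, t12, and ratio_min are placeholder values; the thesis determines
    biome-specific thresholds statistically, and the combination logic
    here is a simplification.
    """
    candidate = primary_criterion(b11, b12)                 # 3-sigma primary test
    flame_check = (b11 < t11) & (b12 < t12)                 # reduces CE around extremely bright flames
    soil_check = b12 / np.maximum(b11, 1e-6) > ratio_min    # reduces CE from hot soil pixels
    return candidate & flame_check & soil_check
```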

For burned area mapping, this thesis aims to demonstrate the capability of DL models to automatically map burned areas from uni-temporal multispectral imagery. Various burned area detection algorithms have been developed using Sentinel-2 and/or Landsat data, but most studies require a pre-fire image, dense time-series data, or an empirical threshold. In this thesis, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, are applied to Sentinel-2 and Landsat-8 imagery over three testing sites in two local climate zones. In addition, three popular machine learning (ML) algorithms (LightGBM, KNN, and random forests) and Normalized Burn Ratio (NBR) thresholding techniques (empirical and Otsu-based) are used in the same study areas for comparison.
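For context, the NBR thresholding baseline can be sketched in a few lines. The band choice (B8A as NIR, B12 as SWIR) and the sign convention below are common practice rather than the thesis' exact setup.

```python
import numpy as np
from skimage.filters import threshold_otsu

def burned_mask_from_nbr(b8a, b12):
    """Threshold the Normalized Burn Ratio (NBR) of a single post-fire
    Sentinel-2 scene with Otsu's method to obtain a burned/unburned mask.

    Illustrative baseline only; the empirical-threshold variant would
    replace the Otsu value with a fixed cutoff.
    """
    nbr = (b8a - b12) / np.maximum(b8a + b12, 1e-6)  # NBR in [-1, 1]
    t = threshold_otsu(nbr)                          # data-driven (Otsu) threshold
    return nbr < t                                   # burned pixels have low NBR
```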

The validation results show that the DL algorithms outperform the ML methods in two of the three cases, those with compact burn scars, while the ML methods seem more suitable for mapping the dispersed burn scars of boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance, with high kappa (around 0.9), on a heterogeneous Mediterranean fire site in Greece, while Fast-SCNN performs better than the others, with kappa over 0.79, on a compact boreal forest fire with varied burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet performs best among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Because they require only a uni-temporal image, DL-based methods have the potential to be deployed on next-generation Earth observation satellites with onboard data processing and limited storage for previous scenes.
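The kappa values quoted above are per-pixel agreement scores between predicted and reference masks; a minimal evaluation sketch (the function name and inputs are illustrative) is shown below.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_of_masks(pred_mask, ref_mask):
    """Cohen's kappa between a predicted burned-area mask and a reference
    mask, both 0/1 arrays of the same shape; flattening reduces the
    comparison to a per-pixel classification assessment."""
    return cohen_kappa_score(np.asarray(ref_mask).ravel(),
                             np.asarray(pred_mask).ravel())
```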

In future work, DL models will be explored for active fire detection from multi-resolution remote sensing data. The problem of imbalanced labeled data can be addressed through advanced DL architectures, suitable training dataset configurations, and improved loss functions. To further characterize the damage caused by wildfire, future work will focus on burn severity assessment with DL models through multi-class semantic segmentation. In addition, translation between optical and SAR imagery based on Generative Adversarial Network (GAN) models could be explored to improve burned area mapping under different weather conditions.
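As one concrete example of an improved loss function for imbalanced fire/background labels (an illustration, not the loss used in the thesis), a soft Dice loss in PyTorch could look as follows.

```python
import torch

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for binary segmentation, a common way to counter the
    imbalance between rare fire/burned pixels and background.

    logits: raw network output of shape (N, 1, H, W); target: 0/1 mask of
    the same shape. Illustrative sketch only.
    """
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
```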

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295655