Overview of state-of-the-art land-use classification from satellite data with CNNs, based on an open dataset
- Scripts you will find here
- Requirements (what we used):
- Setup Environment
- Our talks about this topic
- Resources
- How to get Sentinel-2 data
- Citation
- `01_split_data_to_train_and_validation.py`: split the complete dataset into a training and a validation set
- `02_train_rgb_finetuning.py`: train VGG16 or DenseNet201 on RGB data with weights pre-trained on ImageNet
- `03_train_rgb_from_scratch.py`: train VGG16 or DenseNet201 from scratch on RGB data
- `04_train_ms_finetuning.py`: train VGG16 or DenseNet201 on multispectral data with weights pre-trained on ImageNet
- `04_train_ms_finetuning_alternative.py`: an alternative way to train VGG16 or DenseNet201 on multispectral data with weights pre-trained on ImageNet
- `05_train_ms_from_scratch.py`: train VGG16 or DenseNet201 from scratch on multispectral data
- `06_classify_image.py`: a simple implementation to classify images with trained models
- `image_functions.py`: functions for image normalization and a simple generator for training data augmentation
- `statistics.py`: a simple implementation to calculate normalization parameters (i.e. mean and std of the training data)
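The normalization idea behind these scripts can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's actual implementation, and the function names are hypothetical:

```python
import numpy as np

def normalization_parameters(images):
    """Per-channel mean and std over a stack of training images
    of shape (n_images, height, width, channels)."""
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std

def normalize(images, mean, std):
    """Standardize images channel-wise with the training statistics."""
    return (images - mean) / std
```

Note that the statistics should be computed on the training split only and then reused for the validation data, so no information leaks from the validation set into training.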
Additionally you will find the following notebooks:
- `Image_functions.ipynb`: notebook version of `image_functions.py`
- `Train_from_Scratch.ipynb`: notebook version of `05_train_ms_from_scratch.py`
- `Transfer_learning.ipynb`: notebook version of `02_train_rgb_finetuning.py`
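The core trick behind the multispectral scripts can be sketched in a few lines of Keras: a stock architecture is rebuilt without pre-trained weights so that its input layer accepts all 13 Sentinel-2 bands instead of three RGB channels. This is a minimal sketch assuming 64 x 64 pixel patches and 10 land-use classes (as in EuroSAT), not the repository's exact code:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Build VGG16 from scratch (weights=None) so the first convolution
# can accept 13 input channels instead of the usual 3 (RGB).
base = VGG16(weights=None, include_top=False, input_shape=(64, 64, 13))

# Stack a small classification head on top of the convolutional base.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 land-use classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

For the RGB fine-tuning scripts, the same head can instead be stacked on a base created with `weights="imagenet"` and `input_shape=(64, 64, 3)`, with the base frozen for the first training phase.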
We have defined the requirements in requirements.txt. We used:
- python 3.6.x
- tensorflow 2.2
- scikit-image (0.14.1)
- gdal (2.2.4) for `06_classify_image.py`
- How can I interpret the classification results? - Please have a look at our answers in issues #3, #4, and #6.
- Is there a paper I can cite for this repository? - Please have a look at Citation.
Append conda-forge to your Anaconda channels:
conda config --append channels conda-forge
Create new environment:
conda create -n pycon scikit-image gdal tqdm
conda activate pycon
pip install tensorflow-gpu
pip install keras
(or use the Keras bundled with TensorFlow, i.e. `from tensorflow import keras`)
See also:
- Title: "Fernerkundung mit multispektralen Satellitenbildern"
- Episode: 18
- Podcast: TechTiefen by Nico Kreiling
- Language: German (Deutsch)
- Date: July 2019
Abstract
In episode 18, Jens Leitloff and Felix Riese report on their research at the "Institute of Photogrammetry and Remote Sensing" of the Karlsruhe Institute of Technology. Driven by the goal of strengthening sustainability, they investigate, for example, methods to assess water quality from satellite imagery or to map agricultural land use. A wide variety of techniques comes into play, such as radar acquisitions or multispectral image data that capture more than the three color channels humans can perceive. The episode also covers drones, satellites, and numerous machine-learning methods such as transfer learning and active learning. Jens' and Felix's personal experiences with very different amounts of data round off a thematically broad and illustrative episode.
- Title: "Satellite Computer Vision mit Keras und Tensorflow - Best practices und beispiele aus der Forschung"
- Slides: Slides
- Language: German (Deutsch)
- Date: 15 - 16 May 2019
- DOI:
- URL: m3-konferenz.de
Abstract
> In machine learning research, easily accessible frameworks such as Keras, TensorFlow, or PyTorch are used more and more. This makes it possible to exchange and reuse existing (trained) neural networks.
>
> At the Institute of Photogrammetry and Remote Sensing (IPF) of the Karlsruhe Institute of Technology (KIT), we work among other things on the analysis of optical satellite data. Satellite programmes such as Copernicus' Sentinel-2 deliver weekly, worldwide, and freely available multispectral images that enable a variety of novel applications. We take this as an opportunity to give an interactive introduction to the analysis of this satellite data, with learnings from our daily research. Among other things, we talk about the following topics:
>
> * Easy handling of georeferenced image data
> * Introduction to learning from scratch and transfer learning with Keras
> * Adapting ready-made networks to new input data (RGB → multispectral)
> * Intuitive interpretation of classification results
> * Best practices from our research that considerably simplify and speed up working with neural networks
> * Code and data for first steps with CNNs in Keras and Python, made available in a GitHub repository
- Title: "Satellite data is for everyone: insights into modern remote sensing research with open data and Python"
- Slides: Slides
- Video: youtube.com/watch?v=tKRoMcBeWjQ
- Language: English
- Date: 24 - 28 October 2018
- DOI:
- URL: de.pycon.org
Abstract
> The largest earth observation programme Copernicus (http://copernicus.eu) makes it possible to perform terrestrial observations, providing data for all kinds of purposes. One important objective is to monitor land-use and land-cover changes with the Sentinel-2 satellite mission. These satellites measure the sun's reflectance on the earth's surface with multispectral cameras (13 channels between 440 nm and 2190 nm). Machine learning techniques like convolutional neural networks (CNNs) are able to learn the link between the satellite image (spectrum) and the ground truth (land-use class). In this talk, we give an overview of state-of-the-art land-use classification with CNNs based on an open dataset.
>
> We use different out-of-the-box CNNs from the Keras deep learning library (https://keras.io/). All networks are either included in Keras itself or available from GitHub repositories. We show the process of transfer learning for the RGB datasets. Furthermore, we demonstrate the minimal changes required to apply commonly used CNNs to multispectral data. Thus, the interested audience will be able to perform their own classification of remote sensing data within a very short time. Results of different network structures are compared visually. In particular, the differences between transfer learning and learning from scratch are demonstrated, including the number of necessary training epochs, the progress of training and validation error, and a visual comparison of the results of the trained networks. Finally, we give a quick overview of current research topics, including recurrent neural networks for spatio-temporal land-use classification and further applications of multi- and hyperspectral data, e.g. for the estimation of water parameters and soil characteristics.

This talk:
- EuroSAT Data (Sentinel-2, Link)
Platforms for datasets:
- HyperLabelMe: a Web Platform for Benchmarking Remote Sensing Image Classifiers (Link)
- GRSS Data and Algorithm Standard Evaluation (DASE) website (Link)
Datasets:
- ISPRS 2D labeling challenge (Link)
- UC Merced Land Use Dataset (Link)
- AID: A Benchmark Dataset for Performance Evaluation of Aerial Scene Classification (Link)
- NWPU-RESISC45 (RGB, Link)
- Zurich Summer Dataset (RGB, Link)
- Note: Many German state authorities offer free geodata (high resolution images, land use/cover vector data, ...) over their geoportals. You can find an overview of all geoportals here (geoportals)
Image Segmentation Resources:
- More than 100 combinations of image segmentation routines with Keras and pre-trained weights for the encoding phase (Segmentation Models)
- Another source for image segmentation with Keras including pretrained weights (Keras-FCN)
- Great link collection of image segmentation networks and datasets (Link)
- Free land use vector data of NRW (BasisDLM or openNRW)
Other:
- DeepHyperX - Deep learning for Hyperspectral imagery: gitlab.inria.fr/naudeber/DeepHyperX/
- Register at Copernicus Open Access Hub or EarthExplorer
- Find your region
- Choose tile(s) (→ area) and date
- Fewer tiles make things easier
- Less cloud cover in the image is better
- Consider multiple dates for classes like “annual crop”
- Download L1C data
- Decide if you want to apply L2A atmospheric corrections
- Your CNN might be able to do this by itself
- If you want to correct, use Sen2Cor
- Have fun with the data
Jens Leitloff and Felix M. Riese, "Examples for CNN training and classification on Sentinel-2 data", Zenodo, 10.5281/zenodo.3268451, 2018.
@misc{leitloff2018examples,
author = {Leitloff, Jens and Riese, Felix~M.},
title = {{Examples for CNN training and classification on Sentinel-2 data}},
year = {2018},
DOI = {10.5281/zenodo.3268451},
publisher = {Zenodo},
howpublished = {\href{http://doi.org/10.5281/zenodo.3268451}{http://doi.org/10.5281/zenodo.3268451}}
}