These scripts are for use in training and testing the SegNet neural network, particularly with OpenStreetMap + Satellite Imagery training data generated by skynet-data.
Contributions are very welcome!
The quickest and easiest way to use these scripts is via the
`developmentseed/skynet-train` docker image, but note that to make this work
with a GPU (necessary for reasonable training times), you will need a machine
set up to use nvidia-docker. (The `start_instance` script uses docker-machine
to spin up an AWS EC2 g2 instance and set it up with nvidia-docker. The
`start_spot_instance` script does the same thing but creates a spot instance
instead of an on-demand one.)
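For reference, a rough sketch of what those scripts automate, using docker-machine's amazonec2 driver (the instance type, region, and machine name here are illustrative assumptions; the actual scripts also handle installing the NVIDIA driver and nvidia-docker):

```sh
# Provision an EC2 GPU instance with docker-machine (illustrative flags).
docker-machine create --driver amazonec2 \
  --amazonec2-instance-type g2.2xlarge \
  --amazonec2-region us-east-1 \
  skynet-gpu

# Point the local docker client at the new machine.
eval "$(docker-machine env skynet-gpu)"
```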
- Create a training dataset with skynet-data.
- Run:
```sh
nvidia-docker run \
  -v /path/to/training/dataset:/data \
  -v /path/to/training/output:/output \
  -e AWS_ACCESS_KEY_ID=... \
  -e AWS_SECRET_ACCESS_KEY=... \
  developmentseed/skynet-train:gpu \
  --sync s3://your-bucket/training/blahbla
```
This will kick off a training run with the given data. Every 10000 iterations,
the model will be snapshotted and run on the test data, the training "loss"
will be plotted, and all of this uploaded to S3. (Omit the `--sync` argument
and AWS creds to skip the upload.)
Each batch of test results includes a `view.html` file that shows a bare-bones
viewer allowing you to browse the results on a map and compare model outputs to
the ground truth data. Use it like:

- `http://your-bucket-url/...test-dir.../view.html?imagery_source=MAPID&access_token=MAPBOX_ACCESS_TOKEN`, where `MAPID` points to Mapbox-hosted raster tiles used for training (defaults to `mapbox.satellite`)
- `http://your-bucket-url/...test-dir.../view.html?imagery_source=http://yourtiles.com/{z}/{x}/{y}` for non-Mapbox imagery tiles
Customize the training run with these params:
```
--model MODEL               # segnet or segnet_basic; defaults to segnet
--output OUTPUT             # directory in which to output training assets
--data DATA                 # training dataset
[--fetch-data FETCH_DATA]   # s3 uri from which to download training data into DATA
[--snapshot SNAPSHOT]       # snapshot frequency
[--cpu]                     # sets cpu mode
[--gpu [GPU [GPU ...]]]     # set gpu devices to use
[--display-frequency DISPLAY_FREQUENCY]  # frequency of logging output (affects granularity of plots)
[--iterations ITERATIONS]   # total number of iterations to run
[--crop CROP]               # crop training images to CROPxCROP pixels
[--batch-size BATCH_SIZE]   # batch size; adjust up or down based on GPU memory (defaults to 6 for segnet, 16 for segnet_basic)
[--sync SYNC]               # s3 uri to which training assets are periodically synced
```
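For example, a run that trains `segnet_basic` on two GPUs with a higher snapshot frequency might look like this (the parameter values here are illustrative, not recommendations):

```sh
nvidia-docker run \
  -v /path/to/training/dataset:/data \
  -v /path/to/training/output:/output \
  developmentseed/skynet-train:gpu \
  --model segnet_basic \
  --gpu 0 1 \
  --batch-size 16 \
  --iterations 100000 \
  --snapshot 10000
```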
On an instance where training is happening, expose a simple monitoring page with:
```sh
docker run --rm -it -v /mnt/training:/output -p 80:8080 developmentseed/skynet-monitor
```
Prerequisites / Dependencies:
- Node and Python
- As of now, training SegNet requires building the caffe-segnet fork of Caffe.
- Install node dependencies by running `npm install` in the root directory of this repo.
After creating a dataset with the skynet-data scripts, set up the model
`prototxt` definition files by running:

```sh
segnet/setup-model --data /path/to/dataset/ --output /path/to/training/workdir
```
Also copy `segnet/templates/solver.prototxt` to the training work directory, and
edit it to (a) point to the right paths, and (b) set up the learning
"hyperparameters". (NOTE: this is hard to get right at first; when we post links
to a couple of pre-trained models, we'll also include a copy of the
solver.prototxt we used as a reference / starting point.)
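Until those are posted, a hypothetical starting point might look like the following; every value shown is an assumption to tune, not the configuration we used:

```
# solver.prototxt (illustrative values only)
net: "train.prototxt"            # path to the generated model definition
base_lr: 0.001                   # initial learning rate
lr_policy: "step"
gamma: 1.0
stepsize: 10000000               # effectively a constant learning rate
momentum: 0.9
weight_decay: 0.0005
max_iter: 100000
snapshot: 10000                  # snapshot every N iterations
snapshot_prefix: "snapshots/segnet"
solver_mode: GPU
```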
Download the pre-trained VGG weights `VGG_ILSVRC_16_layers.caffemodel` from
http://www.robots.ox.ac.uk/~vgg/research/very_deep/
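For example (the direct file URL below is an assumption based on how that page has historically linked the weights; check the page for the current link):

```sh
wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
```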
From your training work directory, run:

```sh
$CAFFE_ROOT/build/tools/caffe train -gpu 0 -solver solver.prototxt \
    -weights VGG_ILSVRC_16_layers.caffemodel \
    2>&1 | tee train.log
```
You can monitor the training with:
```sh
segnet/util/plot_training_log.py train.log --watch
```

This will generate and continually update a plot of the "loss" (i.e., training error), which should gradually decrease as training progresses.
To test a trained model against your test data, run:

```sh
segnet/run_test --output /path/for/test/results/ \
    --train /path/to/segnet_train.prototxt \
    --weights /path/to/snapshots/segnet_blahblah_iter_XXXXX.caffemodel \
    --classes /path/to/dataset/classes.json
```
This script essentially carries out the instructions outlined here: http://mi.eng.cam.ac.uk/projects/segnet/tutorial.html
After you have a trained and tested network, you'll often want to use it to predict over a larger area. We've included scripts for running this process locally or on AWS.
To run predictions locally you'll need:
- Raster imagery (as either a GeoTIFF or a VRT)
- A line-delimited list of XYZ tile indices to predict on (e.g. `49757-74085-17`). These can be made with geodex, or see the sketch after this list.
- A skynet model, trained weights, and class definitions (`.prototxt`, `.caffemodel`, `.json`)
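If you don't want to use geodex, here is a minimal sketch of generating an equivalent tile list with the `mercantile` Python library (the bounding box and zoom are arbitrary assumptions):

```python
import mercantile

# Bounding box of the area of interest (illustrative values).
west, south, east, north = -122.52, 37.70, -122.35, 37.83

# Write one x-y-z tile index per line, matching the 49757-74085-17 format.
with open("tiles.txt", "w") as f:
    for tile in mercantile.tiles(west, south, east, north, zooms=[17]):
        f.write("{}-{}-{}\n".format(tile.x, tile.y, tile.z))
```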
To run:
```sh
docker run -v /path/to/inputs:/inputs -v /path/to/model:/model -v /path/to/output/:/inference \
    developmentseed/skynet-run:local-gpu /inputs/raster.tif /inputs/tiles.txt \
    --model /model/segnet_deploy.prototxt \
    --weights /model/weights.caffemodel \
    --classes /model/classes.json \
    --output /inference
```
If you are running on a CPU, use the `:local-cpu` docker image and add
`--cpu-only` as a final flag to the above command.
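That is, the CPU invocation would look something like:

```sh
docker run -v /path/to/inputs:/inputs -v /path/to/model:/model -v /path/to/output/:/inference \
    developmentseed/skynet-run:local-cpu /inputs/raster.tif /inputs/tiles.txt \
    --model /model/segnet_deploy.prototxt \
    --weights /model/weights.caffemodel \
    --classes /model/classes.json \
    --output /inference \
    --cpu-only
```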
The predicted rasters and vectorized GeoJSON outputs will be located in
`/inference` (and the corresponding mounted volume).
TODO: for now, see the command line instructions in `segnet/queue.py` and
`segnet/batch_inference.py`.
These scripts were originally developed for use on an AWS g2.2xlarge instance.
For support on newer GPUs, it may be required to:

- use a newer NVIDIA driver
- use a newer version of CUDA. To support CUDA 8+, you can use the docker images tagged with `:cuda8`; they are built off an updated caffe-segnet fork with support for cuDNN 5.
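For example, to train on a newer GPU you might swap the image tag in the earlier training command, leaving everything else unchanged (check the repo for the exact tag variants available):

```sh
nvidia-docker run \
  -v /path/to/training/dataset:/data \
  -v /path/to/training/output:/output \
  developmentseed/skynet-train:cuda8
```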