You can pull this Docker image and run the best model on your own data (a 0.5 m DEM).
docker pull williamlidberg/hunting_pits:v1
Then run the image, mounting a volume where your 0.5 m DEMs are located. Replace the path between -v and the colon with your own path.
docker run -it --gpus all -v /mnt/Extension_100TB/William/Projects/Cultural_remains/data:/workspace/data williamlidberg/hunting_pits:v1 bash
Inside the container you can run the model on a test chip by using this command:
python /workspace/repo/semantic_segmentation/inference_unet_from_dem.py /workspace/repo/data/test_chip/dem/ /workspace/repo/semantic_segmentation/trained_models/UNet/05m/profile_curvature1/trained.h5 /workspace/data/
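If you need to script this call, the argument order can be read off the command above: input DEM directory, trained model, output directory. Below is a minimal argparse sketch of that assumed interface; `build_parser` is a hypothetical name, and the actual script in the repository may define its options differently.

```python
# Hypothetical sketch of the command-line interface assumed by the
# inference command above: three positional arguments in the order
# input DEM directory, trained model, output directory.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Run a trained U-Net on 0.5 m DEM chips.")
    parser.add_argument("dem_dir", help="directory containing input DEM chips")
    parser.add_argument("model_path", help="trained Keras model (.h5 file)")
    parser.add_argument("out_dir", help="directory for the predicted rasters")
    return parser


# Mirrors the example invocation shown above:
args = build_parser().parse_args([
    "/workspace/repo/data/test_chip/dem/",
    "/workspace/repo/semantic_segmentation/trained_models/UNet/05m/profile_curvature1/trained.h5",
    "/workspace/data/",
])
```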
The aim was to investigate whether hunting pits could be automatically mapped using Swedish national ALS data and deep learning. We also evaluated the performance of traditional topographical indices alongside multiple state-of-the-art indices explicitly selected to enhance pit structures in high-resolution DEM data.
Docker containers are used to manage all environments in this project. Different images were used for segmentation and object detection:
Segmentation: semantic_segmentation
Object detection: object_detection
Navigate to the respective Dockerfile in the segmentation or object detection directory and build the containers:
docker build -t segmentation .
docker build -t detection .
Run the container
Note that this container was run on a multi-instance GPU (A100). With a normal GPU, replace --gpus device=0:0 with --gpus all.
docker run -it --gpus device=0:0 -v /mnt/Extension_100TB/William/GitHub/Detection-of-hunting-pits-using-airborne-laser-scanning-and-deep-learning:/workspace/code -v /mnt/Extension_100TB/William/Projects/Cultural_remains/data:/workspace/data -v /mnt/Extension_100TB/national_datasets/laserdataskog/:/workspace/lidar segmentation:latest bash
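To make the flag substitution concrete, the sketch below parameterizes only the --gpus value; everything else (mounts, image, shell) is unchanged from the command above. The GPU_FLAG variable is an illustration, not part of the project's scripts.

```shell
# Illustration of the --gpus substitution described above.
GPU_FLAG="all"              # ordinary GPU(s)
# GPU_FLAG="device=0:0"     # A100 multi-instance GPU slice, as used here

echo "docker run will use: --gpus $GPU_FLAG"

# Launch with the same mounts as the command above; uncomment to run:
# docker run -it --gpus "$GPU_FLAG" \
#   -v /mnt/Extension_100TB/William/GitHub/Detection-of-hunting-pits-using-airborne-laser-scanning-and-deep-learning:/workspace/code \
#   -v /mnt/Extension_100TB/William/Projects/Cultural_remains/data:/workspace/data \
#   -v /mnt/Extension_100TB/national_datasets/laserdataskog/:/workspace/lidar \
#   segmentation:latest bash
```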
There is also an option to run this container as a Jupyter notebook. This was run on a server with port forwarding over VPN and SSH.
docker run -it --rm -p 8882:8882 --gpus all -v /mnt/Extension_100TB/William/GitHub/Detection-of-hunting-pits-using-airborne-laser-scanning-and-deep-learning:/workspace/code -v /mnt/Extension_100TB/William/Projects/Cultural_remains/data:/workspace/data -v /mnt/Extension_100TB/national_datasets/laserdataskog/:/workspace/lidar segmentation:latest bash
cd /workspace/code/notebooks/
jupyter lab --ip=0.0.0.0 --port=8882 --allow-root --no-browser --NotebookApp.allow_origin='*'
ssh -L 8882:localhost:8882 <IP_ADDRESS_TO_SERVER>
The training data were collected from multiple sources. Historical forest maps from local archives were digitized and georeferenced. Open data from the Swedish National Heritage Board were downloaded and digitized. All remains were cross-referenced with the LiDAR data in order to match each reported remain to the LiDAR data. In total, 2519 hunting pits were manually digitized and corrected this way (figure 1a). A separate demo area was also used for visual inspection (figure 1b).