
Image Object Detection ‐ Detecto

Jeffrey K Gillan edited this page Sep 19, 2024 · 39 revisions

Workshop sessions are recorded and posted to the UA Datalab Youtube Channel about 1 week after the live session.


Detecto Summary

Detecto is a lightweight Python library built on top of the deep learning framework PyTorch.

By default, Detecto uses the convolutional neural network architecture Faster R-CNN ResNet-50 FPN. The architecture was pre-trained on the COCO (Common Objects in Context) dataset, which contains over 330,000 images annotated with objects from 80 classes (e.g., person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, kite, knife, spoon, tv, book). The images and labels are generally not from aerial viewpoints, so Detecto is not ready to identify objects in aerial images out of the box. It has to be trained to do so.



The Power of CNNs

A convolutional neural network can be trained to identify objects in an image analogously to how humans can identify objects. For example, look at the aerial images of tree species above. The human brain can easily tell that these are probably 3 distinct tree species (from left to right: ponderosa pine, mesquite, willow). We can look at features such as crown shape and leaf architecture to help us identify a tree. For a computer, however, this is very challenging.

Traditional image analysis approaches, such as examining the color values of individual pixels or grouping adjacent pixels with similar values (i.e., objects), will both likely fail at the task of identifying trees in high-resolution images.

Please watch these VERY useful videos to understand how Convolutional Neural Networks operate:



CNN models can take a few different approaches to image analysis (e.g., whole-image classification, object detection, segmentation). Detecto specifically carries out object detection.

Geospatial Training Dataset

The NWPU VHR-10 dataset (Cheng et al., 2014a) is a very high resolution (VHR) aerial imagery dataset that consists of 800 total images. The dataset has ten classes of labeled objects: 1. airplane (757), 2. ship (302), 3. storage tank (655), 4. baseball diamond (390), 5. tennis court (524), 6. basketball court (159), 7. ground track field (163), 8. harbor (224), 9. bridge (124), and 10. vehicle (477). Labels were manually annotated with axis-aligned bounding boxes. 715 color images were acquired from Google Earth with spatial resolutions ranging from 0.5 m to 2 m, and 85 pansharpened color-infrared (CIR) images were acquired from the Vaihingen dataset with a spatial resolution of 0.08 m.



Another labeled aerial imagery dataset is SODA-A: https://huggingface.co/datasets/satellite-image-deep-learning/SODA-A

Object labels need to be in an XML file using the PASCAL VOC format. Try the Label Studio tool to create labels.
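To see what Detecto expects from your labeling tool, here is a minimal sketch of a PASCAL VOC annotation parsed with Python's standard library. The filename, class name, and box coordinates are made up for illustration; Label Studio can export files in this format.

```python
import xml.etree.ElementTree as ET

# A hypothetical single-object PASCAL VOC annotation
voc_xml = """<annotation>
    <filename>tile_001.jpg</filename>
    <size><width>512</width><height>512</height><depth>3</depth></size>
    <object>
        <name>vehicle</name>
        <bndbox>
            <xmin>120</xmin><ymin>84</ymin><xmax>176</xmax><ymax>140</ymax>
        </bndbox>
    </object>
</annotation>"""

root = ET.fromstring(voc_xml)
for obj in root.iter('object'):
    name = obj.find('name').text
    box = obj.find('bndbox')
    xmin, ymin, xmax, ymax = (
        int(box.find(tag).text) for tag in ('xmin', 'ymin', 'xmax', 'ymax')
    )
    print(name, (xmin, ymin, xmax, ymax))  # → vehicle (120, 84, 176, 140)
```

Each `<object>` element holds one class name and one axis-aligned bounding box in pixel coordinates, which is exactly the structure Detecto's dataset loader reads during training.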

Run the Jupyter Notebook in Google Colab

One option is to run the notebook in Google Colab. This is probably the easiest way to get started: Google provides a virtual machine with optional (but limited) GPU resources. Open Colab Notebook
