Tiny SSD: A Tiny Single-shot Detection Deep Convolutional Neural Network for Real-time Embedded Object Detection
This repo contains the code, data and trained models for the paper Tiny SSD: A Tiny Single-shot Detection Deep Convolutional Neural Network for Real-time Embedded Object Detection.
- How to Install
- Description of Files
- How to Run
- Results, Outputs, Checkpoints
Tiny SSD is a single-shot detection deep convolutional neural network for real-time embedded object detection. It combines the efficiency of the Fire microarchitecture introduced in SqueezeNet with the object detection performance of SSD (Single Shot MultiBox Detector).
How to Install
conda create -n env python=3.8 -y
conda activate env
pip install -r requirements.txt
Description of Files
We use /data/detection/background to generate the target detection dataset for our experiments.
Since the generated data is already stored in the repository, there is no need to rerun this step.
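The generation step above can be sketched roughly as follows: paste a target patch onto a background image at a random location and record the resulting bounding box as the label. This is a minimal NumPy stand-in for the images under /data/detection/background; the names and the normalized box format are illustrative assumptions, not the repo's exact pipeline.

```python
import numpy as np

def paste_target(background, target, rng):
    """Paste `target` onto `background` at a random position.

    Returns the composited image and the ground-truth box as
    (x_min, y_min, x_max, y_max), normalized to [0, 1].
    """
    bh, bw = background.shape[:2]
    th, tw = target.shape[:2]
    # Random top-left corner that keeps the whole target inside the image.
    x = int(rng.integers(0, bw - tw + 1))
    y = int(rng.integers(0, bh - th + 1))
    image = background.copy()
    image[y:y + th, x:x + tw] = target
    box = (x / bw, y / bh, (x + tw) / bw, (y + th) / bh)
    return image, box

# Toy example: a white 40x40 patch on a black 256x256 background.
rng = np.random.default_rng(0)
bg = np.zeros((256, 256, 3), dtype=np.uint8)
tgt = np.full((40, 40, 3), 255, dtype=np.uint8)
img, box = paste_target(bg, tgt, rng)
print(box)
```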
How to Run
The checkpoints will be saved in a subfolder of
Finetuning from an existing checkpoint
The model path should be a subdirectory of the ./model/checkpoints/ directory, e.g.
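Resuming from a saved checkpoint can be sketched as below. The repo stores .pkl files such as net_100.pkl; treating a checkpoint as a pickled dict with an epoch counter and parameters is an assumption for illustration, and the real files may need the repo's own loading code.

```python
import pickle
from pathlib import Path

def load_checkpoint(path):
    """Load a pickled checkpoint dict (assumed format: epoch + params)."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical round trip: write a minimal state, then resume from it.
ckpt = Path("net_demo.pkl")
with open(ckpt, "wb") as f:
    pickle.dump({"epoch": 100, "params": {"w": [0.1, 0.2]}}, f)

state = load_checkpoint(ckpt)
print("resuming from epoch", state["epoch"])
```

Note that pickle files should only be loaded from trusted sources, since unpickling can execute arbitrary code.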
Results, Outputs, Checkpoints
The checkpoint ./model/checkpoints/net_100.pkl achieves: class err 1.54e-03, bbox MAE 1.90e-03.
I used the following methods to improve performance:
- Preprocessed the detection targets (e.g., inverting white regions) to match the appearance of the test images
- Augmented the data with flips, rotations, and similar transforms to improve generalization
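The flip augmentation above has one subtlety worth showing: the bounding box must be transformed together with the image. A minimal sketch, assuming normalized (x_min, y_min, x_max, y_max) boxes (an illustrative convention, not necessarily the repo's):

```python
import numpy as np

def hflip(image, box):
    """Horizontally flip an image and its normalized bounding box."""
    x_min, y_min, x_max, y_max = box
    # After a horizontal flip, the old right edge becomes the new left edge.
    return np.fliplr(image), (1.0 - x_max, y_min, 1.0 - x_min, y_max)

img = np.arange(12).reshape(2, 2, 3)
flipped, fbox = hflip(img, (0.0, 0.0, 0.5, 1.0))
print(fbox)  # (0.5, 0.0, 1.0, 1.0)
```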
If we have more classes, we can further improve the model in the following ways:
- When an object is much smaller than the image, upscale the input image so the object occupies more pixels.
- There are typically a vast number of negative anchor boxes. To make the class distribution more balanced, we could downsample negative anchor boxes.
- In the loss function, assign different weight hyperparameters to the class loss and the offset loss.
- Evaluate the model with additional metrics, such as those in the Single Shot MultiBox Detector paper (Liu et al., 2016).
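Two of the ideas above can be sketched concretely: downsampling negative anchors to a fixed negative:positive ratio, and weighting the class and offset loss terms separately. The 3:1 ratio and the weight values are illustrative choices, not values from the paper.

```python
import numpy as np

def sample_anchors(labels, neg_pos_ratio=3, rng=None):
    """Keep all positive anchors and a random subset of negatives.

    `labels` holds one class id per anchor; 0 means background (negative).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pos = np.flatnonzero(labels > 0)
    neg = np.flatnonzero(labels == 0)
    n_neg = min(len(neg), neg_pos_ratio * max(len(pos), 1))
    keep_neg = rng.choice(neg, size=n_neg, replace=False)
    return np.concatenate([pos, keep_neg])

def weighted_loss(cls_loss, offset_loss, cls_weight=1.0, offset_weight=1.0):
    """Combine the two loss terms with separate weight hyperparameters."""
    return cls_weight * cls_loss + offset_weight * offset_loss

# 20 negative anchors, 2 positives: keep 2 positives + 3*2 = 6 negatives.
labels = np.array([0] * 20 + [1] * 2)
kept = sample_anchors(labels)
print(len(kept))  # 8
print(weighted_loss(0.5, 0.25, cls_weight=1.0, offset_weight=2.0))  # 1.0
```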