Intro to the Google Coral Dev Board Micro: Custom Object Detection
2024-02-26 | By ShawnHymel
License: Attribution | Tags: Camera, Microcontrollers, Coral
The Google Coral Dev Board Micro is a powerful microcontroller board with a dual-core ARM Cortex-M7 and M4 along with a Tensor Processing Unit (TPU) AI accelerator. By compiling and running models on the TPU, we can achieve impressively fast inference results in a small package. This tutorial will walk you through the process of training a custom object detection model (MobileNetV2-SSD) and running it on the Coral Micro.
Using this method, I was able to achieve inference speeds of around 8-12 frames per second (FPS) with 320x320 color input images. This is pretty impressive for full object detection running on a microcontroller-based board that draws under 1 watt.
If you would like to see this tutorial in video form, check out the following:
Important! The FreeRTOS-based software development kit (SDK) for the Coral Micro (called coralmicro) works in Linux only. I have not tried it in macOS, but many of the packages are installed via APT (apt-get) when you run the installation scripts. The following steps were tested on Ubuntu 20.04.
Additionally, note that while Google has Arduino examples for the Coral Micro, I was unable to get any of the machine learning demos to compile correctly. As a result, I recommend sticking with the FreeRTOS-based coralmicro SDK.
All code and examples for this tutorial can be found in this GitHub repository.
Collect Data
Install the coralmicro SDK:
mkdir -p ~/Projects
cd ~/Projects
git clone --recurse-submodules -j8 https://github.com/google-coral/coralmicro
cd coralmicro && bash setup.sh
Clone or download the google-coral-micro-object-detection repository somewhere on your computer. Go into the image-collection firmware directory and create a symbolic link to the coralmicro SDK:
cd ~/Projects
git clone https://github.com/ShawnHymel/google-coral-micro-object-detection
cd google-coral-micro-object-detection/firmware/image-collection-http
ln -s ~/Projects/coralmicro/ coralmicro
Put your Coral Micro board into bootloader mode by pressing and holding the middle button (SW2) while plugging in the USB cable to your computer. The orange LED in the bottom-right corner should stay lit to let you know the board is in bootloader mode.
Run CMake, build, and flash the image collection firmware to the Coral Micro:
cmake -B build/ -S .
make -C build/ -j4
python3 coralmicro/scripts/flashtool.py --build_dir build/ --elf_path build/coralmicro-app
Once the Coral Micro boots, you can open a browser and navigate to http://10.10.10.1 to access the webserver running on the Coral Micro. You should be presented with a page that streams images from the camera and gives you the option to save images to your computer. Once again, this is Linux-only (Windows lacks the Ethernet-over-USB drivers to make this work).
Point the camera at your desired objects and click the Save Image button at the top to save a copy of what the camera sees to your computer. Note that each image filename is given a random hash, so you can keep pressing Save without overwriting previous captures.
You will want to gather a lot of data: the more, the better! Around 400 images containing my target objects were enough to train a decent object detection model for my project, but you may need more.
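If you get tired of clicking, you can also script the collection process from your computer. The following Python sketch polls the board's web server and saves frames to disk. Note that the /camera endpoint is a hypothetical placeholder; check the image-collection firmware (or your browser's network tab when you press Save Image) for the actual route that returns a single JPEG frame.
import time
import urllib.request
BOARD_URL = "http://10.10.10.1/camera"  # Hypothetical endpoint; verify against the firmware
NUM_IMAGES = 100                        # Number of frames to capture
DELAY_S = 0.5                           # Delay between captures (seconds)
for i in range(NUM_IMAGES):
    # Request one frame from the Coral Micro's web server
    with urllib.request.urlopen(BOARD_URL, timeout=5) as resp:
        data = resp.read()
    # Save the frame with a sequential filename
    filename = f"capture_{i:04d}.jpg"
    with open(filename, "wb") as f:
        f.write(data)
    print(f"Saved {filename} ({len(data)} bytes)")
    time.sleep(DELAY_S)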
Label Data
For object detection, we need to use a labeling program to create a series of bounding boxes that act as our ground truth labels. Head to https://www.makesense.ai/ and click Get Started at the bottom. Upload all your images as requested and select Object Detection for the project type.
When asked, create your labels for the different objects you want to identify. For my robot project, I want to identify “basket” and “target” objects, so I create those labels.
Click Start Project. Click and drag to create a box over each of your intended objects. Change the label as needed by selecting the drop-down menu for each bounding box on the right.
Repeat this process for all images in your dataset. When you are done, select Actions > Export Annotations. Select A .zip package containing files in VOC XML format, and click Export.
Unzip your XML files. Create a directory for your images named “images” and a directory for your annotations named “Annotations” (note the spelling and capitalization; they’re important!). Zip those directories together in an archive named “dataset.zip” with the layout shown below (a scripted version of this step follows the tree).
dataset.zip
├── Annotations/
│   ├── image.01.xml
│   ├── image.02.xml
│   └── ...
└── images/
    ├── image.01.jpg
    ├── image.02.jpg
    └── ...
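If you prefer to script this step rather than zip the folders by hand, a few lines of Python will produce the expected archive. This is just a sketch; it assumes you run it from the directory that contains the Annotations and images folders.
import os
import zipfile
# Bundle Annotations/ (VOC XML files) and images/ (JPEGs) into dataset.zip
with zipfile.ZipFile("dataset.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for folder in ("Annotations", "images"):
        for name in sorted(os.listdir(folder)):
            zf.write(os.path.join(folder, name))
print("Wrote dataset.zip")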
Train Model
Navigate to the MediaPipe Object Detection Learning notebook in the repository. Click on the Open in Colab button to open the notebook in a Google Colab instance (note that you will need a Gmail account). Follow the directions in the notebook to upload your dataset.zip. Execute all the cells (press shift+enter) to install MediaPipe, train your object detection model, and convert it to a TensorFlow Lite file compiled for the TPU.
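Under the hood, the notebook relies on the MediaPipe Model Maker object detection API. The condensed sketch below shows roughly what the training cells do; the split fraction, hyperparameters, and file names are placeholders, so defer to the notebook for the working version.
from mediapipe_model_maker import object_detector, quantization
# Load the Pascal VOC dataset (the unzipped dataset.zip with images/ and Annotations/)
data = object_detector.Dataset.from_pascal_voc_folder("dataset")
train_data, validation_data = data.split(0.8)  # Placeholder 80/20 split
# Configure and train a MobileNetV2-SSD model (hyperparameters are placeholders)
options = object_detector.ObjectDetectorOptions(
    supported_model=object_detector.SupportedModels.MOBILENET_V2,
    hparams=object_detector.HParams(epochs=50, batch_size=8, export_dir="exported_models"),
)
model = object_detector.ObjectDetector.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options,
)
# Export a full-integer quantized TensorFlow Lite model; the notebook then runs
# the Edge TPU compiler on it to produce the *_edgetpu.tflite file
quantization_config = quantization.QuantizationConfig.for_int8(train_data)
model.export_model(model_name="model_int8.tflite", quantization_config=quantization_config)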
The notebook should download a collection of models in a file named exported_models.zip to your computer. Unzip that file.
Test Model
If you would like to test the model with one (or more) of your images, you can run this TFLite testing notebook. You will need to upload your .tflite file, metadata.json, and your test image. Note that you will need to change the IMAGE_PATH variable to match the name of your uploaded image.
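If you would rather test outside of Colab, the quantized (non-Edge-TPU) .tflite file can also be run on your computer with the standard TensorFlow Lite interpreter. The sketch below only demonstrates the mechanics of invoking the model; the input preprocessing and the meaning of each output tensor come from metadata.json and may differ from what is assumed here.
import numpy as np
from PIL import Image
import tensorflow as tf
# Load the quantized (non-Edge-TPU) model; the *_edgetpu.tflite version contains
# a custom op and will not run on a desktop CPU
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()
# Resize the test image to the model's input shape (e.g. 320x320x3)
_, height, width, _ = input_details["shape"]
img = Image.open("test.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(np.array(img), axis=0).astype(input_details["dtype"])
# NOTE: a quantized input may also need scaling by input_details["quantization"]
# Run inference and list the output tensors; which one holds boxes, classes,
# and scores depends on the exported model (check metadata.json)
interpreter.set_tensor(input_details["index"], input_data)
interpreter.invoke()
for out in output_details:
    print(out["name"], interpreter.get_tensor(out["index"]).shape)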
Deploy to Coral Micro
Once you are happy with your model’s performance, you can deploy it to the Coral Micro to run inference locally on the device (using the TPU). From the exported_models.zip file that you downloaded, copy the model_int8_edgetpu.tflite and metadata.hpp files to the google-coral-micro-object-detection/firmware/object-detection-http folder. Overwrite the default files already in that folder (those belong to the model that I trained on my data).
Navigate to the directory and create a symbolic link to the coralmicro SDK:
cd ~/Projects/google-coral-micro-object-detection/firmware/object-detection-http
ln -s ~/Projects/coralmicro/ coralmicro
Put your Coral Micro into bootloader mode. Build and flash the firmware:
cmake -B build/ -S .
make -C build/ -j4
python3 coralmicro/scripts/flashtool.py --build_dir build/ --elf_path build/coralmicro-app
Note that if you wish to make changes to the firmware without uploading the .tflite model or the index.html files, you can add the --nodata flag to the flashing script. This can save you lots of flashing time during development. Additionally, you can change the default IP address with the --usb_ip_address flag. For example:
python3 coralmicro/scripts/flashtool.py --build_dir build/ --elf_path build/coralmicro-app --nodata --usb_ip_address 192.168.2.1
Open a serial port to the Coral Micro if you wish to view the debugging information. Note that the bounding box information is streamed across the USB serial port as well as over the hardware UART pins on the Coral Micro board.
picocom /dev/ttyACM0 -b 115200
The bounding box information should be streamed over the serial ports in JSON format.
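If you want to consume those detections in your own application (for example, on a robot controller attached to the UART), a few lines of Python with pyserial are enough to read and parse the stream. This is a sketch; the exact JSON field names depend on the firmware, so print a raw line first and adjust your parsing accordingly.
import json
import serial  # pip install pyserial
# Open the Coral Micro's USB serial port (adjust the device name and baud rate as needed)
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as ser:
    while True:
        line = ser.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        try:
            detections = json.loads(line)
        except json.JSONDecodeError:
            continue  # Skip debug text or partial lines
        print(detections)  # Bounding boxes, classes, and scores as sent by the firmware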
Finally, you can navigate to http://10.10.10.1 again to see images streamed from the camera with a bounding box overlay. You can open your browser’s console (F12 on most systems) to see the raw bounding box data (also in JSON format).
Going Further
A lot of steps and code are required to train a custom object detection model, convert it to a TPU file, and deploy it to a development board! While we covered the steps required to make this happen, we glossed over the theory and details. If you would like to dive into modifying the code or making your own embedded machine learning system using the Coral Micro, please check out the following resources:
- MediaPipe documentation
- Example of training an object detection model using MediaPipe
- Getting started with the Coral Dev Board Micro
- coralmicro API documentation
Have questions or comments? Continue the conversation on TechForum, DigiKey's online community and technical resource.