How to Train New TensorFlow Lite Micro Speech Models
2019-08-01 | By Adafruit Industries
License: See Original Project
Courtesy of Adafruit
Guide by Lady Ada
Overview
Machine learning has come to the 'edge' - small microcontrollers that can run a very miniature version of TensorFlow Lite to do ML computations. The first demos available are for 'micro speech', which detects a couple of words. The default words are 'yes/no', but the dataset contains many other words! This guide goes through how to train micro speech models on your own.
Install Docker
We need to run a specific version/commit of TensorFlow, and TensorFlow's dependency requirements are very strict. We strongly suggest not trying to compile and run it on your native computer OS - that way we avoid weird interactions with your OS, compiler toolchain, Python installation, etc. Also, TensorFlow really wants to run on a particular version of Linux, and chances are you aren't running it.
Instead, we will be using Docker to containerize and separate the TF build so we have a compact, clean, dependable build. Docker is lighter-weight than VMware/Vagrant, and has a very nice 'hub' backend for saving/restoring your images, all for free!
Signup and log into Docker
Sign up at https://hub.docker.com/signup
You don't need to pay for an account, but be aware that the software images we'll be using are public, so don't put any private data in them!
Download and Install Desktop Docker
Download Docker software for Windows or Mac, whichever matches your computer
TensorFlow needs a lot of computing resources
Give it as many CPUs and as much RAM as you can spare
You need to give it at least 8 GB of RAM, or gcc will fail with a very annoying and somewhat confusing error like this (possibly on a different file):
ERROR: /root/tensorflow/tensorflow/core/kernels/BUILD:3371:1: C++ compilation of rule '//tensorflow/core/kernels:reduction_ops' failed (Exit 4)
gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
Target //tensorflow/examples/speech_commands:train failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 6058.951s, Critical Path: 3278.24s
INFO: 2606 processes: 2606 local.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
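To confirm Docker actually received the CPUs and RAM you allocated, you can check from a terminal before kicking off a long build. A minimal check (the exact field names assume a reasonably recent Docker version):

```shell
# Show the CPU count and memory limit the Docker engine is running with
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
```

If the memory figure is below 8 GB, go back to the Docker settings and raise it before building.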
Open a command terminal and try to log in, using the same username/password as on the site
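The login step is just the docker login command; it will prompt for the username and password you created on hub.docker.com:

```shell
# Authenticate the local Docker client with Docker Hub
docker login
# Username: yourusername
# Password: ********
```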
OK you're ready to go!
Create/Fork Docker Image
Start with the official TensorFlow Docker image. Like GitHub, you can pull/commit/push, and you implicitly fork when you do this between sources.
docker pull tensorflow/tensorflow will get you the latest Docker image from Google
Log into the Docker image with
docker run -it tensorflow/tensorflow bash
Within the Docker root shell, install some dependencies with
apt-get install -y curl zip git
Install TensorFlow with pip
This technique does not work yet!
Go through the process to create/fork the official TensorFlow Docker image. After running apt-get install -y curl zip git, install the latest TensorFlow pip package with:
pip install https://storage.googleapis.com/tensorflow-nightly/prod/tensorflow/release/ubuntu/cpu_py2_full/nightly_release/455/20190703-203107/github/tensorflow/pip_pkg/tf_nightly-1.15.0.dev20190703-cp27-cp27mu-linux_x86_64.whl
To get optimizations enabled on x86 machines, such as AVX2 if your processor supports it, try uninstalling the default TensorFlow with pip uninstall tensorflow
and installing the latest build from Intel
pip install https://tensorflow-ci.intel.com/job/tensorflow-mkl-build-whl-nightly/lastSuccessfulBuild/artifact/tensorflow-1.14.0-cp27-cp27mu-linux_x86_64.whl
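Whichever wheel you install, a quick sanity check (assuming the pip install above succeeded) confirms which version Python actually sees:

```shell
# Print the installed TensorFlow version to verify the wheel installed cleanly
python -c "import tensorflow as tf; print(tf.__version__)"
```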
Clone the latest tensorflow repository
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
Advanced: Build TensorFlow
If you need to compile TensorFlow from scratch, you can, but it's very slow to get everything compiled. Once it's compiled, it's really fast to train models!
We have to start this way until there are more automated methods, so here's a guide on how we did it.
While this method takes a long time, it's the only way we were able to build models - hopefully there will be an easy-to-use pip installer soon!
We need to use version 0.23.1 of bazel (the build tool), so we'll install that specific version like this:
cd ~
curl -O -L https://github.com/bazelbuild/bazel/releases/download/0.23.1/bazel-0.23.1-installer-linux-x86_64.sh
chmod +x bazel-0.23.1-installer-linux-x86_64.sh
./bazel-0.23.1-installer-linux-x86_64.sh
You can verify it with bazel version
For some reason, the image still uses Python 2.7, so grab the future package so we can run Python 3 code
pip install future
We also need to get the right version of the 'estimator' package (we use it later)
pip uninstall tensorflow_estimator
pip install -I tensorflow_estimator==1.13.0
We need to build a specific commit of TensorFlow, so clone the repo then switch to that commit
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout 4a464440b2e8f382f442b6e952d64a56701ab045
Go with the default configuration by running
yes "" | ./configure
Finally start the TensorFlow compile and speech training with
bazel run -c opt --copt=-mavx2 --copt=-mfma tensorflow/examples/speech_commands:train -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="yes,no" --silence_percentage=25 --unknown_percentage=25 --quantize=1
This will create a micro model of the large speech data set with only "yes" and "no" words in the model (to keep it small/simple)
This will take many hours, especially the first time! Go take a break and do something else (or try using your computer, but it will be slow, because Docker is using all the computational resources to compile some 16,000 files)
After TensorFlow has finished compiling, it will take another 2+ hours to run the training.
Training and freezing models
Start training a new micro speech model with
python tensorflow/examples/speech_commands/train.py --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="yes,no" --silence_percentage=25 --unknown_percentage=25 --quantize=1
or, if using bazel
bazel run -c opt --copt=-mavx2 --copt=-mfma tensorflow/examples/speech_commands:train -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="yes,no" --silence_percentage=25 --unknown_percentage=25 --quantize=1
This will run for a few hours
At the end you'll get your final test accuracy and checkpoint file
Checkpoint files are stored in /tmp
In this case we want /tmp/speech_commands_train/tiny_conv.ckpt-18000.* (the last place the trainer saved to)
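If you're unsure which checkpoint number is the latest, just list the training directory; the trainer writes several numbered checkpoint files (the path here assumes the default /tmp/speech_commands_train location):

```shell
# List checkpoint files, newest first; the highest step number is the one to freeze
ls -lt /tmp/speech_commands_train/ | grep ckpt
```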
Freeze
Take the trained weights and turn them into a frozen model on disk.
python tensorflow/examples/speech_commands/freeze.py --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="yes,no" --quantize=1 --output_file=/tmp/tiny_conv.pb --start_checkpoint=/tmp/speech_commands_train/tiny_conv.ckpt-18000
or if using bazel something like:
bazel run tensorflow/examples/speech_commands:freeze -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="yes,no" --quantize=1 --output_file=/tmp/tiny_conv.pb --start_checkpoint=/tmp/speech_commands_train/tiny_conv.ckpt-18000
Convert
Convert the TensorFlow model into a TF Lite file
bazel run tensorflow/lite/toco:toco -- --input_file=/tmp/tiny_conv.pb --output_file=/tmp/tiny_conv.tflite --input_shapes=1,49,40,1 --input_arrays=Reshape_1 --output_arrays='labels_softmax' --inference_type=QUANTIZED_UINT8 --mean_values=0 --std_values=9.8077
The file can now be found in /tmp/tiny_conv.tflite
Extract & Save
Finally, you can use docker cp to copy the file from your container to your desktop. From the host computer (not the Docker container) run docker cp CONTAINERID:/tmp/tiny_conv.tflite .
You should now have access to the file!
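To actually use the model on a microcontroller, the usual next step is turning the .tflite flatbuffer into a C array that can be compiled into the firmware. A sketch using the standard xxd tool (the generated variable names are derived automatically from the input filename):

```shell
# Convert the TF Lite flatbuffer into a C source file for embedding;
# xxd -i emits an unsigned char array plus a _len length variable
xxd -i tiny_conv.tflite > tiny_conv_model_data.cc
# Peek at the generated array declaration and its length variable
head -1 tiny_conv_model_data.cc
tail -1 tiny_conv_model_data.cc
```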
Commit Docker
Now's a good time to 'save' our work. Run docker ps to list all your docker containers
You can 'save' this Docker container to your account with
docker commit CONTAINER_ID USERNAME/mytensorflow
where CONTAINER_ID is the 12-character ID to the left of the image name and USERNAME is your Docker login name. So in my case: docker commit c2a0a7f0a7bb ladyada/mytensorflow
It will take a few minutes while Docker commits the image.
Then push it to docker hub with docker push username/containername
Then visit your dockerhub profile to see that you have in fact pushed the docker image
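Later, on the same or a different machine, the saved image can be restored and resumed (assuming you pushed it under USERNAME/mytensorflow as above):

```shell
# Pull the committed image back down and start a fresh container from it
docker pull USERNAME/mytensorflow
docker run -it USERNAME/mytensorflow bash
```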