Edge computing is one of those things where you have the nails and are still looking for a hammer. In an earlier post, I wrote about Why Machine Learning on the Edge is critical. Pete Warden has also shared interesting insights in Why The Future of Machine Learning is Tiny. Many exciting technologies are emerging to accelerate development in this space. Today, we are going to look at how to deploy a neural network (NN) on a microcontroller (MCU) with uTensor.
uTensor (micro-tensor) is a workflow that converts ML models into C++ source files, ready to be imported into MCU projects. Why generate C++ source files? Because they are human-readable and can easily be edited for a given application. The process is as follows: train a model in TensorFlow, export it as a protocol buffer, run uTensor-cli on the exported graph to generate C++ files, and import those files into an Mbed project.
In this tutorial, we will be using uTensor with Mbed and TensorFlow. It covers tool installation, training the neural network, generating the C++ files, setting up the Mbed project, and deployment. Although these instructions are written for macOS, they are applicable to other operating systems.
Finally, we will need some input data to feed to the neural network to verify that it is working. For demonstration purposes, we will use a generated header file that contains the data of a hand-written digit 7.
The input-data file has been prepared for you. Download and place it in your project root:
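If you are curious what such a header looks like, here is a rough sketch of how one could be generated. The array name, formatting, and value range below are assumptions for illustration only; the actual input_data.h used in this tutorial was prepared in advance:

```python
# Hypothetical sketch: generate a C header similar in spirit to
# input_data.h from a flat 28x28 greyscale image. The array name
# "input_data" and the float formatting are assumptions.

def to_c_header(pixels, name="input_data"):
    """pixels: flat list of 784 floats, row-major order."""
    assert len(pixels) == 28 * 28
    # Emit 8 values per line for readability.
    body = ",\n    ".join(
        ", ".join(f"{v:.6f}f" for v in pixels[i:i + 8])
        for i in range(0, len(pixels), 8)
    )
    return (
        f"#ifndef {name.upper()}_H\n"
        f"#define {name.upper()}_H\n\n"
        f"static const float {name}[{len(pixels)}] = {{\n    {body}\n}};\n\n"
        f"#endif\n"
    )

header = to_c_header([0.0] * 784)  # an all-black image as a stand-in
assert "input_data[784]" in header
```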
For simplicity, we will train a multi-layer perceptron (MLP) on the handwritten-digit dataset, MNIST. The network architecture is shown above. It takes a 28-by-28 greyscale image of a hand-written digit and flattens it into a linear input of 784 values. The rest of the network consists of:
1 input layer
2 hidden layers (128 and 64 hidden units respectively) with ReLU activation functions
1 output layer (10 units, one per digit class)
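To make the architecture concrete, here is a pure-Python sketch of the forward pass. The real training script uses TensorFlow; the zero-valued placeholder weights below are an assumption, so only the layer shapes (784 → 128 → 64 → 10) are meaningful:

```python
# Shape-only sketch of the MLP forward pass. Weights are placeholder
# zeros, so this demonstrates the architecture, not real inference.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, rows, cols):
    # Placeholder fully-connected layer: all weights and biases are zero.
    assert len(v) == rows
    return [sum(v[r] * 0.0 for r in range(rows)) for _ in range(cols)]

def mlp_forward(image):                  # image: flat list of 784 pixels
    h1 = relu(dense(image, 784, 128))    # hidden layer 1
    h2 = relu(dense(h1, 128, 64))        # hidden layer 2
    return dense(h2, 64, 10)             # 10 logits, one per digit class

logits = mlp_forward([0.0] * 784)
assert len(logits) == 10
```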
The script defines the MLP and its training parameters. Running the script, you should see output like:
$ python3 deep_mlp.py
...
step 19000, training accuracy 0.92
step 20000, training accuracy 0.94
test accuracy 0.9274
saving checkpoint: chkps/mnist_model
Converted 6 variables to const ops.
written graph to: mnist_model/deep_mlp.pb
the output nodes: ['y_pred']
A protocol buffer containing the trained model will be saved to the file system:
$ ls mnist_model/
deep_mlp.pb
deep_mlp.pb is what we will supply to uTensor-cli for C++ code generation in the next step.
Generating the C++ Files
Here’s the fun part: turning the graph, deep_mlp.pb, into C++ files:
Specifying the output node helps uTensor-cli traverse the graph and apply optimisations. The name of the output node appears in the training output from the previous section; it depends on how the network is set up.
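As a toy illustration of why the output node matters (this is not uTensor-cli's actual implementation), a code generator can walk backwards from the named output and keep only the ops that feed it, dropping training-only nodes such as the loss:

```python
# Toy graph pruning: starting from the output node, collect every node
# that contributes to it and discard the rest. Node names are invented
# for illustration.

graph = {                          # node -> list of input nodes
    "x": [],
    "w1": [],
    "matmul_1": ["x", "w1"],
    "relu_1": ["matmul_1"],
    "y_pred": ["relu_1"],
    "labels": [],
    "loss": ["y_pred", "labels"],  # training-only, not needed on device
}

def reachable_from(output, graph):
    keep, stack = set(), [output]
    while stack:
        node = stack.pop()
        if node not in keep:
            keep.add(node)
            stack.extend(graph[node])
    return keep

kept = reachable_from("y_pred", graph)
assert "loss" not in kept          # pruned away for inference
```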
Compiling the Program
At this stage, we should have an Mbed project containing:
Generated C++ model
Input data header file
All we need is a main.cpp to tie everything together:
The Context class is the playground where inference takes place. get_deep_mlp() is a generated function: it populates a Context object with the inference graph and takes a Tensor as input. The Context object, which now contains the inference graph, can be evaluated to produce an output tensor holding the inference result. The name of the output tensor is the same as the output node's, as specified by your training script.
In this example, the static array defined in input_data.h is used as the input for inference. In practice, this would be buffered sensor data or any memory block containing the input data. The data is arranged in row-major layout in memory (the same as any C array). The application has to keep the input memory block valid for the duration of inference.
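Row-major layout means element (row, col) of the 28x28 image lives at flat index row * 28 + col, exactly as in a C array. A quick sketch:

```python
# Row-major indexing for a 28x28 image stored as a flat buffer,
# matching the memory layout the generated model expects.

WIDTH = 28

def flat_index(row, col, width=WIDTH):
    return row * width + col

flat = list(range(28 * 28))             # stand-in for the pixel buffer
assert flat[flat_index(0, 0)] == 0      # first pixel
assert flat[flat_index(1, 0)] == 28     # start of the second row
assert flat[flat_index(27, 27)] == 783  # last pixel
```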
Mbed CLI needs to know which board it is compiling for, in this case the K66F. You may want to change this to the target name of your board. We are also using a custom build profile here to enable C++11 support. Expect to see a compilation message similar to:
A typical Mbed board comes with a USB interface called DAPLink. Its job is to provide drag-and-drop programming from your desktop, debugging, and serial communication. Once you’ve plugged in your Mbed board, you should see:
Connect your board
Locate the binary under ./BUILD/YOUR_TARGET_NAME/GCC_ARM/my_uTensor.bin
Drag and drop it into the Mbed DAPLink mount point (shown in the picture)
Wait for the transfer to complete
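Under the hood, the drag-and-drop step is just a file copy onto the DAPLink mass-storage mount. Here is a hedged sketch; the paths shown are assumptions and depend on your OS and board:

```python
# Sketch of drag-and-drop programming as a plain file copy. DAPLink
# flashes the MCU from whatever binary is written to its drive.
import pathlib
import shutil

def flash(binary, mount_point):
    """Copy the compiled .bin onto the DAPLink drive."""
    dest = pathlib.Path(mount_point) / pathlib.Path(binary).name
    shutil.copy(binary, dest)
    return dest

# Example (hypothetical paths, macOS-style mount point):
# flash("./BUILD/K66F/GCC_ARM/my_uTensor.bin", "/Volumes/DAPLINK")
```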
Getting the Output
By default, all standard output on Mbed, i.e. printf(), is directed to the serial terminal. The DAPLink interface lets us view this serial communication over USB. We are going to use CoolTerm for this.
Fire up CoolTerm
Go to Options
Click on Re-Scan Serial Ports
Set the Port to usbmodem1234 (the exact name may vary each time you reconnect the board)
Set the baud rate to 115200, matching the configuration in main.cpp
Press the reset button on your board. You should see the following message:
Simple MNIST end-to-end uTensor cli example (device)
Predicted label: 7
Congratulations! You have successfully deployed a simple neural network on a microcontroller!
uTensor is designed to serve embedded engineers and data scientists alike. It currently supports:
Fully-connected layer (MatMul & Add)
I believe edge computing is a new paradigm for applied machine learning. It presents unique opportunities and constraints: in many cases, machine learning models may have to be built from scratch, tailored to low-power and specific use cases. We are constantly looking for ways to drive advancement in this field.
On behalf of the uTensor team, thank you for checking this tutorial out! My twitter handle is @neil_the_1. Send us an email at email@example.com for an uTensor Slack invitation link.