Now they’ve got TensorFlow Lite for Microcontrollers up and running on yet another new platform, their Circuit Playground Bluefruit board.
Adafruit’s Circuit Playground Bluefruit is an interesting board. Built around the Nordic nRF52840, it has a host of sensors including light, sound, temperature, and touch. But importantly for the TensorFlow Lite for Microcontrollers voice recognition demo, it also has a built-in microphone.
“This demo records audio via the mic, and then gives feedback on YES (green NeoPixels plus upwards tone) or NO (red NeoPixels plus downwards tone). So far so good, this could be a handy board for learning the basics of machine learning. We committed our experiments to the Adafruit TFLite Micro Speech repo.”
It’s also the first time, at least as far as I’m aware, that we’ve seen the TensorFlow Lite for Microcontrollers demo running on Nordic hardware.
The Nordic nRF52840 is also probably the closest competitor to the Ambiq Micro Apollo 3 used in the SparkFun Edge board and in SparkFun’s Artemis modules.
The Apollo 3 micro-controller consumes around 0.3 mA running flat out at 48 MHz, and just 1 µA in deep sleep mode with Bluetooth turned off, making the Apollo 3’s power budget when running less than what many micro-controllers draw in deep sleep. However, the nRF52840 is definitely its closest competitor, and of course the power budget of a board isn’t just down to the processor you use; there are lots of other factors involved.
@ohazi @sparkfun @Ambiq_Micro @NordicTweets Looking at the #nRF52840 data sheet it will run flat out at 4.9 mA. So it draws approximately 16× more power. So yes, the Nordic chip is more power efficient than the #ESP32, but it’s still an order of magnitude less efficient than the Ambiq #Apollo3.
After spending the last six months looking at deep learning on the edge, and investigating the new generation of custom silicon designed to speed up machine learning inferencing on embedded devices, it looks like we’re almost at the point where I can, and perhaps should, spend the next six months looking at hardware a little lower down the stack: at micro-controllers rather than micro-processors.
Although, on the face of it, timing the inferencing is harder than on the Raspberry Pi and the other boards and accelerators I’ve been looking at up till now.
That said, you should be able to time inferencing on micro-controllers by setting a pin high before the function call and low afterwards. If you put an oscilloscope on the pin, you can then accurately measure the time it spends high. You should also get a good qualitative idea of the variability in execution time by watching for jitter on the falling edge as the pin drops low. Perhaps?
However, while I’ve looked in the past at power budgets, as well as heating and cooling (the environmental factors around machine learning inferencing, rather than just inferencing times), these are going to be far more important when it comes to micro-controllers than micro-processors.
It really feels like the space around doing machine learning on this insanely low-powered hardware is starting to mature. If you’ve been following along with progress, it might be time to take a second look.