Researchers from ETH Zurich and the University of Bologna have designed a micro-drone that uses a deep learning-based visual navigation system for autonomous flight. The PULP-DroNet drone can follow a street or corridor at high speed while dodging unexpected obstacles. What's more, it requires no human operator, no external wireless link, and no base station; all processing is done onboard the drone.
The PULP-DroNet drone uses a DNN-based visual navigation engine that comprises both software and hardware. On the software side, the drone runs the DroNet CNN (Convolutional Neural Network) architecture, which learns how to fly by imitating the behavior of manned vehicles already operating in the same environment. From each image taken by the drone's forward-facing camera, the network derives a steering angle and a collision probability. Those two outputs are then translated into control commands, allowing the drone to keep navigating while avoiding obstacles.
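To make the two-output-to-command step concrete, here is a minimal sketch of how a steering angle and collision probability could be turned into smoothed control commands. The constants (`V_MAX`, `ALPHA`) and the low-pass filtering scheme are illustrative assumptions, not values taken from the PULP-DroNet implementation.

```python
# Illustrative sketch (not the authors' code): map DroNet's two CNN outputs,
# a steering value in [-1, 1] and a collision probability in [0, 1],
# to forward-velocity and yaw-rate commands.

V_MAX = 1.0   # assumed maximum forward velocity, m/s
ALPHA = 0.7   # assumed low-pass smoothing factor

class NavController:
    def __init__(self):
        self.velocity = 0.0
        self.yaw_rate = 0.0

    def update(self, steering, collision_prob):
        """Blend new CNN outputs into smoothed control commands."""
        # Slow down as collision probability rises; stop as it nears 1.
        target_v = V_MAX * (1.0 - collision_prob)
        # Low-pass filter both commands to avoid jerky flight.
        self.velocity = (1 - ALPHA) * self.velocity + ALPHA * target_v
        self.yaw_rate = (1 - ALPHA) * self.yaw_rate + ALPHA * steering
        return self.velocity, self.yaw_rate
```

With a clear path (`collision_prob = 0`), the filtered velocity ramps up toward `V_MAX`; as obstacles appear, the velocity command decays toward zero while the yaw-rate command steers around them.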
On the hardware end, the researchers used a Bitcraze Crazyflie 2.0 micro-drone outfitted with the custom-designed PULP-Shield, an ultra-low-power navigation module that carries a PULP GAP8 SoC, which packs nine RISC-V cores (a single controller core plus an eight-core cluster accelerator), along with a Himax QVGA grey-scale camera. All processing takes place on the shield, which issues the drone's movement commands, including forward velocity and angular yaw rate.
Given its micro-drone form factor, the researchers say PULP-DroNet could serve a range of applications, from acting as a flying IoT hub to search-and-rescue, surveillance, and inspection platforms.