We take them for granted now, but photo and video filters were a big deal when they first gained traction. With nothing more than a smartphone and an Instagram account, amateur photographers could add film grain to their photos and jack up the saturation — just like the pros! But that’s just adding simple post-processing effects after the photo or video has already been captured. This Raspberry Pi neural network converts video into complex artistic recreations, and does it all in real-time.
This project was created by Idein Inc., and runs on a Raspberry Pi 3. Video is captured with a camera module, and then the neural network gets to work. It takes an image of an existing art piece, like a van Gogh, Picasso, or Monet, and then uses that to modify the video feed. Each frame is redrawn in the style of the selected art piece. The newly-artified video is then shown on an LCD connected to the Raspberry Pi. The entire thing fits in a handheld package, and can convert the video with less than a second of lag at a few frames per second.
In order to achieve that speed, the video is running at a resolution of 256×256. Idein Inc. hasn’t posted many details beyond that, but we do know this is based on an algorithm described in the paper Perceptual Losses for Real-Time Style Transfer and Super-Resolution by Justin Johnson, Alexandre Alahi, and Li Fei-Fei. That paper documents how the team used feed-forward neural networks for optimized image processing. However Idein Inc. achieved this, it’s certainly impressive.
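For the curious, the core idea of the Johnson et al. approach is that a single feed-forward pass through a convolutional network restyles a frame, rather than running a slow iterative optimization per image. The sketch below is purely illustrative — it is not Idein’s code, and the layer sizes, weights, and function names are our own invented stand-ins — but it shows the basic building block that paper’s transform network stacks up: a reflection-padded 3×3 convolution, instance normalization, and a ReLU, applied here to one random 256×256 RGB frame:

```python
import numpy as np

def conv3x3(x, w, b):
    """3x3 convolution with reflection padding, so output H and W match input.
    x: (H, W, C_in), w: (3, 3, C_in, C_out), b: (C_out,)"""
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[3]))
    for i in range(3):
        for j in range(3):
            # Accumulate the contribution of each kernel tap across all channels.
            out += np.einsum("hwc,cd->hwd", xp[i:i + H, j:j + W, :], w[i, j])
    return out + b

def instance_norm(x, eps=1e-5):
    """Normalize each channel over its spatial dimensions (per-image statistics)."""
    mu = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# One "frame" at the project's stated 256x256 resolution, with made-up weights.
rng = np.random.default_rng(0)
frame = rng.random((256, 256, 3))
w = rng.standard_normal((3, 3, 3, 8)) * 0.1  # 3 input channels -> 8 feature maps
b = np.zeros(8)

features = np.maximum(instance_norm(conv3x3(frame, w, b)), 0.0)  # ReLU
print(features.shape)  # (256, 256, 8)
```

A real transform network chains several such blocks (plus downsampling, residual blocks, and upsampling) and learns the weights by training against the style image once, up front — which is what makes the per-frame inference cheap enough to attempt on a Raspberry Pi.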