The Human-Computer Interaction Institute (HCII) at Carnegie Mellon University is researching new methods of gesture sensing, ranging from modified smartwatches and ultrasonic transducers to its latest effort, a beamforming wristband. CMU’s BeamBand senses gestures using ultrasonic beamforming, shaping acoustic wavefronts to project energy at various angles and focal points. This allows the surface geometry of the hand to be mapped with inaudible sound in a scanner-like manner from multiple vantage points.
Researchers built the BeamBand from a series of in-air ultrasonic transducers positioned in a semicircle above the hand, ideally placed to capture different hand poses. Active beamforming lets the wristband steer and focus ultrasound onto many different areas of the hand, while multiplexing the transducers captures the sonic reflections from different viewpoints, effectively mapping the hand.
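The steering described above boils down to timing: firing each transducer with a slightly different delay tilts the combined wavefront toward a chosen angle. The sketch below illustrates the classic delay-and-sum idea for a small linear array; the array size, element spacing, and speed-of-sound figure are illustrative assumptions, not values from the BeamBand hardware.

```python
import numpy as np

# Illustrative transmit-beamforming delays for a small ultrasonic array.
# All parameters are assumptions for the sketch, not BeamBand specs.
SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
N_ELEMENTS = 8           # hypothetical number of transducers
PITCH = 0.005            # m, hypothetical spacing between elements

def steering_delays(angle_deg):
    """Per-element firing delays (seconds) that steer the beam
    to angle_deg off the array's boresight."""
    angle = np.radians(angle_deg)
    positions = np.arange(N_ELEMENTS) * PITCH
    # Each element fires later in proportion to its extra path length.
    delays = positions * np.sin(angle) / SPEED_OF_SOUND
    return delays - delays.min()   # shift so the earliest delay is zero

delays = steering_delays(20.0)   # microsecond-scale delays across the array
```

Sweeping `angle_deg` over a range of values is what gives the "scanner-like" coverage of the hand from a single wrist-worn array.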
The transducers are driven with software-controlled waveforms by a custom sensor circuit built around three major components: a high-voltage EMCO SIP100 DC-DC power regulator, high-voltage amplifiers, and a multiplexed analog front end. An overclocked (240 MHz) Teensy 3.6 runs the show, driving the sensor circuit and processing the data it returns.
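A "software-controlled waveform" in this context is typically a short tone burst at the transducer's resonant frequency, delayed per channel for steering. The snippet below sketches such a burst in Python for clarity; the 40 kHz frequency, sample rate, and windowing are common illustrative choices, not details confirmed from the BeamBand circuit.

```python
import numpy as np

# Hedged sketch of a software-defined drive signal: a windowed
# 40 kHz tone burst, optionally delayed. Parameters are assumptions.
SAMPLE_RATE = 1_000_000  # Hz, hypothetical DAC/timer rate
FREQ = 40_000            # Hz, a typical in-air ultrasonic frequency
N_CYCLES = 10            # burst length in carrier cycles

def tone_burst(delay_s=0.0):
    """One channel's drive waveform: silence for delay_s, then a
    Hann-windowed sine burst (the window tapers the burst edges)."""
    n_delay = int(round(delay_s * SAMPLE_RATE))
    n_burst = int(SAMPLE_RATE * N_CYCLES / FREQ)
    t = np.arange(n_burst) / SAMPLE_RATE
    burst = np.sin(2 * np.pi * FREQ * t) * np.hanning(n_burst)
    return np.concatenate([np.zeros(n_delay), burst])
```

Feeding each channel's delay from a beam-steering schedule, then switching the analog front end between transducers, is one plausible way the multiplexed transmit/receive cycle could be organized.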
Using this ultrasonic mapping, the BeamBand recognizes hand poses at 8 FPS and handles a six-class hand-gesture set at 94.6% accuracy. As with CMU’s other gesture-sensing projects, the BeamBand is not ready for commercial release, since it must be calibrated for each individual user rather than generalized across users. That said, the HCII team envisions the BeamBand in next-gen smartwatches and VR platforms.
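Per-user calibration of the kind described above amounts to training a small classifier on echo features recorded from one wearer. The following minimal sketch uses a nearest-centroid model as a stand-in; the feature layout, gesture names, and classifier choice are assumptions for illustration, not the pipeline from the BeamBand paper.

```python
import numpy as np

# Minimal per-user gesture classifier: nearest-centroid over echo
# features. Gesture names and feature dimensions are hypothetical.
GESTURES = ["fist", "open", "pinch", "point", "thumbs_up", "wave"]

def fit_centroids(features, labels):
    """Calibration step: average this user's feature vectors per gesture.
    features: (n_samples, n_dims); labels: ints indexing GESTURES."""
    return np.stack([features[labels == k].mean(axis=0)
                     for k in range(len(GESTURES))])

def classify(centroids, sample):
    """Return the gesture whose calibrated centroid is closest."""
    dists = np.linalg.norm(centroids - sample, axis=1)
    return GESTURES[int(np.argmin(dists))]
```

Because the centroids are fit from one wearer's recordings, the model captures that person's hand geometry and wear position, which is exactly why it does not transfer to other users without recalibration.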