A new wearable device from AlterEgo, an MIT spinoff, interprets subtle neuromuscular signals to enable silent communication. Worn on the ears, it supports tasks such as conversation and device control without vocalizing words. While it offers privacy benefits, it also raises questions about how interaction data is handled.
AlterEgo, a spinoff from MIT's Media Lab, has demonstrated a wearable that captures what its developers describe as "silent speech." This involves detecting subtle movements, such as mouthing words or internal vocalization, through neuromuscular signals produced before words are spoken aloud.
The device relies on a system named Silent Sense, which distinguishes several kinds of speech activity: normal speaking, silent mouthing, and the faint muscle signals produced by intended speech. Worn on the ears, it lets users perform voice-based tasks quietly, such as holding conversations, receiving live language translations, or operating digital devices. Proponents highlight its potential for privacy, since users need not speak sensitive information aloud in public settings.
Still, the technology raises privacy concerns of its own, since it places a computer interface between communicating parties. The device does not read thoughts; it responds only when the user deliberately activates the speech system. AlterEgo's approach builds on research into silent speech interfaces, distinguishing it from implanted brain-computer interfaces such as those from Synchron and Neuralink, though questions remain about its accessibility applications.
At the time of the report, AlterEgo had not provided additional comment. A demonstration video shows the device in operation, emphasizing its non-invasive design.