IIT Guwahati Unveils Game-Changing Underwater Sensor for Silent Voice Recognition
  • IIT Guwahati, with Ohio State University, has developed an underwater vibration sensor that enables contactless voice recognition using exhaled air, aiding people with speech disabilities.
  • The sensor captures subtle water waves caused by airflow and interprets them using AI (CNNs), eliminating the need for audible sound.
  • Low-cost, durable, and versatile, the sensor can also be used for hands-free device control, movement tracking, and underwater communication.
In a significant technological breakthrough, researchers at the Indian Institute of Technology (IIT) Guwahati, in collaboration with Ohio State University, US, have developed an innovative underwater vibration sensor that enables contactless, automated voice recognition. The device offers an alternative communication tool for individuals with speech disabilities, particularly those with damaged or non-functional vocal cords.
Unlike traditional voice recognition systems that rely on audible sound, this sensor leverages a basic physiological function: exhalation. When a person attempts to speak, air is expelled from the lungs even if no sound is produced. When this air passes over a water surface, it creates subtle waves. The underwater vibration sensor detects these minute disturbances and interprets them as speech signals, eliminating the need for vocal sound.
“This is one of the rare material designs that can recognize voice by monitoring water waves formed at the air-water interface due to exhaled air,” said Prof. Uttam Manna of the Department of Chemistry, IIT Guwahati. “This approach opens a new avenue for communication for individuals with partially or completely damaged vocal cords.”
The sensor itself is crafted from a conductive, chemically reactive porous sponge. Positioned just beneath the air-water interface, it captures the faint vibrations caused by airflow during speech attempts. These vibrations are converted into measurable electrical signals, which are then interpreted using Convolutional Neural Networks (CNNs), a form of artificial intelligence capable of learning and identifying complex signal patterns with high accuracy.
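The published work does not spell out the network architecture or preprocessing here, but the general idea of classifying a one-dimensional sensor trace with a CNN can be illustrated in a few lines. The sketch below is a minimal, hypothetical example in PyTorch: the signal length, layer sizes, and vocabulary of target words are all illustrative assumptions, not the researchers' implementation.

```python
# Minimal sketch (not the authors' implementation): a small 1-D CNN that maps a
# fixed-length electrical trace from the sensor to one of a few word classes.
# SIGNAL_LEN and VOCAB are illustrative assumptions.
import torch
import torch.nn as nn

SIGNAL_LEN = 1024                       # assumed samples per exhalation-induced trace
VOCAB = ["yes", "no", "help", "water"]  # hypothetical target words

class VibrationCNN(nn.Module):
    def __init__(self, num_classes: int = len(VOCAB)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # learn local wave patterns
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),  # combine into broader motifs
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (SIGNAL_LEN // 16), num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, SIGNAL_LEN), the sensor's voltage trace for one speech attempt
        z = self.features(x)
        return self.classifier(z.flatten(1))

if __name__ == "__main__":
    model = VibrationCNN()
    dummy = torch.randn(8, 1, SIGNAL_LEN)  # stand-in for recorded traces
    logits = model(dummy)
    print(logits.shape)                    # torch.Size([8, 4])
```

In practice such a classifier would be trained on labelled traces recorded while users mouth each target word, but the training data, word set, and accuracy reported in the study are not reproduced here.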
A working prototype has been developed at a laboratory-scale cost of approximately Rs 3,000, and the research team is actively seeking industry collaborations to bring the technology into practical use at an affordable price.
Beyond aiding people with speech disabilities, the sensor shows promise in broader applications such as hands-free control of smart devices, movement detection, and exercise tracking. It has also demonstrated long-term durability underwater, suggesting potential for use in underwater sensing and communication systems.
The research findings have been published in the prestigious journal Advanced Functional Materials, signaling a promising step forward in assistive and smart communication technologies.