
New Wearable Speech Recognition Tech Tracks Mouthed Words

New Wearable Speech Recognition
Image Source: https://www.pexels.com/photo/young-female-friends-communicating-using-sign-language-in-library-7516283/

Korea’s most revered king, Sejong the Great, personally created the Korean alphabet, Hangul, and revealed it in 1443. King Sejong wrote the preface of the original treatise on Hangul, the Hunminjeongeum, explaining the origin and purpose of the writing system he and his scholars created. According to the book, the Korean alphabet’s consonants were classified according to the speech organs involved in producing each sound, and the shapes of the letters represent those speech organs. Thus, consonants that are articulated similarly share a common base shape.

Another Korean innovation 

Researchers in South Korea recently published a new project on silent speech. They have designed a tiny wearable system that tracks facial movements to recognize words through a silent speech recognition interface. Their approach echoes what the fourth king of the Joseon Dynasty did centuries earlier: using the shape of the mouth and the organs of speech as the basis for the letters of an alphabet.

The research team from the School of Electrical and Electronic Engineering at South Korea’s Yonsei University developed the technology. Their primary goal is to help people who are hearing impaired and cannot communicate with others through sign language.

The technology involved

According to the researchers, their silent speech interface relies on small strain sensors to detect the expansion and contraction of the skin as a person speaks. They applied a deep learning algorithm so that the system can track these facial movements and convert them into words.
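To make the idea concrete, here is a minimal, purely illustrative sketch of the pipeline described above: a few strain-sensor channels produce a time-series window as the skin moves, simple features are extracted, and a classifier maps them to a word. The channel count, feature choice, word list, and nearest-centroid rule are all assumptions for illustration; the published system uses a deep learning model, not this toy classifier.

```python
import numpy as np

# Hypothetical sketch, NOT the Yonsei system: map a window of
# strain-sensor readings to a word with a nearest-centroid rule.
rng = np.random.default_rng(0)

N_CHANNELS = 4   # assumed number of strain sensors on the face
N_SAMPLES = 50   # assumed readings per utterance window
WORDS = ["hello", "yes", "no"]  # illustrative vocabulary

def features(window):
    """Reduce a (channels, samples) strain window to a feature vector:
    per-channel mean and peak-to-peak amplitude."""
    return np.concatenate([window.mean(axis=1), np.ptp(window, axis=1)])

# Synthetic templates: each word deforms the skin with a different
# per-channel strain amplitude pattern.
patterns = {w: rng.uniform(0.5, 2.0, size=N_CHANNELS) for w in WORDS}

def synth_utterance(word, noise=0.05):
    """Generate a noisy synthetic strain window for a mouthed word."""
    t = np.linspace(0, 1, N_SAMPLES)
    base = np.sin(2 * np.pi * 3 * t)            # mouthing motion
    window = patterns[word][:, None] * base     # channel-specific strain
    return window + rng.normal(0, noise, window.shape)

# Build one centroid per word from a few noisy examples.
centroids = {w: np.mean([features(synth_utterance(w)) for _ in range(10)],
                        axis=0)
             for w in WORDS}

def recognize(window):
    """Return the word whose centroid is closest in feature space."""
    f = features(window)
    return min(WORDS, key=lambda w: np.linalg.norm(f - centroids[w]))
```

In the real system, a deep network would replace both the hand-crafted features and the centroid rule, but the shape of the problem (strain signals in, word labels out) is the same.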

The strain sensor, attached to the face, moves in sync with the facial skin during speech. As the skin deforms, according to one of the project’s researchers, the strain sensors’ electrical properties change.

Based on the findings published in Nature Communications, the sensors are ultra-thin, and the researchers made them resistant to sebum and sweat so that they adhere securely to the facial skin. In testing, the system recognized 100 words at nearly 88 percent accuracy without any voiced speech, analyzing facial movements alone. According to the team, this was an unmatched level of performance.

Something new for an existing technology

The researchers note that silent speech recognition sensors already exist. However, they improved on the design, making the sensors less than 8 µm thick and giving their system higher scalability. With the new system, they only need to add more sensors, much like adding pixels to increase the resolution of an image. With more sensors, the system can track facial movements in finer detail, enabling it to recognize more words. The team also wants to combine the wearable strain sensor with a highly integrated circuit, similar to the way semiconductor or display systems are produced.
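The "more sensors, like more pixels" intuition can be sketched numerically: with more strain channels, the signature vectors of different words spread farther apart, so a larger vocabulary stays separable. The word count, channel counts, and random signatures below are invented solely to illustrate the geometry of that claim.

```python
import numpy as np

# Illustrative sketch of the scalability idea: each word is assumed to
# produce a random per-channel strain signature, and we measure how far
# apart the two closest words are as the channel count grows.
rng = np.random.default_rng(1)

def min_pairwise_distance(n_channels, n_words=20):
    """Smallest distance between any two word signatures when each word
    has a random strain amplitude on each of n_channels sensors."""
    sigs = rng.uniform(0.5, 2.0, size=(n_words, n_channels))
    d = np.linalg.norm(sigs[:, None, :] - sigs[None, :, :], axis=-1)
    d[np.diag_indices(n_words)] = np.inf  # ignore self-distances
    return d.min()

few = min_pairwise_distance(2)    # a couple of sensors
many = min_pairwise_distance(16)  # a denser sensor array
# With more channels, the closest pair of word signatures is farther
# apart, so a classifier can keep a larger vocabulary distinguishable.
```

This is only a geometric analogy; in practice the benefit also depends on sensor placement and the deep learning model, which the sketch does not capture.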

Yonsei University researchers
Image Source: https://www.pexels.com/photo/therapist-giving-advice-3958375/

Tool to help deaf and hard-of-hearing people communicate

Yonsei University researchers say they plan to increase the amount of information the system captures so it can recognize more words and sentences, helping people with language disabilities hold conversations. They hope that some people with speech and hearing disabilities will no longer have to rely on rehabilitation, sign language, or hearing aids to communicate. There have been many attempts to develop such a tool, but most were challenged by the limited area of the human face.

According to the study, the system enables users to understand what others say by recognizing changes in the shape of the lips. The device combines a deep learning-based strain-word shift algorithm with a high-performance wearable strain gauge based on single-crystal silicon.

The research team said that their new-concept platform would help people communicate by reading the movements of the lips, without the need for sign language.