What if Computers Could Read Our Lips? Silent Speech as an Active Mode of Interaction with Computer Systems - Livestream

Silent speech that converts lip movements into text can mitigate many challenges of speech and traditional input methods. Yet, existing silent speech recognition models are error-prone or rely on impractical external devices or implants. In this talk, I will present the findings of three projects involving silent speech input. First, a social study established silent speech as an acceptable and desired mode of interaction. Second, two empirical studies revealed that users are more tolerant of errors in silent speech and tend to speak slowly when interacting with it. Third, I will present a new end-to-end deep neural network that can automatically segment lip-sequence videos and classify them into text. In an evaluation, the model reduced the word error rate by 57% compared to the state of the art without compromising overall computation time.
Speaker: Ahmed Sabbir Arif, UC Merced
Register at weblink to receive Zoom information
Wednesday, 10/20/21
Cost: Free
