Communications of the ACM

ACM TechNews

Doing What the Brain Does--How Computers Learn to Listen

Researchers at Germany's Leipzig Max Planck Institute for Human Cognitive and Brain Sciences and the Wellcome Trust Centre for Neuroimaging in London have developed a mathematical model that could significantly improve computers' ability to automatically recognize and process spoken language.

The researchers say the new language-processing algorithm could eventually imitate brain mechanisms and help machines perceive and understand the world around them. The model is designed to imitate, in a highly simplified manner, the neuronal processes that occur during human speech comprehension.

These neuronal processes are described by algorithms that analyze speech at several temporal levels. The model recognized individual speech sounds and syllables and could also process accelerated speech sequences. In addition, it showed a brain-like ability to predict the next speech sound: if a prediction failed because the speaker formed an unfamiliar syllable out of familiar sounds, the system detected the error.
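The predict-then-check-for-error behavior described above can be illustrated with a toy sketch. The Python class below is a hypothetical simplification, not the researchers' actual algorithm (which models continuous neuronal dynamics): it simply counts which speech sounds follow which during training, predicts the most frequent continuation, and flags a "prediction error" for a transition it has never seen, such as an unfamiliar syllable built from familiar sounds.

```python
from collections import defaultdict

class SoundPredictor:
    """Toy next-sound predictor (illustrative sketch, not the published model)."""

    def __init__(self):
        # counts[prev][nxt] = how often sound `nxt` followed sound `prev`
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, syllables):
        for seq in syllables:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Most frequent continuation observed after `prev`, or None if unseen.
        options = self.counts.get(prev)
        return max(options, key=options.get) if options else None

    def is_surprising(self, prev, observed):
        # A "prediction error": this transition never occurred in training.
        return self.counts[prev][observed] == 0

# Familiar syllables built from the sounds b, a, o, d.
model = SoundPredictor()
model.train([["b", "a"], ["b", "a"], ["b", "o"], ["d", "a"]])

print(model.predict("b"))             # -> "a" (the most common continuation)
print(model.is_surprising("d", "o"))  # -> True: familiar sounds, unfamiliar syllable
```

A real system would operate on acoustic features and probabilities at several temporal scales rather than discrete symbol counts, but the core loop is the same: predict the next unit, then treat an unexpected observation as an error signal.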

"The crucial point, from a neuroscientific perspective, is that the reactions of the model were similar to what would be observed in the human brain," says the Max Planck Institute's Stefan Kiebel.

From Max Planck Society


Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA


