First step towards language comprehension
When we listen to somebody speaking, our brain receives a large number of different sounds, which it processes almost instantaneously, turning them into meaningful words and sentences. Recently, neuroscientists have identified what may be the first stage of this complex process of language comprehension.
In a real-time brain study, a team of neuroscientists and linguists at the University of California, San Francisco (UCSF), led by the neurosurgeon Dr Edward Chang, has extended our knowledge of how the brain interprets human speech.
To do this, six subjects were asked to listen to 500 sentences in their native tongue (English) recorded by 400 different speakers. At the same time, their brain activity was recorded using electrodes placed directly on the surface of the brain, in order to locate the areas activated by listening to these sentences.
A map of the brain was created, showing that specific neurons “light up” according to the consonant or vowel the subject hears. The team showed that the “s” and “z” sounds, known as “fricative consonants” because they are produced in the same way (by partial obstruction of the airflow), are grouped together on the map. Likewise, the consonants “b” and “p”, plosives produced by the sudden release of air blocked by the lips, are grouped together in another zone. The brain therefore sorts sounds rapidly according to criteria that include how they are formed in the mouth. In other words, the “b” sound has its own location in the brain, as does the vowel “a”.
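To make this grouping idea concrete, here is a purely illustrative Python sketch (not the study’s analysis code, and using a hypothetical mini phoneme inventory) that sorts sounds by manner of articulation, the kind of feature along which the recorded responses clustered.

```python
# Toy illustration only: group phoneme symbols by how the sound is
# formed in the mouth, mirroring the clustering described in the study.
from collections import defaultdict

# Hypothetical mini inventory with manner-of-articulation labels.
PHONEME_FEATURES = {
    "s": "fricative",
    "z": "fricative",
    "f": "fricative",
    "b": "plosive",
    "p": "plosive",
    "d": "plosive",
    "a": "vowel",
    "i": "vowel",
}

def group_by_manner(phonemes):
    """Return phonemes grouped by their manner of articulation."""
    groups = defaultdict(list)
    for ph in phonemes:
        groups[PHONEME_FEATURES[ph]].append(ph)
    return dict(groups)

if __name__ == "__main__":
    print(group_by_manner(PHONEME_FEATURES))
    # {'fricative': ['s', 'z', 'f'], 'plosive': ['b', 'p', 'd'], 'vowel': ['a', 'i']}
```

In the study itself, of course, the grouping emerged from the neural recordings rather than from a predefined table; the sketch only shows what “grouping by how a sound is produced” means.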
By showing that the brain is organized according to phonetic features, these results open the way to new theories of how language is formed. The researchers go even further, suggesting that this could help doctors better understand language disorders such as dyslexia, or even help people learn a second language. According to Dr Chang, it may also become possible to study how language is learnt and to explain why this task is sometimes so laborious.
Source: Mesgarani et al. Phonetic feature encoding in human superior temporal gyrus. Science. 2014 Feb 28;343(6174):1006-10. doi: 10.1126/science.1245994.