Key Takeaways

  • By obtaining recordings from single neurons in the brain, researchers mapped neuronal activity that reflects how a person comprehends the meanings of different spoken words
  • Neuronal recordings also allowed the scientists to predict the meanings of words as they were heard in real time during speech
  • This information could be used to determine what someone is listening to or thinking, which could help scientists develop brain-machine interfaces that allow individuals with speech-related conditions to communicate more effectively

Using a novel technology for obtaining recordings from single neurons, a team of investigators at Massachusetts General Hospital, a founding member of the Mass General Brigham healthcare system, discovered a microscopic “thesaurus” that reflects how word meanings are represented by neurons in the human brain.

The research, which is published in Nature, opens the door to understanding how humans comprehend language and provides insights that could be used to help individuals with medical conditions that affect speech.

“Humans possess an exceptional ability to extract nuanced meaning through language—when we listen to speech, we can comprehend the meanings of up to tens of thousands of words and do so seamlessly across remarkably diverse concepts and themes,” said senior author Ziv Williams, MD, a physician-investigator in the department of Neurosurgery at Massachusetts General Hospital and an associate professor of Neurosurgery at Harvard Medical School. “Yet, how the human brain processes language at the basic computational level of individual neurons has remained a challenge to understand.”

Williams and his colleagues set out to construct a detailed map of how neurons in the human brain represent word meanings—for example, how we represent the concept of an animal when we hear the words “cat” and “dog,” and how we distinguish between the concept of a dog and that of a car.

“We also wanted to find out how humans are able to process such diverse meanings during natural speech, and how we are able to rapidly comprehend the meanings of words across a wide array of sentences, stories, and narratives,” Williams said.

To start addressing these questions, the scientists used a novel technology that allowed them to simultaneously record the activity of up to one hundred neurons in the brain while people listened to sentences (such as, “the child bent down to smell the rose”) and short stories (for example, about the life and times of Elvis Presley).

Using this new technique, the investigators discovered how neurons in the brain map words to particular meanings and how they distinguish certain meanings from others.

“For example, we found that while certain neurons preferentially activated when people heard words such as ‘ran’ or ‘jumped’, which reflect actions, other neurons preferentially activated when hearing words that have emotional connotations, such as ‘happy’ or ‘sad’,” said Williams. “Moreover, when looking at all of the neurons together, we could start building a detailed picture of how word meanings are represented in the brain.”
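
To make the idea of category-selective neurons concrete, here is a minimal sketch in Python of how such selectivity might be tested statistically. The spike counts are simulated, and the categories, firing rates, and choice of test are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Illustrative sketch (not the authors' code): testing whether a neuron's
# firing rate differs across semantic word categories, e.g. action words
# ("ran", "jumped") vs. emotion words ("happy", "sad").
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical spike counts (spikes per word presentation) for one neuron.
# In a real analysis these would come from recordings aligned to word onsets.
action_counts = rng.poisson(lam=8.0, size=40)   # this neuron fires more to actions
emotion_counts = rng.poisson(lam=3.0, size=40)
object_counts = rng.poisson(lam=3.2, size=40)

# A simple selectivity test: Kruskal-Wallis, a nonparametric analogue of a
# one-way ANOVA, asks whether the rate distributions differ across categories.
stat, p = stats.kruskal(action_counts, emotion_counts, object_counts)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.2g}")
if p < 0.05:
    print("This neuron's firing rate is category-selective.")
```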

To comprehend language, though, it is not enough to understand the meanings of individual words; listeners must also accurately track those meanings within sentences. For example, most people can rapidly tell the correct meaning of words such as “sun” and “son” or “see” and “sea” when they are used in a sentence, even though the words sound exactly the same.

“We found that certain neurons in the brain are able to reliably distinguish between such words, and they continuously anticipate the most likely meaning of the words based on the sentence contexts in which they are heard,” said Williams.
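
As a toy illustration of what “anticipating the most likely meaning from context” means computationally, the sketch below picks a homophone's meaning from the words that precede it. The context table and probabilities are invented for the example and have nothing to do with the study's neuronal data.

```python
# Illustrative sketch (hypothetical): anticipating the most likely meaning of
# a homophone from its sentence context, using toy context-conditioned
# probabilities in place of neuronal activity.
CONTEXT_PRIORS = {
    ("sailed", "across"): {"sea": 0.95, "see": 0.05},
    ("went", "to"):       {"see": 0.80, "sea": 0.20},
}

def most_likely_meaning(prev_words, homophone_meanings):
    """Return the meaning with the highest probability given the two
    preceding words; fall back to the first candidate if context is unknown."""
    priors = CONTEXT_PRIORS.get(tuple(prev_words[-2:]))
    if priors is None:
        return homophone_meanings[0]
    return max(priors, key=priors.get)

print(most_likely_meaning(["sailed", "across"], ["sea", "see"]))  # -> "sea"
```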

Lastly, and perhaps most excitingly, the researchers found that, by recording from a relatively small number of neurons, they could reliably predict the meanings of words as they were heard in real time during speech. That is, based on the activity of the neurons, the team could determine the general ideas and concepts an individual experienced as the words were being comprehended.
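
To make the decoding idea concrete, here is a minimal sketch in Python of how one might predict a word's semantic category from the simultaneous spike counts of a small neuronal population. The population size, category structure, and simulated data are assumptions for illustration, and a linear classifier stands in for whatever decoder the study actually used.

```python
# Illustrative sketch (not the study's decoder): predicting the semantic
# category of a heard word from the spike counts of a small population of
# simultaneously recorded neurons.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_neurons, n_words_per_cat, n_categories = 100, 60, 5

# Each semantic category drives the population with a different mean rate
# profile; spike counts per word are drawn from Poisson distributions.
profiles = rng.uniform(2.0, 10.0, size=(n_categories, n_neurons))
X = np.vstack([rng.poisson(profiles[c], size=(n_words_per_cat, n_neurons))
               for c in range(n_categories)])
y = np.repeat(np.arange(n_categories), n_words_per_cat)

# Train a linear decoder on one portion of the words, test on the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
decoder = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Held-out decoding accuracy: {decoder.score(X_te, y_te):.2f}")
```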

“By being able to decode word meaning from the activities of small numbers of brain cells, it may be possible to predict, with a certain degree of granularity, what someone is listening to or thinking,” said Williams. “It could also potentially allow us to develop brain-machine interfaces in the future that can enable individuals with conditions such as motor paralysis or stroke to communicate more effectively.”

Authorship: Mohsen Jamali, Benjamin Grannan, Jing Cai, Arjun R. Khanna, William Muñoz, Irene Caprara, Angelique C. Paulk, Sydney S. Cash, Evelina Fedorenko, and Ziv M. Williams.

Disclosures: Disclosure forms provided by the authors are available with the full text of this article at https://www.nature.com/articles/s41586-024-07643-2.

Funding:
M.J. is supported by CIHR, a NARSAD Young Investigator Grant, and the Foundations of Human Behavior Initiative; B.G. is supported by NREF and an NIH NRSA; A.K. and W.M. are supported by NIH R25NS065743; A.C.P. is supported by UG3NS123723, the Tiny Blue Dot Foundation, and P50MH119467; S.S.C. is supported by R44MH125700 and the Tiny Blue Dot Foundation; E.F. is supported by U01NS121471 and R01DC016950; and Z.M.W. is supported by NIH R01DC019653 and U01NS121616.

Paper cited: Jamali M et al. “Semantic encoding during language comprehension at single cell resolution.” Nature DOI: 10.1038/s41586-024-07643-2