15. Hearing and Speech

Published 2021-10-27
MIT 9.13 The Human Brain, Spring 2019
Instructor: Nancy Kanwisher
View the complete course: ocw.mit.edu/9-13S19
YouTube Playlist: MIT 9.13 The Human Brain, Spring 2019

Humans use hearing in species-specific ways, for speech and music. Ongoing research is working out the functional organization of these and other human auditory skills.

* NOTE: Lecture 14: New Methods Applied to Number (student breakout groups—video not recorded)

License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu/
Support OCW at ow.ly/a1If50zVRlQ

We encourage constructive comments and discussion on OCW’s YouTube and other social media channels. Personal attacks, hate speech, trolling, and inappropriate comments are not allowed and may be removed. More details at ocw.mit.edu/comments.

All Comments (21)
  • The fact that someone from Mozambique has access to an MIT lecture from the comfort of their couch is simply mind-blowing. By the way, I was surprised to hear that Professor Nancy has visited Mozambique! Anyways, thank you MIT for giving us access to high quality educational materials for free.
  • @oliveryuan2927
    Dear Professor, the claim that nobody had done the reverb measurements before is not true. Many years ago I was an acoustic engineer, and my field was architectural acoustics. Specifically, I designed the acoustic environment of rooms for different purposes, e.g. music, speech, etc. One of the key properties we study is the reverberation time of the room; there are many, many measurements of different types of rooms, and the fact that sound level decays has been very well known in the field of architectural acoustics. Just thought I'd point it out. Happy to offer more details if you are interested.
  • @alanklm
    It's amazing how few upvotes these lectures have, especially when you compare them with other types of videos on YouTube. I believe comments help, so I'm here just to say thank you very much! Give us more high-quality content!
  • Two random ideas off the top of my head about why the primary auditory cortex might have two areas for high frequencies. 1. Backup/error detection: it would be evolutionarily beneficial to still be able to hear if one side got damaged. 2. Each side might be subtly different; perhaps one side is close to being the true signal and the other is processed in some way. Some form of computation could take both of these signals and use them for another function, such as helping figure out sound direction, or apply noise reduction to help us pick out the sound we're interested in.
  • Thank you very much for enabling the Spanish subtitles. A great gift for Spanish speakers who are not bilingual.
  • @sherry8444
    12:04 He asked about the "intensity or volume" and the answer was that it isn't well depicted on that graph. But I would have thought that the graph literally shows the overall loudness by how tall the squiggles are. They show the measured amplitude of the pressure wave at any given time, i.e. loudness/volume.
  • Thanks for the lecture on hearing and speech. The study on vowels and consonants is very interesting: one falls on the vertical and the other on the horizontal in the bar chart. Amazing. Thanks to Dr. Nancy Kanwisher and MIT.
  • It is at the sulcus, I think, and the visuospatial areas, but using the nearby regions. I have high-functioning autism spectrum disorder (Asperger's); it happens to me, and you helped me to know that.
  • I had to take a break halfway through. I mean, the brain is so amazing at carrying out these seemingly impossible tasks effortlessly... well, ironically, it's very hard to process that.
  • @yashwanthrao98055
    Very well put lecture, although I can't help asking this question: what essentially plays a role in the choice of colors attributed to the specific sound stimuli, like the ones depicted in the graphs? Is there a hidden psychological meaning, or is it just an arbitrary choice made at the time? 😅
  • @AlvaroALorite
    Btw, does anyone know by which mechanism neurons specialize to respond better to certain stimuli than to others (STRFs)? Is it analogous to what happens in deep learning neural networks? (54:59)
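
A few of the technical points raised in the comments above can be illustrated with short Python sketches. On the reverberation comment: the quantity architectural acousticians measure is the reverberation time (RT60, the time for sound to decay by 60 dB after the source stops), and Sabine's formula gives a standard back-of-the-envelope estimate. A minimal sketch, with made-up room dimensions and absorption coefficients purely for illustration:

def rt60_sabine(volume_m3, surfaces):
    """Sabine estimate of RT60; surfaces is a list of (area_m2, absorption_coefficient)."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)  # sabins (m^2)
    return 0.161 * volume_m3 / total_absorption  # 0.161 s/m is Sabine's empirical constant

# Example: a 10 m x 8 m x 3 m lecture room (hypothetical numbers)
volume = 10 * 8 * 3
surfaces = [
    (10 * 8, 0.10),            # floor
    (10 * 8, 0.70),            # absorptive ceiling tiles
    (2 * (10 + 8) * 3, 0.05),  # painted walls
]
print(f"Estimated RT60: {rt60_sabine(volume, surfaces):.2f} s")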
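
On the "backup / noise reduction" idea about the two high-frequency fields: whatever the cortex actually does, the generic signal-processing intuition is easy to demonstrate, since averaging two independently noisy copies of the same signal halves the noise power (about a 3 dB gain in signal-to-noise ratio). A sketch on synthetic data, not a model of auditory cortex:

import numpy as np

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1000 * t)               # the "true" sound

copy_a = signal + rng.normal(0, 0.5, signal.shape)  # noisy copy 1
copy_b = signal + rng.normal(0, 0.5, signal.shape)  # noisy copy 2
combined = (copy_a + copy_b) / 2                    # simple averaging

def snr_db(clean, noisy):
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

print(f"single copy SNR: {snr_db(signal, copy_a):.1f} dB")
print(f"averaged SNR:    {snr_db(signal, combined):.1f} dB")  # roughly 3 dB better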
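
On the waveform/loudness question: the height of the trace is the instantaneous sound pressure, and a common proxy for overall loudness over a stretch of signal is the RMS level in decibels, so amplitude is indeed roughly visible in the raw trace (perceived loudness also depends on frequency and duration, which the trace alone does not show). A small sketch with synthetic tones:

import numpy as np

fs = 16000                                     # sample rate (Hz), arbitrary here
t = np.arange(fs) / fs                         # one second of samples
quiet = 0.01 * np.sin(2 * np.pi * 440 * t)     # low-amplitude tone
loud = 0.5 * np.sin(2 * np.pi * 440 * t)       # same tone, larger amplitude

def rms_db(x):
    """RMS level in dB relative to full scale (not calibrated dB SPL)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

print(f"quiet tone: {rms_db(quiet):.1f} dBFS")
print(f"loud tone:  {rms_db(loud):.1f} dBFS")   # taller waveform, higher level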
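
On the STRF question at 54:59: a spectrotemporal receptive field is usually estimated as a linear filter from a sound's spectrogram to a neuron's firing rate, for example by reverse correlation or ridge regression. In that linear view the STRF is just the weight matrix of a single unit, which is where the analogy to the learned filters of a neural network layer comes from; how neurons come to develop such tuning in the first place is a separate, open question that the lecture does not settle. A sketch on synthetic data (the spectrogram, the "true" filter, and the firing rate are all made up):

import numpy as np

rng = np.random.default_rng(1)
n_freq, n_lag, n_time = 16, 10, 5000

spec = rng.normal(size=(n_freq, n_time))       # fake spectrogram (frequency x time)
true_strf = rng.normal(size=(n_freq, n_lag))   # "ground truth" filter to recover

# Design matrix: each row is the spectrogram patch preceding time t.
X = np.stack([spec[:, t - n_lag:t].ravel() for t in range(n_lag, n_time)])
rate = X @ true_strf.ravel() + rng.normal(0, 1.0, X.shape[0])  # noisy "firing rate"

# Ridge regression: w = (X^T X + lambda * I)^-1 X^T y
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ rate)
strf_hat = w.reshape(n_freq, n_lag)

corr = np.corrcoef(strf_hat.ravel(), true_strf.ravel())[0, 1]
print(f"correlation between estimated and true STRF: {corr:.2f}")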