Outline:
1. Intro
2. Consider an example of typical auditory input
3. The listener is interested in what happened in the world to cause the sound
4. Standard peripheral auditory model
5. Standard model of auditory cortex: linear spectrotemporal filtering
6. Can we obtain better models by training systems to perform tasks?
7. Some obvious limitations
8. Behavioral comparison: Speech recognition in background noise
9. Behavioral comparison: CNN & humans on same task
10. Behavioral comparison: Sound localization
11. Network learns ear-specific cues to elevation, like humans
12. Behavioral comparison: Pitch perception
13. Longstanding controversy over timing vs. "place" information
14. Task performance correlates strongly with ability to predict neural responses
15. Example metamers from each convolutional stage
16. Summary
Description:
Explore auditory cortical computation in this comprehensive lecture by Josh McDermott from MIT. Begin with an example of typical auditory input and the listener's real interest: identifying what happened in the world to cause the sound. Examine the standard peripheral auditory model and the standard model of auditory cortex as linear spectrotemporal filtering. Investigate whether better models can be obtained by training systems to perform auditory tasks, while acknowledging some obvious limitations of that approach. Compare the behavior of convolutional neural networks and humans on speech recognition in background noise, sound localization, and pitch perception, including how a network learns ear-specific cues to elevation, like humans, and the longstanding controversy over timing versus "place" information. Discover how task performance correlates strongly with the ability to predict neural responses, and examine example metamers from each convolutional stage.
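For a concrete handle on the linear spectrotemporal filtering model mentioned above: the response of a model neuron at each time point is a weighted sum of spectrogram energy over frequency and time lag. The NumPy sketch below is a minimal illustration under that assumption; the names (strf_response, spectrogram, strf) are invented for this example, not taken from the lecture.

    import numpy as np

    def strf_response(spectrogram, strf):
        """Response of a linear spectrotemporal model.

        spectrogram: (n_freq, n_time) array, e.g. a cochleagram.
        strf: (n_freq, n_lag) spectrotemporal receptive field.
        Returns r with r[t] = sum over f, tau of strf[f, tau] * spectrogram[f, t - tau].
        """
        n_freq, n_time = spectrogram.shape
        _, n_lag = strf.shape
        r = np.zeros(n_time)
        for tau in range(min(n_lag, n_time)):
            # weight the spectrogram delayed by tau lags and accumulate
            r[tau:] += strf[:, tau] @ spectrogram[:, :n_time - tau]
        return r

The metamers discussed near the end can be sketched in the same spirit: a model metamer is a synthetic stimulus that produces (nearly) the same activations as a reference sound at a chosen network stage. One common way to synthesize such a stimulus, assuming a differentiable model, is gradient descent on the waveform itself; the PyTorch sketch below is a hypothetical illustration, not the lecture's actual procedure.

    import torch

    def make_metamer(stage, reference, n_steps=2000, lr=1e-3):
        """Optimize a noise waveform so its activations at `stage`
        (a callable mapping waveform -> activations; illustrative name)
        match those of `reference`."""
        with torch.no_grad():
            target = stage(reference)  # activations to match
        x = torch.randn_like(reference, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(stage(x), target)
            loss.backward()
            opt.step()
        return x.detach()

Metamers matched at early stages tend to resemble the reference sound, while metamers matched only at deeper stages can sound very different to a human even though the model cannot tell them apart, which is what makes them a useful behavioral probe of a model.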

Understanding Auditory Cortical Computation

MITCBMM