Sensation and Perception

Learning Objectives

  • Describe the basic anatomy and function of the auditory system
  • Explain how we encode and perceive pitch and localize sound

Our auditory system converts pressure waves into meaningful sounds. This translates into our ability to hear the sounds of nature, to appreciate the beauty of music, and to communicate with one another through spoken language. This section will provide an overview of the basic anatomy and function of the auditory system. It will include a discussion of how the sensory stimulus is translated into neural impulses, where in the brain that information is processed, how we perceive pitch, and how we know where sound is coming from.

Anatomy of the Auditory System

The ear can be separated into three sections: the outer ear, the middle ear, and the inner ear. The outer ear includes the pinna, which is the visible part of the ear that protrudes from our heads; the auditory canal; and the tympanic membrane, or eardrum. The middle ear contains three tiny bones known as the ossicles, which are named the malleus (or hammer), incus (or anvil), and stapes (or stirrup). The inner ear contains the semicircular canals, which are involved in balance and movement (the vestibular sense), and the cochlea. The cochlea is a fluid-filled, snail-shaped structure that contains the sensory receptor cells (hair cells) of the auditory system (Figure 1).

An illustration shows sound waves entering the “auditory canal” and traveling to the inner ear. The locations of the “pinna” and “tympanic membrane (eardrum)” are labeled, as well as the middle-ear “ossicles” and their parts: the “malleus,” “incus,” and “stapes.” A callout leads to a close-up illustration of the inner ear that shows the locations of the “semicircular canals,” “utricle,” “oval window,” “saccule,” “cochlea,” and the “basilar membrane and hair cells.”
Figure 1. The ear is divided into outer (pinna and tympanic membrane), middle (the three ossicles: malleus, incus, and stapes), and inner (cochlea and basilar membrane) divisions.

Sound waves travel along the auditory canal and strike the tympanic membrane, causing it to vibrate. This vibration results in movement of the three ossicles. As the ossicles move, the stapes presses into a thin membrane of the cochlea known as the oval window. As the stapes presses into the oval window, the fluid inside the cochlea begins to move, which in turn stimulates the hair cells, the auditory receptor cells of the inner ear embedded in the basilar membrane. The basilar membrane is a thin strip of tissue within the cochlea. Sitting on the basilar membrane is the organ of Corti, which runs the entire length of the basilar membrane from the base (by the oval window) to the apex (the “tip” of the spiral). The organ of Corti includes three rows of outer hair cells and one row of inner hair cells. The hair cells sense the vibrations by way of their tiny hairs, or stereocilia. The outer hair cells seem to function to mechanically amplify the sound-induced vibrations, whereas the inner hair cells form synapses with the auditory nerve and transduce those vibrations into action potentials, or neural spikes, which are transmitted along the auditory nerve to higher centers of the auditory pathways.

The activation of hair cells is a mechanical process: it is the physical bending of the stereocilia by the movement of fluid in the cochlea that ultimately activates the cell. As hair cells become activated, they generate neural impulses that travel along the auditory nerve to the brain. Auditory information is shuttled to the inferior colliculus, the medial geniculate nucleus of the thalamus, and finally to the auditory cortex in the temporal lobe of the brain for processing. As in the visual system, there is also evidence suggesting that information about auditory recognition and localization is processed in parallel streams (Rauschecker & Tian, 2000; Renier et al., 2009).

Sound Waves

As mentioned above, the vibration of the tympanic membrane is what triggers the sequence of events that leads to our perception of sound. Sound waves travel into our ears at varying frequencies and amplitudes. Like light waves, the physical properties of sound waves are associated with various aspects of our perception of sound. The frequency of a sound wave is associated with our perception of that sound’s pitch: high-frequency sound waves are perceived as high-pitched sounds, while low-frequency sound waves are perceived as low-pitched sounds. The audible range of sound frequencies is between 20 and 20,000 Hz, with greatest sensitivity to frequencies that fall in the middle of this range.
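To make this relationship concrete, the short Python sketch below (an illustrative example, not part of the original text; it assumes NumPy and Python’s standard-library wave module) synthesizes two pure tones with the same amplitude but different frequencies and writes them to a WAV file. Played back, the 880 Hz tone is heard as higher in pitch than the 220 Hz tone even though only the frequency differs; raising the amplitude instead would change the perceived loudness rather than the pitch.

import numpy as np
import wave

def sine_tone(freq_hz, duration_s=1.0, amplitude=0.5, sample_rate=44100):
    """Generate a pure tone: frequency determines pitch, amplitude determines loudness."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Same amplitude, different frequencies: heard as a low versus a high pitch.
low_tone = sine_tone(220)    # 220 Hz, toward the lower-middle of the audible range
high_tone = sine_tone(880)   # 880 Hz, two octaves higher

signal = np.concatenate([low_tone, high_tone])
samples = (signal * 32767).astype(np.int16)   # convert to 16-bit PCM

with wave.open("pitch_demo.wav", "wb") as wav_file:
    wav_file.setnchannels(1)       # mono
    wav_file.setsampwidth(2)       # 2 bytes per sample (16-bit)
    wav_file.setframerate(44100)   # samples per second
    wav_file.writeframes(samples.tobytes())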

As was the case with the visible spectrum, other species show differences in their audible ranges. For instance, chickens have a very limited audible range, from 125 to 2,000 Hz. Mice have an audible range from 1,000 to 91,000 Hz, and the beluga whale’s audible range is from 1,000 to 123,000 Hz. Our pet dogs and cats have audible ranges of about 70–45,000 Hz and 45–64,000 Hz, respectively (Strain, 2003).

The loudness of a given sound is closely associated with the amplitude of the sound wave: higher amplitudes are associated with louder sounds. Loudness is measured in decibels (dB), a logarithmic unit of sound intensity. A typical conversation registers at about 60 dB; a rock concert might check in at 120 dB (Figure 2). A whisper 5 feet away or rustling leaves are at the low end of our hearing range; sounds like a window air conditioner, a normal conversation, and even heavy traffic or a vacuum cleaner are within a tolerable range. However, there is the potential for hearing damage from about 80 dB to 130 dB: these are the sounds of a food processor, power lawnmower, heavy truck (25 feet away), subway train (20 feet away), live rock music, and a jackhammer. The threshold for pain is about 130 dB, the level of a jet plane taking off or a revolver firing at close range (Dunkle, 1982).

This illustration has a vertical bar in the middle labeled “Decibels (dB),” numbered 0 to 140 in intervals of 20 from bottom to top. To the left of the bar, the sound intensity of different sounds is labeled: “hearing threshold” at 0, “whisper” at 30, “soft music” at 40, “risk of hearing loss” at 110, “pain threshold” at 130, and “harmful” at 140. To the right of the bar are photographs depicting common sounds: at 20 decibels, rustling leaves; at 60, two people talking; at 80, a car; at 90, a food processor; at 120, a music concert; and at 130, jets.
Figure 2. This figure illustrates the loudness of common sounds. (credit “planes”: modification of work by Max Pfandl; credit “crowd”: modification of work by Christian Holmér; credit “blender”: modification of work by Jo Brodie; credit “car”: modification of work by NRMA New Cars/Flickr; credit “talking”: modification of work by Joi Ito; credit “leaves”: modification of work by Aurelijus Valeiša)
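Because the decibel scale is logarithmic, each 10 dB step corresponds to a tenfold increase in sound intensity. The small Python sketch below (an illustrative example, not from the original text; it uses the conventional reference intensity of 10⁻¹² W/m² for the threshold of hearing) converts between intensity and decibels. It shows, for instance, that a 120 dB rock concert is roughly a million times more intense than a 60 dB conversation, even though the two readings differ by only 60 dB.

import math

I0 = 1e-12  # reference intensity in W/m^2, roughly the threshold of hearing

def intensity_to_db(intensity):
    """Convert a sound intensity (W/m^2) to decibels relative to I0."""
    return 10 * math.log10(intensity / I0)

def db_to_intensity(db):
    """Invert the decibel formula to recover an intensity in W/m^2."""
    return I0 * 10 ** (db / 10)

print(intensity_to_db(1e-6))                        # ~60 dB: a typical conversation
print(db_to_intensity(120) / db_to_intensity(60))   # ~1,000,000: concert vs. conversation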

Although wave amplitude is generally associated with loudness, there is some interaction between frequency and amplitude in our perception of loudness within the audible range. For example, a 10 Hz sound wave is inaudible no matter the amplitude of the wave. A 1000 Hz sound wave, on the other hand, would vary dramatically in terms of perceived loudness as the amplitude of the wave increased.

Link to Learning

Watch this brief video demonstrating how frequency and amplitude interact in our perception of loudness.

Of course, different musical instruments can play the same musical note at the same level of loudness, yet they still sound quite different. This is known as the timbre of a sound. Timbre refers to a sound’s purity, and it is affected by the complex interplay of frequency, amplitude, and timing of sound waves.
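One way to see what that “complex interplay” means is to build two tones numerically. The Python sketch below (an illustrative example with made-up harmonic weights, not measurements of real instruments) sums a 440 Hz fundamental with two different mixes of its harmonics and normalizes each result to the same peak amplitude. The two waveforms then share the same pitch and roughly the same loudness but have different shapes, which the ear hears as different timbres.

import numpy as np

def complex_tone(fundamental_hz, harmonic_weights, duration_s=1.0, sample_rate=44100):
    """Sum the fundamental and its harmonics, then normalize to a common peak amplitude."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    waveform = sum(weight * np.sin(2 * np.pi * fundamental_hz * (k + 1) * t)
                   for k, weight in enumerate(harmonic_weights))
    return waveform / np.max(np.abs(waveform))

# Same note (440 Hz) and same peak amplitude, but different harmonic mixes.
tone_a = complex_tone(440, [1.0, 0.5, 0.25, 0.125])    # harmonics fall off smoothly
tone_b = complex_tone(440, [1.0, 0.0, 0.8, 0.0, 0.4])  # odd harmonics only, a different mix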

Licenses and Attributions

General Psychology Copyright © by OpenStax and Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.