Unit 1: Hearing & Deafness

1.1   Importance of hearing

1.2   Parts of the ear and process of hearing

1.3   Introduction to physics of sound, production and propagation of sound

1.4   Physical and psychological attributes of sound

1.5   Hearing Impairment Definition, Classification in terms of age of onset, type, degree, nature


1.1     Importance of hearing

An old Chinese proverb reminds us of the importance of the senses in the learning process. The five senses of hearing, touch, sight, taste and smell are the primary means we use to gain new knowledge. We rarely experience anything with one sense alone; our senses work together to give us a total picture of our experiences.

People of all ages learn best when involved in meaningful experiences. Learning takes place when the mind is able to put together information from all the senses and make a connection with past learning. Using many senses to gain information helps learning to be more meaningful and useful. Children naturally learn with all the senses. From birth, children are experts at learning with all five senses active. They have not learned to select the information from any one sense as more important. They are interested in everything!

Figure: percentage of information acquired through each of the human senses - sight, hearing, smell, touch and taste.

 

As one of our most important senses, the ability to hear enables us to connect to the world for many very important, even vital, reasons.

Most importantly, it connects us to people, enabling us to communicate in a way that none of our other senses can achieve.

Hearing for communicating with people

Our ability to communicate with other people depends heavily on our ability to understand speech, which is one of the most complex sounds we have to listen to. Without good hearing in both ears, understanding what a person is saying requires more concentration and can be very tiring, especially if several people are talking or there is background noise.

Hearing for experiencing sounds around us

As important as communicating with other people certainly is, hearing matters for all the pleasure it can bring and the difference it can make to our quality of life. Listening to music, television and radio, going to the cinema or theatre, attending a place of worship, going to meetings to learn or simply for pleasure as well as listening to the sounds of nature can all be affected.

Hearing and personal safety

The dependence on good hearing for personal safety should not be taken for granted. We are often more likely to hear a potential threat to our safety before it becomes visible, if it can be seen at all. Crossing the road on foot, driving a vehicle, responding to fire, smoke or intruder alarms at home, in the workplace or in public buildings, and awareness of another person approaching who may mean us harm are all part of the daily, even constant, need for good hearing to protect our personal safety and physical wellbeing.

Hearing and working

The proportion of people with hearing loss who are unemployed is higher than in the general population. Untreated hearing loss can have a number of consequences in the workplace, and many people in work who are struggling with hearing loss unnecessarily experience reduced opportunities for promotion or work at a level below their skills, knowledge and experience.

Hearing and mental health

In recent years, a considerable amount of scientific evidence has been published highlighting the connection between hearing loss and mental health. It is clear that there is an association between unassisted hearing loss and cognitive decline and dementia. Why this should be the case is still not clear, and much research is being undertaken to try to establish how and why hearing loss and cognitive health are connected.

Impact of hearing loss on Learning

 

The relationship between hearing loss and learning is complex. Hearing plays a vital role in the learning process, especially in the early years of education, and a hearing loss can therefore affect a child's access to spoken language and classroom instruction.


 

1.2     Parts of the ear and process of hearing

 


Anatomy of the Ear

The three anatomical regions of the ear are the outer ear, the middle ear and the inner ear.

Outer Ear

The outer or external ear comprises the following parts:

The tympanic membrane, or eardrum, is made up of connective tissue; skin covers its outer surface and mucous membrane covers its inner surface. The tympanic membrane separates the outer ear from the middle ear.


The auricle is found close to the side of the head and comprises a thin plate of yellow elastic cartilage moulded into distinct ridges, furrows and hollows that form an irregular shallow funnel. The concha is the deepest depression, leading to the external auditory canal. The helix emerges from the base of the concha and continues as the rim of the upper part of the auricle. The antihelix is the inner ridge that encircles the concha and is separated from the helix by the scapha.

The external auditory canal is a somewhat curved tube extending inwards from the base of the concha and terminating blindly at the tympanic membrane. In its outer third the wall of the canal is cartilage; in its inner portion it is bone. The passage is lined with skin, which also covers the outer surface of the tympanic membrane. Fine hairs directed outwards and modified sweat glands that produce earwax line the canal and prevent the entry of foreign particles.

The pinna receives sound in the form of vibrations. The sound waves travel through the external auditory canal to reach and vibrate the eardrum.

Middle Ear

The middle ear amplifies sound waves and transmits them to the inner ear. Its anatomy is as follows:

The middle ear cavity is a narrow, air-filled space. A small constriction divides it into upper and lower chambers, the epitympanum (attic) and the tympanum (atrium). The space of the middle ear resembles a rectangular room with four walls, a roof and a floor. The lateral wall is formed by the tympanic membrane, while the superior wall is a plate of bone separating the middle ear cavity from the cranial cavity and the brain.

The inferior wall is a thin plate separating the middle ear cavity from the jugular vein and the carotid artery. The posterior wall partially separates the middle ear cavity from the mastoid antrum. In the anterior wall lies the opening of the eustachian tube, which connects the middle ear to the nasopharynx. The inner (medial) wall separates the middle ear from the inner ear and forms a section of the otic capsule of the inner ear.

Inner Ear

The final part of the ear is called the inner ear, which includes the cochlea, the vestibule and the labyrinth. The cochlea, also known as the organ of hearing, is shaped much like a snail's shell and has small hair cells called cilia that are bathed in fluid. An elastic membrane runs from the beginning to the end of the cochlea, splitting it into an upper and a lower part. This membrane is called the basilar membrane because it serves as the base, or ground floor, on which key hearing structures sit. The vestibule connects the cochlea to the labyrinth, a set of semicircular canals that control balance.


 

1.3     Introduction to physics of sound, production and propagation of sound

 

Have you ever wondered how we are able to hear the different sounds produced around us? How are these sounds produced? How can a single instrument produce a wide variety of sounds? And why do astronauts communicate in sign language in outer space? Sound is a form of energy that enables living beings to hear. It is a form of mechanical kinetic energy that moves in the form of a wave. Sound waves exhibit vibrational motion. Hertz (Hz) and decibel (dB) are the units most widely used to measure sound.

Sound is a form of energy, just like electricity, heat or light, and hearing is one of the important senses of the human body. Some sounds are pleasant, and some are annoying; we are subjected to various types of sound all the time. Sound waves are the result of the vibration of objects. Let's examine a source of sound such as a bell. When you strike a bell, it makes a loud ringing noise. Now, instead of just listening to the bell, put your finger on the bell after you have struck it. Can you feel it vibrating? This is the key to sound. It is even more evident in guitars and drums: you can see the strings vibrating every time you pluck them. When the bell or the guitar stops vibrating, the sound also stops.

The to-and-fro motion of a body is termed vibration. You can see examples of vibrations everywhere. Vibrating objects produce sound; some vibrations are visible, some are not. If you pull and then release a stretched rubber band, the band moves to and fro about its central axis and, while doing so, also produces a sound. Sound moves through a medium by alternately compressing and expanding parts of the medium it is travelling through.

 

Types of Sound

There are different types of sound: audible, inaudible, pleasant and unpleasant.

Waves with frequencies below 16 Hz are called infrasonic waves. These waves are used for the detection of earthquakes, volcanic eruptions, underground petroleum formations, etc.

Waves with frequencies above 20 kHz are called ultrasonic waves. Some animals, such as bats, use these waves to locate their prey and to communicate.

Waves with frequencies between 16 Hz and 20 kHz are audible to humans.
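These ranges lend themselves to a tiny worked example. The sketch below (illustrative Python only; the function name is our own, and the boundaries are this section's 16 Hz and 20 kHz) classifies a frequency:

```python
def classify_frequency(hz):
    """Classify a sound wave by frequency, using the boundaries above."""
    if hz < 16:
        return "infrasonic"    # e.g. earthquake and volcanic-eruption detection
    elif hz > 20_000:
        return "ultrasonic"    # e.g. bat echolocation
    else:
        return "audible"       # the range humans can hear

print(classify_frequency(10))      # infrasonic
print(classify_frequency(440))     # audible
print(classify_frequency(40_000))  # ultrasonic
```

Note that other texts quote slightly different audible-range limits (such as 20 Hz to 20 kHz); the exact boundaries vary with the individual.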

Production of Sound

Sound is produced by the rapid to-and-fro movement of an object, that is, vibration. The vibrating object disturbs the equilibrium state of the particles in the medium, and the vibration keeps transmitting from one particle to another.

The vibration of the body is the primary source of sound's genesis. The emission of sound continues as long as the body's vibration remains. By travelling through a continuous elastic medium, this sound generates a hearing experience in our ears.

As an example: when a tuning fork is struck, it vibrates and emits sound. The vibration stops if the tuning fork is touched with your hand, and as a result the sound output stops as well. In the picture, as a tuning fork emits sound, a pith ball in contact with one of the fork's arms continually moves away from the arm owing to the fork's vibration.

The vibration of a body is thus the primary generator of all types of sound: mechanical energy is converted into sound as a result of vibration.

Propagation of Sound

For example, when a tuning fork is struck against a rubber pad, the vibration created in the prongs can be noticed, and when the fork is brought near our ears we can sense the sound being produced; guitar strings likewise produce sound when plucked.

The travelling of sound from the sound source through the surrounding medium is the propagation of sound. Sound waves cannot travel in a vacuum, as there are no particles present to carry the vibration.

Sound waves travelling through a medium reach our ears, and we hear the sound. The sound waves are gathered by the pinna of the ear and then led along the ear canal, where they strike the eardrum. The vibration of the eardrum is transmitted to three small bones (hammer, anvil and stirrup) and finally to the inner ear. The sensitive cells in the inner ear transmit the vibration to the brain through the auditory nerve, and the brain registers it as sound.


 

1.4     Physical and psychological attributes of sound

 

Sound, and hence music, can be analyzed in two ways: physically, by using instruments to record measurements of its properties, and psychologically, by listening to the sound and ascertaining its properties on the basis of our immediate experience. Unfortunately, there is no one-to-one correlation between the physical and psychological attributes of sound. While measuring the physical properties is usually straightforward, the psychological properties are usually multi-dimensional, meaning that more than one physical property must be taken into account to describe each psychological property.

To this consideration we must also add the fact that describing sound and describing music are not the same. Music puts sound into a context, in which our perception of its properties may be influenced by other events that happen in its near temporal vicinity; the same sound in different contexts may not be described in the same manner. Second, most musical sounds are truly complex, meaning that they have many different properties, most of which may be changing at any instant. Finally, we should note that ascertaining the aural properties of music is not the same thing as reacting to it emotionally, or evaluating it.

Physical Properties of Sound

1. Frequency: The period of a sound is the duration of one cycle of its motion. Its frequency is the number of cycles that occur within a second. Frequency is measured in Hertz (abbreviated Hz). The period and frequency of a sound are reciprocally related (i.e., period = 1/frequency).
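The reciprocal relationship can be checked with a short worked example (an illustrative Python sketch only; the function names are our own):

```python
# Period (seconds per cycle) and frequency (cycles per second, Hz)
# are reciprocals: period = 1 / frequency.
def period_of(frequency_hz):
    return 1.0 / frequency_hz

def frequency_of(period_s):
    return 1.0 / period_s

# A 440 Hz tone (concert A) completes one cycle in about 2.27 ms.
p = period_of(440.0)
print(round(p * 1000, 2), "ms")

# Converting back recovers the original frequency.
assert abs(frequency_of(p) - 440.0) < 1e-9
```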

The phase of a sound is its position within the cycle at a given point in time. The entire cycle of a sound wave is divided into 360 equal parts called degrees. Phase is thus a measure of time with respect to frequency or period.

Human beings can perceive frequencies from about 20 Hz up to about 18,000 Hz (18 kHz), the limit varying with the individual (aging causes loss of high-frequency detection).

2. Intensity: Sound intensity is a measurement of the amount of power of a sound at a given location. It is measured in decibels (abbreviated dB), which is a measurement of the ratio between the power of a given sound and a reference sound.
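For power ratios, the decibel value is ten times the base-10 logarithm of the ratio between the measured power and the reference power. A minimal Python sketch of this standard formula (the function name is our own):

```python
import math

def db_from_power_ratio(power, reference_power):
    """Decibels express the ratio between a sound's power and a reference power."""
    return 10.0 * math.log10(power / reference_power)

# Doubling the power adds about 3 dB; a tenfold increase adds exactly 10 dB.
print(round(db_from_power_ratio(2.0, 1.0), 1))   # 3.0
print(db_from_power_ratio(10.0, 1.0))            # 10.0
```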

3. Complex Sounds: Most sounds we hear, even those that have only a single perceived pitch, actually consist of several separate components whose amplitudes vary over the course of the duration of the sound. These frequency components may be divided into harmonic and nonharmonic (or inharmonic) partials. All frequency components of a sound constitute its spectrum. Modern studies of timbre further qualify the notion of spectrum to take into account a separate varying amplitude for each component.

Harmonic partials are tones whose frequencies are integral multiples of a fundamental frequency. Tones above the fundamental are called overtones. The fundamental frequency and its harmonic partials constitute a harmonic series or overtone series. Although the frequency difference between successive overtones is the same, the musical interval or distance in pitch gets smaller as the series goes higher. It is a remarkable fact of sound perception that an entire collection of harmonic partials is perceived as a single pitch, with the collection itself being taken in as the timbre (see below).
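The shrinking intervals are easy to verify numerically. A short illustrative sketch (Python; the variable names are our own) lists the first partials of a 100 Hz fundamental:

```python
# Harmonic partials of a 100 Hz fundamental: integral multiples.
fundamental = 100.0
partials = [fundamental * n for n in range(1, 7)]  # 100, 200, ..., 600 Hz

# The Hz spacing between successive partials is constant (100 Hz),
# but the musical interval (the frequency RATIO) shrinks as we go up:
for low, high in zip(partials, partials[1:]):
    print(f"{low:.0f} -> {high:.0f} Hz  ratio {high / low:.3f}")
# ratios: 2.000 (an octave), 1.500 (a fifth), 1.333 (a fourth), ...
```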

A single tone with no overtones is a sine wave, which is the acoustical manifestation of simple harmonic motion. Each harmonic partial of a complex tone is a sine wave.

Nonharmonic partials are components of a sound whose frequencies are not harmonic partials of the fundamental frequency. Sounds containing nonharmonic partials do not possess a single pitch.

Noise is a sound containing a complex mix of all frequencies simultaneously, which is produced by random vibrations of air particles. White noise, so named by analogy with white light, has an even amplitude for all frequencies. Pink noise is like white noise but has a constant power per octave. (The term "noise" is also used to describe any "unwanted" sound.) Noise is always present in the background of any acoustical environment, which is why sound reproduction devices are measured by their signal-to-noise ratios, the ratio between the intensity of the recorded signal and that of the background noise.

4. Envelope: Envelope is defined as the growth and decay characteristics of some property of sound; thus it has to be qualified by the property to which the envelope pertains. An amplitude envelope is the growth and decay of the amplitude of a sound, a spectral envelope is the growth and decay of its spectrum, etc. While envelope is recognized as a separate property because of the development of synthesizers and electronic music, it is usually not described as a property in books on acoustics.

5. Modulation: Modulation is defined as the periodic change of some property of sound. Thus, like envelope, it must be qualified by the property being modulated. The most common types of modulation are frequency modulation (abbreviated FM), amplitude modulation (abbreviated AM), timbre modulation, and location modulation. FM and AM are also known, respectively, as vibrato and tremolo.

Modulation always involves two signals: the carrier, or "original" signal before the modulation occurs, and the modulator (sometimes called the "program" signal), which changes the carrier signal. For clarity, it should also be noted that some books describe modulation as any type of change of a sound, thus including both periodic and aperiodic signals. It is more useful to refer to aperiodic change as random modulation.

There are always three characteristics involved in modulation: the speed or rate, the amount, and the shape of modulation. These characteristics are determined by the frequency, amplitude and waveshape of the modulator.

Modulation is the primary manner in which subsonic frequencies (i.e., tones below the lower threshold of frequency discrimination) occur in music. When modulation frequencies approach and exceed the lower frequency threshold, they begin to interact with the other audible frequencies in the tone, producing complex tones called sidebands. Control and manipulation of these sidebands is the manner in which FM synthesis (embodied in the Yamaha DX-7 and other instruments) occurs.

6. Reverberation: Reverberation is the cumulative effect that occurs when a sound is played in an acoustical space, where the sound that reaches the listener is a mixture of direct sound and sounds reflected off the walls, floor and ceiling, which arrive at the listener's ears at slightly delayed intervals. When a sound wave strikes a physical surface, some portion of the sound is reflected away from the surface and some is absorbed by it. Researchers have determined that there are optimal reverberant characteristics for musical spaces, and this is therefore an important subject for architects and engineers.

While reverberation is associated with many different psychological properties, it is important to point out that it can affect the spectrum of the sound, since the reflected sound can resonate some of the partials of the tones.

Psychological Properties of Sound

1. Pitch: For many reasons, pitch is probably the most important characteristic in music. First, pitch can be broken into several distinct properties. Second, listeners are sensitive to the smallest changes in pitch. Finally, pitch organization is the primary topic of music theory.

There are at least three components of pitch that are basic to practically all music. The most basic is the "higher than" relationship, by which any tone can be described as higher, lower or the same as another. Second is the notion of octave equivalence, whereby tones an octave apart possess an "identity" with each other not shared with other pitches. Finally, two or more pitches sounding together create an "interval" or "chord" which has an additional similarity not shared by other intervals or chords.

Of all psychological and physical properties of sound, pitch is the one that practically has a one-to-one correlation with the physical property of frequency. The exception concerns only very low frequencies, which may sound flat. Whereas frequency is measured in Hz, pitch is measured by the identification of tones in the equally-tempered scale, or other musical scales.

The subject of tuning systems is very important for most music, but today we rarely discuss it since we assume equal temperament for most music. This is probably an error. The 12-tone equal-tempered scale took centuries of music history to evolve. Pianos are tuned even today in something more approaching meantone temperament, and organs are often tuned by historical methods. Live music is constantly adjusted by ear. Another topic for further investigation is the subject of intonation and pitch deviations, about which little research has been done.

2. Loudness: As pitch corresponds to frequency, loudness corresponds to intensity; but in this case, psychoacoustical research has shown that loudness is a function of both intensity and frequency. The accepted explanation for this is that the human ear itself possesses a resonance at about 3,000 Hz. Since this is rather high in musical terms, tones must be boosted progressively as they move lower to produce the perception of equal loudnesses.

3. Timbre: Timbre (pronounced "tam-ber") is defined in the literature as the property that enables a listener to identify the instrument playing the sound. It is also described as the "tone quality", the psychological property corresponding to the spectrum of a sound.

The traditional definition both helps and confuses the issue. It helps, because it identifies timbre as multi-dimensional (many different properties help identify the instrument, not just the spectrum); but it confuses because there are no easy generalizations about the similarities between one tone on an instrument and another. Especially when we accept the spectrum of a sound as multiple components, each with a separate envelope, we see that sounds are too complex for this property to be a single item. Also, in electronic music, identifying the "instrument" is not a purposeful activity, since this is often just something like a synthesizer.

There are two basic similarities by which the timbres of different pitches may be compared. The most direct is similarity of waveshape. Two sounds that have the same spectrum would thus be equal according to this definition, regardless of their pitches. This is the property that exists in common between all members of the clarinet family, since the shape of the instrument resonates only odd-numbered partials.

More important, however, is comparing tones on the basis of their formants. A formant is a fixed frequency area in which the loudest partials occur, regardless of the partial numbers. (For example, a formant at 1000 Hz would resonate the tenth partial of a 100 Hz tone, but the fourth partial of a 250 Hz tone, and the 25th partial of a 40 Hz tone.) Formants are produced by filters that resonate a particular part of the frequency continuum. It is on the basis of formant similarities that listeners identify vowels in speech, and this is probably our most innate concept of timbre. (Most vowels have three or more different formants, but usually one is most prominent.)
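The formant arithmetic in the parenthetical example above can be checked directly. A small illustrative Python sketch (the function name is our own), finding the partial nearest a fixed formant frequency:

```python
# A formant is a fixed frequency region; WHICH partial it boosts
# depends on the fundamental, since partial n lies at n * fundamental.
def boosted_partial(fundamental_hz, formant_hz=1000.0):
    """Return the number of the partial nearest the formant frequency."""
    return round(formant_hz / fundamental_hz)

print(boosted_partial(100.0))  # 10 -> tenth partial of a 100 Hz tone
print(boosted_partial(250.0))  # 4  -> fourth partial of a 250 Hz tone
print(boosted_partial(40.0))   # 25 -> 25th partial of a 40 Hz tone
```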

4. Sound Location: The location from which a sound emanates is a property present in every sound. It is usually not structured in live music, but in electronic music, multitrack playback systems allow composers to make use of it.

5. Envelope: Envelope is a psychological property as well as a physical property, but very few studies have been made of perceptual envelopes. Musical terminology is full of terms that describe different types of articulations, such as legato, pizzicato, staccato, sforzando, etc. There are many perceptible gradations between the shortest pizzicato-like percussive tone to the legato-like organ tone, and electronic music practitioners can experiment with different values to determine what may be useful.

6. Properties associated with delays: The reverberant characteristics of musical spaces give rise to a number of identifiable properties that can be perceived by listeners. Effects devices allow the delay times to be manipulated in many ways, producing effects that go beyond what is possible in acoustic spaces.

Delay devices (which are usually digital, so these are often called "digital delays") allow a sound to be delayed for a specific duration and then mixed in with the original sound, with volume controls on both signals. Delays are usually measured in milliseconds (abbreviated ms), and are variable from about 1 to 500 or more ms (half a second). A variety of effects exist in a continuum from very short to very long delays. There are two "magic" numbers that help separate these effects. At about 40 ms, the delayed signal begins to become distinct, and at about 100 ms, it can be heard as a separate tone.

Thickening occurs below 40 ms, as the sound begins to increase in "fullness". At about 40 ms, the effect is described more as doubling, where two distinct voices can be heard. Between 40 and 50 ms the delayed sound starts to break away from the original sound and be perceived as an echo. Above 100 ms the effect is described as "slapback" echo, where the original signal has a distinct "answer" in the reflection. A continuum of different effects can be perceived from the shortest to the longest delay between these values.
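As a rough sketch only (perception varies between listeners, and the function name is our own), the continuum above can be expressed with the two "magic" boundaries of about 40 ms and about 100 ms:

```python
def delay_effect(delay_ms):
    """Rough mapping of delay time to perceived effect, per the thresholds above."""
    if delay_ms < 40:
        return "thickening"      # fuller sound, not yet a distinct voice
    elif delay_ms < 100:
        return "doubling/echo"   # the delayed signal becomes a distinct voice
    else:
        return "slapback echo"   # a separate, distinct "answer"

print(delay_effect(15))   # thickening
print(delay_effect(60))   # doubling/echo
print(delay_effect(150))  # slapback echo
```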

Chorus effect is defined as the property that exists when a tone is played on two or more instruments, which produces something different from merely intensifying the amplitude. It is produced by deviations in intonation, envelope, modulation and rhythm, since two human performers will never play precisely together. Many effects devices include a type of chorus effect produced by delaying the original sound by the "doubling" amount of time.

Effects devices also contain a number of settings that are probably not distinct perceptual properties, but which are similar to those mentioned above. Reverberation itself, usually produced by a complex mixture of short delays, is both a coloring and a "smearing" property, since it has filtering effects on the sound and a variety of the thickening-doubling-echo properties. Different reverberation settings allow experimentation with these values. Flanging is created by modulating the delay time, and produces a kind of "whooshing" effect.

7. Other Properties: There are many additional properties of sound that can be identified, and some that have been researched. All that is needed to determine whether some property exists is, first, that someone formulate an idea that something exists, and then to test the idea by listening for it in music. As time goes on, undoubtedly new ideas will be postulated and tested.


 

1.5     Hearing Impairment Definition, Classification in terms of age of onset, type, degree, nature

 

Hearing impairment is a partial or total inability to hear. It is a disability which is subdivided into two categories: deaf and hard of hearing.

The Rights of Persons with Disabilities Act, 2016

A pure tone audiometry test measures the softest, or least audible, sound that a person can hear. During the test, you will wear earphones and hear a range of sounds directed to one ear at a time. The loudness of sound is measured in decibels (dB).

The Persons with Disabilities Act, 1995 (PWD Act)

The definition of disability in the PWD Act includes hearing impairment: hearing impairment means a loss of 60 decibels or more in the better ear in the conversational range of frequencies.

The Centers for Disease Control and Prevention (CDC) refers to hearing impairment as conditions that affect the frequency and/or intensity of one's hearing. Individuals with mild to moderate hearing impairments may be hard of hearing but are not deaf; these individuals differ from deaf individuals in that they use their hearing to assist in communication with others.

CLASSIFICATION

According to degree of impairment (WHO)

 

According to place of impairment

Conductive hearing loss - hearing loss due to interference in the transmission of sound to and through the sense organ (outer or middle ear). Conductive hearing loss can be caused by blockage of the external canal, perforation of the eardrum, infections and diseases of the middle ear, and disruption or fixation of the small hearing bones. A person with a conductive hearing loss may hear better in noise than in quiet and generally hears well over the telephone. Total deafness is rarely the result of conductive hearing impairment, and a properly fitted hearing aid usually provides benefit. Sometimes a surgical correction can improve the hearing.

Conditions that cause conductive hearing loss are:

Sensorineural hearing loss - hearing loss due to an abnormality of the inner ear or the auditory nerve, or both. Sensorineural hearing impairment is more common and has many possible causes. Usually the condition results in slow, gradual loss of the sound receptors and nerve endings. Patients may experience a lack of sensitivity to sound or a lack of interpretation or clarity of sound. Speech understanding is difficult when there is background noise, and hearing sensitivity is usually better for low tones than for high-pitched sounds. Hearing aids provide benefit for many patients with sensorineural impairment by amplifying sounds. However, hearing aids typically do not increase the clarity of speech, and when speech understanding deteriorates significantly they may not provide sufficient benefit. Many such patients are good candidates for a cochlear implant. These devices are surgically implanted and directly stimulate the hearing nerve to improve the ability to hear sounds and to understand speech.

Some conditions that may cause congenital sensorineural hearing loss are:

Mixed hearing loss - a combination of conductive and sensorineural hearing loss; sometimes called a flat loss. Long-term ear infections can damage both the eardrum and the ossicles. Sometimes surgical intervention may restore hearing, but it is not always effective.

Central Auditory Processing Disorder - this form of hearing impairment occurs when the auditory centers of the brain are affected by injury, disease, tumor, heredity, birth trauma, head trauma or unknown causes. Although the outer, middle and inner parts of the ear deliver sound signals, these signals are unable to be processed and interpreted by the brain. Thus, even though the person's hearing may be normal, there are difficulties with understanding what is being said, resulting in learning problems. Central auditory processing involves a variety of skills such as localizing the sound, attending to it, and perceiving it accurately through auditory discrimination and auditory recognition in order to make sense of the sound (ASHA Task Force on Central Auditory Processing Consensus Development, 1996). Difficulties in one or more of the above-listed behaviors may constitute a central auditory processing disorder. The use of assistive technology and environmental modifications to the listening environment can provide acoustic enhancement by amplifying the incoming speech signal, thereby improving the auditory signal-to-noise ratio, that is, the ratio between the signal and any unwanted background noise (Pagliano, 2005).

 

According to the age at onset of deafness

Congenitally deaf - born deaf

Adventitiously deaf - born with normal hearing and became deaf through accident or illness

According to language development

Pre-lingually deaf - born deaf or lost hearing before speech and language were developed

Post-lingually deaf - lost hearing after the development of spontaneous speech and language

 

Other descriptors associated with hearing loss, as described by ASHA:

To summarize, there is no typical student with a hearing impairment. Although all have some degree of hearing loss, it is important to note that hearing loss affects each individual differently. The effect that a hearing impairment has on a student will depend on a number of factors beyond the attributes described above. Other important factors include the age of onset, when it was detected, how the student manages the hearing impairment (for example, the level of compliance with wearing hearing aids), the student's abilities and personality, and the quality and type of auditory intervention programs. Hence, it is extremely important to consider each student as an individual and find out the specific type of loss in order to accommodate his or her needs.