Aiou Solved Assignments 1 & 2 code 683 Autumn & Spring 2024


AIOU Solved Assignments 1 & 2, Code 683 (Audiology and Audiometry), Autumn & Spring 2024. Allama Iqbal Open University old papers.

Course: Audiology and Audiometry (683)
Level: M.A / M.Ed in Special Education
Semester: Autumn & Spring 2024
ASSIGNMENT No. 1

Q.1 Discuss the role of Audiology in the educational development of hearing

impaired children. Support your answer with suitable examples.

Answer:

Audiology (from Latin audīre, “to hear”) is a branch of science that studies hearing, balance,

and related disorders. Its practitioners, who treat those with hearing loss and proactively

prevent related damage, are audiologists. Employing various testing strategies (e.g. hearing

tests, otoacoustic emission measurements, videonystagmography, and electrophysiologic

tests), audiology aims to determine whether someone can hear within the normal range,

and if not, which portions of hearing (high, middle, or low frequencies) are affected, to what

degree, and where the lesion causing the hearing loss is found (outer ear, middle ear, inner

ear, auditory nerve and/or central nervous system). If an audiologist determines that a

hearing loss or vestibular abnormality is present, he or she will provide recommendations to

a patient as to what options (e.g. hearing aid, cochlear implants, appropriate medical

referrals) may be of assistance.

In addition to testing hearing, audiologists can also work with a wide range of clientele in

rehabilitation (individuals with tinnitus, auditory processing disorders, cochlear implant

users and/or hearing aid users), from pediatric populations to veterans and may perform

assessment of tinnitus and the vestibular system.

An audiologist is a health-care professional specializing in identifying, diagnosing, treating

and monitoring disorders of the auditory and vestibular portions of the ear.

Audiologists are trained to diagnose, manage and/or treat hearing, tinnitus, or balance

problems. They dispense, manage, and rehabilitate hearing aids and assess candidacy for

and map cochlear implants. They counsel families through a new diagnosis of hearing loss

in infants, and help teach coping and compensation skills to late-deafened adults. They also

help design and implement personal and industrial hearing safety programs, newborn

hearing screening programs, school hearing screening programs, and provide special fitting

ear plugs and other hearing protection devices to help prevent hearing loss. Audiologists

are trained to evaluate peripheral vestibular disorders originating from inner ear

pathologies. They also provide treatment for certain vestibular and balance disorders such

as Benign Paroxysmal Positional Vertigo (BPPV). In addition, many audiologists work as

auditory scientists in a research capacity.

Audiologists have training in anatomy and physiology, hearing aids, cochlear implants,

electrophysiology, acoustics, psychophysics, neurology, vestibular function and assessment,

balance disorders, counseling and sign language. Audiologists also run neonatal hearing

screening programmes, which have been made compulsory in many hospitals in the US, UK, and India. An audiologist usually graduates with one of the following qualifications: MSc (Audiology), Au.D., STI, PhD, or ScD, depending on the program and country attended.

Types:

Diagnosis occurs through hearing tests. There are five common tests audiologists use to

diagnose a patient’s hearing loss. These tests include:

• Pure-tone test

• Speech test

• Middle ear test

• Auditory brainstem response

• Otoacoustic emissions

Pure-tone test

A pure-tone test determines what range of pitches an individual can hear. The test will pick

out the faintest tones a person can hear at multiple pitches, or frequencies. The test is not

painful and shouldn’t cause anxiety for the patient.

During the test, the patient will wear headphones. A sound will be played through the

headphones. Should the patient hear the sound, they will respond by raising a hand,

pressing a button or saying, “yes.” Each ear will be tested individually in order to get the

most accurate results.
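The passage above describes the response loop but not how the tester converges on the faintest audible level. One widely taught bracketing rule, the modified Hughson-Westlake procedure (not named in the passage), steps down 10 dB after every response and up 5 dB after every miss, taking the threshold as the lowest level heard repeatedly on ascending runs. The Python sketch below is a simplified illustration under those assumptions; the listens(frequency, level) callback is hypothetical and stands in for the patient's raised hand or button press.

```python
# Simplified pure-tone threshold search ("down 10 dB, up 5 dB" bracketing).
# Illustrative only: real clinical procedure and stopping rules are stricter.

def find_threshold(listens, frequency_hz, start_db=40, floor_db=-10, ceiling_db=120):
    """Estimate the threshold (dB HL) for one ear at one frequency."""
    level = start_db
    last_step_was_up = False
    ascending_hits = {}                        # level -> responses obtained on ascending runs

    for _ in range(50):                        # safety cap on the number of presentations
        heard = listens(frequency_hz, level)
        if heard:
            if last_step_was_up:
                ascending_hits[level] = ascending_hits.get(level, 0) + 1
                if ascending_hits[level] >= 2:     # heard twice on ascent at this level
                    return level
            level = max(floor_db, level - 10)      # response: step down 10 dB
            last_step_was_up = False
        else:
            level = min(ceiling_db, level + 5)     # no response: step up 5 dB
            last_step_was_up = True
    return None                                # no reliable threshold obtained
```

In line with the note above that each ear is tested individually, this loop would be run separately for each ear and each test frequency.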

Speech test

During a speech test, the patient will be asked to listen to conversation in quiet and noisy

environments. To determine an individual’s speech reception threshold, the audiologist will

record word recognition or the ability to repeat words back.

Middle ear test

To determine how the middle ear is functioning, an audiologist will get measurements such

as tympanometry, acoustic reflex measures and static acoustic measures. During a middle

ear test, the audiologist pushes air pressure into the canal, causing the eardrum to vibrate

back and forth. Acoustic reflex measures provide information regarding the location of the

hearing issue. The acoustic reflex is the contraction of the middle-ear muscles in response to a loud sound. Testing static acoustic measures enables an audiologist to identify a perforated eardrum

and check the opening of the ear’s ventilation tubes.

Auditory brainstem response

The auditory brainstem response test gives an audiologist data about the inner ear and

brain pathways needed for hearing. During the test, electrodes are placed on the head to

record brain wave activity.

Otoacoustic emissions

Last but not least, otoacoustic emissions, or sounds given off by the inner ear when the

cochlea is stimulated by sound, are measured to narrow down types of hearing loss. These

emissions can be measured by inserting a small probe into the ear canal. The probe

measures the sounds produced by the vibration of the outer hair cells, which occurs when

the cochlea is stimulated.


{================}

Q.2 Describe human auditory system. Discuss the functions of this system

particularly the neurological pathways to the cortex?

Answer:

The human auditory system is the sensory system for the sense of hearing. It includes both

the sensory organs (the ears) and the auditory parts of the sensory system. The outer ear

funnels sound vibrations to the eardrum, increasing the sound pressure in the middle

frequency range. The middle-ear ossicles further amplify the vibration pressure roughly 20

times. The base of the stapes couples vibrations into the cochlea via the oval window, which

vibrates the perilymph liquid (present throughout the inner ear) and causes the round

window to bulge out as the oval window bulges in.
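As a quick worked check of the “roughly 20 times” pressure amplification quoted above, a pressure ratio can be restated in decibels; the ×20 figure is the passage's own estimate, and the conversion formula is the standard one for pressure ratios:

```latex
\text{gain (dB)} = 20\,\log_{10}\!\left(\frac{p_{\text{out}}}{p_{\text{in}}}\right)
                \approx 20\,\log_{10}(20) \approx 26\ \text{dB}
```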

Vestibular and tympanic ducts are filled with perilymph, and the smaller cochlear duct

between them is filled with endolymph, a fluid with a very different ion concentration and

voltage. Vestibular duct perilymph vibrations bend the outer hair cells of the organ of Corti (four rows),

causing prestin to be released in cell tips. This causes the cells to be chemically elongated

and shrunk (somatic motor), and hair bundles to shift which, in turn, electrically affects the

basilar membrane’s movement (hair-bundle motor). These motors (outer cells) amplify the

perilymph vibrations that initially incited them over 40-fold. Since both motors are

chemically driven they are unaffected by the newly amplified vibrations due to recuperation

time. The outer hair cells (OHC) are minimally innervated by spiral ganglion in slow

(unmyelinated) reciprocal communicative bundles (30+ hairs per nerve fiber); this contrasts with

inner hair cells (IHC) that have only afferent innervation (30+ nerve fibers per one hair) but

are heavily connected. There are 4x more OHC than IHC. The basilar membrane is a wall

where the majority of the IHC and OHC sit. Basilar membrane width and stiffness correspond to the frequencies best sensed by the IHC. At the cochlear base the basilar membrane is at its narrowest and stiffest (high frequencies); at the cochlear apex it is at its widest and least stiff (low frequencies). The tectorial membrane supports the remaining IHC and OHC.

The tectorial membrane helps facilitate cochlear amplification by stimulating the OHC (directly) and the IHC (via endolymph vibrations). Its width and stiffness parallel the basilar membrane's and similarly aid in frequency differentiation.
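The base-to-apex frequency gradient just described is often summarised with the Greenwood place-frequency function, which the passage itself does not mention; the constants below are values commonly quoted for the human cochlea and should be treated as approximate. A minimal Python sketch:

```python
# Greenwood place-frequency map (human constants, approximate).
# x_from_apex = 0 at the wide, flexible apex; 1 at the narrow, stiff base.

def greenwood_frequency(x_from_apex):
    """Best frequency (Hz) at a fractional distance along the basilar membrane."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x_from_apex) - k)

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> ~{greenwood_frequency(x):,.0f} Hz")
# ~20 Hz near the apex, ~1,700 Hz at mid-cochlea, ~20,700 Hz at the base,
# matching the low-frequency apex / high-frequency base pattern described above.
```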

The superior olivary complex (SOC), in the pons, is the first convergence of the left and right cochlear pulses. The SOC has 14 described nuclei; their abbreviations are used here (see Superior

olivary complex for their full names). MSO determines the angle the sound came from by

measuring time differences in left and right info. LSO normalizes sound levels between the

ears; it uses the sound intensities to help determine sound angle. LSO innervates the IHC.

VNTB innervate OHC. MNTB inhibit LSO via glycine. LNTB are glycine-immune, used for fast

signalling. DPO are high-frequency and tonotopical. DLPO are low-frequency and

tonotopical. VLPO have the same function as DPO, but act in a different area. PVO, CPO,

RPO, VMPO, ALPO and SPON (inhibited by glycine) are various signalling and inhibiting

nuclei.
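For the MSO's use of left-right timing differences mentioned above, a simple two-receiver model (an idealisation, not taken from the passage) gives the order of magnitude of the interaural time difference:

```latex
\mathrm{ITD}(\theta) \approx \frac{d\,\sin\theta}{c},
\qquad d \approx 0.2\ \mathrm{m},\quad c \approx 343\ \mathrm{m/s}
\;\;\Rightarrow\;\; \mathrm{ITD}_{\max} \approx 0.6\ \mathrm{ms}
```

Here d is the assumed ear-to-ear spacing, c the speed of sound, and θ the source azimuth; real heads diffract sound, so measured values differ somewhat.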

The trapezoid body is where most of the cochlear nucleus (CN) fibers decussate (cross left

to right and vice versa); this crossing aids in sound localization. The CN breaks into ventral (VCN) and dorsal (DCN) regions. The VCN has three nuclei. Bushy cells

transmit timing info, their shape averages timing jitters. Stellate (chopper) cells encode

sound spectra (peaks and valleys) by spatial neural firing rates based on auditory input

strength (rather than frequency). Octopus cells have close to the best temporal precision

while firing; they decode the auditory timing code. The DCN has two nuclei. The DCN also receives

info from VCN. Fusiform cells integrate information to determine spectral cues to locations

(for example, whether a sound originated from in front or behind). Cochlear nerve fibers

(30,000+) each have a most sensitive frequency and respond over a wide range of levels.

Functions of this system, particularly the neurological pathways to the cortex:

Peripheral Auditory System

Outer Ear: The pinnae are the parts of the outer ear that appear as folds of cartilage. They

surround the ear canal and function as sound wave reflectors and attenuators when the

waves hit them. The pinna helps the brain identify the direction from where the sounds

originated. From the pinna, the sound waves enter a tube-like structure called auditory

canal. This canal serves as a sound amplifier. The sound waves travel through the canal and

reach the tympanic membrane (eardrum), the canal’s end.

Middle Ear: As the sound waves hit the eardrum, the sensory information goes into an air-

filled cavity through lever-type bones called ossicles. The three ossicles include the

hammer (malleus), anvil (incus), and stirrup (stapes). These delicate bones convert the sound

vibrations made when the sound waves hit the eardrum into sound vibrations of higher

pressure. These transformed vibrations (still in wave form) enter the oval window.

Inner Ear: Beyond the oval window is the inner ear. This segment of the ear is filled with

liquid rather than air, which is why the low-pressure sound vibrations must be converted into higher-pressure ones in the middle ear. The main structure in the inner ear is

called the cochlea, where the sensory info in wave form is transformed into the neural form.

The cochlear duct contains the organ of Corti. This organ is composed of inner hair cells that turn the vibrations into electrical neural signals. Each hair cell connects to many auditory nerve

fibers, and these fibers form the auditory nerve. The auditory nerve (for hearing) combines

with the vestibular nerve (for balance), forming cranial nerve VIII or the vestibulocochlear

nerve.

Central Auditory System: Once the sound waves are turned into neural signals, they travel

through cranial nerve VIII, reaching different anatomical structures where the neural

information is further processed. The cochlear nucleus is the first site of neural processing,

followed by the superior olivary complex located in the pons, and then processed in the

inferior colliculus at the midbrain. The neural information ends up at the relay center of the

brain, called the thalamus. The info is then passed to the primary auditory cortex of the

brain, situated in the temporal lobe.
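For quick reference, the ascending route just described can be written out as a simple ordered list; this restates the passage and adds nothing new:

```python
# Ascending auditory pathway, in the order given in the passage above.
ASCENDING_AUDITORY_PATHWAY = [
    ("cochlea / organ of Corti", "inner ear"),
    ("vestibulocochlear nerve (cranial nerve VIII)", "to the brainstem"),
    ("cochlear nucleus", "first site of neural processing"),
    ("superior olivary complex", "pons"),
    ("inferior colliculus", "midbrain"),
    ("thalamus", "relay center of the brain"),
    ("primary auditory cortex", "temporal lobe"),
]
```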

Primary Auditory Cortex: The primary auditory cortex receives auditory information from

the thalamus. The left posterior superior temporal gyrus is responsible for the perception of

sound, and within it the primary auditory cortex is the region where the attributes of sound

(pitch, rhythm, frequency, etc.) are processed.


{================}

Q.3 Define pure tone. What are the properties of a pure tone audiometer and its functions?

Answer:

A pure tone is a tone with a sinusoidal waveform; that is, a sine wave of any frequency,

phase, and amplitude. A sine wave is characterized by its frequency, the number of cycles

per second, its amplitude, the size of each cycle, and its phase, which indicates the time

alignment relative to a zero-time reference point. A pure tone has the property – unique

among real-valued wave shapes – that its wave shape is unchanged by linear time-invariant

systems; that is, only the phase and amplitude change between such a system’s pure-tone

input and its output.

Sine and cosine waves can be used as basic building blocks of more complex waves. A pure

tone of any frequency and phase can be decomposed into, or built up from, a sine wave

and a cosine wave of that frequency. As additional sine waves having different frequencies

are combined, the waveform transforms from a sinusoidal shape into a more complex

shape. Sound localization is often more difficult with pure tones than with other sounds.
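Because a pure tone is fully specified by its frequency, amplitude, and phase, it takes only a few lines to synthesise one and to build a more complex waveform by summing tones, exactly as described above. A minimal Python/NumPy sketch; the 44.1 kHz sample rate and the particular frequencies are arbitrary illustration choices:

```python
import numpy as np

def pure_tone(frequency_hz, duration_s, amplitude=0.5, phase_rad=0.0, sample_rate=44_100):
    """Return one pure tone: amplitude * sin(2*pi*f*t + phase)."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * frequency_hz * t + phase_rad)

tone = pure_tone(1_000, duration_s=1.0)        # one second of a 1 kHz pure tone
# Summing tones of different frequencies yields a more complex, non-sinusoidal waveform.
complex_wave = pure_tone(1_000, 1.0) + pure_tone(1_500, 1.0, amplitude=0.25)
```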

Properties of a pure tone audiometer and its functions:

Audiometry consists of tests of function of the hearing mechanism. This includes tests of

mechanical sound transmission (middle ear function), neural sound transmission (cochlear

function), and speech discrimination ability (central integration). A complete evaluation of a

patient’s hearing must be done by trained personnel using instruments designed specifically

for this purpose. Pure tones (single frequencies) are used to test air and bone conduction.

These and speech testing are done with an audiometer. The audiometer is an electric

instrument consisting of a pure tone generator, a bone conduction oscillator for measuring

cochlear function, an attenuator for varying loudness, a microphone for speech testing, and

earphones for air conduction testing.

Other tests include impedance audiometry, which measures the mobility and air pressure of

the middle ear system and middle ear (stapedial) reflexes, and auditory brainstem response

(ABR), which measures neural transmission time from the cochlea through the brainstem.

Pure tone audiometry (PTA) is the key hearing test used to identify hearing threshold levels

of an individual, enabling determination of the degree, type and configuration of a hearing

loss and thus providing a basis for diagnosis and management. PTA is a subjective,

behavioural measurement of a hearing threshold, as it relies on patient responses to pure

tone stimuli. Therefore, PTA is only used on adults and children old enough to cooperate

with the test procedure. As with most clinical tests, calibration of the test environment, the

equipment and the stimuli to ISO standards is needed before testing proceeds. PTA only

measures audibility thresholds, rather than other aspects of hearing such as sound

localization and speech recognition. However, there are benefits to using PTA over other

forms of hearing test, such as click auditory brainstem response (ABR). PTA provides ear

specific thresholds, and uses frequency specific pure tones to give place specific responses,

so that the configuration of a hearing loss can be identified. As PTA uses both air and bone

conduction audiometry, the type of loss can also be identified via the air-bone gap.

Although PTA has many clinical benefits, it is not perfect at identifying all losses, such as

‘dead regions’ of the cochlea and neuropathies such as auditory processing disorder (APD).

This raises the question of whether or not audiograms accurately predict someone’s

perceived degree of disability.
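The passage states that PTA yields the degree, type, and configuration of a loss, with the air-bone gap pointing to a conductive component. The sketch below shows, under one commonly taught but not universal convention, how thresholds might be summarised: a three-frequency pure-tone average, per-frequency air-bone gaps, and a degree label. The cutoff values, the 10 dB gap rule of thumb, and the example thresholds are illustrative assumptions, not a clinical standard.

```python
def pure_tone_average(thresholds_db_hl):
    """thresholds_db_hl: dict of frequency (Hz) -> air-conduction threshold (dB HL)."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000)) / 3

def air_bone_gap(air_db_hl, bone_db_hl):
    """Per-frequency gap; gaps much larger than ~10 dB suggest a conductive component."""
    return {f: air_db_hl[f] - bone_db_hl[f] for f in air_db_hl if f in bone_db_hl}

def degree_of_loss(pta_db_hl):
    for limit, label in [(25, "normal"), (40, "mild"), (55, "moderate"),
                         (70, "moderately severe"), (90, "severe")]:
        if pta_db_hl <= limit:
            return label
    return "profound"

air = {500: 45, 1000: 50, 2000: 55}       # illustrative thresholds only
bone = {500: 15, 1000: 20, 2000: 20}
print(degree_of_loss(pure_tone_average(air)))   # "moderate"
print(air_bone_gap(air, bone))                  # gaps of 30-35 dB -> conductive component
```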

PTA procedural standards

There are both international and British standards regarding the PTA test protocol. The

British Society of Audiology (BSA) is responsible for publishing the recommended

procedure for PTA, as well as many other audiological procedures. The British

recommended procedure is based on international standards. Although there are some

differences, the BSA-recommended procedures are in accordance with BS EN ISO 8253-1,

which is the international standard for PTA established by the International Organization for

Standardization. The BSA-recommended procedures provide a “best practice” test protocol

for professionals to follow, increasing validity and allowing standardisation of results across

Britain. (See: The British Society of Audiology, Recommended Procedure: Pure tone air and bone conduction threshold audiometry with and without masking and determination of uncomfortable loudness levels.)

Variations

There are cases where conventional PTA is not an appropriate or effective method of

threshold testing. Procedural changes to the conventional test method may be necessary

with populations who are unable to cooperate with the test in order to obtain hearing

thresholds. Sound field audiometry may be more suitable when patients are unable to wear

earphones, as the stimuli are usually presented by loudspeaker. A disadvantage of this

method is that although thresholds can be obtained, results are not ear specific. In addition,

response to pure tone stimuli may be limited, because in a sound field pure tones create

standing waves, which alter sound intensity within the sound field. Therefore, it may be

necessary to use other stimuli, such as warble tones in sound field testing. There are

variations of conventional audiometry testing that are designed specifically for young

children and infants, such as behavioral observation audiometry, visual reinforcement

audiometry and play audiometry.

Conventional audiometry tests frequencies between 250 hertz (Hz) and 8 kHz, whereas high

frequency audiometry tests in the region of 8 kHz-16 kHz. Some environmental factors,

such as ototoxic medication and noise exposure, appear to be more detrimental to high

frequency sensitivity than to that of mid or low frequencies. Therefore, high frequency

audiometry is an effective method of monitoring losses that are suspected to have been

caused by these factors. It is also effective in detecting the auditory sensitivity changes that

occur with aging.


{================}

Q.4 Describe tympanometry. How does it help medical or educational professionals in medical and educational rehabilitation, respectively?

Answer:

Tympanometry is an examination used to test the condition of the middle ear and mobility

of the eardrum (tympanic membrane) and the conduction bones by creating variations of

air pressure in the ear canal. Tympanometry is an objective test of middle-ear function. It is

not a hearing test, but rather a measure of energy transmission through the middle ear. The

test should not be used to assess the sensitivity of hearing and the results of this test should

always be viewed in conjunction with pure tone audiometry.

Tympanometry is a valuable component of the audiometric evaluation. In evaluating

hearing loss, tympanometry permits a distinction between sensorineural and conductive

hearing loss, when evaluation is not apparent via Weber and Rinne testing. Furthermore, in

a primary care setting, tympanometry can be helpful in making the diagnosis of otitis media

by demonstrating the presence of a middle ear effusion.

Operation

A probe tone of 226 Hz is generated by the tympanometer into the ear canal, where the sound

strikes the tympanic membrane, causing vibration of the middle ear, which in turn results in

the conscious perception of hearing. Some of this sound is reflected back and picked up by

the instrument. Most middle ear problems result in stiffening of the middle ear, which

causes more of the sound to be reflected back.

Admittance is a measure of how easily energy is transmitted through the middle ear. The instrument measures

the reflected sound and expresses it as an admittance or compliance, plotting the results on

a chart known as a tympanogram.

Normally, the air pressure in the ear canal is the same as ambient pressure. Also, under

normal conditions, the air pressure in the middle ear is approximately the same as ambient

pressure since the eustachian tube opens periodically to ventilate the middle ear and

equalize pressure. In a healthy individual, the maximum sound is transmitted through the

middle ear when the ambient air pressure in the ear canal is equal to the pressure in the

middle ear.

Procedure

After an otoscopy (examination of the ear with an otoscope) to ensure that the path to the

eardrum is clear and there is no perforation, the test is performed by inserting the

tympanometer probe in the ear canal. The instrument changes the pressure in the ear,

generates a pure tone, and measures the eardrum responses to the sound at different

pressures. This produces a series of data measuring how admittance varies with pressure,

which is plotted as a tympanogram.

Tympanograms are categorized according to the shape of the plot. A normal tympanogram

is labelled Type A. There is a normal pressure in the middle ear with normal mobility of

the eardrum and ossicles. Type B and C tympanograms may reveal fluid in the middle ear,

perforation of the tympanic membrane, scarring of the tympanic membrane, lack of contact

between the ossicles, or a tumor in the middle ear.

The categorising of tympanometric data should not be used as a diagnostic indicator. It is

merely a description of shape. There is no clear distinction between the three types, nor the

two subtypes of type A, namely As (shallow) and Ad (deep). Only measures of static acoustic admittance, ear

canal volume, and tympanometric width/gradient compared to sex, age, and race specific

normative data can be used to somewhat accurately diagnose middle ear pathology along

with the use of other audiometric data (e.g. air and bone conduction thresholds, otoscopic

examination, normal word recognition at elevated presentation levels, etc.).
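Echoing the caveat above that the A/B/C label describes curve shape only, the Python sketch below shows how such a label could be derived from two numbers read off a tympanogram, the peak pressure and the static admittance. The cutoff values are illustrative adult figures rather than the sex-, age-, and race-specific normative data the passage calls for, and the label is not a diagnosis.

```python
def tympanogram_type(peak_pressure_dapa, static_admittance_ml):
    """Return a descriptive shape label for a single tympanogram trace."""
    if static_admittance_ml < 0.1:        # essentially flat trace: no measurable peak
        return "B"
    if peak_pressure_dapa < -100:         # peak displaced to markedly negative pressure
        return "C"
    if static_admittance_ml < 0.3:        # peak present but shallow (stiff system)
        return "As"
    if static_admittance_ml > 1.5:        # peak unusually deep (flaccid system)
        return "Ad"
    return "A"                            # peak near ambient pressure, normal mobility

print(tympanogram_type(-20, 0.7))         # "A" for these illustrative values
```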

Medical or educational professionals in medical and educational rehabilitation

respectively:

Parents, teachers, school administrators, and school districts also play a role in supporting a

child’s success in the mainstream. Parents of a successfully mainstreamed child

acknowledge the child’s strengths and challenges, have realistic expectations for classroom

performance, cooperate with teachers and support personnel, and recognize the boundaries

of regular education classrooms. Most importantly, these parents support school work at

home (Teller & Lindsey, 1987).

Teachers who approach a child with a CI with unconditional acceptance in the classroom

create a social/emotional environment in which the child can be successful. These teachers

are willing to make instructional changes as needed and to obtain knowledge and skills

related to hearing loss and CIs.

Principals who are enthusiastic and committed to making mainstream education work for

the child with a CI are crucial. By providing opportunities for staff to learn about CIs and

allocating funds for acoustic and educational accommodations, administrators control the

organization’s response to educating a child with a CI. Mainstreaming a child with a CI will

be successful only with financial support at the school district level for all services the child

requires (Chute & Nevins, 2006).

SLPs and audiologists in schools have a greater likelihood of encountering a child with a CI

than ever before. Delivering appropriate services to these children requires a program

tailored to meet the child’s profile. The relationship between a child’s chronological and

language age and its impact on placement in the mainstream may provide the crucial

information necessary to develop an effective intervention plan. Vigilant professionals who

monitor the linguistic and academic demands of the classroom and the child’s ability to

meet those demands will be better able to address the challenges of mainstream placement

for every child with a CI.


{================}

Q.5 Discuss redundancy in speech. How can a teacher of the deaf prepare speech test material suited to the needs of children in Pakistan, culturally and linguistically?

Answer:

In linguistics, redundancy refers to information that is expressed more than once. Examples

of redundancies include multiple agreement features in morphology, multiple features

distinguishing phonemes in phonology, or the use of multiple words to express a single

idea in rhetoric.

Redundancy may occur at any level of grammar. Because of agreement – a requirement in

many languages that the form of different words in a phrase or clause correspond with one

another – the same semantic information may be marked several times. In the Spanish

phrase los árboles verdes (“the green trees”), for example, the article los, the noun árboles,

and the adjective verdes are all inflected to show that the phrase is plural. An English

example would be: that man is a soldier versus those men are soldiers.

In phonology, a minimal pair is a pair of words or phrases that differs by only one phoneme,

the smallest distinctive unit of the sound system. Even so, phonemes may differ on several

phonetic features. For example, the English phonemes /p/ and /b/ in the words pin and bin

feature different voicing, aspiration, and muscular tension. Any one of these features is

sufficient to differentiate /p/ from /b/ in English.

Generative grammar uses such redundancy to simplify the form of grammatical description.

Any feature that can be predicted on the basis of other features (such as aspiration on the

basis of voicing) need not be indicated in the grammatical rule. Features that are not

redundant and therefore must be indicated by rule are called distinctive features.

As with agreement in morphology, phonologically conditioned alternation, such as

coarticulation and assimilation, adds redundancy at the phonological level. The redundancy

of phonological rules may clarify some vagueness in spoken communication. According to

psychologist Steven Pinker, “In the comprehension of speech, the redundancy conferred by

phonological rules can compensate for some of the ambiguity of the sound wave. For

example, a speaker may know that ‘thisrip’ must be ‘this rip’ and not ‘the srip’ because in English the initial consonant cluster sr is illegal.”

How a teacher of the deaf can prepare speech test material suited to the needs of children in Pakistan, culturally and linguistically:

So you’ve started a new term this year and you’ve discovered that one (or perhaps more) of

your students has a hearing impairment or doesn’t have English as their primary language.

Check out five quick tips to help you make the most of your classroom.

1. Use captions: All students benefit from captions and especially those who are Deaf or

hearing-impaired, plus those with English as a second language. To cater for these

students it is important to use only captioned multimedia such as TV, online video and

DVDs. Captions provide vital access to multimedia content. Media Access Australia’s CAP

THAT! initiative was created to focus on the importance and use of captions in the

classroom, and still provides relevant advice and downloadable resources.

2. Make use of available technology: Many classrooms are now equipped with

technologies such as interactive whiteboards (IWBs) and soundfield amplification

systems. If you have access to these technologies or anything similar, ensure that you’ve

been briefed on how to best use them to complement your teaching. A simple Google

search will confirm just how much choice is out there.

3. Use visual stimulus: Students who have a hearing impairment require visual cues/

support in their learning to assist their understanding of content. And of course, so do

children who have English as a second language. Teachers can use visual stimuli such as

providing lesson outlines, main points and any directions on IWB or display boards to

help these students.

4. Consider classroom arrangement: There are always variables as to where a student

who has a hearing impairment should sit in the classroom. Ensure that these students

are in a position where your face (and ideally the faces of other students if they are

participating in class discussion) is clearly visible, and where the sound of your voice is

least obstructed.

5. Keep unnecessary noise to a minimum: Students who have a hearing impairment find

it very difficult to concentrate when there is background noise. Blocking out some or all

of this noise through closing doors or windows can be a simple and effective measure.

Remember that even if your student or students use assistive hearing technology, they

do not hear in the same way that their peers do. They will benefit from having

unnecessary background noise kept to a minimum.


{================}
