Have you ever wondered what a word looks like when it hits your eardrum?
Researchers with the Hearing and Speech Foundation’s (HSF) research and development team struggle with this question every day. And every day, they come closer and closer to discovering the answer.
When John Berry, director of HSF’s research program, began noticing that patients with significant hearing loss could understand conversational speech at three, five and even ten feet away without their hearing aids, he knew there had to be a reason.
“Traditional audiology says we need to amplify in the frequency regions where a patient’s hearing loss occurs,” Berry said. “Most patients have significant loss above 1,000 hertz, which means they should not be able to understand conversational speech, unaided, at any distance. However, I’ve learned this is not the case.”
Starting in March 2003, Berry, with help from acoustic engineers, sound analysts and statisticians, created an anechoic chamber in order to measure exactly what speech stimuli the brain needs to perceive words and what those words look like at the eardrum.
Following a design similar to what Bell Labs used to develop the cell phone, the team constructed an anechoic chamber in HSF’s research facility. The state-of-the-art measurement system surpasses traditional “real-ear” measurement systems in that HSF researchers have crossed disciplines from strict audiology into sound production. In essence, this means they are interested not only in what a person hears, but in how they hear it and what those stimuli look like under various conditions. After five years of construction, overcoming technical obstacles and obtaining the best measurement equipment available, the Free-field, Multi-channel In-ear Controlled-stimuli (MIC) Analysis System is up and running.
HSF researchers travel to Tampa, Fla., March 16-18 to present their research at the Fifth Annual Aural Rehabilitation Conference, hosted by the University of South Florida.
“We are really excited about this opportunity,” said Megan Venable-Smith, HSF executive director. “Our research volunteers have worked tirelessly to ensure success, and I am confident the research will raise eyebrows and turn traditional methods of audiology on their head.”
Entering the anechoic chamber is like entering a foam womb. The chamber is designed to measure sound at 125 Hz and above; foam wedges line the floor, ceiling and walls. Two microphones hang above the participant’s chair at the end of the room, and two speakers sit six feet to either side of the chair.
A PC-controlled testing station located just outside the chamber is linked to three analysis programs: Bruel & Kjaer 3560-C, a precision digital signal analyzer; ProTools “Digi 002 Rack,” a professional-grade recording and playback audio system; and MATLAB, mathematical analysis software widely used in research and academia. With this system, researchers aim to discover what acoustical energy in a speech signal a person needs in order to reconstruct the entire signal.
“It is our belief that energy in the low frequencies is extremely important to the perception of speech,” Berry said. “In order to start answering our research questions, we first measure a participant’s sound pressure level.”
In traditional audiology, hearing threshold level (HTL) audiograms are used to determine a person’s hearing loss. HTL levels are established so that 0 dB HTL reflects the average threshold of normal-hearing listeners. Because HSF researchers are interested in measuring what the acoustical properties of speech look like at the eardrum, a sound pressure level (SPL) audiogram is taken instead, which measures the pressure of a sound wave relative to a reference sound pressure.
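The SPL scale described above is a logarithmic ratio against a fixed reference pressure. As a minimal sketch (not HSF's measurement chain, which is not detailed here), the conversion from an absolute sound pressure to dB SPL using the standard 20-micropascal reference for airborne sound looks like this:

```python
import math

# Standard reference pressure for sound in air: 20 micropascals.
P_REF = 20e-6  # pascals

def spl_db(pressure_pa):
    """Convert an absolute sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

# The reference pressure itself maps to 0 dB SPL,
# and roughly 0.0356 Pa maps to about 65 dB SPL,
# the conversational level used in the testing described below.
print(round(spl_db(20e-6), 1))   # → 0.0
print(round(spl_db(0.0356), 1))  # → 65.0
```

Because the scale is logarithmic, doubling the pressure adds about 6 dB rather than doubling the dB value.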
After determining a participant’s SPL, researchers are able to start testing what speech stimuli look like at the eardrum. Previously recorded words are played through speakers in the anechoic chamber, or sound field, to the participant. A reference microphone above the participant’s head and a probe microphone in the participant’s ear pick up the stimuli and send them back to the PC-controlled analysis station, where they are recorded through the B&K Pulse system. After each word has been recorded at the participant’s SPL and at normal conversational level (65 decibels), researchers can create contour plots that show where the energy of each word is located by frequency and time.
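A frequency-by-time energy contour of this kind is, in essence, a spectrogram. The sketch below shows the idea with a plain short-time Fourier transform; the team's actual B&K Pulse and MATLAB pipeline is not described here, so the function and parameters are illustrative assumptions.

```python
import numpy as np

def energy_contour(signal, fs, frame_len=256, hop=128):
    """Return a (frequency bin x time frame) matrix of spectral energy."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.fft.rfft(frame)
        frames.append(np.abs(spectrum) ** 2)  # energy per frequency bin
    return np.array(frames).T  # rows = frequency, columns = time

# Synthetic example in place of a recorded word: a 500 Hz tone at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)
contour = energy_contour(tone, fs)

# The energy peak lands in the bin nearest 500 Hz
# (bin spacing = fs / frame_len = 31.25 Hz, so bin 16).
peak_hz = contour[:, 0].argmax() * fs / 256
print(round(peak_hz))  # → 500
```

Plotting such a matrix with time on one axis and frequency on the other yields the contour plots described above, with the concentration of energy for each word visible at a glance.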
“It is important to view hearing in context with other modes of communication, especially vision,” said Paul Rook, audiologist with the research team. “These two modalities working in concert can enhance speech understanding for hearing-impaired listeners in ways not possible by either modality alone.”
HSF research is funded by Alan and Judy Boekmann and the Fluor Foundation.
The HSF research team includes:
John Berry: Lead Researcher - Hearing and Speech Foundation
Tony Cooper: Process Analyst - Analysis, Design, and Confirmation
Dr. Bob McLean: UT Statistician - Measurement, Design, and Analysis
Dr. Caroline Roberts, Au.D., CCC-A, Blount Hearing and Speech Services
Paul Rook: M.S., CCC-A, Blount Hearing and Speech Services
Doug Sanders: Process Analyst - Analysis, Design, and Confirmation
Rick Stillmaker: Bruel & Kjaer Field Engineer - Technical Trainer
Dave Thomas: Computing Design
Blake Van Hoy: ORNL Acoustic and Vibration Engineer - Pulse System Development
Phil Williams: Northrop Grumman IT-Perceptics - Equipment Design and Fabrication
Mike Witt: Sound Engineer - Anechoic Chamber and Pulse Systems
Amanda Womac: Technical Writer - Hearing and Speech Foundation
James Zachary: Sound Recording - Word recording and playing, Pro Tools