>>NANCY MACKLIN: Good evening, everyone.
Thank you for joining us tonight with Perry Hanavan who will present a Consumer-friendly
Recap of the HLAA2018 Research Symposium: Listening in Noise.
Thank you very much for joining us tonight, Dr. Hanavan.
Before we get started I just want to thank Donna Licari from Alternative Communication
Services for providing CART this evening.
Perry is going to present for 30 or so minutes about the symposium and then I can take your
questions if you would just be kind enough to post your questions in Q&A.
I will not be responding so much to the raised hand feature of Zoom so if you do have a question
for me, post it for me in the chat or in the Q&A, all right?
Thank you so much for being here tonight.
Perry, I will let you get started.
>>PERRY HANAVAN, AuD: Welcome, everybody.
This is exciting.
It was exciting to have the Research Symposium on listening in noise.
Some of the presentations were perhaps a little bit complex at times.
Hopefully in this webinar I can provide information to make it a little bit more understandable.
I'm not in Sioux Falls right now.
I'm actually a little bit south of Denver.
Welcome from Colorado.
I'm going to start right off with a few items about listening in noise.
It's sometimes referred to as the "cocktail party" problem.
And many of the researchers refer to this issue of listening in noise as the cocktail
party or cocktail noise problem.
And it's that problem of trying to, you know, select the speakers you are trying to listen
to from all of the background noise.
And so this cocktail party problem is exacerbated to some extent with hearing loss and even
a number of people with normal hearing have problems listening in noise.
I will talk about this hidden hearing loss.
Our common audiometric tests do not detect what is referred to as hidden hearing loss.
People with hidden hearing loss have difficulty hearing in noisy situations.
So, for some of you, it is really difficult listening.
For some, it may be impossible to listen in noisy situations.
Some of us just tune out when we find it extremely difficult to hear in these situations.
By cocktail noise, what we mean is background sound filled with voices, perhaps background music, and other sounds like clinking glasses and dishes, all of those different kinds of sounds that might be occurring.
For most of us, the brain has a natural ability to filter out background noise and make it relatively easy for us to focus on what we want to hear.
For some of us, that's not the case.
So, we have a good understanding of how the hearing mechanism processes sound, in other words, how the outer ear, middle ear, inner ear, and to some extent the auditory nerve, function and process sound.
But we have an incomplete understanding of how the human brain processes sound, including speech in conversations.
So our speakers were the following: Dr. Andrew Oxenham, PhD, from the University of Minnesota; he is the Director of the Auditory Perception and Cognition Laboratory in the Twin Cities. Then we have Dr. Evelyn Davies-Venn, PhD, also from the University of Minnesota.
She was unable to attend, but she provided a slide show with recorded audio, and it was almost like she was there.
You just couldn't see her.
She is the Director of the Sensory Aids and Perception Lab.
Dr. Norman Lee, PhD, was previously at the University of Minnesota doing research on a postdoctoral fellowship, but he has now taken a job at St. Olaf College, a great liberal arts college in Minnesota.
Then we have Dr. DeLiang Wang, PhD, out of Ohio State University; he is the director of the OSU Perception and Neurodynamics Laboratory.
Our final speaker was Nima Mesgarani, PhD, from Columbia University.
I made a mistake on the slide that I need to correct; he is at Columbia University.
[slide needs to be corrected] I will get that corrected.
I apologize for not catching that earlier.
Our first presenter was Dr. Oxenham and he primarily discussed the peripheral hearing
mechanism.
What I mean by that is the outer ear, middle ear, inner ear including the cochlea and balance
mechanism and auditory nerve.
So he did not focus on how the brain processes speech.
So he primarily discussed how the peripheral hearing mechanism encodes sound for the brain to interpret.
And one of the things he focused on was how hearing loss exacerbates the cocktail noise problem.
For most individuals, hearing loss is not in the outer ear or middle ear.
Most people have hearing loss that involves the inner ear, specifically the cochlea.
And he also discussed hidden hearing loss, and I will focus on that in a little more detail in a bit.
One of the things he showed in his slide show, and I use this when I talk about inner ear function, specifically the cochlea, was this animation, and I'm going to show a part of it here.
What it shows is that we have a number of membranes.
The cochlea is coiled up like a snail.
The animation unrolls the snail and part of the membrane so that it's on a flat surface.
Let me just play this.
There is the outer ear and the middle ear, and there is the cochlea coiled up like a snail, and there it's unrolled.
And then inside here, this is the membrane.
I hope the sound comes through.
It may not come through.
[music playing] So I'm going to stop it there.
You can see this is laid out like a piano keyboard; in part it responds well to high tones and in part to very low bass tones.
This is part of this process that takes place in the inner ear to send information to the
brain, for the brain to interpret.
One of the other things he showed: we have two different types of hair cells in the inner ear that sit on top of that basilar membrane that you saw and listened to.
The two types are the outer hair cells and the inner hair cells.
Now, the outer hair cell can change size.
And this happens when you listen to soft sounds.
And so I am going to play a little bit of this.
He played this to Rock Around The Clock by Bill Haley and the Comets.
I have a Spanish version here that I'm going to play.
You can see it's getting longer and shorter.
Beautiful tune there.
We call this motility where it's changing shape.
And you can see it looks like a long tube.
And it picks up very soft sounds and kind of amplifies them, transmitting this information to the inner hair cells, which in turn transmit an electrical code to the brain.
So the other thing he talked about seems complex.
Remember that outer hair cell that you looked at?
We have about 30,000 hair cells in the cochlea.
Each one of these hair cells is very finely tuned to a very specific tone.
And so, I don't know if my pointer works here, but if you can see it, you can see that this hair cell is very finely tuned right here to a specific frequency.
What happens when these hair cells get damaged, you can see here, is that the tuning is reduced.
It's not as finely tuned.
It's broadly tuned.
This is part of the reason why you have trouble hearing clearly, and also part of the reason why you have trouble hearing in noise.
It's harder to filter out the specific sounds that you would like to hear.
So a little bit about hidden hearing loss.
This is kind of connected to damage to the outer hair cells.
One of the causes of hidden hearing loss is damage that occurs in the auditory nerve.
Nerve fibers connect to the hair cells to transfer information to the brain.
We call these connections synapses.
When these synapses don't work properly, they lose their connectivity.
This is sometimes referred to as synaptopathy.
So not only do we have damage occurring in the inner ear to the outer hair cells and inner hair cells, but we also have problems with these connections.
And so we have some problems that exacerbate this problem of listening in noise.
Now, how does this hidden hearing loss occur?
There are several factors.
One is from noise exposure.
So I grew up on a farm and drove a tractor.
I do have a little trouble hearing in noisy situations.
That's probably due to the fact that I have had a lot of noise exposure.
As we age, the synapses also do not function as well.
This creates a problem in noisy situations.
The second speaker, as I mentioned, was unable to attend, but she did have a slide show and a recorded session.
One person said we couldn't see her standing up at the podium.
She is very short.
But she was unable to attend at the last minute.
We did get to hear her presentation.
Now, her research focuses on hearing aids, implants and hearing in noise.
As she mentioned and as we know, hearing aids and cochlear implants primarily improve audibility.
So they can increase the intensity, and maybe we can hear some frequencies that we didn't hear before.
But clarity of hearing is not always improved with cochlear implants or hearing aids.
So some of the technologies she talked about that are built into hearing aids and can help us hear in noisy situations are, number one, the use of directional microphones.
Now, almost all hearing aids and cochlear implants are coming with directional microphones.
In other words, they focus in a specific direction rather than being omnidirectional, picking up sound equally in all directions.
So if we are trying to speak to a person in front of us, that microphone will focus on that person.
That's proven technology that helps us hear better in noise.
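For readers who want to see the basic idea behind a directional microphone in code, here is a rough Python sketch of a first-order, delay-and-subtract design; this is only an illustration of the general principle, not how any particular hearing aid implements it, and the spacing and sample rate are made-up example values.

```python
import numpy as np

def delay_fractional(x, delay_samples):
    # Delay a signal by a possibly fractional number of samples using linear interpolation.
    n = np.arange(len(x))
    return np.interp(n - delay_samples, n, x, left=0.0)

def endfire_directional(front_mic, rear_mic, spacing_m=0.012, fs=16000, c=343.0):
    # First-order delay-and-subtract beamformer: sound from behind reaches the rear
    # microphone first, so delaying the rear signal by the acoustic travel time across
    # the port spacing and then subtracting it cancels sound arriving from the rear,
    # while sound arriving from the front mostly passes through.
    travel_time_samples = spacing_m / c * fs
    return front_mic - delay_fractional(rear_mic, travel_time_samples)
```

Real hearing aids also equalize the high-frequency tilt this subtraction introduces and steer the pattern adaptively, which is the adaptive processing described next.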
Digital signal processing, I will talk about this later.
Digital signal processing is where the hearing aid is trying to subtract noise from speech.
It helps a little bit, but it has some problems.
Another technology is adaptive digital signal processing.
In other words, if we were sitting in a group conversation at a table, the hearing aid will detect that a person to our right is speaking louder and will focus the directional microphone on that speaker.
And then if a different speaker on the other side speaks, it will adapt and turn the microphone directionality over to that area.
So that technology is helpful for some.
Another technology that many of you may use, and this is what I like to speak a lot about, is remote microphones and hearing assistive technologies.
So this is a great technology that is helpful for many.
Most of our hearing aids have connections to smartphones with an app that can adjust the hearing aid: tune it up a little bit in the higher frequencies, take off a little bit of bass, maybe make it easier to hear in noise in certain situations, at a restaurant, for example.
And then we can adjust it when trying to listen to the TV.
These are some of the strategies she talked about and some of the technologies that we now have.
The next speaker, Dr. Norman Lee, who is now at St. Olaf College, presented his research from two studies he was involved in.
The first one he talked about was a parasitic fly study.
This is a fly in the southern part of the United States that some of us have to deal with.
And it's unique because it has the most accurate directional hearing of any animal.
Studying this parasitic fly inspired the development of directional microphones.
Now, its hearing mechanism is very interesting.
It has two hearing mechanisms, but the eardrums are connected together.
This kind of limits some of the abilities of the fly, and in his study they documented that noise, such as cocktail noise, distracts the fly from being able to find, say, the back of a cow or a human face to land on.
And so if there is noise, it loses its sense of direction in locating the target.
So that was one of the studies he presented on.
So, you know, animal studies are important because sometimes they help us discover new
technologies or new ways of developing computer programs that might be able to help us hear.
And I will talk about that in a little bit.
Now, the other study, this is kind of the love story that I specifically wanted him to present on.
And, as you know, there are several different kinds of tree frogs.
One of them is the Cope's gray tree frog.
Some are similar, they are also tree frogs, but they are a little bit different.
So how does a female, on a nice spring evening, listening amid a din of noise to the calls of countless males, hopefully a Cope's gray tree frog, plus the other noises out there in the environment, find her perfect mate?
Well, researchers know that male calls include both the high and low frequencies changing
in loudness.
So when it croaks, it croaks with a high frequency sound and a low frequency sound at the same
time.
Now, female frogs are able to detect the simultaneous changes in these frequencies even in noise.
So in his study he had two parts.
The first part of this love story is that he devised several different sounds.
In one, he took out one of the simultaneous frequencies, either the low frequency or the high frequency, combined what was left with noise, and presented that to the frogs in his research lab.
And the frog was unable to accurately identify a Cope's gray tree frog male, a good male.
And so what they kind of proved from this is that the frog brain has the ability to do statistics.
When she hears both the high frequency sound and the low frequency sound at the same time, she has a mechanism in the brain that kind of filters out some of the noise so that she can hear that perfect male she wants to find.
Now, the second part of the study.
This is a little bit of evolution: the female wants to identify the frog that is making the most croaking sounds as well as the loudest croaking sounds, because this probably represents a stronger male.
And the female obviously wants to mate with a strong male so that she can have good offspring.
That was the second part of the study.
Kind of a little bit of a cute story, I think.
But he was able to show how these female frogs are able to listen in noise and find, you know, the ideal croaking sound from the ideal frog.
So these research findings might be able to be built into hearing aids and cochlear implants in the future.
Who knows where this might lead.
The fourth speaker was Dr. DeLiang Wang from Ohio State University.
For years, electrical engineers have tried to achieve removal of noise from speech.
And they have been able to do this to some extent using what is called a voice activity detector to identify the gaps between people's utterances.
In other words, if I stop speaking for a little bit, when I pause or between some of the speech sounds I make, whatever is there probably represents noise.
And so these kinds of voice activity detectors identify the gaps between my speech when I am not speaking, treat that as noise, and try to subtract that noise from the speech, leaving ideally noise-free speech.
In theory it would seem to work, but in practice it really doesn't work that well.
We call this spectral subtraction.
And this is used in many hearing aids.
That's what I referred to as digital noise reduction.
Trying to figure out what is noise and trying to subtract those background sounds out from
the speech.
But too often it removes too much speech or removes too little noise.
It is somewhat helpful, but it's not as helpful as we would like it to be.
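To make the idea concrete, here is a very simplified Python sketch of spectral subtraction; the function and the crude "lowest-energy frames are noise" rule are my own illustration of the general technique, not the code inside any hearing aid.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs=16000, frame=512, hop=256, floor=0.05):
    # Analyze the noisy signal into short-time spectra.
    f, t, Z = stft(noisy, fs=fs, nperseg=frame, noverlap=frame - hop)
    mag, phase = np.abs(Z), np.angle(Z)

    # Crude voice activity detection: assume the quietest 20% of frames are noise only.
    frame_energy = mag.sum(axis=0)
    noise_frames = frame_energy <= np.quantile(frame_energy, 0.2)
    noise_spectrum = mag[:, noise_frames].mean(axis=1, keepdims=True)

    # Subtract the noise estimate from every frame; clamp to a floor to limit artifacts.
    cleaned_mag = np.maximum(mag - noise_spectrum, floor * mag)
    _, cleaned = istft(cleaned_mag * np.exp(1j * phase), fs=fs, nperseg=frame, noverlap=frame - hop)
    return cleaned
```

Even in this toy version you can see the weakness: whatever the detector mislabels either gets subtracted out of the speech or left in as noise.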
So in 1990, Dr. Bregman, a psychologist, proposed that the human auditory system organizes sounds
into distinct auditory streams.
Now, you can see the picture over there with the police car and the siren and two people talking and the dog barking; these are called auditory streams.
And our brain and our ears are able to tell the difference.
That's a siren.
That's people talking.
That's a dog barking.
Similarly, if we go to a concert and listen to an orchestra or a symphony, most of us can detect: those are violins, I think those are saxophones, that's a trumpet.
We can hear these different distinct auditory streams.
Listening to these sounds, or these auditory streams, is sometimes referred to as scene analysis.
So think about that.
If we look at a picture, we can analyze the scene.
Our ears and our brain do the same thing with sound.
We can usually detect, you know, the different auditory streams that Dr. Bregman refers to.
This is kind of a theory.
And Dr. Andrew Oxenham does a lot of research in this area of scene analysis.
But, anyway, Dr. Wang, thinking about that theory of scene analysis and auditory streams, created a speech filter.
He was the first one to do this.
It was designed on the principle of auditory scene analysis; in other words, it detects speech streams in noise.
And he was successful at separating the speech signal from noise much better than previous attempts had been able to do.
However, this only worked in the lab under certain circumstances and didn't work in the
real world.
And he kept working away.
He gave the most technical talk, and hopefully I can make it more understandable than it might have seemed if you were present at the research symposium.
So he built a machine learning program.
What do we mean by machine learning program?
A machine learning program is part of artificial intelligence.
So these are computer programs.
Some of these artificial intelligence programs have been able to beat the best chess players and other game players in the world.
After several attempts at playing against a person, the program has been able to learn and then outplay them consistently.
That's one part: machine learning, one of the types of artificial intelligence.
Now, the second component is neural networks.
And you can see a picture of this that I included here.
Neural networks are kind of modeled on actual neurons in the brain.
And we pretty much know that this is kind of how we learn new tasks, such as how to swing a bat to hit a baseball, play an instrument, or make new speech sounds; we have neural networks in the brain that kind of learn.
And so we get better with practice.
And these artificial neural networks, which are really computer programs, are designed the same way.
And they are making all sorts of decisions.
I think I heard noise coming in here; oh, that's this person speaking here; that might be the TV going, for example.
It learns, you know, these different categories of sounds and then does some filtering and further processing.
And then it makes some decisions and does more processing.
He built a very complex system using these two concepts, machine learning and neural networks, to listen to speech and filter out noise.
I'm hoping that I can get some of his samples to play.
I don't know if any of you have Alexa or Google Assistant or Siri or Cortana at home.
Most of us have some of these on our Smartphones.
We can speak into the microphone and order pizza or order something, you know, online.
And so these systems are getting pretty good.
In fact, they are getting as accurate at identifying words as humans.
These devices have 95% accuracy for words.
This is some of the complexity that is built into this device.
This gives you an example.
This is using artificial intelligence, machine learning to pick up your voice and follow
commands, for example.
So to kind of summarize a little bit of the model that Dr. Wang has developed here: he can place noisy speech with a lot of background sounds into a filter here.
And it's making decisions.
Is that noise?
Is that speech?
It plays the sound in, and these neural networks are making decisions and filtering, and they actually figure out: that's noise, that's a person's speech, I want to focus on that.
It's kind of covered up here; I don't know if I see it.
The speech comes out the other end cleaned up, with the noise taken out.
He has developed a pretty good system.
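As a rough illustration of this kind of deep-learning speech enhancement, and only an illustration, here is a toy Python/PyTorch sketch of mask-based enhancement: a small neural network learns, from pairs of noisy and clean examples, how much of each frequency bin to keep. Dr. Wang's actual system is far larger and more sophisticated; the layer sizes, names, and placeholder data below are my own assumptions.

```python
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    # Takes the magnitude spectrum of a noisy frame and predicts, for each frequency
    # bin, how much of that bin is speech (a value between 0 and 1). Multiplying the
    # noisy spectrum by this "mask" suppresses the noise-dominated bins.
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins), nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, noisy_magnitude):
        mask = self.net(noisy_magnitude)
        return mask * noisy_magnitude  # enhanced magnitude spectrum

# Training sketch: the network learns from pairs of (noisy, clean) spectra.
model = MaskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.rand(32, 257)   # placeholder batch of noisy magnitude spectra
clean = torch.rand(32, 257)   # matching clean-speech spectra used as the target
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
optimizer.step()
```

The key difference from spectral subtraction is that nothing is hand-coded about what counts as noise; the network figures that out from thousands of training examples.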
I've got an article that he wrote summarizing his work here, if you want to read a little bit more; I'm going to click on this.
I'm hoping this doesn't mess me up too much.
I'm going to go to this article and see if comes up here quickly.
If not, I will move on.
How am I doing on time here?
>>NANCY MACKLIN: I think you're doing just fine.
>>PERRY HANAVAN, AuD: I've only got one more speaker to talk about.
I'm going to move down here.
At the bottom of this, I'm going to play a sample that he played.
The first part, I should have covered this up, is just what is said.
The first part is trying to hear this speech in noise.
And then in the second part, his computer program and deep learning machine filter the noise out.
So let's listen.
>> The man called the police.
>>PERRY HANAVAN, AuD: Some of you with hearing loss, that still may not have been good.
I couldn't detect this sentence at all in the first part with all of that noise.
With his device he's able to get rid of most of the background noise.
I will play another one here.
>> It's getting cold in here.
>>PERRY HANAVAN, AuD: Yeah, that's amazing.
He has a family member with hearing loss in China.
So he's been working for years and years on trying to develop the technology to help her hear better in noisy situations.
I'm hoping we are back, but I think okay.
I'm going to expand this.
I hit the right button.
Oops.
There is where I wanted to be.
Too much showing here.
There we go.
So I need to advance forward here.
It's going to cooperate.
Let's see.
>>NANCY MACKLIN: You just want to go back to present mode, right?
>>PERRY HANAVAN, AuD: Let's see.
Why am I not seeing that?
I was in the wrong thing.
Present mode, let's see.
Why is that not showing up here?
Apologize here.
Hang on a second.
Let's see.
Should appear on the screen here.
Why is that not showing up?
There we go.
So the final speaker is Dr. Nima Mesgarani from Columbia University.
And his research is a little bit different than Dr. Wang's.
Dr. Wang's research is kind of based on this theory of auditory streams.
And he's developed a model based on that.
Well, Dr. Mesgarani's research is about how the brain actually processes acoustic signals.
So he's in a sense attempting to reverse engineer human sound processing in the brain into machines
and computers.
And so potentially to develop new systems and devices to help people with hearing loss hear better.
So initially he devised methods to reconstruct the sounds the brain listens to and ignores by measuring human brain waves.
This is what we sometimes do: we just totally tune out when we are listening in cocktail noise.
In his first study, he used volunteers undergoing brain surgery.
He had them, during surgery, listen to sentences spoken by different people simultaneously, so several people speaking at the same time, while measuring their brain waves.
He fed these brain waves into a computer algorithm that his lab had developed.
And the computer was able to reproduce the words the patients paid attention to while ignoring the other speech.
So he discovered the brain is able to filter messages.
So when I want to listen to a speaker and kind of ignore the other, my brain kind of
turns up the sound of that speaker and kind of reduces the sound of the other.
He wrote a program that was able to kind of reproduce this.
So this computer program can essentially translate a person's auditory brain waves into real
words.
Now, I couldn't find the sound recording.
He actually hooked up electrodes to the brain, and he was able to decode this coded message in the brain.
Now, basically the message in the brain is just clicks.
He was able to take all those clicks with a computer program and decode them so it sounded like a person was speaking out of the brain.
So his research laid the foundation for brain-machine interfaces.
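For the technically curious, the general approach in this line of work, often called stimulus reconstruction, can be sketched as a simple linear decoder fit with ridge regression; this is my own simplified Python illustration of the idea, not Dr. Mesgarani's actual method, and real decoders also use time-lagged copies of each recording channel.

```python
import numpy as np

def fit_envelope_decoder(brain_activity, speech_envelope, lam=1.0):
    # brain_activity: array of shape (time, channels) of recorded neural signals.
    # speech_envelope: array of shape (time,) for the sound the listener attended to.
    # Ridge regression finds weights that map neural activity back to the envelope.
    X, y = brain_activity, speech_envelope
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return w

def reconstruct_envelope(brain_activity, w):
    # Apply the learned decoder to new neural data to estimate what the listener heard.
    return brain_activity @ w
```

The point is that once such a decoder is trained, the brain waves alone are enough to estimate which sound the person was attending to.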
The next study used speech samples listened to by persons undergoing clinical evaluations for epilepsy surgery.
And he discovered how vowels and consonants are encoded, or we can simply say coded, by recording brain wave activity from a person's brain.
The major part of the brain where we hear is the temporal lobe; that's the area above the ear here in the skull.
So we have a temporal lobe on this side and a temporal lobe on the other side too.
This research is designed to solve the cocktail noise problem.
With electrical engineering technology, he was able to decode what the brain hears and recognize vowels and consonants and recognize people's voices.
So based on his research understanding how the brain processes speech and how the brain pays attention, he has developed what he calls the cognitive hearing aid.
I will show this and play a little bit.
Similar to Dr. Wang's device, the cognitive hearing aid uses neural networks and machine learning to decode the speech heard by the listener.
They are just taking different approaches and writing different computer programs.
Each of them is kind of on track, discovering similar kinds of things, and hopefully this will lead to a hearing aid.
I'm going to the next slide here.
I'm going to just talk about this cognitive hearing aid for a second.
It automatically separates out the voices of multiple speakers in a group.
Next, it compares the voice of each speaker to the brain waves of the person wearing the cognitive hearing aid.
And then the speaker whose voice pattern most matches the listener's brain waves is amplified.
So with an electrode connected to the brain, he's able to tell which voice I'm actually listening to, or you are listening to.
He connects this to a hearing aid.
And that helps the hearing aid focus on that voice.
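Here is a rough Python sketch of just that matching step, to illustrate the idea: compare each separated voice to the envelope decoded from the brain waves and boost the best match. The function name, gain values, and crude envelope are my own invention, not the actual cognitive hearing aid.

```python
import numpy as np

def select_attended_speaker(separated_speakers, decoded_envelope, boost=4.0, cut=0.25):
    # separated_speakers: list of 1-D arrays, one per separated voice, assumed to be
    # aligned sample-for-sample with the envelope decoded from the brain waves.
    # Correlate each voice's crude loudness envelope with the decoded envelope,
    # then remix the scene so the best-matching voice is boosted and the rest are cut.
    correlations = [np.corrcoef(np.abs(s), decoded_envelope)[0, 1] for s in separated_speakers]
    attended = int(np.argmax(correlations))
    gains = [boost if i == attended else cut for i in range(len(separated_speakers))]
    return sum(g * s for g, s in zip(gains, separated_speakers))
```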
So hopefully this will make sense to you.
Here he has an electrode hooked to this person's brain, and it's doing all sorts of processing.
It's listening to the speakers, but it's figuring out, among multiple speakers and background sounds, which one of these people to focus on by being hooked into the hearing aid.
So let me play this.
>> How to be a shepherd.
Being a shepherd can be a lot of fun.
>>PERRY HANAVAN, AuD: You're hearing two voices, a male and a female voice.
The brain is going to focus on [multiple people speaking at once] the male voice and find that male voice.
Male Speaker: As well as size health and required maintenance.
For example, long haired –
>>PERRY HANAVAN, AuD: Now he's going to listen to the female.
Female Speaker: Why is sheepdog?
Some sheepdogs are better than others.
You would do best to look on a dog market online –
>>PERRY HANAVAN, AuD: And notice how it decreased the unwanted sound and increased the wanted sound.
So, you know, perhaps in five or ten years, we may have what's called a cognitive hearing
aid.
So in summary, I'm running a little over.
Listening in noise is a difficult task.
Hearing aids and implants are helpful but have limitations.
Human and animal studies are increasing our understanding of how the brain processes speech in noise, and this is beginning to be introduced into hearing instruments.
So through the use of deep learning, machine learning, and neural networks, we will probably be able to incorporate these advances into hearing aids and help us hear better in noise.
With the limited time remaining, we have time for questions and answers.
>>NANCY MACKLIN: Perfect.
That's great.
Thank you very much.
Makes me really sorry I wasn't there for all of these presentations at the convention.
They're such interesting presentations, and you did a great job of boiling them down and making them understandable for those of us who are not scientists.
>>PERRY HANAVAN, AuD: Hopefully.
[Laughter]
>>NANCY MACKLIN: There are a couple of questions.
And I apologize.
Our suite is now being vacuumed.
So I hope that it's not too much noise in the background here.
First question, will there ever be hearing aids that make speech clearer?
>>PERRY HANAVAN, AuD: Yes, I do think so.
With the last two presenters, Dr. Wang and Dr. Mesgarani, and other researchers trying to develop new devices using what we call artificial intelligence, specifically deep learning or machine learning using neural networks and other kinds of technology, and as we better understand how the brain, you know, interprets this coded message from the inner ear, I think that we will.
Now, there are going to be two hearing aids on the market that are using artificial intelligence.
One of the hearing aids, from, why can't I think of it, Widex, is kind of making decisions like a chess player.
So when a person goes into different environments, such as a noisy restaurant, listening to TV, being in a group of people, and then listening to one speaker, it's learning from how the person is adjusting their smartphone to adjust their hearing aid.
It's learning to better adjust the hearing aid in these different environments, rather than the person wearing the hearing aid having to manually adjust it.
So that's one thing that is coming along.
Starkey is coming out with a new hearing aid beginning in September or October.
It has built-in artificial intelligence.
It's not focusing so much on clearing up speech; it has sensors built into the hearing aid.
So that when you stumble or fall, it will maybe call a family member and alert them that you have fallen.
>>NANCY MACKLIN: Wow.
>>PERRY HANAVAN, AuD: Or you can have it send information to the doctor's office.
And then you can communicate directly via the hearing aid and the smartphone with a physician or family member to see if you need help.
It can also measure, I think, blood pressure, heart rate, body temperature.
So there are a number of sensors built into that particular hearing aid, all using kind of machine learning to detect these kinds of things.
So I think, yeah, this is a hot, hot item.
So how soon, I can't tell you exactly.
But both Dr. Wang and Dr. Mesgarani expressed hope that this would be coming and be able
to be built into hearing aids and cochlear implants, the technologies they were working
on.
>>NANCY MACKLIN: It does seem with AI that that capability is just right around the corner?
>>PERRY HANAVAN, AuD: Well, I think it's more around the corner than we think.
Machine learning and deep learning, some say, are subsets of artificial intelligence.
And with these programming capabilities it is much faster to write the code compared to what we call the basic foundations of artificial intelligence.
If you have Alexa on your smartphone, it's amazing how accurately, if you speak clearly, it converts speech into text.
>>NANCY MACKLIN: Uh huh.
>>PERRY HANAVAN, AuD: Speech recognition abilities are really increasing.
Not really that good in noise yet.
But I think from Dr. Wang's research and Dr. Mesgarani's research these technologies are
going to be built in.
Perhaps from Dr. Lee's research, some of these findings may be built in.
>>NANCY MACKLIN: Lisa asks, have you heard about the Bose earphone?
It's a lightweight collar with earbuds; you remove your hearing aids.
And it pairs with my iPhone so I can control it from there.
She says, I have moderately severe hearing loss; my frequency loss is below all letters.
The Bose earphones are better than my expensive hearing aids in all noisy situations.
And I can make out accents much better as well.
>>PERRY HANAVAN, AuD: Yes.
I am not familiar, I have not done a lot of research and read up on Bose technology.
They are using a little bit of what we refer to as digital noise reduction.
They have written programs that are trying to detect speakers versus noise.
And subtract that noise out of the speech.
And for some, it is quite helpful.
And for others, not so helpful.
That's great that it's helpful for you.
I'm not hearing you right now.
>>NANCY MACKLIN: I'm sorry.
Has the research by Dr. Mesgarani been published in a publicly accessible place?
Mary would like to read it.
>>PERRY HANAVAN, AuD: Yes.
One of the articles I referred to was published in 2012.
The next one was in 2014.
And then he published another one or two; he has a great number of research publications.
If you go to pubmed.org, and type his name in, maybe put his name in quotes, you can
turn up research articles.
Some he's the lead author.
In other articles, he's one of the maybe five or six researchers included in the research.
So that's one way to find specifically his research.
>>NANCY MACKLIN: Okay.
Maybe we can even get him to write an article for "Hearing Life".
>>PERRY HANAVAN, AuD: That would be great.
Just might do that.
>>NANCY MACKLIN: That would be great.
Gloria Matthews asked, I'm 35 and into technology.
How can I follow speech to text technology as well as advancements made in hearing technologies?
Is there one go to place to keep up on all of that, Perry?
>>PERRY HANAVAN, AuD: Come every year to HLAA convention.
[applause]
>>PERRY HANAVAN, AuD: Double the check, Nancy.
No.
It's a great place.
A lot of researchers and a lot of folks and technicians that are developing technologies for persons with hearing loss come and exhibit some of their devices, which oftentimes you can try out.
And this last year, there were six or seven really great new technologies I was not aware of.
And so that was fun to explore.
Some of these are smartphone apps.
Some of these are fairly inexpensive devices.
Some of these are future devices that they are exploring and trying out with people attending
the convention.
That's one place.
That's the greatest place.
Now, you know, I'm an audiologist.
So I go to the American Academy of Audiology convention every year.
And there are a lot of new technologies there that I can learn about and explore, and presentations on them.
And I assume you are kind of interested in the acoustic technologies?
>>NANCY MACKLIN: Speech to text is what she said, and also just the advancements in hearing technologies, whether it be, I'm assuming she is referring to, hearing aids, as well as assistive devices.
It's one of those things that we kind of try to tackle here with our new website.
We are hoping to have a section on emerging technologies.
But we just don't have a staff right now to keep up on everything.
But we do have technology articles in "Hearing Life" as well.
>>PERRY HANAVAN, AuD: This is one of those things; you know, I Google a lot of stuff.
You have to be like a librarian to know the correct terms to put in, because so many items turn up, and trying to find the specific articles you want is sometimes challenging.
When I type in speech recognition technologies, lots of different things turn up that are not related to what I want to learn about.
I'm trying to think of some blogs out there that might be helpful.
But I would keep in tune with HLAA.
You might go to hearingreview.com.
A lot of the introductions of new hearing aid technologies and new discoveries are published there in very brief articles.
So hearingreview.com.
>>NANCY MACKLIN: Okay.
The site that you mentioned earlier, Perry, was that pubmed?
>>PERRY HANAVAN, AuD: Dot org.
>>NANCY MACKLIN: Dot org.
Okay.
Maybe we can try to corral all of these great sites and post them on our webinar replay
page.
I will try to remember to do that.
Time for one more question, I think.
This is from Russell.
He says interesting research.
But what is being done to improve directional mics?
Seems like this could have good results in a relatively short time.
Seems to me that today's directional mics are easily overwhelmed in really noisy environments.
Good question.
>>PERRY HANAVAN, AuD: Well, all of the hearing aid industry manufacturers continue to work on directional microphones.
And, you know, when we look at some of the studies on animals that are good at directional detection, all of the engineers look at those studies and try to incorporate what they learn from them into the technology.
Part of the problem is that hearing aids are small, and it is hard to fit a very tiny little microphone; we keep shrinking the size of hearing aids, and we are able to do that because basically we are putting computers into the hearing aids and we keep shrinking the size of that little computer we tuck inside.
I think a lot of this is that we are going to have to use deep learning, and maybe the smartphone connected to the hearing aid will do the processing; that might be one of the things that, I think, will be used in the short term.
>>NANCY MACKLIN: PC asked about where patients can go to try devices or become part of studies.
Coming to the HLAA convention, once again, is a place where you can try all of these
devices.
You will never see so many gadgets under one roof.
But if you are talking about any kind of medical research going on, HLAA recently partnered
with Research Match.
And you can find information about that on our website under make an impact.
If you go to hearingloss.org and go to the make an impact tab, you will see more information
about that.
That's a place where you can explore what research is going on and possibly one day
even participate in a study.
I think we have to call it a night.
It's 9:00 according to my watch.
Thank you, Perry, very much.
I can't thank you enough, not only for doing this webinar tonight, but it was Perry's idea for the whole research symposium on this topic of listening in noise, and he corralled all the presenters and worked with them to make sure they were on board.
And it really was a fabulous research symposium.
I've seen great reviews about it from people that were in attendance.
Thank you, Perry, very much for doing that.
>>PERRY HANAVAN, AuD: Well, thank you for trusting in that proposal that I sent.
It was just an area that needed covering; I went through all the past research symposiums and couldn't find one on this topic.
We needed one on this.
>>NANCY MACKLIN: You're right.
We were overdue for this topic.
It's definitely one we need to keep an eye on as well as technology changes and so forth.
So thank you again for presenting tonight.
And thank you, Donna, for providing CART.
Good night, everybody.
>>PERRY HANAVAN, AuD: Good night.
Have a great evening.
Note From Captioner: Meeting is over.
Thank you!