Week 33

Presentation notes pt.2

This week I was working on my presentation for the KU symposium on Saturday (April 28th, 2018). I thought it would be a good idea to practice my presentation with the SANLAB and some of my friends. Though, I find that one can keep working on a presentation until the last minute:

slide1

slide2

This project got started because I was interested in how speech recognition systems work. Interactions like the one pictured on the right got me interested in how children acquire language.

slide3

This got me interested in how speech recognition (perception) works in general, that is, how speech recognition works between two human speakers.

slide4

slide5

When speech recognition happens between an adult and a child speaker, verbal communication can be difficult.

slide6

But over time, as we continue to throw speech at them, they start to recognize what we’re saying.

slide7

And when we talk to Alexa (or any speech recognition system), it usually recognizes us out of the box!

slide8

This is possible because such speech recognition systems are "trained" on normative adult speech and are thus optimized to recognize typical adult utterances. So, somewhere inside that device is a statistical model of speech.
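To make that a bit more concrete, here is a minimal, made-up sketch of what "a statistical model of speech" can mean at the word level: a bigram model that learns word-to-word transition probabilities from a few adult utterances and then scores how probable a new utterance is. The utterances are invented for illustration, and a real system like Alexa of course models acoustics, not just word strings.

```python
# A toy "statistical model of speech" at the word level (hypothetical;
# not Alexa's actual model): bigram transition probabilities estimated
# from a handful of adult utterances, with add-alpha smoothing.
from collections import Counter
import math

adult_utterances = [
    "what is the weather today",
    "set a timer for ten minutes",
    "play some music please",
]

unigrams = Counter()
bigrams = Counter()
for utt in adult_utterances:
    words = ["<s>"] + utt.split()          # <s> marks the start of an utterance
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def log_prob(utterance, alpha=1.0):
    """Smoothed log probability of an utterance under the bigram model."""
    words = ["<s>"] + utterance.split()
    vocab = len(unigrams)
    total = 0.0
    for prev, cur in zip(words, words[1:]):
        total += math.log((bigrams[(prev, cur)] + alpha)
                          / (unigrams[prev] + alpha * vocab))
    return total

print(log_prob("play some music"))       # familiar adult phrasing: higher score
print(log_prob("wanna go night night"))  # child-like phrasing: lower score
```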

slide9

So we said: let’s pretend Alexa is a baby by training it on child-directed speech and testing whether it can still recognize typical adult utterances.
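At the text level, the core of that experiment can be sketched as follows: fit one simple model to child-directed utterances and one to adult utterances, then compare how well each predicts held-out adult utterances. The corpora below are made up, and this unigram toy stands in for the actual models used in the project.

```python
# Hypothetical sketch of the "pretend Alexa is a baby" idea at the text
# level: fit one unigram model to child-directed speech and one to adult
# speech, then see how well each predicts held-out adult utterances.
# All utterances below are invented for illustration.
from collections import Counter
import math

child_directed = [
    "look at the doggie",
    "wanna go night night",
    "where is your teddy",
]
adult_directed = [
    "set a timer for ten minutes",
    "what is the weather today",
    "play the news briefing",
]
adult_test = [
    "what is the weather for today",
    "set a timer please",
]

def train_unigram(corpus):
    counts = Counter(w for utt in corpus for w in utt.split())
    return counts, sum(counts.values())

def avg_log_prob(model, test, alpha=1.0):
    counts, total = model
    vocab = len(counts) + 1                  # +1 slot for unseen words
    logp, n = 0.0, 0
    for utt in test:
        for w in utt.split():
            logp += math.log((counts[w] + alpha) / (total + alpha * vocab))
            n += 1
    return logp / n                          # closer to zero = better fit

print("trained on child-directed speech:", avg_log_prob(train_unigram(child_directed), adult_test))
print("trained on adult speech:         ", avg_log_prob(train_unigram(adult_directed), adult_test))
# The model trained on child-directed speech assigns lower probability to
# the adult test utterances, which is the asymmetry the project probes.
```

In the actual project the models are trained on real speech corpora rather than a handful of invented sentences, but the logic of the comparison is the same.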

KU Symposium results

Overall I thought the presentation I gave went fine. I had 10 minutes, so there wasn’t a lot of time to go into detail about what I’m currently doing with the project. But yeah, Rebekah and I were both given presentation awards for our respective presentations.

Best,
EO