Week 1
Interpretation
Before I dig into my first weekly findings, I would like to give some insight into why this project is interesting to me. First and foremost, it’s about understanding people, particularly the littlest of us. How do we develop into the beings that we are? I mean, who am I and why do I do what I do? These questions are philosophical and extremely difficult to answer, but they often produce extremely creative and interesting interpretations of what it means to be human.
I think we could learn a lot by first trying to understand the cognitive machinery within humans and how it disposes us to interact with the physical and non-physical world the way we do. At the heart of this is how we communicate with ourselves and others to cause change within our environments. Put more plainly, in order to express our wants and needs we must communicate, and in order to communicate consistently and efficiently we must develop a robust system of communication. For us humans, that system is language, a complex protocol that we build and refine over our lifetimes, even though we often take it for granted in our day-to-day lives. To many young children, language is still a mystery; meanwhile, many adults marvel at how young children manage to learn such a complex system at all.
In fact, the overarching goal of this project is to better understand how young children’s cognitive machinery is used for language processing. For instance, what elements are important for children acquiring language? Our team is approaching this question from a computational perspective, building two types of computational models to represent early child language acquisition. These two models will hopefully give insight into the underlying cognitive mechanisms. Rebekah, my CREU partner, will be developing network models to represent the vocabulary relationships of young children. Make sure to check out her blog here.
My portion of the project entails designing a speech recognition system trained on child-directed speech (baby talk) to model how young children acquire language from their environments, specifically from their parents. There are many different types of speech recognition systems (and underlying models on which they’re based). We initially proposed to build a hidden Markov model (HMM) based speech recognition system, as HMMs have long been the standard for speech recognition. Hidden Markov models are relatively good at capturing the varying acoustic statistics of natural language (or, more generally, of any sufficiently complex process). It should be noted that many recent speech recognition systems have also used recurrent neural networks and deep neural networks for acoustic modeling. So, the tentative plan is to start by investigating HMM-based recognition systems and, if possible, move on to other models.
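To make the HMM idea a little more concrete, here is a minimal toy sketch of the forward algorithm, which scores how likely an observation sequence is under a given model. Everything in it (the two hidden states, three discrete symbols, and all probability values) is made up purely for illustration; the actual recognizer would work with continuous acoustic features, many more states, and a full toolkit rather than hand-written code like this.

```python
import numpy as np

# Toy HMM: two hidden "phone-like" states emitting one of three
# discrete acoustic symbols. All names and numbers are hypothetical.

# Transition probabilities: A[i, j] = P(state j at t+1 | state i at t)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Emission probabilities: B[i, k] = P(symbol k | state i)
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])

# Initial state distribution
pi = np.array([0.6, 0.4])

def forward_likelihood(obs):
    """Forward algorithm: P(observation sequence | model)."""
    alpha = pi * B[:, obs[0]]               # initialize with the first symbol
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]  # propagate through states, re-weight by emission
    return alpha.sum()

# Likelihood of a short, made-up observation sequence
print(forward_likelihood([0, 1, 2, 1]))
```

The nice part is that this same machinery scales up: swap the discrete symbols for acoustic feature vectors and the toy states for word or phone models, and the forward score becomes the basis for deciding which word sequence best explains the audio.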
Our two models and their analyses stand to represent the top-down (network analysis) and bottom-up (speech analysis) modeling approaches. This is cool because, as our two projects progress, the models will begin to inform each other, ultimately producing a unified framework for language processing in young children.
Best,
EO