"Heightened Uncertainty as a Possible View of Aphasic Language Comprehension" and "Statistical Learning of Syllables and Tones: An fNIRS Study"

CNBC Brain Bag
Center for the Neural Basis of Cognition (CNBC)


Michelle Holcomb and Sandrine Girard
University of Pittsburgh and Carnegie Mellon University
March 27, 2017 - 6:00pm
Mellon Social Room

Abstract: "Heightened Uncertainty as a Possible View of Aphasic Language Comprehension"
In my talk, I will explore a novel view, complementary to previous models of language comprehension in people with aphasia (PWA), in which heightened representational uncertainty resulting from damage to the neocortex may explain behaviors observed in aphasic comprehension. I will first discuss preliminary evidence for heightened uncertainty in PWA's language comprehension, including studies in which PWA rely more heavily on prior probability than on input form, and a study showing that increasing younger neurotypical participants' uncertainty yields patterns of interpretation similar to those of PWA. I will then lay out future studies our lab hopes to conduct to further test this view.

Abstract: "Statistical Learning of Syllables and Tones: An fNIRS Study"
Successful language acquisition requires learners to segment units embedded within larger structures; for example, words within phrases. Infants and adults employ a learning mechanism, statistical learning (SL), that facilitates the segmentation process by tracking the statistical regularities that define the linguistic input (e.g., syllables within a word are more likely to co-occur than syllables across word boundaries). While behavioral studies indicate that learners are sensitive to statistical structure, the neural correlates of SL remain undefined. We utilized functional near-infrared spectroscopy (fNIRS) to measure changes in blood oxygenation in Broca's area and its right-hemisphere counterpart while undergraduates completed a tone and syllable SL task. Two versions of this study were conducted to pinpoint the neural signature of learning. In the pre-training version, participants were familiarized with the words from an artificial language before being presented with 30-second blocks of continuous sound (alternating between statistically structured and unstructured stimuli) interspersed with 30 seconds of silence. In the version without pre-training, participants were immediately exposed to 30-second blocks of continuous sound (only structured or unstructured stimuli were presented within a task) interspersed with silence. A behavioral measure of learning was also administered to participants in the version without pre-training. We predicted an increase in blood oxygenation for structured syllables and tones in the left hemisphere. We also predicted that an increase in neural activity following exposure to the structured stimuli would be positively correlated with behavioral accuracy. The results and implications of these studies will be discussed.

Food will be provided, so please RSVP here by Friday, March 24, at 12:00pm.