
8th December 2021

Multimodal child-directed communication in concurrent and displaced learning contexts
Yasamin Motamedi (University of Edinburgh)


Vocabulary learning, a central challenge in language development research, is characterized as a hard problem: how do children know that the sounds and signs people produce are ‘words’ for objects, actions and properties? Underpinning most proposals is the assumption that the input children receive is arbitrary, meaning that there is no recognizable link between how we communicate and what we communicate about. This arbitrariness makes it difficult to link label and referent in a busy visual scene, and the difficulty multiplies when we recognize that language is frequently displaced — that is, we often talk about objects that are not physically present, or about events that have already happened or have not yet happened. In this talk, I will argue that the richness of the multimodal communication children receive in the input from caregivers offers a diverse range of non-arbitrary representations that can help children link label and referent. In particular, I suggest that iconic cues, which provide imagistic representations of referents, may be especially useful in displaced learning contexts, when the referent is not physically present, while indexical cues, which can provide a visual link to the referent, are useful when the referent is physically accessible. I will present data from a corpus of multimodal child-directed communication, in which caregivers and their 2- to 4-year-old children interact with sets of toys and then talk about the toys when they are no longer present. Analysis of the multimodal cues caregivers use in different learning contexts indicates that caregivers differentially use iconic and indexical representations, in ways that can help children learn conceptual and linguistic information about intended referents.