
Intermodal perception

The proposed research investigates how and under what conditions various aspects of social events become salient, attended, and perceived, and how this changes across development from infancy through early childhood. In particular, this proposal explores the developmental course of infants' perception of faces, voices, and amodal properties of speech (tempo, rhythm, and intensity) in unimodal auditory, unimodal visual, and multimodal audiovisual stimulation, using convergent measures of heart rate, eye tracking, and infant-controlled visual habituation. Five specific aims systematically explore the conditions that facilitate versus attenuate learning about faces, voices, and amodal properties of speech. Predictions concerning the role of redundancy across the senses in promoting and organizing the development of attention, perception, and learning about different properties of events in multimodal and unimodal stimulation, generated from our model of selective attention (the intersensory redundancy hypothesis), will be tested. By investigating multimodal and unimodal perception under a single framework, we will provide a basis for integrating separate literatures and reveal important interactions between modality of stimulation (unimodal, multimodal) and attention to properties of events (redundantly versus nonredundantly specified) that cannot be detected in separate research designs. We use a novel combination of convergent measures: visual habituation and recovery reveal what properties of audiovisual speech events infants detect (faces, voices, amodal properties of speech), heart rate indexes the depth and efficiency of processing, and eye tracking reveals which features of dynamic faces infants selectively attend to under different conditions (redundant vs. nonredundant). By including measures across different levels of analysis, critical controls for amount and type of stimulation, manipulations of task difficulty, and effects of repeated exposure, we will reveal much more about the nature, basis, and processes underlying the attentional salience of social events than can be revealed by separate studies or single measures.

Intermodal perception

Two studies were conducted to examine the roles of facial motion and temporal correspondences in the intermodal perception of happy and angry expressive events. Seven-month-old infants saw two video facial expressions and heard a single vocal expression characteristic of one of the facial expressions. Infants saw either a normally lighted face (fully illuminated condition) or a moving dot display of a face (point light condition). In Study 1, one woman expressed the affects vocally, another woman expressed the affects facially, and what they said also differed. Infants in the point light condition showed a reliable preference for the affectively concordant displays, while infants in the fully illuminated condition showed no preference for the affectively concordant display. In a second study, the visual and vocal displays were produced by a single individual on one occasion and were presented to infants 5 sec out of synchrony. Infants in both conditions looked longer at the affectively concordant displays. The results of the two studies indicate that infants can discriminate happy and angry affective expressions on the basis of motion information, and that the temporal correspondences unifying these affective events may be affect-specific rhythms.