Project: Listening to Speech and Non-speech Sounds Activates Phonological and Semantic Knowledge Differently

DOI: 10.1177/1747021820923944

This project relates to the published manuscript available at https://doi.org/10.1177%2F1747021820923944

The authors describe the study and its purpose in the abstract as follows: How does the mind process linguistic and nonlinguistic sounds? The current study assessed the different ways that spoken words (e.g., "dog") and characteristic sounds (e.g., barking) provide access to phonological information (e.g., the word-form of "dog") and semantic information (e.g., knowledge that a dog is associated with a leash). Using an eye-tracking paradigm, we found that listening to words prompted rapid phonological activation, which was then followed by semantic access. The opposite pattern emerged for sounds, with early semantic access followed by later retrieval of phonological information. Despite differences in the time courses of conceptual access, both words and sounds elicited robust activation of phonological and semantic knowledge. These findings inform models of auditory processing by revealing the pathways between speech and non-speech input and their corresponding word forms and concepts, which influence the speed, magnitude, and duration of linguistic and nonlinguistic activation.

Project Descriptor(s): Developmental Design
Funding Agency / Grant Number: Eunice Kennedy Shriver National Institute of Child Health and Human Development / HD059858

Most Recent Code in Project

There is no code for this project.