Artificial Grammar Learning in children, adults, animals and machines
Abstract
Human languages all have a grammar, i.e. rules that determine how symbols in a language can be combined to create complex meaningful expressions. Despite decades of research, the evolutionary, developmental, cognitive and computational bases of grammatical abilities are still not fully understood. "Artificial Grammar Learning" (AGL) studies provide important insights into how rules and structured sequences are learned, the relevance of these processes to language in humans, and the evolutionarily conserved cognitive systems that may be shared with other animals. AGL tasks can be used to study how human adults, infants, animals or machines learn artificial grammars of various sorts, typically consisting of rules defined over syllables, sounds or visual items. In this introduction, we distill some lessons from the nine other papers in this special issue, which review the advances made from this growing body of literature. We provide a critical synthesis, identify the questions that remain open, and recognize the challenges that lie ahead. A key observation across the disciplines is that the limits of human, animal and machine capabilities have yet to be reached. Thus, this interdisciplinary area of research, firmly rooted in the cognitive sciences, has unearthed exciting new questions and avenues for research, along the way fostering impactful collaborations between traditionally disconnected disciplines that are breaking scientific ground.
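To make the notion of an artificial grammar concrete, here is a minimal sketch (not taken from the paper) of how AGL stimuli are often built: a toy finite-state grammar, in the spirit of classic Reber-style designs, whose transitions emit symbols that stand in for syllables or visual items. The specific states, symbols and function names below are illustrative assumptions, not the materials used in this special issue.

```python
# Sketch of a toy finite-state artificial grammar for AGL-style stimuli.
# States, symbols and parameters are hypothetical, chosen only for illustration.
import random

# Each state maps to possible (emitted symbol, next state) transitions;
# reaching the "END" state terminates the string.
GRAMMAR = {
    "S0": [("M", "S1"), ("V", "S2")],
    "S1": [("T", "S1"), ("V", "S3")],
    "S2": [("X", "S2"), ("R", "S3")],
    "S3": [("X", "END"), ("M", "END")],
}

def generate_string(grammar, start="S0", max_len=12):
    """Random walk through the grammar, emitting one symbol per transition."""
    state, symbols = start, []
    while state != "END" and len(symbols) < max_len:
        symbol, state = random.choice(grammar[state])
        symbols.append(symbol)
    return "".join(symbols)

if __name__ == "__main__":
    # A small exposure set of grammatical strings, as might be shown
    # to participants before a grammaticality-judgement test phase.
    for _ in range(5):
        print(generate_string(GRAMMAR))
```

In a typical AGL experiment, strings generated this way form the familiarization set, and learning is then probed by asking whether novel grammatical strings are distinguished from foils that violate the transition rules.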
Domains
Cognitive science