Full Title: LEtter learning through MOtor training (LEMON): Neurocognitive specialization for the written code investigated with electrophysiological, eye-tracking, and computational tools
Letter recognition is the foundation of reading(1), a core skill in education and for successful participation in every aspect of modern life. Learning to read is a multisensory experience that involves successfully linking visual letters with speech sounds and with motor gestures through handwriting. It is already clear that handwriting assists the implementation of reading in the brain and mind (as we showed in a recent meta-analysis: 20), but how does this happen? Answering this question is the overarching goal of LEMON.
Two systems conspire for skilled reading: a visual shape recognition system in the left ventral occipitotemporal cortex (including the visual word form area, VWFA), and a motor gesture decoding system in the left dorsal premotor cortex (including Exner's area; 2). We have shown(9-11) that experience with a written script induces the emergence of a perceptual mechanism tuned to the properties of the orthographic code. The greater benefit of handwriting for subsequent letter recognition, relative to other forms of training(3), is likely due to the strengthening of functional connectivity between visual and motor systems(5). Indeed, in prereaders, handwriting experience triggers neural specialization for letters in the VWFA(3), which later subserves automatic visual word recognition(4). Paradoxically, some authors have prematurely announced the "death of handwriting" in the Digital Era(7), yet education policies for successful literacy instruction must be evidence-based. The outcomes of the proposed research can thus feed into this dialog.
Our original hypothesis is that handwriting assists letter recognition via segmental, fine-grained information that feeds into the visual perceptual system. We will test it in 3 studies with an artificial-script learning paradigm, adopting state-of-the-art neurocognitive methods: electroencephalography (EEG) and eye tracking. Adults will receive graphophonological training in a novel script for 5 days, either in a baseline condition (mere visual exposure) or combined with motor training. Before and after training, we will gather high-temporal-resolution EEG and eye-movement data to test perceptual learning effects, transfer effects to reading, and the emerging neural changes in letter recognition. Using handwriting data logged in real time during training (HandSpy; 15), we will also assess whether training-induced neural changes are related to handwriting expertise (e.g., writing speed). Establishing this relation is crucial for a better understanding of brain-behavior relations.
We will thus provide clear answers by targeting 3 specific aims concerning when and how motor action affects letter perception and how it interacts with the other fundamental systems in letter processing (attentional(13) and phonological(14) systems): 1) whether handwriting enhances letter representations by tuning distinctive letter features (Study 1); 2) whether it affects early- vs. late-stage visual processing, and the role of attention (Study 2); 3) how multimodal orthographic, phonological, and motor information is combined during letter recognition (Study 3).
Letters can be described in terms of distinctive features, such as those coding orientation (b ≠ d), and print-sound correspondences. Study 1 will test whether motor experience supports this fine level of discrimination. Using eye tracking, we will track online whether motor training assists attention allocation to the visual and phonological aspects of letters. Study 2 capitalizes on powerful machine learning (multivariate pattern analysis, MVPA) of EEG data to uncover the earliest neural expression of the impact of motor training on letter perception. The use of this technique for fully data-driven analysis is pioneering and represents a methodological advance over prior conventional EEG analyses (e.g., 16) that may miss subtle effects. We predict that handwriting acts early in letter processing (<200 ms), with the VWFA as its focus(3), and that attention to local features is the tuning process responsible for the enhancement of letter neural representations. Finally, Study 3 takes LEMON one step further by studying how visual, auditory, and motor information interact during letter recognition(12). We will track, for the first time, neural EEG responses to audio-visual-motor integration, to determine whether phonological and motor inputs are integrated or processed separately, and whether one type takes priority over the other for efficient letter recognition.
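To make the time-resolved MVPA logic concrete, here is a minimal sketch under stated assumptions: synthetic two-condition EEG data, hypothetical dimensions, and a simple cross-validated nearest-centroid classifier standing in for whatever decoder the actual pipeline would use. A classifier is trained at each timepoint on the multichannel voltage pattern, and the onset of above-chance decoding estimates when letter-condition information first emerges in the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 60      # hypothetical dimensions
times_ms = np.linspace(0, 600, n_times)          # 0-600 ms post-stimulus
y = rng.integers(0, 2, n_trials)                 # two letter conditions

# Synthetic EEG: noise plus a condition-specific multichannel pattern
# that emerges only after ~150 ms (the "early" window of interest)
X = rng.standard_normal((n_trials, n_channels, n_times))
pattern = rng.standard_normal(n_channels)
late = (times_ms > 150).astype(float)
X += (2 * y - 1)[:, None, None] * pattern[None, :, None] * late * 0.5

def decode_accuracy(X_t, y, n_folds=5):
    """Cross-validated nearest-centroid decoding of condition from channel patterns."""
    idx = np.arange(len(y))
    correct = 0
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        m0 = X_t[train][y[train] == 0].mean(axis=0)  # class-0 centroid
        m1 = X_t[train][y[train] == 1].mean(axis=0)  # class-1 centroid
        pred = (np.linalg.norm(X_t[fold] - m1, axis=1)
                < np.linalg.norm(X_t[fold] - m0, axis=1))
        correct += np.sum(pred == (y[fold] == 1))
    return correct / len(y)

# Decode at every timepoint; onset = first clearly above-chance sample
scores = np.array([decode_accuracy(X[:, :, t], y) for t in range(n_times)])
onset = times_ms[np.argmax(scores > 0.7)]
print(f"decoding onset ~ {onset:.0f} ms")
```

In real data the same per-timepoint decoding would be run on preprocessed epochs, with statistical thresholding (e.g., cluster-based permutation tests) replacing the fixed accuracy cutoff used here for illustration.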
LEMON focuses on a crucial cognitive domain; it can also inform on experience-related brain plasticity, a key interest of cognitive psychology and neuroscience. It also responds to the challenges of Sustainable Development and Inclusive Societies through quality education, core priorities of the 2030 Agenda for Sustainable Development: understanding how learning experiences shape letter representations is an important endeavor for successful education policies for typical and dyslexic readers.