Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-sample Generalization
Abstract
Brain imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple comparisons problem, and limited scaling to today's growing data repositories. Yet, the ever-finer information granularity of neuroimaging data repositories has launched a rapidly increasing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present paper portrays commonalities and differences between long-standing classical inference and emerging generalization inference relevant for conducting neuroimaging research.
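To make the contrast between the two inference regimes concrete, the following minimal sketch (not from the paper; simulated data, hypothetical effect sizes, and scikit-learn/scipy choices are my own assumptions) runs an independent test per voxel with Bonferroni correction, then fits a regularized model on all voxels jointly and scores it by cross-validated prediction on held-out subjects.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated data (assumption for illustration): n subjects x p voxels, binary group label
n_subjects, n_voxels = 40, 5000
X = rng.standard_normal((n_subjects, n_voxels))
y = rng.integers(0, 2, n_subjects)
X[y == 1, :10] += 0.8  # weak group effect confined to the first 10 voxels

# Classical inference: one independent two-sample t-test per voxel,
# then Bonferroni correction for the multiple comparisons problem
t_vals, p_vals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
survives_correction = p_vals < (0.05 / n_voxels)
print(f"Voxels surviving Bonferroni correction: {survives_correction.sum()}")

# Generalization inference: learn a regularized model from all voxels at once
# and evaluate how well it extrapolates to unseen subjects
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
out_of_sample_accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"Out-of-sample classification accuracy: {out_of_sample_accuracy:.2f}")
```

The first block asks whether each voxel's effect is unlikely under the null hypothesis within the observed sample; the second asks whether a model fitted on some subjects predicts the outcome in subjects it has never seen, which is the generalization criterion discussed in the paper.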