Assessing and tuning brain decoders: Cross-validation, caveats, and guidelines.

Neuroimage 2017 01 29;145(Pt B):166-179. Epub 2016 Oct 29.

Parietal project-team, INRIA Saclay-ile de France, France; CEA/Neurospin bât 145, 91191 Gif-Sur-Yvette, France.

Decoding, i.e. prediction from brain images or signals, calls for empirical evaluation of its predictive power. Such evaluation is achieved via cross-validation, a method also used to tune decoders' hyper-parameters. This paper reviews cross-validation procedures for decoding in neuroimaging. It includes a didactic overview of the relevant theoretical considerations. Practical aspects are highlighted with an extensive empirical study of common decoders in within- and across-subject prediction, on multiple datasets (anatomical and functional MRI, and MEG) and on simulations. Theory and experiments show that the popular "leave-one-out" strategy leads to unstable and biased estimates, and that a repeated-random-splits method should be preferred. Experiments also reveal the large error bars of cross-validation in neuroimaging settings: typical confidence intervals of 10%. Nested cross-validation can tune decoders' parameters while avoiding circularity bias. However, we find that it can be preferable to use sane defaults, in particular for non-sparse decoders.
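The repeated-random-splits strategy the abstract recommends over leave-one-out can be sketched in plain Python (a minimal illustration; the function name and defaults are ours, not from the paper — in practice a library implementation such as scikit-learn's `ShuffleSplit` would be used):

```python
import random

def repeated_random_splits(n_samples, n_splits=50, test_fraction=0.2, seed=0):
    """Yield (train, test) index lists for repeated random train/test splits.

    Unlike leave-one-out, each split holds out a sizeable random test set,
    and averaging accuracy over many such splits gives a more stable,
    less biased estimate of predictive power.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    n_test = max(1, int(test_fraction * n_samples))
    for _ in range(n_splits):
        rng.shuffle(indices)
        # First n_test shuffled indices form the test set, the rest train.
        yield sorted(indices[n_test:]), sorted(indices[:n_test])

# Example: average a decoder's accuracy over the splits.
# `fit_and_score` is a hypothetical stand-in for training a decoder on the
# train indices and scoring it on the test indices.
def mean_cv_score(fit_and_score, n_samples, **kwargs):
    scores = [fit_and_score(train, test)
              for train, test in repeated_random_splits(n_samples, **kwargs)]
    return sum(scores) / len(scores)
```

For hyper-parameter tuning without circularity bias, the same splitting would be applied a second time *inside* each training set (nested cross-validation), so that the test data of the outer loop never influences parameter selection.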


DOI: http://dx.doi.org/10.1016/j.neuroimage.2016.10.038
