Minimization and estimation of the variance of prediction errors for cross-validation designs.
J. Stat. Theory Pract. 10, 420-443 (2016)
We consider the mean prediction error of a classification or regression procedure as well as its cross-validation estimates, and we investigate the variance of such an estimate as a function of an arbitrary cross-validation design. We decompose this variance into a scalar product of coefficients and certain covariance expressions, such that the coefficients depend solely on the resampling design and the covariances depend solely on the data's probability distribution. We rewrite this scalar product in a form in which the initially large number of summands can be reduced step by step to three, provided a quadratic approximation to the core covariances holds. We present an analytical example in which this quadratic approximation holds exactly. Moreover, in this example we show that the leave-p-out estimator of the error depends on p only through a constant and can therefore be written in a much simpler form. Furthermore, there is an unbiased estimator of the variance of K-fold cross-validation, in contrast to a claim in the literature. As a consequence, we can show that Balanced Incomplete Block Designs have smaller variance than K-fold cross-validation. This property is confirmed in a real-data example from the UCI machine learning repository. We finally show how to find Balanced Incomplete Block Designs in practice.
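The comparison between K-fold cross-validation and Balanced Incomplete Block Designs can be illustrated with a minimal Python sketch. This is not the authors' construction; it only shows, for the classical (7, 3, 1) design (the Fano plane), the defining balance property: with seven observations and test sets of size three, every pair of observations shares a test set exactly once, whereas under a K-fold partition the pair concurrences are unbalanced (within-fold pairs co-occur once, across-fold pairs never). The block lists and helper function below are hypothetical illustrations.

```python
from itertools import combinations
from collections import Counter

# Hypothetical illustration: the (7, 3, 1) Balanced Incomplete Block Design
# (Fano plane). Each block is a candidate test set of size 3 drawn from
# 7 observations; every pair of observations appears together in exactly
# one test set (lambda = 1).
fano_blocks = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5),
]

def pair_concurrences(blocks):
    """Count how often each pair of observations shares a test set."""
    counts = Counter()
    for block in blocks:
        for pair in combinations(sorted(block), 2):
            counts[pair] += 1
    return counts

# Balance property of the BIBD: every observed pair count equals 1.
assert set(pair_concurrences(fano_blocks).values()) == {1}

# For contrast, a K-fold partition (here n = 6, K = 3, test sets of size 2):
# pairs within a fold co-occur once, pairs across folds never appear,
# so the concurrence pattern is unbalanced.
kfold_blocks = [(0, 1), (2, 3), (4, 5)]
print(sorted(pair_concurrences(fano_blocks).items()))
print(sorted(pair_concurrences(kfold_blocks).items()))
```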
Publication type
Article: Journal article
Document type
Scientific Article
Keywords
Cross-validation; Design; Model Selection; U-statistic
ISSN (print) / ISBN
1559-8608
e-ISSN
1559-8616
Source details
Volume: 10, Issue: 2, Pages: 420-443
Publisher
Taylor & Francis
Publishing Place
Colchester
Non-patent literature
Publications
Reviewing status
Peer reviewed
Institute(s)
Institute of Computational Biology (ICB)