PuSH - Publikationsserver des Helmholtz Zentrums München

Sparsity in Continuous-Depth Neural Networks.

In: Advances in Neural Information Processing Systems, Vol. 35, 2022.
Publisher's version
Open Access Gold (Paid Option)
Neural Ordinary Differential Equations (NODEs) have proven successful in learning dynamical systems in terms of accurately recovering the observed trajectories. While different types of sparsity have been proposed to improve robustness, the generalization properties of NODEs for dynamical systems beyond the observed data are underexplored. We systematically study the influence of weight and feature sparsity on forecasting as well as on identifying the underlying dynamical laws. Besides assessing existing methods, we propose a regularization technique to sparsify “input-output connections” and extract relevant features during training. Moreover, we curate real-world datasets consisting of human motion capture and human hematopoiesis single-cell RNA-seq data to realistically analyze different levels of out-of-distribution (OOD) generalization in forecasting and dynamics identification respectively. Our extensive empirical evaluation on these challenging benchmarks suggests that weight sparsity improves generalization in the presence of noise or irregular sampling. However, it does not prevent learning spurious feature dependencies in the inferred dynamics, rendering them impractical for predictions under interventions, or for inferring the true underlying dynamics. Instead, feature sparsity can indeed help with recovering sparse ground-truth dynamics compared to unregularized NODEs.
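The abstract's "input-output connection" sparsification is not spelled out on this page; a common way to encourage such feature sparsity is a group-lasso penalty on the input-side weight columns of the learned vector field, so that zeroing a column removes one input feature from the inferred dynamics. A minimal NumPy sketch under that assumption (all function names hypothetical, not the paper's implementation):

```python
import numpy as np

def vector_field(x, W1, W2):
    # Simple MLP vector field f(x) = W2 @ tanh(W1 @ x) standing in for the NODE.
    return W2 @ np.tanh(W1 @ x)

def group_lasso_penalty(W1):
    # Sum of column-wise L2 norms: column j groups every weight that reads
    # input feature j, so driving a column to zero drops that feature from
    # the learned dynamics (feature sparsity, as opposed to weight sparsity).
    return float(np.sum(np.linalg.norm(W1, axis=0)))

def integrate_euler(x0, W1, W2, dt=0.01, steps=100):
    # Forward-Euler rollout of dx/dt = f(x); a real NODE would use an
    # adaptive ODE solver, but Euler keeps the sketch self-contained.
    traj = [x0]
    x = x0
    for _ in range(steps):
        x = x + dt * vector_field(x, W1, W2)
        traj.append(x)
    return np.stack(traj)
```

Adding `group_lasso_penalty(W1)` (scaled by a regularization weight) to the trajectory-matching loss then pressures the model toward dynamics that depend on few input features, which is the kind of regularizer the abstract contrasts with plain weight sparsity.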
Publication type: Article, conference paper
Corresponding author
ISSN (print) / ISBN: 1049-5258
Conference title: Advances in Neural Information Processing Systems
Citation: Volume 35