Deep learning models have seen significant successes in numerous applications, but their inner workings remain elusive. The purpose of this work is to quantify the learning process of deep neural networks through the lens of a novel topological invariant called magnitude. Magnitude is an isometry invariant; its properties are an active area of research, as it encodes many known invariants of a metric space. We use magnitude to study the internal representations of neural networks and propose a new method for determining their generalisation capabilities. Moreover, we theoretically connect the magnitude dimension and the generalisation error, and demonstrate experimentally that the proposed framework can be a good indicator of the latter.
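For a finite metric space, magnitude has a standard closed-form definition: build the similarity matrix $Z_{ij} = e^{-d(x_i, x_j)}$, solve $Zw = \mathbf{1}$ for the weight vector $w$, and sum the weights. The sketch below is not the authors' implementation, just a minimal numpy illustration of that definition for a Euclidean point cloud; the function name `magnitude` is our own.

```python
import numpy as np

def magnitude(points):
    """Magnitude of a finite metric space given as an (n, d) point cloud.

    Forms the similarity matrix Z with Z_ij = exp(-||x_i - x_j||),
    solves Z w = 1 for the weight vector w, and returns sum(w).
    """
    # Pairwise Euclidean distance matrix via broadcasting.
    diffs = points[:, None, :] - points[None, :, :]
    D = np.linalg.norm(diffs, axis=-1)
    Z = np.exp(-D)
    w = np.linalg.solve(Z, np.ones(len(points)))
    return w.sum()
```

For two points at distance $d$, this recovers the known value $2 / (1 + e^{-d})$, which interpolates between 1 (points merged) and 2 (points far apart), illustrating how magnitude behaves like an "effective number of points".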
Publication type: Article (conference contribution)
Language: English
Year of publication: 2023
HGF reporting year: 2023
Conference title: Proceedings of Machine Learning Research