Engel, L.*; Mueller, J.*; Rendon, E.J.F.*; Dorschky, E.*; Krauss, D.*; Ullmann, I.*; Eskofier, B.M.; Vossiek, M.*
Advanced millimeter wave radar-based human pose estimation enabled by a deep learning neural network trained with optical motion capture ground truth data.
IEEE J. Microw. 5, 373-387 (2025)
This paper presents a deep learning-enabled method for human pose estimation using radar target lists, obtained through a low-cost radar system with three transmitters and four receivers in a multiple-input multiple-output setup. We address challenges in previous research that often relied on extracting ground truth poses from RGB data, which are constrained by the need for 3D mapping and vulnerability to occlusions. To overcome these limitations, we utilized optical motion capture, which is widely recognized as the gold standard for precise human motion analysis. We conducted an extensive optical motion capture study involving various recorded movement activities, which resulted in mmRadPose, a new dataset that enhances existing benchmarks for radar-based pose estimation. This dataset has been made publicly accessible. Building on this approach, we designed an application-tailored radar signal processing chain to generate suitable input for the machine learning algorithm. We further developed an attentional recurrent deep learning model, PntPoseAT, which predicts 24 keypoints of human poses from radar target lists. We employed cross-validation to thoroughly evaluate the model. It surpasses previous approaches and achieves an average mean per-joint position error of 6.49 cm with a standard deviation of 3.74 cm on entirely unseen test data. This accuracy of the reconstructed keypoint positions is particularly remarkable given that a very simple radar system was used for the measurements. Additionally, we conducted a comprehensive analysis of the model's performance by exploring aspects such as network architecture, the use of long short-term memory versus gated recurrent units, input data selection, and the integration of multi-head self-attention mechanisms.
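For orientation, the snippet below is a minimal sketch of the evaluation metric named in the abstract, the mean per-joint position error (MPJPE), together with an illustrative attention-augmented recurrent keypoint regressor. The layer sizes, the per-target feature set, and the pooling strategy are assumptions made for illustration; this is not the published PntPoseAT architecture or signal processing chain.

import numpy as np
import torch
import torch.nn as nn

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint position error: Euclidean distance between predicted
    and ground-truth keypoints, averaged over joints and frames.
    pred and gt have shape (..., num_joints, 3) in a common unit (e.g. cm)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

class RadarKeypointRegressor(nn.Module):
    """Illustrative sketch only: embeds the targets of each radar frame,
    pools them, runs a GRU over the frame sequence, applies multi-head
    self-attention across time, and regresses 24 keypoints per frame."""

    def __init__(self, target_features=5, hidden=128, heads=4, num_keypoints=24):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Per-target features such as (x, y, z, Doppler, RCS) are an assumption.
        self.embed = nn.Linear(target_features, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, num_keypoints * 3)

    def forward(self, frames):
        # frames: (batch, time, max_targets, target_features)
        x = self.embed(frames).mean(dim=2)   # pool targets -> (batch, time, hidden)
        x, _ = self.gru(x)                   # temporal context
        x, _ = self.attn(x, x, x)            # self-attention over time steps
        out = self.head(x)                   # (batch, time, num_keypoints * 3)
        return out.reshape(*out.shape[:-1], self.num_keypoints, 3)

With predictions and motion-capture ground truth expressed in centimetres, mpjpe() returns the quantity reported above (6.49 cm on average across unseen test data).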
Publication type
Article: Journal article
Document type
Scientific article
Keywords
Deep Learning; Human Pose Estimation; Machine Learning; Millimeter Wave; Optical Motion Capture; Point Cloud; Radar; Target List; Fall Detection; Recognition
Language
English
Year of publication
2025
HGF reporting year
2025
ISSN (print) / ISBN
2692-8388
e-ISSN
2692-8388
Citation details
Volume: 5, Issue: 2, Pages: 373-387
Publisher
IEEE
Place of publication
445 Hoes Lane, Piscataway, NJ 08855-4141, USA
Review status
Peer reviewed
POF Topic(s)
30205 - Bioengineering and Digital Health
Research field(s)
Enabling and Novel Technologies
PSP element(s)
G-540008-001
Funding
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
Date of record
2025-05-06