DataDream: Few-shot guided dataset generation.
In: Computer Vision – ECCV 2024. Berlin [et al.]: Springer, 2025, pp. 252-268 (Lecture Notes in Computer Science; vol. 15129 LNCS)
While text-to-image diffusion models have been shown to achieve state-of-the-art results in image synthesis, they have yet to prove their effectiveness in downstream applications. Previous work has proposed to generate data for image classifier training given limited real data access. However, these methods struggle to generate in-distribution images or depict fine-grained features, thereby hindering the generalization of classification models trained on synthetic datasets. We propose DataDream, a framework for synthesizing classification datasets that more faithfully represents the real data distribution when guided by few-shot examples of the target classes. DataDream fine-tunes LoRA weights for the image generation model on the few real images before generating the training data using the adapted model. We then fine-tune LoRA weights for CLIP using the synthetic data to improve downstream image classification over previous approaches on a large variety of datasets. We demonstrate the efficacy of DataDream through extensive experiments, surpassing state-of-the-art classification accuracy with few-shot data across 7 out of 10 datasets, while being competitive on the other 3. Additionally, we provide insights into the impact of various factors, such as the number of real-shot and generated images as well as the fine-tuning compute on model performance. The code is available at https://github.com/ExplainableML/DataDream.
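The abstract describes a two-stage pipeline: first adapt the image generator with LoRA on the few real images, then generate a synthetic training set and LoRA-fine-tune the classifier (CLIP) on it. A minimal toy sketch of that structure is below; it uses scalar stand-ins rather than real diffusion or CLIP models, and all function names are illustrative assumptions, not the authors' code.

```python
# Toy sketch of the DataDream pipeline shape (assumed names, not the
# authors' implementation): stage 1 adapts the generator via a small
# LoRA-style additive delta on few-shot data; stage 2 generates a
# synthetic dataset with the adapted model and fine-tunes a classifier
# adapter on it. Base weights stay frozen throughout, as in LoRA.

def finetune_lora(base_weight, data, steps=10, lr=0.1):
    """LoRA-style adaptation in miniature: learn only an additive delta
    while base_weight is frozen (here, the delta drifts toward the data
    mean with gradient-like steps)."""
    delta = 0.0
    target = sum(data) / len(data)
    for _ in range(steps):
        delta += lr * (target - (base_weight + delta))
    return delta  # only the adapter is trained

def generate_synthetic(base_weight, lora_delta, n):
    """Toy generator: samples cluster around the adapted model's value."""
    center = base_weight + lora_delta
    return [center + 0.01 * i for i in range(n)]

# Stage 1: adapt the image generator on the few-shot real examples.
generator_base = 0.0
real_few_shot = [1.0, 1.2, 0.8]          # stand-in for few real images
gen_delta = finetune_lora(generator_base, real_few_shot)

# Stage 2: synthesize a training set with the adapted generator, then
# LoRA-fine-tune the classifier (CLIP in the paper) on synthetic data.
synthetic = generate_synthetic(generator_base, gen_delta, n=50)
classifier_base = 0.0
clf_delta = finetune_lora(classifier_base, synthetic)
```

The point of the sketch is the dependency order: the classifier adapter is trained only on data drawn from the already-adapted generator, which is what lets the synthetic set track the real few-shot distribution.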
Publication type
Article: Conference paper
ISSN (print) / ISBN
0302-9743
e-ISSN
1611-3349
Conference title
Computer Vision – ECCV 2024
Journal
Lecture Notes in Computer Science
Citation details
Volume: 15129 LNCS
Pages: 252-268
Publisher
Springer
Place of publication
Berlin [et al.]