TY - JOUR
AB - Understanding human attention mechanisms is crucial for advancing both vision science and artificial intelligence. While numerous computational models of free-viewing have been proposed, less is known about the mechanisms underlying task-driven image exploration. To address this gap, we introduce NevaClip, a novel zero-shot method for predicting visual scanpaths. NevaClip leverages contrastive language-image pretrained (CLIP) models in conjunction with human-inspired neural visual attention (NeVA) algorithms. By aligning the representation of foveated visual stimuli with associated captions, NevaClip uses gradient-driven visual exploration to generate scanpaths that simulate human attention. We also present CapMIT1003, a new dataset comprising captions and click-contingent image explorations collected from participants engaged in a captioning task. Based on the established MIT1003 benchmark, which includes eye-tracking data from free-viewing conditions, CapMIT1003 provides a valuable resource for studying human attention across both free-viewing and task-driven contexts. Additionally, we demonstrate NevaClip's performance on the publicly available AiR-D dataset, which includes visual question answering (VQA) tasks. Experimental results show that NevaClip outperforms existing unsupervised computational models in scanpath plausibility across captioning, VQA, and free-viewing tasks. Furthermore, we demonstrate that NevaClip's performance is sensitive to caption accuracy, with misleading captions leading to inaccurate scanpath behaviors. This underscores the importance of caption guidance in attention prediction and highlights NevaClip's potential to advance our understanding of task-driven human attention mechanisms. Together, NevaClip and CapMIT1003 offer significant contributions to the field, providing new tools for studying and simulating human visual attention.
AU - Zanca, D.*
AU - Zugarini, A.*
AU - Dietz, S.*
AU - Altstidl, T.R.*
AU - Ndjeuha, M.A.T.*
AU - Chakraborty, M.*
AU - Jami, N.V.S.J.*
AU - Schwinn, L.*
AU - Eskofier, B.M.
C1 - 75855
C2 - 58144
TI - Contrastive language-image pretrained models are zero-shot human scanpath predictors.
JO - IEEE trans. artif. intell.
PY - 2025
ER -