TY - JOUR
AB - Machine learning methods often assume that input features are available at no cost. However, in domains like healthcare, where acquiring features could be expensive or harmful, it is necessary to balance a feature's acquisition cost against its predictive value. The task of training an AI agent to decide which features to acquire is called active feature acquisition (AFA). By deploying an AFA agent, we effectively alter the acquisition strategy and trigger a distribution shift. To safely deploy AFA agents under this distribution shift, we present the problem of active feature acquisition performance evaluation (AFAPE). We examine AFAPE under i) a no direct effect (NDE) assumption, stating that acquisitions do not affect the underlying feature values; and ii) a no unobserved confounding (NUC) assumption, stating that retrospective feature acquisition decisions were only based on observed features. We show that one can apply missing data methods under the NDE assumption and offline reinforcement learning under the NUC assumption. When NUC and NDE hold, we propose a novel semi-offline reinforcement learning framework. This framework requires a weaker positivity assumption and introduces three new estimators: a direct method (DM), an inverse probability weighting (IPW), and a double reinforcement learning (DRL) estimator.
AU - von Kleist, H.
AU - Zamanian, A.*
AU - Shpitser, I.*
AU - Ahmidi, N.
C1 - 74524
C2 - 57494
CY - 31 Gibbs St, Brookline, MA 02446 USA
TI - Evaluation of active feature acquisition methods for time-varying feature settings.
JO - J. Mach. Learn. Res.
VL - 26
PB - Microtome Publ
PY - 2025
SN - 1532-4435
ER -
TY - JOUR
AB - In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we give a comprehensive overview of the algorithm's use in a machine-learning setting, focusing on the supervised regime. We discuss the theoretical background, show how to use it for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
AU - Morris, C.*
AU - Lipman, Y.*
AU - Maron, H.*
AU - Rieck, B.
AU - Kriege, N.M.*
AU - Grohe, M.*
AU - Fey, M.*
AU - Borgwardt, K.*
C1 - 69790
C2 - 53867
CY - 31 Gibbs St, Brookline, MA 02446 USA
TI - Weisfeiler and Leman go Machine Learning: The Story so far.
JO - J. Mach. Learn. Res.
VL - 24
PB - Microtome Publ
PY - 2023
SN - 1532-4435
ER -