Intravenous (IV) contrast agents are an established medical tool to enhance the visibility of certain structures. However, their application substantially changes the appearance of Computed Tomography (CT) images, which, if unknown, can significantly deteriorate the diagnostic performance of neural networks. Artificial Intelligence (AI) can help to detect IV contrast, reducing the need for labour-intensive and error-prone manual labelling. However, we demonstrate that automated contrast detection can lead to discrimination against demographic subgroups. Moreover, it has been shown repeatedly that AI models can leak private training data. In this work, we analyse the fairness of conventional and privacy-preserving AI models for the detection of IV contrast on CT images. Specifically, we present models which are substantially fairer than a previously published baseline. For better comparability, we extend existing metrics to quantify the fairness of a model on a protected attribute in a single value. We provide a model fulfilling a strict Differential Privacy guarantee of (ε, δ) = (8, 2.8·10⁻³), which achieves an accuracy of 97.42%, outperforming the baseline by 5 percentage points. Additionally, while confirming prior findings that strict privacy preservation increases discrimination against underrepresented subgroups, the proposed model is fairer than the baseline across all metrics with race and sex as protected attributes, and this extends to age under a more relaxed privacy guarantee.