TY - JOUR
AB - Data privacy and security are essential challenges in medical clinical settings, where each hospital has its own sensitive patient data. Recent advances in decentralized machine learning, i.e., Federated Learning (FL), allow each hospital to keep its private data and learning models while collaborating with other trusted participating hospitals. Heterogeneous data and models among different hospitals raise major challenges for robust FL, such as gradient leakage, where participants can exploit model weights to infer data. Here, we propose a robust FL method to efficiently tackle data and model heterogeneity, in which we train our model using knowledge distillation and a novel weighted client confidence score on hematological cytomorphology data in clinical settings. In the knowledge distillation, each participant learns from the other participants via a weighted confidence score, so that knowledge is distilled from clean models rather than from noisy clients possessing noisy data. Moreover, we use a symmetric loss to reduce the negative impact of data heterogeneity and label diversity by reducing overfitting of the model to noisy labels. Compared to current approaches, our proposed method performs best, and this is the first demonstration of addressing both data and model heterogeneity in end-to-end FL, laying the foundation for robust FL in laboratory and clinical applications.
AU - Madni, H.A.*
AU - Umer, R.M.
AU - Foresti, G.L.*
C1 - 70109
C2 - 55428
CY - 5 Toh Tuck Link, Singapore 596224, Singapore
TI - Robust federated learning for heterogeneous model and data.
JO - Int. J. Neural Syst.
VL - 34
IS - 4
PB - World Scientific Publ Co Pte Ltd
PY - 2024
SN - 0129-0657
ER -

TY - JOUR
AB - Swarm Learning (SL) is a promising approach to performing distributed and collaborative model training without any central server. However, data sensitivity is the main privacy concern when collaborative training requires data sharing.
A neural network, especially a Generative Adversarial Network (GAN), is able to reproduce the original data from model parameters, i.e., the gradient leakage problem. To solve this problem, SL provides a framework for secure aggregation using blockchain methods. In this paper, we consider the scenario of compromised and malicious participants in the SL environment, where a participant can compromise the privacy of other participants in collaborative training. We propose a method, Swarm-FHE, Swarm Learning with Fully Homomorphic Encryption (FHE), which encrypts the model parameters before they are shared with participants that are registered and authenticated by blockchain technology. Each participant shares the encrypted parameters (i.e., ciphertexts) with the other participants during SL training. We evaluate our method by training convolutional neural networks on the CIFAR-10 and MNIST datasets. On the basis of a considerable number of experiments with different hyperparameter settings, our method performs better than other existing methods.
AU - Madni, H.A.*
AU - Umer, R.M.
AU - Foresti, G.L.*
C1 - 67874
C2 - 54352
CY - 5 Toh Tuck Link, Singapore 596224, Singapore
TI - Swarm-fhe: Fully homomorphic encryption based swarm learning for malicious clients.
JO - Int. J. Neural Syst.
VL - 33
IS - 8
PB - World Scientific Publ Co Pte Ltd
PY - 2023
SN - 0129-0657
ER -