An integrated approach to mitigate poisoning attacks in federated learning frameworks

Authors

Latif, Shahid; Djenouri, Djamel; Adamatzky, Andrew
Abstract

While federated learning (FL) is considered privacy-preserving by design, it remains vulnerable to many attacks, such as data and model poisoning, that compromise data integrity and model accuracy. Conventional privacy-preserving federated learning (PPFL) mechanisms, including homomorphic encryption (HE), secure aggregation, and secure multiparty computation (SMPC), exhibit several limitations, such as high computational complexity, significant communication overhead, and poor scalability. To overcome these issues, we propose an end-to-end secure FL architecture that integrates differential privacy (DP), zero-knowledge proof (ZKP), and median aggregation. DP prevents data leakage during model updates by injecting Laplacian noise for privacy preservation. ZKP is implemented through Schnorr's protocol, which enables lightweight, efficient client authentication without revealing sensitive information. Finally, median aggregation mitigates the impact of outliers and adversarial updates, ensuring robust aggregation. Experimental results indicate that the proposed approach outperforms other well-known PPFL methods, including partially homomorphic encryption (PHE), fully homomorphic encryption (FHE), and SMPC. It delivers substantial improvements in global accuracy, especially for larger client counts, with gains of 10%-30% over the other methods. Client training time is reduced by 70%-90%, ensuring faster processing; average round latency falls by 80%-95%, enhancing overall system efficiency; and communication overhead drops by 65%-85%, lowering data transfer costs per round. Furthermore, model size is reduced by 60%-85%, making the approach more resource-efficient and scalable for larger deployments.
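The three building blocks named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the noise scale, hash-based (Fiat-Shamir) challenge, and especially the tiny Schnorr group parameters (p = 23, q = 11, g = 4) are illustrative assumptions only; a real deployment would use standardized large groups and calibrated DP budgets.

```python
import hashlib
import secrets
import numpy as np

# --- Differential privacy: Laplacian noise on a client's model update ---
def dp_perturb(update, sensitivity=1.0, epsilon=1.0, seed=0):
    """Add Laplace(0, sensitivity/epsilon) noise coordinate-wise."""
    rng = np.random.default_rng(seed)
    return update + rng.laplace(0.0, sensitivity / epsilon, size=update.shape)

# --- Robust aggregation: coordinate-wise median over client updates ---
def median_aggregate(updates):
    """The median is insensitive to a minority of poisoned/outlier updates."""
    return np.median(np.stack(updates), axis=0)

# --- Client authentication: Schnorr proof of knowledge of a secret key ---
# Toy parameters for illustration only: g = 4 has prime order q = 11 mod p = 23.
P, Q, G = 23, 11, 4

def schnorr_prove(secret_key, pub_key):
    """Prove knowledge of secret_key with pub_key = G^secret_key mod P."""
    r = secrets.randbelow(Q)                      # ephemeral nonce
    commitment = pow(G, r, P)
    c = int.from_bytes(
        hashlib.sha256(f"{pub_key}:{commitment}".encode()).digest(), "big") % Q
    s = (r + c * secret_key) % Q                  # response hides secret_key
    return commitment, s

def schnorr_verify(pub_key, commitment, s):
    """Check G^s == commitment * pub_key^c (mod P)."""
    c = int.from_bytes(
        hashlib.sha256(f"{pub_key}:{commitment}".encode()).digest(), "big") % Q
    return pow(G, s, P) == (commitment * pow(pub_key, c, P)) % P
```

Coordinate-wise median aggregation is what bounds a poisoner's influence: even an arbitrarily large malicious update shifts each coordinate only to the next-ranked honest value, whereas mean aggregation would be dragged without limit.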

Presentation Conference Type Conference Paper (unpublished)
Conference Name International Joint Conference on Neural Networks
Start Date Jun 30, 2025
End Date Jul 5, 2025
Acceptance Date Apr 1, 2025
Deposit Date Apr 25, 2025
Peer Reviewed Yes
Keywords Cybersecurity; Differential Privacy; Federated Learning; Poisoning Attacks; Zero-knowledge Proof
Public URL https://uwe-repository.worktribe.com/output/14326974