Abstract
Federated Learning (FL) has emerged as a promising approach to privacy-preserving machine learning (PPML), allowing multiple clients to collaboratively train models without sharing their raw data. This paradigm addresses critical privacy, security, and regulatory concerns that hinder traditional centralized machine learning, especially in sensitive domains such as healthcare, finance, and edge computing. This paper presents a comprehensive survey of FL algorithms, examining classical methods such as FedAvg and FedSGD, optimization-aware approaches such as FedProx and SCAFFOLD, communication-efficient techniques, and privacy-enhanced frameworks that integrate differential privacy, homomorphic encryption, and secure multiparty computation. Beyond algorithmic analysis, the paper explores real-world applications of FL in domains where data sensitivity and decentralization are paramount. It also discusses prevailing threats, including gradient leakage, model inversion, poisoning, and backdoor attacks, and outlines corresponding mitigation strategies. Key challenges such as client heterogeneity, communication overhead, and trust assumptions are critically examined. The study identifies open research issues, including scalable personalization, incentive mechanisms, integration with explainable AI, federated reinforcement learning, and deployment in low-resource environments. This survey provides a foundational understanding of federated learning's current landscape and future potential, offering insights for researchers and practitioners aiming to develop secure, efficient, and inclusive decentralized AI systems.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2025 Tech-Sphere Journal for Pure and Applied Sciences