Federated Learning (FL) has emerged as a promising paradigm for training machine learning models across distributed devices while preserving data privacy. However, the robustness of FL models against adversarial data and model attacks, noisy updates, and label-flipped data remains a critical concern. In this paper, we present a systematic literature review using the PRISMA framework to comprehensively analyze existing research on robust FL. Through a rigorous selection process across six key databases (ACM Digital Library, IEEE Xplore, ScienceDirect, Springer, Web of Science, and Scopus), we identify 244 studies and categorize them into eight themes for ensuring robustness in FL: objective regularization, optimizer modification, differential privacy employment, additional dataset requirement and decentralization orchestration, manifold, client selection, new aggregation algorithms, and aggregation hyperparameter tuning. We synthesize the findings from these themes, highlighting the approaches proposed to enhance the robustness of FL models and the gaps that remain. Furthermore, we discuss future research directions, focusing on the potential of hybrid approaches, ensemble techniques, and adaptive mechanisms for addressing the challenges associated with robust FL. This review not only provides a comprehensive overview of the state of the art in robust FL but also serves as a roadmap for researchers and practitioners seeking to advance the field and develop more robust and resilient FL systems.