Adversarial Robustness in Deep Learning Models for Cybersecurity Applications: A Survey

Authors

  • Khushboo

DOI:

https://doi.org/10.71143/jgxj1k64

Abstract

Deep learning has become a critical component of cybersecurity applications, enabling systems to detect intrusions, classify malware, filter spam, and identify phishing. Its growing adoption, however, has also opened up new threats, most notably adversarial attacks, which exploit vulnerabilities in a model's generalization. Subtly perturbed adversarial inputs can cause models to make false predictions, posing a security risk to cybersecurity systems. This paper presents a systematic review of adversarial robustness in deep learning models applied to cybersecurity. It begins by defining the main classes of adversarial attack: evasion, poisoning, and model extraction. The most common attack methods discussed in the paper are the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and generative adversarial methods. Adversarial training, gradient masking, ensemble learning, and certified defences are then described as defensive techniques, together with how they may be applied in cybersecurity. Applications in intrusion detection, malware detection, and authentication systems are examined to draw out practical implications. Adversarial training can improve resilience, but at the cost of reduced accuracy and higher computational expense; certified defences provide formal guarantees but face challenges in scaling. The trade-offs identified in the review concern security, efficiency, and generalization. The paper concludes that adversarial defences should be integrated into the cybersecurity pipeline through resilient architectures. Future research directions include real-time detection, hybrid symbolic-neural defences, explainable AI, and lightweight high-performance models. By addressing the adversarial robustness problem, deep learning systems can be made more dependable and therefore trusted in critical cybersecurity applications.
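To make the attack methods named in the abstract concrete, the following is a minimal sketch of FGSM, assuming a differentiable PyTorch classifier; the toy linear model, tensor shapes, and `epsilon` value are illustrative assumptions, not part of the surveyed systems:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon, loss_fn=nn.CrossEntropyLoss()):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Perturb each input feature by +/- epsilon in the direction
    # that increases the model's loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy demonstration on a tiny linear classifier over random feature vectors.
torch.manual_seed(0)
model = nn.Linear(10, 2)
x = torch.randn(4, 10)
y = torch.tensor([0, 1, 0, 1])
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
```

PGD, also discussed in the paper, can be viewed as this single step applied iteratively with a smaller step size, projecting the perturbed input back into an epsilon-ball around the original after each iteration.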

Downloads

Download data is not yet available.

Author Biography

  • Khushboo

    Academic Coordinator, Amity University, Mohali, Punjab, India

Published

16-12-2025

How to Cite

Khushboo. (2025). Adversarial Robustness in Deep Learning Models for Cybersecurity Applications: A Survey. International Journal of Research and Review in Applied Science, Humanities, and Technology, 2(4), 328-332. https://doi.org/10.71143/jgxj1k64