Explainable AI in Healthcare: Models, Applications, and Challenges
DOI: https://doi.org/10.71143/pady5f23
Abstract
Artificial Intelligence (AI) is transforming healthcare, enabling predictive analytics, clinical decision support, medical imaging diagnostics, and personalized treatment. Most recent AI models, however, particularly deep learning models, act as black boxes because their predictions are not interpretable. This opacity raises ethical, legal, and practical concerns in a domain where judgments directly affect patient safety and trust. Explainable artificial intelligence (XAI) has emerged as a significant research field aimed at making AI systems more transparent, interpretable, and credible by explaining their behaviour at the model level. This article reviews explainable AI in healthcare, covering model types, applications, and challenges. It describes common techniques, including post-hoc interpretability methods (e.g., SHAP, LIME), inherently interpretable models (e.g., decision trees, rule-based systems), and hybrid approaches, and discusses them in the contexts of diagnostic imaging, electronic health records (EHRs), drug discovery, and precision medicine. Open problems remain, including the trade-off between accuracy and interpretability, the lack of standardized evaluation metrics, fairness assessment, and the translation of XAI into clinical practice. The existing literature indicates that XAI can improve clinician acceptance, regulatory compliance, and patient empowerment, but scalability, cost, and the variability of interpretability across clinical settings remain key bottlenecks. Emerging directions include context-specific XAI models, support for federated learning, and alignment with ethical and legal frameworks such as the GDPR. This paper identifies explainable AI as key to the responsible use of AI in healthcare: by closing the gap between complex models and human decision-making, XAI helps make healthcare delivery safer, more ethical, and more effective.
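To make the surveyed post-hoc techniques concrete, the following is a minimal Python sketch applying SHAP to a tree-ensemble classifier. The dataset, model, and parameter choices are illustrative assumptions for a tabular clinical-style task, not the experimental setup of this paper.

    import numpy as np
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Load a small tabular dataset as a stand-in for clinical data (e.g., EHR features).
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=0)

    # Fit a "black-box" ensemble of the kind post-hoc XAI methods are applied to.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # TreeExplainer attributes each prediction to individual input features
    # via Shapley values: one attribution per feature per patient.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # The return shape differs across shap versions: a list of per-class
    # arrays in older releases, a (samples, features, classes) array in newer ones.
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values[:, :, 1]

    # Rank features by mean absolute attribution across the test set.
    importance = np.abs(vals).mean(axis=0)
    for name, score in sorted(zip(data.feature_names, importance),
                              key=lambda t: -t[1])[:5]:
        print(f"{name}: {score:.4f}")

A clinician-facing deployment would typically surface the per-patient attributions rather than this global ranking, but the same Shapley values underlie both views.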
Authors
Mukesh Kumar, Professor & Research Supervisor, Department of Computer Science and Engineering, NIILM University, Kaithal, Haryana (mrana91@gmail.com)
Ashish Kumar, Research Scholar, Department of Computer Science and Engineering, NIILM University, Kaithal, Haryana
License
Copyright (c) 2025 International Journal of Research and Review in Applied Science, Humanities, and Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.