Explainable Deep Learning Models for Improved Clinical Decision Support Systems
DOI: https://doi.org/10.71143/zq7s8v05

Abstract
As electronic medical records proliferate and artificial intelligence advances, clinical decision support systems (CDSSs) increasingly aid healthcare providers in diagnostic and therapeutic decisions. Conventional knowledge-driven CDSSs rely on curated medical knowledge bases and fixed inference rules, offering transparent reasoning but incurring high costs for knowledge curation and standardization. Data-driven CDSSs leverage large datasets and machine learning algorithms to produce robust predictions, yet their opaque "black-box" operation undermines clinician trust. CDSSs incorporating explainable AI (XAI) deliver interpretable justifications for their outputs, fostering trust through visualization of decision pathways. Despite these benefits, current XAI-CDSS implementations remain constrained by limited data scope and insufficient model interpretability. This research introduces a novel XAI-CDSS architecture to overcome these challenges, surveys applicable datasets, resources, and models, and establishes a versatile foundation model for decision support across diverse medical conditions. We conclude with prospective directions for CDSS innovation and underscore critical societal considerations that must be addressed to unlock their full clinical potential.
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.