Explainable AI Models for Trustworthy Decision Support in Smart IoT Applications
DOI: https://doi.org/10.71143/w4gze415

Abstract
The rapid proliferation of the Internet of Things (IoT) has led to intelligent environments in which billions of interconnected devices generate vast amounts of data. These data-driven ecosystems rely heavily on artificial intelligence (AI) models for automated decision-making in domains such as smart healthcare, smart cities, industrial IoT (IIoT), and smart homes. However, most AI models, especially deep learning architectures, operate as "black boxes," limiting transparency and trust. This lack of interpretability poses significant challenges in critical applications where decisions directly affect human safety, operational reliability, and regulatory compliance. Explainable Artificial Intelligence (XAI) has emerged as a promising paradigm for addressing these concerns by providing interpretable, transparent insights into AI-driven decisions. In IoT systems, where heterogeneous devices operate under constrained resources, integrating XAI must balance interpretability, computational efficiency, and real-time processing requirements. Recent studies highlight that incorporating XAI techniques such as SHAP, LIME, and rule-based models significantly enhances trust, accountability, and usability in AIoT systems.

This research proposes a hybrid Explainable AI framework tailored for smart IoT applications, combining lightweight machine learning models with post-hoc explanation techniques deployed at the edge and cloud layers. The proposed system integrates data acquisition from IoT sensors, preprocessing pipelines, predictive modeling, and explanation modules that generate human-readable insights. A case study on anomaly detection in smart healthcare demonstrates improved transparency without compromising performance.

Experimental results show that the proposed model achieves an accuracy of 94.2% while improving interpretability metrics by 38% compared with conventional black-box models. Furthermore, edge-based inference reduces decision latency, making the system suitable for real-time applications. This work contributes to the development of trustworthy AIoT systems by bridging the gap between model performance and interpretability. The findings emphasize explainability as a core requirement for next-generation intelligent systems, ensuring ethical, reliable, and human-centric decision support in smart IoT environments.
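To make the model-plus-explainer pairing described above concrete, the following is a minimal Python sketch, not the paper's implementation: a shallow random forest classifies synthetic vital-sign readings as normal or anomalous, and SHAP's TreeExplainer produces a post-hoc, per-feature explanation of a single decision. The feature names, thresholds, and data distributions are illustrative assumptions.

```python
# Hypothetical sketch of the lightweight-model + post-hoc-SHAP pairing
# described in the abstract. Feature names and values are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap

rng = np.random.default_rng(42)

# Synthetic vital-sign readings: [heart_rate, spo2, temperature]
normal = rng.normal([75, 97, 36.8], [8, 1.0, 0.3], size=(500, 3))
anomalous = rng.normal([120, 88, 38.9], [15, 3.0, 0.8], size=(50, 3))
X = np.vstack([normal, anomalous])
y = np.array([0] * 500 + [1] * 50)  # 1 = anomaly

# A shallow forest keeps per-inference cost low for edge deployment.
model = RandomForestClassifier(n_estimators=20, max_depth=4, random_state=0)
model.fit(X, y)

# Post-hoc explanation: TreeExplainer yields per-feature attributions
# that an explanation module can render as human-readable insights.
explainer = shap.TreeExplainer(model)
sample = np.array([[130.0, 86.0, 39.2]])  # hypothetical patient reading
shap_values = explainer.shap_values(sample)

feature_names = ["heart_rate", "spo2", "temperature"]
# Depending on the shap version, a binary classifier's output is either
# a list (one array per class) or a 3-D array; take the anomaly class.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
for name, contrib in zip(feature_names, vals.ravel()):
    print(f"{name}: contribution to anomaly score = {contrib:+.3f}")
```

The design choice here mirrors the framework's constraints: a shallow tree ensemble is cheap enough to run on edge hardware, and tree-specific SHAP attributions are exact and fast relative to model-agnostic explainers such as LIME, which is one way to balance interpretability against real-time processing requirements.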
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.