A Deep Learning Approach for Optimizing Edge Computing for Real-Time IoT Applications
DOI: https://doi.org/10.71143/w40tra50

Keywords: Deep Learning, Internet of Things (IoT), Real-time processing, Low latency, Edge optimization, Resource constraints, Compact models, Energy efficiency

Abstract
Edge computing has emerged as a critical enabler for real-time Internet of Things (IoT) applications by positioning computational resources closer to data sources, reducing latency and bandwidth demands. Nonetheless, the inherent resource constraints of edge devices make it difficult to meet the demands of complex IoT tasks. This paper introduces a novel approach that leverages deep learning to optimize edge computing performance for real-time IoT applications. By integrating lightweight deep learning models with adaptive task offloading strategies, the proposed solution balances computational efficiency against real-time processing needs. The framework is validated through simulations, demonstrating notable reductions in latency alongside gains in energy efficiency and system scalability. These findings underscore the potential of deep learning as a transformative tool for addressing the challenges of edge computing in IoT ecosystems.
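To illustrate the adaptive task-offloading idea mentioned in the abstract, the sketch below shows one minimal way an edge device might choose between local execution and offloading based on estimated latency against a real-time deadline. All names (Task, EdgeState, decide_offload), the cost model, and the numeric values are illustrative assumptions for this sketch, not the framework described in the paper.

```python
# Minimal illustrative sketch of adaptive task offloading on an edge device.
# All names, thresholds, and the cost model are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class Task:
    input_size_kb: float      # payload to transmit if the task is offloaded
    compute_mflops: float     # estimated compute demand of the task
    deadline_ms: float        # real-time deadline for the result


@dataclass
class EdgeState:
    cpu_mflops_per_ms: float  # local processing rate of the edge device
    uplink_kbps: float        # current uplink bandwidth to the server
    server_rtt_ms: float      # round-trip time to the nearby edge/cloud server
    server_speedup: float     # how much faster the server processes the task


def estimate_local_latency(task: Task, state: EdgeState) -> float:
    """Estimated latency if the task runs on the edge device itself."""
    return task.compute_mflops / state.cpu_mflops_per_ms


def estimate_offload_latency(task: Task, state: EdgeState) -> float:
    """Estimated latency if the task is offloaded: transmit + RTT + remote compute."""
    transmit_ms = task.input_size_kb * 8.0 / state.uplink_kbps * 1000.0
    remote_ms = task.compute_mflops / (state.cpu_mflops_per_ms * state.server_speedup)
    return state.server_rtt_ms + transmit_ms + remote_ms


def decide_offload(task: Task, state: EdgeState) -> str:
    """Pick the placement whose estimated latency meets the deadline."""
    local = estimate_local_latency(task, state)
    remote = estimate_offload_latency(task, state)
    if local <= task.deadline_ms and local <= remote:
        return "local"
    if remote <= task.deadline_ms:
        return "offload"
    return "local"  # neither estimate meets the deadline; degrade gracefully on-device


if __name__ == "__main__":
    task = Task(input_size_kb=64.0, compute_mflops=500.0, deadline_ms=50.0)
    state = EdgeState(cpu_mflops_per_ms=8.0, uplink_kbps=20000.0,
                      server_rtt_ms=10.0, server_speedup=10.0)
    print(decide_offload(task, state))  # prints "offload" for these assumed values
```

In a deep-learning-based variant, the hand-written latency estimators above would be replaced by a compact learned model that predicts placement from device load, link quality, and task features; the simple cost model here only stands in for that component.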
License
Copyright (c) 2025 International Journal of Research and Review in Applied Science, Humanities, and Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.