A Deep Learning Approach for Optimizing Edge Computing for Real-Time IoT Applications

Authors

  • Poonam
  • Sanjay Kumar Nayak
  • Reeta Mishra

DOI:

https://doi.org/10.71143/w40tra50

Keywords:

Deep Learning, Internet of Things (IoT), Real-time processing, Low latency, Edge optimization, Resource constraints, Compact models, Energy efficiency

Abstract

Edge computing has emerged as a critical enabler for real-time Internet of Things (IoT) applications by positioning computational resources closer to data sources, reducing both latency and bandwidth demands. However, the inherent resource constraints of edge devices make it difficult to meet the demands of complex IoT tasks. This paper introduces a novel approach that leverages deep learning to optimize edge computing performance for real-time IoT applications. By integrating lightweight deep learning models with adaptive task offloading strategies, the proposed solution balances computational efficiency against real-time processing requirements. The framework is validated through simulations, demonstrating notable improvements in latency reduction, energy efficiency, and system scalability. These findings underscore the potential of deep learning as a transformative tool for addressing the challenges of edge computing in IoT ecosystems.
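
The abstract's adaptive offloading idea can be pictured with a minimal sketch: a compact neural network predicts on-device execution latency from task features, and a task that cannot meet its deadline locally is offloaded to an edge server. This is a hypothetical illustration of the general technique only, not the authors' implementation; the feature names, random weights, and thresholds below are all assumptions.

    import numpy as np

    # Illustrative sketch: a tiny two-layer network (the "lightweight model")
    # maps task features to a predicted on-device latency in milliseconds.
    # Weights are random placeholders; a real system would train them.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def predict_local_latency_ms(features):
        """features: [task_size, cpu_load, battery_level, input_rate] (normalized)."""
        h = np.maximum(0.0, features @ W1 + b1)  # ReLU hidden layer
        return (h @ W2 + b2).item()

    def offload_decision(features, network_latency_ms, deadline_ms):
        """Run locally if the predicted latency meets the deadline; otherwise
        offload, provided the network round trip itself fits the deadline."""
        if predict_local_latency_ms(features) <= deadline_ms:
            return "local"
        return "offload" if network_latency_ms < deadline_ms else "local"

    task = np.array([0.8, 0.6, 0.4, 0.9])  # hypothetical normalized features
    print(offload_decision(task, network_latency_ms=20.0, deadline_ms=50.0))

In this framing, the per-task decision costs only one forward pass through a small network, which is what keeps the policy viable on resource-constrained edge hardware.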

Published

14-02-2025

How to Cite

Poonam, Sanjay Kumar Nayak, & Reeta Mishra. (2025). A Deep Learning Approach for Optimizing Edge Computing for Real-Time IoT Applications. International Journal of Research and Review in Applied Science, Humanities, and Technology, 2(2), 43-51. https://doi.org/10.71143/w40tra50
