Reinforcement Learning Approaches for Energy-Efficient IoT Resource Allocation

Authors

  • Devendra Pratap Singh

DOI:

https://doi.org/10.71143/b38kjt23

Keywords:

Reinforcement Learning, Internet of Things, power conservation, resource scheduling, deep reinforcement learning

Abstract

The Internet of Things (IoT) has brought about a paradigm shift, already connecting billions of devices across the healthcare, transportation, manufacturing, and smart-city sectors. With this exponential growth, resource provisioning, and energy efficiency in particular, has become a major challenge, since IoT devices are constrained in power, computing capacity, and bandwidth. The heterogeneous and highly dynamic nature of IoT environments cannot be addressed practically by classical optimization models. Reinforcement Learning (RL) is a promising means of making autonomous, adaptive resource-allocation decisions without excessive energy consumption. This article surveys reinforcement learning techniques for energy-efficient allocation of IoT resources. It introduces the theoretical foundations of RL, the Markov Decision Process (MDP), Q-learning, and Deep Reinforcement Learning (DRL), and shows how they are applied to minimize power consumption, allocate bandwidth, and offload computation. The paper reviews popular RL-based designs such as Q-learning for dynamic spectrum access, Deep Q-Networks (DQN) for task allocation, and actor-critic architectures for energy harvesting. It further discusses hybrid RL solutions that address privacy and scalability concerns by integrating edge computing and federated learning. The review finds that RL-based approaches outperform conventional heuristics: they adapt to dynamic network requirements and consume less energy without degrading Quality of Service (QoS). Scalability, convergence speed, interpretability, and practical deployment, however, remain open issues.
The paper concludes that reinforcement learning is a strong paradigm for building sustainable IoT ecosystems, and that future research should focus on lightweight, explainable, and privacy-preserving instantiations of RL models that can be deployed in resource-constrained IoT settings.
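To make the tabular Q-learning formulation discussed above concrete, the sketch below trains an agent to pick a transmit-power level while trading throughput against energy cost. The environment (battery states, success probabilities, energy penalties, harvesting chance) is entirely hypothetical and not taken from the paper; the point is only to illustrate the standard update rule Q(s,a) ← Q(s,a) + α·(r + γ·max Q(s',·) − Q(s,a)) in an energy-aware setting.

```python
import random

STATES = ["low", "medium", "high"]   # battery level
ACTIONS = [1, 2, 3]                  # transmit-power level (illustrative)

def step(state, action, rng):
    """Hypothetical environment: higher power yields more throughput
    but incurs a larger energy penalty and drains the battery faster."""
    throughput = action * (1.0 if rng.random() < 0.9 else 0.0)
    energy_cost = 0.4 * action
    reward = throughput - energy_cost
    idx = STATES.index(state)
    if action >= 2 and idx > 0 and rng.random() < 0.5:
        idx -= 1                      # high power tends to drain battery
    elif idx < 2 and rng.random() < 0.2:
        idx += 1                      # occasional energy harvesting
    return STATES[idx], reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "high"
    for _ in range(episodes):
        # epsilon-greedy exploration
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action, rng)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # standard Q-learning temporal-difference update
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
    return Q

if __name__ == "__main__":
    Q = train()
    policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
    print(policy)
```

The DQN and actor-critic methods surveyed in the paper replace the Q-table with neural-network function approximators, which is what makes them applicable to the large, continuous state spaces of real IoT deployments.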


References

Atzori, L., Iera, A., & Morabito, G. (2010). The Internet of Things: A survey. Computer Networks, 54(15), 2787–2805.

Chen, Y., Zhang, N., Zhang, Y., Chen, X., & Shen, X. (2020). Energy efficient resource allocation in IoT networks: A reinforcement learning approach. IEEE Internet of Things Journal, 7(8), 7445–7457.

Liu, Y., Yu, F. R., Li, X., Ji, H., & Leung, V. C. (2019). Energy-efficient resource allocation in wireless networks: An RL-based survey. IEEE Communications Surveys & Tutorials, 21(2), 1462–1491.

Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., ... & Wierstra, D. (2016). Continuous control with deep reinforcement learning. ICLR 2016.

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.

Shrestha, R., Bajracharya, R., Shrestha, A. P., & Nam, S. Y. (2021). 6G-enabled IoT resource management using RL: Opportunities and challenges. Sensors, 21(5), 1645.

Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.

Wang, S., Tuor, T., Salonidis, T., Leung, K. K., Makaya, C., He, T., & Chan, K. (2020). Adaptive federated learning in resource-constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 37(6), 1205–1221.

Zhang, J., Xu, W., & Wang, H. (2021). Computation offloading in IoT: A DRL perspective. IEEE Wireless Communications, 28(3), 56–62.

Zhao, J., Ni, Q., & Sun, Y. (2019). Reinforcement learning for dynamic spectrum access in energy-constrained IoT. IEEE Transactions on Cognitive Communications and Networking, 5(3), 595–605.

Tang, F., Mao, B., & Li, Z. (2019). Deep reinforcement learning for resource allocation in edge-enabled IoT. IEEE Network, 33(3), 138–145.

Yu, R., Zhang, J., & Chen, Y. (2019). A deep actor–critic approach for energy harvesting IoT networks. IEEE Internet of Things Journal, 6(5), 8262–8272.

Khan, W. Z., Rehman, M. H., Zangoti, H. M., Afzal, M. K., Armi, N., & Salah, K. (2020). Industrial IoT: Security and resource management with AI. IEEE Access, 8, 30173–30188.

He, C., Anwar, A., & Chen, Z. (2021). Reinforcement learning for sustainable IoT: A review. Sustainable Computing: Informatics and Systems, 30, 100512.

Zappone, A., Di Renzo, M., & Debbah, M. (2019). Wireless networks design with RL: Models and algorithms. IEEE Communications Magazine, 57(6), 86–92.

Published

21-07-2025

How to Cite

Devendra Pratap Singh. (2025). Reinforcement Learning Approaches for Energy-Efficient IoT Resource Allocation. International Journal of Research and Review in Applied Science, Humanities, and Technology, 2(3), 259-264. https://doi.org/10.71143/b38kjt23
