GEE Maximization in UAV-Aided Mobile IoT Networks Using Deep Reinforcement Learning

Research output: Contribution to journal › Article › peer-review

Abstract

The rapid advancement of Internet of Things (IoT) technology has improved the connectivity of numerous applications. The recent introduction of mobile IoT devices (IoTDs) has further broadened the scope of conventional IoT networks, giving rise to the Internet of Mobile Things (IoMT). However, the IoTDs' limited data-storage capacity and dynamic mobility make efficient data collection challenging in resource-constrained IoMT networks. In this work, a deep reinforcement learning (DRL) method is developed to efficiently collect IoTDs' data using an Unmanned Aerial Vehicle (UAV). The proposed UAV scheduling is designed on the Twin Delayed Deep Deterministic Policy Gradient (TD3) approach, an extension of the Deep Deterministic Policy Gradient (DDPG) framework. The DRL agent (UAV) learns data-collection policies by adapting to IoMT network uncertainties such as IoTD data-storage levels, mobility patterns, and data-transfer constraints. Simulation results demonstrate the superiority of the proposed TD3-based UAV scheduling over other DRL approaches in UAV-IoTD data collection and in sustaining network functionality. These results motivate the use of the proposed method in designing reliable and autonomous IoMT networks.
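The abstract does not give the paper's implementation details, but the two standard TD3 ingredients it builds on — clipped double-Q targets and target policy smoothing — can be sketched generically. The function names, hyperparameters, and action bounds below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def td3_target(reward, done, next_q1, next_q2, gamma=0.99):
    """Clipped double-Q target: bootstrap from the minimum of the two
    target critics to curb value overestimation; zero the bootstrap
    term at episode termination (done == 1)."""
    min_q = np.minimum(next_q1, next_q2)
    return reward + gamma * (1.0 - done) * min_q

def smoothed_target_action(action, noise_std=0.2, noise_clip=0.5,
                           low=-1.0, high=1.0, rng=None):
    """Target policy smoothing: perturb the target actor's action with
    clipped Gaussian noise so the critics are regularized over a small
    neighbourhood of the deterministic action."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(action)),
                    -noise_clip, noise_clip)
    return np.clip(action + noise, low, high)
```

In a UAV-scheduling setting, the action would typically encode the UAV's continuous motion (e.g., heading and speed), and the reward would reflect data collected from IoTDs; both are problem-specific choices defined in the full paper.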

Keywords

  • Data Collection
  • Deep Reinforcement Learning
  • Internet of Mobile Things
  • Mobility Pattern
  • TD3
  • UAV
