
Data-driven Deep Reinforcement Learning for Online Flight Resource Allocation in UAV-aided Wireless Powered Sensor Networks
Ref: CISTER-TR-220102       Publication Date: 16-20 May 2022

Abstract:
In wireless powered sensor networks (WPSN), data from ground sensors can be collected or relayed by an unmanned aerial vehicle (UAV), while the batteries of the ground sensors can be charged via wireless power transfer. A key challenge of resource allocation in UAV-aided WPSN is to prevent battery drainage and buffer overflow at the ground sensors in the presence of highly dynamic, lossy airborne channels, which can cause packet reception errors. Moreover, the state and action spaces of the resource allocation problem are large and can hardly be explored online. To address these challenges, a new data-driven deep reinforcement learning framework, DDRL-RA, is proposed to train flight resource allocation online so that data packet loss is minimized. Due to the time-varying airborne channels, DDRL-RA first leverages long short-term memory (LSTM) with pre-collected offline datasets to predict channel randomness. Then, Deep Deterministic Policy Gradient (DDPG) is employed to control the flight trajectory of the UAV and to schedule the ground sensors to transmit data and harvest energy. To evaluate the performance of DDRL-RA, a UAV-ground sensor testbed is built, in which real-world datasets of channel gains are collected. DDRL-RA is implemented in TensorFlow, and numerical results show that it achieves 19% lower packet loss than other learning-based frameworks.
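To make the two building blocks named above concrete, the following is a minimal, illustrative TensorFlow sketch (not the authors' code) of an LSTM channel-gain predictor combined with a DDPG-style actor-critic whose input state is augmented with the predicted gain. All layer sizes, state/action dimensions, and names are assumptions made for illustration only.

# A minimal sketch, assuming the LSTM predicts the next channel gain from a
# window of past gains, and the DDPG actor maps the augmented state to a
# continuous flight/scheduling action. Dimensions and names are hypothetical.
import numpy as np
import tensorflow as tf

HISTORY_LEN = 16   # past channel-gain samples fed to the LSTM (assumed)
STATE_DIM = 8      # e.g. UAV position, sensor battery/buffer levels (assumed)
ACTION_DIM = 3     # e.g. heading, speed, sensor-selection score (assumed)

# LSTM channel predictor: window of past gains -> next gain.
channel_predictor = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(HISTORY_LEN, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
channel_predictor.compile(optimizer="adam", loss="mse")

# DDPG actor: deterministic policy over the state augmented with the predicted gain.
actor = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM + 1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(ACTION_DIM, activation="tanh"),
])

# DDPG critic: Q(state, action), used to train the actor.
state_in = tf.keras.layers.Input(shape=(STATE_DIM + 1,))
action_in = tf.keras.layers.Input(shape=(ACTION_DIM,))
x = tf.keras.layers.Concatenate()([state_in, action_in])
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
critic = tf.keras.Model([state_in, action_in], tf.keras.layers.Dense(1)(x))

def act(state, gain_history):
    """One decision step: predict the next channel gain, then query the actor."""
    predicted_gain = channel_predictor(gain_history[np.newaxis, :, np.newaxis])
    augmented_state = tf.concat(
        [state[np.newaxis, :], tf.cast(predicted_gain, tf.float32)], axis=1)
    return actor(augmented_state)[0].numpy()

# Example call with dummy inputs (real inputs would come from the testbed traces).
action = act(np.zeros(STATE_DIM, dtype=np.float32),
             np.zeros(HISTORY_LEN, dtype=np.float32))
print(action.shape)  # (3,)

In the actual framework the predictor would be trained on the pre-collected offline channel-gain datasets and the actor-critic pair updated with target networks and a replay buffer, as is standard for DDPG; those details are omitted here for brevity.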

Authors:
Kai Li, Wei Ni, Harrison Kurunathan, Falko Dressler


IEEE International Conference on Communications (ICC), SAC - Aerial Communications Track.
Seoul, South Korea.



Record Date: 26 Jan 2022