Mobile cellular-connected UAVs: reinforcement learning for sky limits
File(s)
2009.09815v1.pdf (778.26 KB)
Accepted version
Author(s)
Rosas De Andraca, Fernando Ernesto
Azari, M
Arani, A
Type
Conference Paper
Abstract
A cellular-connected unmanned aerial vehicle (UAV) faces several key challenges concerning connectivity and energy efficiency. Through a learning-based strategy, we propose a general novel multi-armed bandit (MAB) algorithm to reduce the disconnectivity time, handover (HO) rate, and energy consumption of the UAV while taking into account its time of task completion. By formulating the problem as a function of the UAV's velocity, we show how each of these performance indicators (PIs) is improved by adopting a proper range of the corresponding learning parameter, e.g., a 50% reduction in HO rate compared to a blind strategy. However, the results reveal that the optimal combination of the learning parameters depends critically on the specific application and the weights of the PIs in the final objective function.
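The abstract describes a multi-armed bandit that selects the UAV's velocity to balance disconnectivity time, HO rate, and energy consumption through a weighted objective. The sketch below illustrates that general idea with a simple epsilon-greedy bandit over a discrete velocity grid; the velocity set, PI weights, and toy cost model are illustrative assumptions and not the authors' exact formulation.

```python
import random

# Hypothetical illustration of the paper's idea: a multi-armed bandit (MAB)
# that picks a UAV velocity to trade off disconnectivity time, handover (HO)
# rate, and energy consumption. Arms, weights, and cost model are assumptions.

VELOCITIES = [5.0, 10.0, 15.0, 20.0, 25.0]                      # candidate arms (m/s), assumed
WEIGHTS = {"disconnect": 1.0, "handover": 0.5, "energy": 0.2}   # assumed PI weights


def observe_cost(velocity: float) -> float:
    """Placeholder for one mission segment flown at `velocity`.

    A real system would measure disconnectivity time, HO count, and energy
    from the network and the UAV; here toy values are drawn for illustration.
    """
    disconnect_time = random.uniform(0.0, 10.0) / velocity       # toy model
    handovers = random.uniform(0.0, 1.0) * velocity / 10.0       # toy model
    energy = 0.01 * velocity ** 2 + 5.0 / velocity               # toy model
    return (WEIGHTS["disconnect"] * disconnect_time
            + WEIGHTS["handover"] * handovers
            + WEIGHTS["energy"] * energy)


def epsilon_greedy(rounds: int = 500, epsilon: float = 0.1) -> float:
    """Run an epsilon-greedy MAB over the velocity arms; return the best arm."""
    counts = [0] * len(VELOCITIES)
    mean_cost = [0.0] * len(VELOCITIES)
    for _ in range(rounds):
        if random.random() < epsilon:                            # explore
            arm = random.randrange(len(VELOCITIES))
        else:                                                    # exploit lowest mean cost
            arm = min(range(len(VELOCITIES)), key=lambda i: mean_cost[i])
        cost = observe_cost(VELOCITIES[arm])
        counts[arm] += 1
        mean_cost[arm] += (cost - mean_cost[arm]) / counts[arm]  # running mean update
    return VELOCITIES[min(range(len(VELOCITIES)), key=lambda i: mean_cost[i])]


if __name__ == "__main__":
    print("Preferred velocity (toy run):", epsilon_greedy())
```

In this sketch the exploration rate `epsilon` plays the role of a learning parameter whose value shifts the balance among the PIs, mirroring the abstract's observation that the best parameter combination depends on the application and the PI weights.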
Date Acceptance
2020-07-01
Citation
pp.1-6
Publisher
IEEE
Start Page
1
End Page
6
Copyright Statement
Copyright © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Identifier
https://ieeexplore.ieee.org/abstract/document/9367580
Source
IEEE Globecom Workshops 2020
Publication Status
Published
Start Date
2020-12-07
Finish Date
2020-12-11
Coverage Spatial
Taipei, Taiwan
Date Publish Online
2021-03-05