TY - CHAP
T1 - A Novel Hybrid Approach for Fault-Tolerant Control of UAVs based on Robust Reinforcement Learning
AU - Sohège, Yves
AU - Quiñones-Grueiro, Marcos
AU - Provan, Gregory
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - The control of complex autonomous systems has significantly improved in recent years, and unmanned aerial vehicles (UAVs) have become popular in the research community. Although the use of UAVs is increasing, much work remains to guarantee fault-tolerant control (FTC) properties of these vehicles. Model-based controllers are the standard way to control UAVs; however, obtaining models of the system and environment for every possible operating condition a UAV can experience in a real-world scenario is not feasible. Reinforcement learning has shown promise in controlling complex systems but requires training in a simulator (requiring a model) of the system. Further, stability guarantees do not exist for learning-based controllers, which limits their large-scale application in the real world. We propose a novel hybrid FTC approach that uses a learned supervisory controller (together with low-level PID controllers) with key stability guarantees. We use a robust reinforcement learning approach to learn the supervisory control parameters and prove stability. We empirically validate our framework using trajectory-following experiments (in simulation) for a quadcopter subject to rotor faults, wind disturbances, and severe position and attitude noise.
AB - The control of complex autonomous systems has significantly improved in recent years, and unmanned aerial vehicles (UAVs) have become popular in the research community. Although the use of UAVs is increasing, much work remains to guarantee fault-tolerant control (FTC) properties of these vehicles. Model-based controllers are the standard way to control UAVs; however, obtaining models of the system and environment for every possible operating condition a UAV can experience in a real-world scenario is not feasible. Reinforcement learning has shown promise in controlling complex systems but requires training in a simulator (requiring a model) of the system. Further, stability guarantees do not exist for learning-based controllers, which limits their large-scale application in the real world. We propose a novel hybrid FTC approach that uses a learned supervisory controller (together with low-level PID controllers) with key stability guarantees. We use a robust reinforcement learning approach to learn the supervisory control parameters and prove stability. We empirically validate our framework using trajectory-following experiments (in simulation) for a quadcopter subject to rotor faults, wind disturbances, and severe position and attitude noise.
UR - https://www.scopus.com/pages/publications/85125478999
U2 - 10.1109/ICRA48506.2021.9562097
DO - 10.1109/ICRA48506.2021.9562097
M3 - Chapter
AN - SCOPUS:85125478999
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 10719
EP - 10725
BT - 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
Y2 - 30 May 2021 through 5 June 2021
ER -