Quantifying Uncertainty in Complex Reinforcement Learning Scenarios

Research output: Chapter in Book/Report/Conference proceedings › Conference proceeding › peer-review

Abstract

Addressing new challenges in reinforcement learning (RL) research requires identifying the most suitable algorithms, which in turn involves developing and evaluating them on various benchmarks. This paper presents a comparative analysis of two methodologies for classifying the complexity of RL problems, covering real-world benchmarks and the existence of optimal policies. The discussion of optimal-policy existence highlights how the assumptions and methodologies adopted in different studies shape their conclusions. In addition, two theorems are presented that establish conditions under which an optimal policy does not exist, and a complexity classification based on these theorems is introduced.

Original language: English
Title of host publication: European Conference on Multi-Agent Systems
Pages: 77-90
Number of pages: 14
DOIs
Publication status: Published - 2025
Event: 21st European Conference on Multi-Agent Systems, EUMAS 2024 - Dublin, Ireland
Duration: 26 Aug 2024 – 28 Aug 2024

Publication series

Name: Lecture Notes in Computer Science (LNAI, volume 15685)

Conference

Conference: 21st European Conference on Multi-Agent Systems, EUMAS 2024
Country/Territory: Ireland
City: Dublin
Period: 26/08/24 – 28/08/24

Keywords

  • Optimal policy
  • Real-world benchmarks
  • Uncertainty quantification
