Deakin University

An empirical investigation of value-based multi-objective reinforcement learning for stochastic environments

journal contribution
posted on 2025-09-09, 00:19, authored by Kewen Ding, Peter Vamplew, Cameron Foale, Richard Dazeley
Abstract

One common approach to solving multi-objective reinforcement learning (MORL) problems is to extend conventional Q-learning by using vector Q-values in combination with a utility function. However, issues can arise with this approach in the context of stochastic environments, particularly when optimising for the scalarised expected reward (SER) criterion. This paper extends prior research, providing a detailed examination of the factors influencing the frequency with which value-based MORL Q-learning algorithms learn the SER-optimal policy for an environment with stochastic state transitions. We empirically examine several variations of the core multi-objective Q-learning algorithm, as well as reward engineering approaches, and demonstrate the limitations of these methods. In particular, we highlight the critical impact of noisy Q-value estimates on the stability and convergence of these algorithms.
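
The core method named in the abstract, extending Q-learning with vector Q-values that are scalarised by a utility function, can be illustrated with a minimal sketch. The Python sketch below is not the authors' implementation: the state, action and objective counts, the hyperparameters, and the nonlinear utility function are all illustrative assumptions. It simply shows vector-valued Q estimates updated componentwise, with epsilon-greedy action selection over the utility-scalarised vectors.

import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_objectives = 5, 2, 2   # illustrative sizes
alpha, gamma, epsilon = 0.1, 0.95, 0.1        # illustrative hyperparameters

# Vector-valued Q-table: one estimated reward vector per (state, action) pair.
Q = np.zeros((n_states, n_actions, n_objectives))

def utility(q_vec):
    # Illustrative nonlinear utility over the objective vector (an assumption,
    # not the paper's function); nonlinearity is what makes the SER criterion
    # differ from simply scalarising individual rewards.
    return q_vec[0] - 0.5 * max(0.0, 2.0 - q_vec[1]) ** 2

def select_action(state):
    # Epsilon-greedy selection over the scalarised (utility-applied) Q-vectors.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax([utility(Q[state, a]) for a in range(n_actions)]))

def update(state, action, reward_vec, next_state, done):
    # Componentwise Q-learning update of the vector estimate; the greedy
    # next action is chosen by applying the utility to the next-state vectors.
    best_next = int(np.argmax([utility(Q[next_state, a]) for a in range(n_actions)]))
    target = np.asarray(reward_vec, dtype=float)
    if not done:
        target = target + gamma * Q[next_state, best_next]
    Q[state, action] += alpha * (target - Q[state, action])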

Location

Cambridge, Eng.

Open access

  • Yes

Language

eng

Journal

Knowledge Engineering Review

Volume

40

Article number

e6

Pagination

1-29

ISSN

0269-8889

eISSN

1469-8005

Publisher

Cambridge University Press