Steering approaches to Pareto-optimal multiobjective reinforcement learning

journal contribution
posted on 2017-11-01, 00:00 authored by P Vamplew, R Issabekov, Richard Dazeley, C Foale, A Berry, T Moore, Douglas Creighton
For reinforcement learning tasks with multiple objectives, it may be advantageous to learn stochastic or non-stationary policies. This paper investigates two novel algorithms, w-steering and Q-steering, which extend prior work based on the concept of geometric steering to learn non-stationary policies that produce Pareto-optimal behaviour. Empirical results demonstrate that both new algorithms offer substantial performance improvements over stationary deterministic policies, and that Q-steering significantly outperforms w-steering when the agent has no information about recurrent states within the environment. It is further demonstrated that Q-steering can be used interactively, by providing a human decision-maker with a visualisation of the Pareto front and allowing them to adjust the agent's target point during learning. To demonstrate broader applicability, the use of Q-steering in combination with function approximation is also illustrated on a task involving control of local battery storage for a residential solar power system.
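
A rough sketch of the geometric-steering idea the abstract builds on may help make it concrete. The sketch below is illustrative only, not the paper's w-steering or Q-steering algorithms: it assumes a small set of base policies whose average return vectors (one entry per objective) are already known, and it greedily picks whichever policy moves the running-average return toward a chosen target point. The function name, the example return vectors, and the target are all invented for illustration.

import numpy as np

def steer(base_returns, target, episodes=1000):
    """Greedy geometric steering toward a target return vector.

    base_returns: (n_policies, n_objectives) average return per base policy.
    target: (n_objectives,) desired point, e.g. picked on the Pareto front.
    Returns the running-average return achieved by the non-stationary mix.
    """
    running = np.zeros_like(target, dtype=float)
    for t in range(1, episodes + 1):
        # Direction from the current average return toward the target.
        direction = target - running
        # Choose the base policy whose return vector points most along it.
        choice = int(np.argmax(base_returns @ direction))
        # Fold that policy's expected return into the running average.
        running += (base_returns[choice] - running) / t
    return running

# Two deterministic policies, each optimal for one objective. A target
# between their return vectors is reachable only by switching policies.
base = np.array([[10.0, 0.0], [0.0, 10.0]])
print(steer(base, target=np.array([6.0, 4.0])))  # approaches [6, 4]

Because the target [6, 4] lies in the convex hull of the two base return vectors, the running average approaches it even though neither deterministic policy alone can reach it; this is the sense in which non-stationary policies cover points on the Pareto front that stationary deterministic policies cannot.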

History

Journal

Neurocomputing

Volume

263

Special issue

Multiobjective Reinforcement Learning: Theory and Applications

Pagination

26-38

Publisher

Elsevier

Location

Amsterdam, The Netherlands

ISSN

0925-2312

eISSN

1872-8286

Language

eng

Publication classification

C1.1 Refereed article in a scholarly journal

Copyright notice

2017, Elsevier