File(s) under permanent embargo
Steering approaches to Pareto-optimal multiobjective reinforcement learning
journal contribution
posted on 2017-11-01, 00:00 authored by P Vamplew, R Issabekov, Richard Dazeley, C Foale, A Berry, T Moore, Douglas Creighton

For reinforcement learning tasks with multiple objectives, it may be advantageous to learn stochastic or non-stationary policies. This paper investigates two novel algorithms for learning non-stationary policies which produce Pareto-optimal behaviour (w-steering and Q-steering), by extending prior work based on the concept of geometric steering. Empirical results demonstrate that both new algorithms offer substantial performance improvements over stationary deterministic policies, while Q-steering significantly outperforms w-steering when the agent has no information about recurrent states within the environment. It is further demonstrated that Q-steering can be used interactively by providing a human decision-maker with a visualisation of the Pareto front and allowing them to adjust the agent's target point during learning. To demonstrate broader applicability, the use of Q-steering in combination with function approximation is also illustrated on a task involving control of local battery storage for a residential solar power system.
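The page does not reproduce the algorithms themselves, but the geometric-steering idea the abstract refers to can be illustrated with a minimal sketch: an agent with vector-valued reward estimates repeatedly picks the action that moves its long-run average reward vector closest to a target point chosen on the Pareto front. Everything below, including the function name steering_action and the toy reward vectors, is an illustrative assumption and not taken from the paper.

```python
import numpy as np

def steering_action(q_vectors, avg_reward, target, t):
    """Illustrative geometric steering: choose the action whose
    estimated reward vector would pull the running average reward
    closest to the decision-maker's target point.

    q_vectors:  (n_actions, n_objectives) estimated reward vectors
    avg_reward: (n_objectives,) running average of received rewards
    target:     (n_objectives,) target point on/near the Pareto front
    t:          number of steps taken so far
    """
    # Hypothetical running average after taking each action once more.
    next_avgs = (avg_reward * t + q_vectors) / (t + 1)
    # Pick the action minimising Euclidean distance to the target.
    dists = np.linalg.norm(next_avgs - target, axis=1)
    return int(np.argmin(dists))

# Toy example: two objectives, two extreme trade-off actions.
# The target lies between them, so the agent alternates actions,
# a non-stationary mixture no single deterministic action achieves.
q_vectors = np.array([[1.0, 0.0], [0.0, 1.0]])
target = np.array([0.5, 0.5])
avg = np.zeros(2)
for t in range(10):
    a = steering_action(q_vectors, avg, target, t)
    avg = (avg * t + q_vectors[a]) / (t + 1)
    print(t, a, avg.round(3))
```

Running the toy loop shows the average reward vector converging to the target point (0.5, 0.5), which neither deterministic action attains on its own; this captures, in simplified form, why non-stationary policies can reach Pareto-front points that stationary deterministic policies cannot.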
History

Journal: Neurocomputing
Volume: 263
Part of special issue: Multiobjective Reinforcement Learning: Theory and Applications
Pagination: 26-38
Publisher: Elsevier
Location: Amsterdam, The Netherlands
ISSN: 0925-2312
eISSN: 1872-8286
Language: eng
Publication classification: C1.1 Refereed article in a scholarly journal
Copyright notice: 2017, Elsevier