Deakin University

File(s) under permanent embargo

Softmax exploration strategies for multiobjective reinforcement learning

journal contribution
posted on 2017-11-08, 00:00 authored by P Vamplew, Richard Dazeley, C Foale
Despite growing interest in recent years in applying reinforcement learning to multiobjective problems, there has been little research into the applicability and effectiveness of exploration strategies in the multiobjective context. This work considers several widely used approaches to exploration from the single-objective reinforcement learning literature and examines their incorporation into multiobjective Q-learning. In particular, this paper proposes two novel approaches which extend the softmax operator to work with vector-valued rewards. The performance of these exploration strategies is evaluated across a set of benchmark environments. Issues arising from the multiobjective formulation of these benchmarks which impact the performance of the exploration strategies are identified. It is shown that, of the techniques considered, the combination of the novel softmax-epsilon exploration with optimistic initialisation provides the most effective trade-off between exploration and exploitation.
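The paper's exact vector-valued softmax operators are not reproduced in this record. As a rough illustration only, the sketch below shows standard Boltzmann (softmax) exploration applied to vector Q-estimates via a linear scalarisation; the weight vector, temperature, and function name are illustrative assumptions, not the authors' method:

```python
import numpy as np

def softmax_action(q_vectors, weights, tau=1.0, rng=None):
    """Boltzmann (softmax) exploration over vector-valued Q-estimates.

    Hypothetical sketch: each action's Q-vector is reduced to a scalar
    by a linear weighting, then the usual single-objective softmax
    distribution over actions is sampled. The paper's proposed
    operators extend softmax to vectors differently.
    """
    rng = rng or np.random.default_rng()
    scores = q_vectors @ weights       # linear scalarisation per action
    scores = scores - scores.max()     # shift for numerical stability
    probs = np.exp(scores / tau)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Example: 3 actions, 2 objectives (values are illustrative)
q = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
w = np.array([0.7, 0.3])
a = softmax_action(q, w, tau=0.5)
```

At a low temperature the sampled action concentrates on the highest scalarised value; at a high temperature selection approaches uniform, which is the usual exploration/exploitation dial for softmax strategies.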

History

Journal

Neurocomputing

Volume

263

Pagination

74-86

Publisher

Elsevier

Location

Amsterdam, The Netherlands

ISSN

0925-2312

eISSN

1872-8286

Language

eng

Publication classification

C Journal article; C1.1 Refereed article in a scholarly journal

Copyright notice

2017, Elsevier B.V.