Deakin University

File(s) under embargo

Selective experience replay for lifelong learning

conference contribution
posted on 2023-10-02, 22:58 authored by D Isele, Akan CosgunAkan Cosgun
Deep reinforcement learning has emerged as a powerful tool for a variety of learning tasks; however, deep networks typically exhibit forgetting when learning multiple tasks in sequence. To mitigate forgetting, we propose an experience replay process that augments the standard FIFO buffer and selectively stores experiences in a long-term memory. We explore four strategies for selecting which experiences to store: favoring surprise, favoring reward, matching the global training distribution, and maximizing coverage of the state space. We show that distribution matching successfully prevents catastrophic forgetting and is consistently the best approach across all domains tested. While distribution matching has better and more consistent performance, we identify one case in which coverage maximization is beneficial: when tasks that receive less training are more important. Overall, our results show that selective experience replay, when suitable selection algorithms are employed, can prevent catastrophic forgetting.
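The "matching the global training distribution" strategy described above can be approximated with reservoir sampling, which retains a uniform random subset of every experience seen so far. The sketch below is illustrative only; the class and method names are assumptions, not the authors' implementation.

```python
import random


class ReservoirReplayBuffer:
    """Illustrative long-term memory using reservoir sampling.

    Reservoir sampling keeps each experience with equal probability,
    so the stored sample approximates the global training distribution
    even as old tasks stop producing new data. This is a sketch of the
    general idea, not the paper's code.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0  # total number of experiences observed so far

    def add(self, experience):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(experience)
        else:
            # Keep the new experience with probability capacity/seen by
            # overwriting a uniformly chosen slot; this preserves a
            # uniform sample over everything observed so far.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = experience

    def sample(self, batch_size):
        """Draw a training batch from long-term memory."""
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```

In a lifelong-learning loop, this buffer would sit alongside the standard FIFO buffer: new transitions enter both, the FIFO buffer ages them out quickly, and the reservoir preserves a distribution-matched sample across all tasks.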

History

Pagination

3302-3309

Location

New Orleans, LA

Start date

2018-02-02

End date

2018-02-07

ISSN

2159-5399

eISSN

2374-3468

ISBN-13

9781577358008

Language

English

Title of proceedings

32nd AAAI Conference on Artificial Intelligence, AAAI 2018

Event

32nd AAAI Conference on Artificial Intelligence / 30th Innovative Applications of Artificial Intelligence Conference / 8th AAAI Symposium on Educational Advances in Artificial Intelligence

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

Place of publication

New York, N.Y.

Series

AAAI Conference on Artificial Intelligence
