
EMOTE: An Explainable Architecture for Modelling the Other through Empathy

conference contribution
posted on 2024-09-02, 01:19 authored by Manisha Senadeera, Thommen Karimpanal George, Stephan Jacobs, Sunil Gupta, Santu Rana
Empathy allows us to assume that others are like us and have goals analogous to our own. This can, at times, also be applied to multi-agent games, e.g. Agent 1's attraction to green balls is analogous to Agent 2's attraction to red balls. Drawing inspiration from empathy, we propose EMOTE, a simple and explainable inverse reinforcement learning (IRL) approach designed to model another agent's action-value function and, from it, infer a unique reward function. This is done by referencing the learning agent's own action-value function, removing the need to maintain independent action-value estimates for the modelled agents while simultaneously addressing the ill-posed nature of IRL by inferring a unique reward function. We experiment on minigrid environments, showing that EMOTE: (a) produces more consistent reward estimates relative to other IRL baselines; (b) is robust in scenarios with composite reward and action-value functions; and (c) produces human-interpretable states, helping to explain how the agent views other agents.
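As a point of orientation only (this is not the paper's EMOTE algorithm), the abstract's core idea of inferring a reward function from an action-value function can be illustrated with the standard Bellman identity, r(s, a) = Q(s, a) − γ E[max_a' Q(s', a')]. The sketch below assumes a small tabular MDP with known transition probabilities; the array shapes, function name, and example values are hypothetical.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): recovering a reward
# function from a given action-value function via the Bellman identity
#   r(s, a) = Q(s, a) - gamma * E_{s'}[ max_a' Q(s', a') ]
# Assumptions: tabular MDP, known transition matrix P, discount gamma.

def reward_from_q(Q, P, gamma=0.99):
    """Infer r(s, a) from Q(s, a) for a tabular MDP.

    Q: (n_states, n_actions) action values attributed to the modelled agent.
    P: (n_states, n_actions, n_states) transition probabilities.
    """
    next_value = Q.max(axis=1)        # V(s') = max_a' Q(s', a')
    expected_next = P @ next_value    # E_{s'}[V(s')] for each (s, a)
    return Q - gamma * expected_next  # Bellman identity rearranged for r


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2
    P = rng.random((n_states, n_actions, n_states))
    P /= P.sum(axis=-1, keepdims=True)   # normalise rows into distributions
    Q = rng.random((n_states, n_actions))
    print(reward_from_q(Q, P))
```

Given a fixed Q, this mapping yields a single reward function, which is the sense in which tying the inferred reward to a specific action-value function sidesteps the usual non-uniqueness of IRL.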

History

Pagination

4876-4884

Location

Jeju, South Korea

Start date

2024-08-03

End date

2024-08-09

ISBN-13

978-1-956792-04-1

Language

en

Publication classification

E1 Full written paper - refereed

Title of proceedings

IJCAI-24: Proceedings of the 33rd International Joint Conference on Artificial Intelligence 2024

Event

International Joint Conference on Artificial Intelligence. (33rd : 2024 : Jeju, South Korea)

Publisher

International Joint Conferences on Artificial Intelligence Organization
