Deakin University

File(s) under permanent embargo

Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios

conference contribution
posted on 2023-02-22, 03:36 authored by F Cruz, C Young, Richard DazeleyRichard Dazeley, P Vamplew
Explainable artificial intelligence is a research field that aims to provide greater transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand a robot's decision-making process. Previous work, however, has focused largely on technical explanations that are better understood by AI practitioners than by non-expert end-users. In this work, we use human-like explanations built from the probability that an autonomous robot will successfully complete its goal after performing an action. These explanations are intended to be understood by people with little or no experience of artificial intelligence methods. This paper presents a user trial to study whether explanations focused on the probability an action has of succeeding in its goal constitute a suitable explanation for non-expert end-users. The results show that non-expert participants rate robot explanations based on the probability of success higher, and with less variance, than technical explanations generated from Q-values, and they also favor counterfactual explanations over standalone explanations.
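
To illustrate the contrast the abstract draws, the sketch below shows one plausible way to turn a probability-of-success estimate into a human-like, counterfactual explanation. It is a minimal illustration, not the paper's implementation: the toy corridor environment, the Monte Carlo rollout estimator, and the sentence templates are all assumptions introduced here for clarity.

```python
import random

# Illustrative sketch: estimate P(success | state, action) by Monte Carlo
# rollouts, then phrase it as a counterfactual, human-readable explanation.
# The environment, estimator, and templates are assumptions, not the
# method used in the paper.

GOAL, TRAP, START = 4, -1, 0  # 1-D corridor: reach 4 to succeed, -1 fails

def step(state, action):
    """Move right (action 1) or left (action 0) with a little slip noise."""
    move = 1 if action == 1 else -1
    if random.random() < 0.1:  # assumed 10% chance the action slips
        move = -move
    return state + move

def rollout_success(state, first_action, policy, max_steps=20):
    """Return True if a rollout that starts with `first_action` reaches the goal."""
    state = step(state, first_action)
    for _ in range(max_steps):
        if state >= GOAL:
            return True
        if state <= TRAP:
            return False
        state = step(state, policy(state))
    return False

def probability_of_success(state, action, policy, n=1000):
    """Monte Carlo estimate of P(success | state, action)."""
    wins = sum(rollout_success(state, action, policy) for _ in range(n))
    return wins / n

def explain(state, policy, actions=(0, 1), names=("left", "right")):
    """Build a counterfactual explanation from success probabilities."""
    probs = {a: probability_of_success(state, a, policy) for a in actions}
    best = max(probs, key=probs.get)
    other = min(probs, key=probs.get)
    return (f"I chose to move {names[best]} because it gives me a "
            f"{probs[best]:.0%} chance of reaching the goal; moving "
            f"{names[other]} would only give me a {probs[other]:.0%} chance.")

if __name__ == "__main__":
    greedy_right = lambda s: 1  # stand-in for a learned policy
    print(explain(START, greedy_right))
```

The contrast with a Q-value explanation ("I moved right because Q(s, right) = 0.83") is the point: a success probability maps directly onto everyday language about chances of achieving a goal, which is what non-expert participants in the trial rated more highly.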

History

Volume

2022-October

Pagination

894-901

Start date

2022-10-23

End date

2022-10-27

ISSN

2153-0858

eISSN

2153-0866

ISBN-13

9781665479271

Title of proceedings

IEEE International Conference on Intelligent Robots and Systems

Event

2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Publisher

IEEE
