Deakin University

File(s) under embargo

A Deep Reinforcement Learning Based Motion Cueing Algorithm for Vehicle Driving Simulation

journal contribution
posted on 2024-04-12, 05:24, authored by H. Scheidel, Houshyar Asadi, T. Bellmann, A. Seefried, Shady Mohamed, S. Nahavandi
In the field of motion simulation, the level of immersion depends strongly on the motion cueing algorithm (MCA), as it transfers the reference motion of the simulated vehicle to a motion of the motion simulation platform (MSP). The challenge for the MCA is to reproduce the motion perception of a real vehicle driver as accurately as possible, without exceeding the limits of the MSP workspace, in order to provide a realistic virtual driving experience. In the case of a large discrepancy between the perceived motion signals and the optical cues, motion sickness may occur, with the typical symptoms of nausea, dizziness, headache and fatigue. Existing approaches either produce suboptimal results, e.g. due to filtering, linearization or simplifications, or their required computational time exceeds the real-time requirements of a closed-loop application. This work presents a new solution to the motion cueing problem in which, instead of a human designer specifying the principles of the MCA, an artificial intelligence (AI) learns the optimal motion by trial and error in interaction with the MSP. To achieve this, a well-established deep reinforcement learning (RL) algorithm is applied, in which an agent interacts with an environment formulated as a Markov decision process (MDP). This allows the agent to directly control a simulated MSP and obtain feedback on its performance in terms of platform workspace usage and the motion acting on the simulator user. The RL algorithm used is proximal policy optimization (PPO), in which the value function and the policy corresponding to the control strategy are learned, both represented by artificial neural networks (ANNs). This approach is implemented in Python, and its functionality is demonstrated on the practical example of pre-recorded lateral maneuvers.
The subsequent validation on a standardized double lane change shows that the RL algorithm learns the control strategy and improves the quality of the immersion compared to an established method, while enhancing the realistic driving motion sensation. Both the perceived translational accelerations and the rotational angular velocities, determined under consideration of the vestibular system, are reproduced more accurately, and the resources of the MSP are used more economically.

History

Journal

IEEE Transactions on Vehicular Technology

Volume

PP

Pagination

1-11

Location

Piscataway, N.J.

ISSN

0018-9545

eISSN

1939-9359

Language

eng

Publication classification

C1 Refereed article in a scholarly journal

Issue

99

Publisher

Institute of Electrical and Electronics Engineers
