Optimising discrete event simulation models using a reinforcement learning agent
Creighton, Douglas and Nahavandi, Saeid 2002, Optimising discrete event simulation models using a reinforcement learning agent, in WSC 2002 : Exploring new frontiers : Proceedings of the 34th Conference on Winter Simulation, IEEE Xplore, Piscataway, N.J., pp. 1945-1950.
Editor(s): Yucesan, E.; Chen, C.-H.; Snowdon, J.L.; Charnes, J.M.
A reinforcement learning agent has been developed to determine optimal operating policies in a multi-part serial line. The agent interacts with a discrete event simulation model of a stochastic production facility. This study identifies issues important to the simulation developer who wishes to optimise a complex simulation or develop a robust operating policy. It also investigates the critical parameters involved in 'tuning' the agent quickly so that it rapidly learns the behaviour of the system.
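The setup the abstract describes — an agent repeatedly querying a stochastic simulation model and learning which operating policy yields the best return — can be illustrated with a minimal sketch. This is not the authors' implementation: the two-machine serial line, the processing probabilities, the candidate policies (buffer capacities), and the bandit-style value update are all illustrative assumptions.

```python
import random

def simulate_serial_line(buffer_capacity, rng, parts=200):
    """Toy discrete event model of a two-machine serial line.

    Machine 1 feeds a finite buffer; machine 2 drains it. Each
    machine completes a part in a time step with some probability
    (rates are illustrative). Returns throughput over the run.
    """
    buffer = 0
    completed = 0
    for _ in range(parts):
        if buffer < buffer_capacity and rng.random() < 0.9:
            buffer += 1                 # machine 1 finishes a part
        if buffer > 0 and rng.random() < 0.8:
            buffer -= 1                 # machine 2 finishes a part
            completed += 1
    return completed

def epsilon_greedy_optimise(actions, episodes=500, epsilon=0.1, seed=0):
    """Simple reinforcement learning loop over candidate policies.

    Each action is a candidate operating policy (here, a buffer
    capacity). The agent learns each policy's value from noisy
    simulation returns via an incremental mean, balancing
    exploration and exploitation with epsilon-greedy selection.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}       # estimated return per policy
    n = {a: 0 for a in actions}         # visit counts
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(actions)     # explore a random policy
        else:
            a = max(q, key=q.get)       # exploit the current best
        reward = simulate_serial_line(a, rng)
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]  # incremental mean update
    best = max(q, key=q.get)
    return best, q
```

The 'tuning' parameters the abstract refers to correspond here to quantities such as `epsilon` and the episode budget: too little exploration and the agent settles on a suboptimal policy; too much and it learns the system slowly.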
Unless expressly stated otherwise, the copyright for items in Deakin Research Online is owned by the author, with all rights reserved.
Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO.
If you believe that your rights have been infringed by this repository, please contact firstname.lastname@example.org.