Bias-regularised neural-network metamodelling of insurance portfolio risk

Luo, Wei, Mashrur, Akib, Robles-Kelly, Antonio and Li, Gang 2020, Bias-regularised neural-network metamodelling of insurance portfolio risk, in IJCNN : Proceedings of the 2020 International Joint Conference on Neural Networks, Institute of Electrical and Electronics Engineers (IEEE), Piscataway, N.J., doi: 10.1109/ijcnn48605.2020.9207375.

Title Bias-regularised neural-network metamodelling of insurance portfolio risk
Author(s) Luo, Wei (ORCID: orcid.org/0000-0002-4711-7543)
Mashrur, Akib
Robles-Kelly, Antonio (ORCID: orcid.org/0000-0002-2465-5971)
Li, Gang (ORCID: orcid.org/0000-0003-1583-641X)
Conference name International Joint Conference on Neural Networks (IJCNN 2020)
Conference location Glasgow, United Kingdom (held online)
Conference dates 19-24 July 2020
Title of proceedings IJCNN : Proceedings of the 2020 International Joint Conference on Neural Networks
Publication date 2020
Total pages 8
Publisher Institute of Electrical and Electronics Engineers (IEEE)
Place of publication Piscataway, N.J.
Keyword(s) variable annuity metamodelling; expected bias; percentage error; CORE2020 A
Summary Deep learning models have attracted considerable attention in the metamodelling of financial risks for large insurance portfolios. These models, however, are generally trained without regard to the collective nature of the data in the portfolio under study. Consequently, the training procedure often suffers from slow convergence, and the trained model often has poor accuracy. This is particularly evident in the presence of extreme individual contracts. In this paper, we advocate the view that the training of a metamodel for a portfolio should be guided by portfolio-level metrics. In particular, we propose an intuitive loss regulariser that explicitly accounts for portfolio-level bias. Further, this regulariser can be easily implemented with the minibatch stochastic gradient descent commonly used to train deep neural networks. Empirical evaluations on both simulated data and a benchmark dataset show that the regulariser yields more stable training, resulting in faster convergence and more reliable portfolio-level risk estimates.
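To make the summary's central idea concrete, the sketch below shows one plausible way to implement such a portfolio-bias regulariser as a minibatch loss in PyTorch. This is an illustrative reading of the abstract, not the paper's implementation: the specific penalty (the squared mean residual of a minibatch, used as a proxy for portfolio-level bias), the function name bias_regularised_mse, and the weight lam are assumptions for demonstration.

import torch

def bias_regularised_mse(pred: torch.Tensor, target: torch.Tensor,
                         lam: float = 1.0) -> torch.Tensor:
    # Per-contract squared error, as in ordinary metamodel training.
    residual = pred - target
    mse = residual.pow(2).mean()
    # Squared mean residual of the minibatch: if the network
    # systematically over- or under-estimates contract values, the
    # aggregated portfolio risk estimate is biased even when the
    # per-contract MSE looks acceptable.
    portfolio_bias = residual.mean().pow(2)
    # lam trades off pointwise accuracy against portfolio-level bias;
    # its value here is illustrative, not taken from the paper.
    return mse + lam * portfolio_bias

# Minimal usage with minibatch SGD (synthetic data for illustration):
model = torch.nn.Linear(10, 1)
optimiser = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(256, 10), torch.randn(256, 1)
for batch in torch.split(torch.arange(256), 32):
    optimiser.zero_grad()
    loss = bias_regularised_mse(model(x[batch]), y[batch])
    loss.backward()
    optimiser.step()

Because the mean residual is computed per minibatch, the penalty drops directly into a standard minibatch SGD loop, consistent with the implementation property claimed in the summary.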
ISBN 978-1-7281-6926-2
Language eng
DOI 10.1109/ijcnn48605.2020.9207375
Indigenous content off
HERDC Research category E1 Full written paper - refereed
Copyright notice ©2020, IEEE
Persistent URL http://hdl.handle.net/10536/DRO/DU:30143921
