Scalar reward is not enough: a response to Silver, Singh, Precup and Sutton (2021)
Version 2 2024-06-06, 10:18
Version 1 2022-09-29, 09:30
journal contribution
posted on 2024-06-06, 10:18, authored by P Vamplew, BJ Smith, J Källström, G Ramos, R Rădulescu, DM Roijers, CF Hayes, F Heintz, P Mannion, PJK Libin, Richard Dazeley, C Foale
Abstract
The recent paper “Reward is Enough” by Silver, Singh, Precup and Sutton posits that the concept of reward maximisation is sufficient to underpin all intelligence, both natural and artificial, and provides a suitable basis for the creation of artificial general intelligence. We contest the underlying assumption of Silver et al. that such reward can be scalar-valued. In this paper we explain why scalar rewards are insufficient to account for some aspects of both biological and computational intelligence, and argue in favour of explicitly multi-objective models of reward maximisation. Furthermore, we contend that even if scalar reward functions can trigger intelligent behaviour in specific cases, this type of reward is insufficient for the development of human-aligned artificial general intelligence due to unacceptable risks of unsafe or unethical behaviour.
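The abstract's central contrast between scalar and multi-objective reward can be sketched in a few lines. This is an illustrative example, not code from the paper: the objective names (task performance, safety), the candidate outcomes, and the weights are all assumptions chosen to show how a linear scalarisation can hide a trade-off that a Pareto-dominance comparison preserves.

```python
def scalarise(reward_vector, weights):
    """Collapse a vector reward into a single scalar via linear weighting."""
    return sum(r * w for r, w in zip(reward_vector, weights))

def pareto_dominates(a, b):
    """True if vector reward a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

# Two candidate outcomes scored on (task performance, safety) -- hypothetical values.
outcome_a = (0.9, 0.2)   # high performance, poor safety
outcome_b = (0.7, 0.8)   # moderate performance, good safety

# Under a performance-heavy scalarisation, outcome_a is preferred...
weights = (0.8, 0.2)
prefers_a = scalarise(outcome_a, weights) > scalarise(outcome_b, weights)

# ...but in the multi-objective view neither outcome Pareto-dominates the
# other: the safety/performance trade-off remains visible rather than being
# baked into a fixed choice of weights.
neither_dominates = (not pareto_dominates(outcome_a, outcome_b) and
                     not pareto_dominates(outcome_b, outcome_a))
```

The point of the sketch is that the scalar preference flips entirely with the choice of weights, whereas the vector comparison makes explicit that a genuine trade-off exists between the objectives.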