File(s) under permanent embargo
Human-aligned artificial intelligence is a multiobjective problem
journal contribution
posted on 2018-03-01 authored by P. Vamplew, Richard Dazeley, C. Foale, S. Firmin, J. Mummery

© 2017, Springer Science+Business Media B.V.

As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action-selection can overcome many of the issues which limit MEU's effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.
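The contrast the abstract draws between scalar MEU and multiobjective selection can be sketched in a few lines. The utility values, objective names, and safety threshold below are illustrative assumptions, not taken from the paper: a linear weighted sum (scalar MEU) can favour an action that a non-linear, thresholded rule over the same vector utilities would reject.

```python
# Minimal sketch (illustrative values, not from the paper): scalar MEU via
# linear scalarisation versus a non-linear selection rule over vector
# utilities of the form (task performance, safety).
utilities = {
    "aggressive": (10.0, 0.2),
    "balanced":   (6.0, 0.8),
    "cautious":   (3.0, 1.0),
}

def linear_meu(utils, weights):
    """Scalar MEU: collapse each vector with a weighted sum, then maximise."""
    return max(utils, key=lambda a: sum(w * u for w, u in zip(weights, utils[a])))

def thresholded_selection(utils, safety_min):
    """Non-linear rule: discard actions whose safety falls below a threshold,
    then maximise task performance among those remaining."""
    safe = {a: u for a, u in utils.items() if u[1] >= safety_min}
    return max(safe, key=lambda a: safe[a][0])

print(linear_meu(utilities, (1.0, 1.0)))      # -> "aggressive"
print(thresholded_selection(utilities, 0.5))  # -> "balanced"
```

No choice of fixed linear weights reproduces the thresholded behaviour here, which is the kind of limitation of linear scalarisation that motivates non-linear action selection over vector utilities.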
History
Journal
Ethics and Information Technology
Volume
20
Issue
1
Pagination
27-40
Publisher
Springer
Location
New York, N.Y.
Publisher DOI
ISSN
1388-1957
eISSN
1572-8439
Language
eng
Publication classification
C1.1 Refereed article in a scholarly journal; C Journal article
Copyright notice
2017, Springer Science+Business Media B.V.
Keywords
Social Sciences; Science & Technology; Arts & Humanities; Technology; Ethics; Information Science & Library Science; Philosophy; Social Sciences - Other Topics; Aligned artificial intelligence; Value alignment; Maximum Expected Utility; Reward engineering; Optimization; Limitations; Artificial Intelligence and Image Processing