Challenges of Aligning Artificial Intelligence with Human Values
Author(s): Margit Sutrop
Subject(s): Ethics / Practical Philosophy, Philosophy of Science, Social development, Social Informatics
Published by: Tallinna Tehnikaülikooli õiguse instituut
Keywords: artificial intelligence (AI); artificial general intelligence (AGI); AI ethics; moral agent; moral principles; superintelligence (SAI); value; value alignment; value pluralism
Summary/Abstract: As artificial intelligence (AI) systems become increasingly autonomous and will soon be able to decide on their own what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two interrelated kinds of challenges: a technical one and a normative one. The technical challenge concerns how to encode human values in artificial intelligence. The normative challenge is associated with two questions: which values should artificial intelligence align with, and whose values should these be? My concern is that AI developers underestimate the difficulty of answering the normative question. They hope that we can easily identify the purposes we really desire and that they can then focus on designing AI to pursue those objectives. But how are we to decide which objectives or values to instil in AI, given that there is a plurality of values and moral principles and that our everyday life is full of moral disagreements? In my paper I show that although it is not realistic to reach an agreement on what we, humans, really want, since people value different things and seek different ends, it may be possible to agree on what we do not want to happen, considering the possibility that intelligence equal to our own, or even exceeding it, may be created. I argue for pluralism (and not for relativism!), which is compatible with objectivism. Despite the fact that there is no uniquely best solution to every moral problem, it is still possible to identify which answers are wrong. And this is where we should begin the value alignment of AI.
Journal: Acta Baltica Historiae et Philosophiae Scientiarum
- Issue Year: 8/2020
- Issue No: 2
- Page Range: 54-72
- Page Count: 19
- Language: English