Questioning the Role of Moral AI as an Adviser within the Framework of Trustworthiness Ethics
Author(s): Silviya Serafimova
Subject(s): Anthropology, Philosophy, Social Sciences, Education, History of Philosophy, Philosophical Traditions, Epistemology, Semiology, Ethics / Practical Philosophy, Social Philosophy, Special Branches of Philosophy, Contemporary Philosophy, Philosophy of Science, Culture and Social Structure, Vocational Education, Adult Education, Phenomenology, Hermeneutics, Inclusive Education / Inclusion, Distance Learning / E-learning
Published by: Национално издателство за образование и наука „Аз-буки“
Keywords: trustworthiness ethics; rational, affective and normative accounts of trust; moral AI as an adviser; HA-AA trust relationships
Summary/Abstract: The main objective of this article is to demonstrate why, despite the growing interest in justifying AI’s trustworthiness, one can argue only for AI’s reliability. By analyzing why trustworthiness ethics in Nickel’s sense provides some well-grounded hints for rethinking the rational, affective, and normative accounts of trust with respect to AI, I examine some concerns about the trustworthiness of Savulescu and Maslen’s model of moral AI as an adviser. Specifically, I tackle one of its exemplifications, Klincewicz’s hypothetical scenario of John, which is refracted through the lens of the HLEG’s fifth requirement of trustworthy artificial intelligence (TAI), namely that of diversity, non-discrimination, and fairness.
Journal: Философия
- Issue Year: 30/2021
- Issue No: 4
- Page Range: 402-412
- Page Count: 11
- Language: English