Interdependence, Morality and Human-Machine Teams: The Revenge of the Dualists

Author(s): W.F. Lawless
Subject(s): Methodology and research technology
Published by: Scientia Moralitas Research Institute
Keywords: Interdependence; teams; subadditivity

Summary/Abstract: Experience teaches that appearances can mislead, that deception frequents human affairs, and that even reliable people misbehave. Social scientists, however, hold that the convergence of concepts derived from individuals' intuitions (observations, self-reports, interviews) about social reality determines their primary model of the rational (social) world; i.e., what humans say they see is what exists; or, words matter; or, humans act as they cognitively think. Yet based on these models, the social sciences have accrued so many failures across the decades in building predictive theory that a theory of teams has until now been unimaginable, including in economics, where results re-labeled as irrational have won Nobel prizes but without a foundational theory. Concepts based on the individual seemingly promote transient norms by which to judge morality; e.g., the passing fad of self-esteem, the newer fad of implicit racism, the old fad of positive thinking. And yet irrational and biased humans in freely organized, competitive teams manage to innovate year after year. In contrast to traditional social science, the most predictive theory in all of science is quantum theory, each of its predictions confirmed by new discoveries leading to further predictions and discoveries; yet the dualist nature of quantum theory has left the meaning of physical reality contested despite more than a century of intense debate. By setting meaning aside, we introduce to the science of teams the quantum-like dualism of interdependence, in which social objects co-exist in orthogonal states. To judge the ethics of Artificial Intelligence (AI), our theory of interdependence makes successful predictions and new discoveries about human teams: it accounts for the poor performance of interdisciplinary science teams, explains why highly interdependent teams cannot be copied, and begins to address the newly arising problem of shared context for human-machine teams.

  • Issue Year: 4/2019
  • Issue No: 1
  • Page Range: 31-50
  • Page Count: 19
  • Language: English