Thinking about an Article on AI

In “Importance and limitations of AI ethics in contemporary society” (Humanities and Social Sciences Communications, August 2022), Tomas Hauer argues that AI research raises some thorny ethical issues.

  1. Treating software agents as persons “breaks down the entire existing legal notion of what defines personhood” (2.7). This just isn’t true. The invention of the corporation put these questions on the table long ago, and we have well-established ways of dealing with them. One might debate whether those arrangements are good ones, but there is not much “this changes everything” here.
  2. “The ethical requirements for trustworthy AI should be incorporated into every step of the AI algorithm development process, from research, data collection, initial design phases, system testing, and deployment and use in practice” (3a.4). This is a proclamation, not an argument. What is it about AI and the way it is developed that suggests it ought to be regulated in a particular way?
  3. He next makes the point that some problems are too hard to actually solve. Well, OK. But this doesn’t indict any particular technology; it implies that some folks are promising too much and should be humbled (a strawperson argument).
  4. At 4b.6 he suggests that authorship of AI-produced artifacts is a problem. There is nothing special about computer-generated works here. If I create something as an employee, my company might own it; if I collaborate with others, I may or may not make them co-authors; if I create a film, complex negotiations among the agents of the producers, directors, actors, and technicians will determine how credit for creation is given.
  5. At 5a.5–5a.6 he raises the issue of liability for actions and the problem of distributing unavoidable harm, which “cannot be solved purely technically.” Liability will be an issue; it always is. Autonomous or semi-autonomous devices may need to “decide” how to distribute unavoidable harm, but we do this all the time. He says we “…cannot help but to turn to ethicists…” — yet the ethicists don’t have THE answer.
  6. He proposes, indirectly, that the more autonomous the product, the less likely the manufacturer can be held responsible for harms the product might do. But then he says this simply cannot stand in the face of the principle of strict liability. Liability regimes, though, are socially negotiated things. They are not laws of physics.
  7. Extended discussion of self-driving cars and the question of what kinds of values to build in. But all of these are really quite standard policy-choice questions.
  8. This piece, like so many others, takes a sort of “hair on fire” attitude and suggests that AI raises such thorny problems that maybe we should put the brakes on. But that stance ignores the way society actually structures its response to risk, harm, liability, responsibility, and credit. It reflects a failure to really understand either the technology or the relevant social science and institutional arrangements.

Author: Dan Ryan

I'm currently an Academic Program Director. I've been a professor at the University of Toronto, the University of Southern California, and Mills College, teaching things like human-centered design, computational thinking, modeling for the policy sciences, and social theory. I'm driven by the desire to figure out how to teach twice as many, twice as well, twice as easily.
