Thinking about an Article on AI

In “Importance and limitations of AI ethics in contemporary society” (Humanities and Social Sciences Communications, August 2022), Tomas Hauer says AI research raises some thorny ethical issues.

  1. Treating software agents as persons “breaks down the entire existing legal notion of what defines personhood” (2.7). This is simply not true. The invention of the corporation put the legal notion of personhood into question long ago, and we have plenty of well-established ways of dealing with it. One might have opinions about whether those ways are good or not, but there is not much “this changes everything” here.
  2. “The ethical requirements for trustworthy AI should be incorporated into every step of the AI algorithm development process, from research, data collection, initial design phases, system testing, and deployment and use in practice” (3a.4). This is a proclamation, not an argument. What is it about AI and the way it is developed that suggests it ought to be regulated in a particular way?
  3. He next makes the point that some problems are too hard to actually solve. Well, OK. But this doesn’t indict any particular technology; it implies that some folks are promising too much and should be humbled (a strawperson argument).
  4. At 4b.6 he suggests authorship of AI-produced artifacts is a problem. There is nothing special about computer-generated works here. If I create something as an employee, my company might own it; if I collaborate with others, I may or may not make them co-authors; and if I create a film, complex negotiations among the agents of the producers, directors, actors, and technicians will determine how credit for creation is given.
  5. He raises the issue of liability for actions (5a.5) and the problem of the distribution of unavoidable harm that cannot be solved purely technically (5a.6). Liability will be an issue; it always is. Autonomous or semi-autonomous devices may need to “decide” how to distribute unavoidable harm; we do this all the time. He writes that we “…cannot help but to turn to ethicists…”, but the ethicists don’t have THE answer.
  6. He proposes, indirectly, that the more autonomous the product, the less likely the manufacturer could be held responsible for harms the product might do. But then he says this simply cannot stand in the face of the principle of strict liability. Yet liability regimes are socially negotiated things; they are not laws of physics.
  7. He offers an extended discussion of self-driving cars and the question of what kind of values to build in. But all of these are really quite standard policy-choice questions.
  8. This piece, like so many others, takes a sort of “hair on fire” attitude and suggests that AI raises such thorny problems that maybe we should put the brakes on. But that stance ignores the way society actually structures its responses to risk, harm, liability, responsibility, and credit. It reflects a real understanding of neither the technology nor the social and institutional science.

Author: Dan Ryan

I've been an Academic Program Director at MinervaProject.com and a professor at the University of Toronto, the University of Southern California, and Mills College, teaching things like human-centered design, computational thinking, modeling for the policy sciences, and social theory. My current mission is to figure out how to reorganize higher education and exploit technology so that we can teach twice as many, twice as well, twice as easily.

One thought on “Thinking about an Article on AI”

  1. My thoughts on (some of) your comments:
    1.) Although your comment is somewhat correct, you’re omitting the fact that corporations always have responsible natural persons at the steering wheel, which in the end makes them liable in case of harm (depending on national laws, of course). The situation with AI doing something harmful is quite different; at least it is more complex to find out who is responsible/liable.
    2.) True. But the “proclamation” is important, nevertheless.
    4.) Again, his point is not false. It is, or may be, difficult to decide who is to be credited if anything like a “creation” is the outcome of AI work or AI-based work. It is the other side of liability in case of harm. Lots of labor for the legal people.
    5.) If I understand this correctly, you do concede his point.
    6.) Again, you do not factually argue against his point. Liability laws, like all societal laws, are made as part of the social contract and changed according to what we regard as “progress”. So he points toward the question of the growing distance between the “manufacturer” of AI devices and the “outcome” of the use of those devices. It is a similar argument to the liability question for weapons manufacturers, and it has already been dealt with extensively in the sphere of AI and military equipment…
    7.) True again. Here we are approaching the question of the “reliability” of AI devices in fulfilling their purpose. How far is the driver’s responsibility reduced? What can he trust the driving device to do? Still, regarding the algorithms, there has to be some decision-making beforehand: e.g., in case of an unavoidable accident detected by the system, to whom is the primary loyalty of the system oriented: the driver, the person in the passenger seat, or the person or people endangered ahead? Should it leave the road and endanger the people in the AI-driven car, or collide with whatever is ahead (even if it is only a “soft” person), because this gives maximum protection to the people in the car?
    8.) Perhaps my hints make it possible for you to understand why I find myself quite on the side of the author’s “be cautious” arguments.
