Thinking about an Article on AI

In “Importance and limitations of AI ethics in contemporary society” (Humanities and Social Sciences Communications, August 2022), Tomas Hauer argues that AI research raises some thorny ethical issues.

  1. Treating software agents as persons “breaks down the entire existing legal notion of what defines personhood” (2.7). This is really just not true. The invention of the corporation put such questions on the table long ago, and we have lots of ways of dealing with them. One might have opinions about whether those ways are good or not, but there is not much “this changes everything” here.
  2. “The ethical requirements for trustworthy AI should be incorporated into every step of the AI algorithm development process, from research, data collection, initial design phases, system testing, and deployment and use in practice” (3a.4). This is a proclamation, not an argument. What is it about AI and the way it is developed that suggests it ought to be regulated in a particular way?
  3. He next makes the point that some problems are too hard to actually solve. Well, OK. But this doesn’t indict any particular technology; it implies that some folks are promising too much and should be humbled (a strawperson argument).
  4. At 4b.6 he suggests authorship of AI-produced artifacts is a problem. There is nothing special about computer-generated works here. If I create something as an employee, my company might own it; if I collaborate with others, I may or may not make them co-authors; if I create a film, complex negotiations among the agents of the producers, directors, actors, and technicians will determine how credit for creation is given.
  5. At 5a.5–5a.6 he raises the issue of liability for actions and the problem of distributing unavoidable harm, which cannot be solved purely technically. Liability will be an issue; it always is. Autonomous or semi-autonomous devices may need to “decide” how to distribute unavoidable harm, but we make such decisions all the time. We “…cannot help but to turn to ethicists…” but the ethicists don’t have THE answer.
  6. He proposes, indirectly, that the more autonomous the product the less likely the manufacturer could be held responsible for harms the product might do. But then he says this simply cannot stand in the face of the principle of strict liability. But liability regimes are socially negotiated things. They are not laws of physics.
  7. Extended discussion of self-driving cars. Question of what kind of values to build in. But all of these are really quite standard policy choice questions.
  8. This piece, like so many others, takes a sort of “hair on fire” attitude and suggests that AI raises such thorny problems that maybe we should put the brakes on. But the context is one that’s ignorant of the way society actually structures its response to risk and harm and liability and responsibility and credit. It reflects not really understanding the technology and not really understanding the social science and institutional science.

Automobile Ethics

Imagine you were around when cars first came on the scene. You were a person of vision and influence. You could see that these machines would change the world. And so you assembled other visionary and influential people and together you formulated the automobile industry code of ethics. At their initial meeting, the group coalesced around the idea that automobiles should be developed, deployed, and used with an ethical purpose and based on respect for fundamental rights, taking into account societal values and ethical principles of beneficence, non-maleficence, human autonomy, justice, and explainability. The group agreed to come back in a year and develop an ethical and legal framework for automobiles.

We Breakfast

An astronomer, a novelist, a painter, a physicist/intelligence analyst, an anthropologist, an economist/law professor, and I walk into a bar … well, actually not a bar, and we weren’t walking – we just sat down to breakfast.

One of our number mentions the elephant in the room – is what AI can, or may some day be able to, do something we might categorize as creative? As new knowledge? Or is it only ever going to be able to recapitulate what humans have already produced? At best being a tool that helps human genius, or even just human ingenuity, do more than it might otherwise be able to do?

One take at the table, if I understood it correctly, was that the “greatness” of a scientific theory or work of art (think Moby Dick as one example) is something that emerges in/over time as humans appreciate its significance, its meaning, what it says about the human condition. This reminds me of W. Iser’s reader-response theory: the meaning of a work is not IN the text, but rather an event in which the fixedness of the text interacts with the reader’s world of experience to yield meaning. Extending Iser a bit, we might note that the meaning is collective rather than merely subjective because it is constrained by the artifact of the text (and its relationship to other texts and language, and its production by a person with experience in a shared world) and because the world/self/experience that the reader brings to the process is itself partly shared. These two sets of constraints embed the object in a rich web of shared meaning.

Continuing this line of thinking, we might posit that the artifact produced by a human has on or in it traces of the creator as an entity that is simultaneously biographically unique and a co-participant in the social, where “social” ranges from shared experience of embodiment to the daily weather of micro interactions to the macro drama of history/culture short and history/culture long.

Point number one from this is the idea that the work has something special IN it that a machine could not put into something it created. [It can’t put it in because it is not human and I can’t get it out because it is not human.]

This raises two questions for me. How is that something special inserted or included in the artifact that I encounter? And do I experience that something special in a manner that transcends processing the sensory content (pixels, notes, narratives, etc.)?

Question 1. Even though it is encoded in the words or the brush strokes or the notes – all patterns that can be explicitly described and could in theory be learned – do I think that the human magic could not be generated by a machine because it comes FROM something with human difference? That is, it’s the human character of the creator that generates the something special, and I do not believe machines can “be human,” so they cannot generate this something special and include it in their output. Even if you figure out what it was that Picasso “put into” Portrait of Dora Maar, you can’t just put that into another artifact and thereby transform it into “art.” And further, even if you could study a lot of Picasso’s paintings and figure out what it was that generated the “artness” of the work, the next creative piece does not just recapitulate what came before; it extends it and creates new zones in meaning space.

Machines can’t be creative because they do not bring to the act of production the experience of being human.

What about on the reception side? Do I experience the something special in some manner that transcends the (e.g., digitizable) materiality of the artifact? I think I might do this by virtue of my taking the artifact as the product of another human mind. I apprehend it from the get-go as meaning-containing. This can be at the prosaic level of “what is the creator trying to say?” or the more lofty “what does this tell us about the human condition (regardless of whether or not the creator had fully appreciated it)?” Regardless of how one sees “the problem of other minds,” we can, perhaps, stipulate that taking an artifact as the product of another mind/self/world with properties like the one I am familiar with (my own) imbues it with “something special.”

But it’s very easy for humans to be wrong about such things; I can imbue something with meaning, hearing what its author is saying to me, even when that thing is not in fact a human creation. We anthropomorphize things all the time, and although we are taken aback when we discover that a thing with which we interacted assuming it to be the product of another mind is in fact not, I don’t think we want to characterize this as a simple error. To me it suggests that the reception is itself a human creative act. Echoes of the Thomas theorem: if people define something as real, it is real in its consequences – and so it is real. I’m not going that far, but I do think this establishes the idea that the question of what we are to make of the output of an AI won’t be answered only by looking into the essences of the output itself and the processes that gave rise to it.

[Giant swath of 20th/21st century literary theory coming to mind here.]

I started with the title “We Breakfast” because the conversation left me pondering how the “we” that talks about AI – about how it ought to be handled and treated, what it should be allowed to do, and what projects its enthusiasts should pursue – is organized. I think we almost always too blithely project the idea of the reality of “we” (“we shouldn’t let this happen” or “we know that we value X”) as being well constructed, at the ready as a collective intelligence and agency that’s free (free in the sense of not needing to be paid for). In fact, I think “we” is a gigantic and ongoing construction project that absorbs a large part of our energy budget and is mostly a rickety machine with only very locally optimized or even reasonably well-running instances.

But more on all these things to come.

Machine Learning and Teaching

I just responded to an unsolicited email from a consultant working for Pearson publishing – perhaps you received one too. The sender was requesting my participation in the following scheme:

They provide five essay questions that I can assign to my students. My students enter the essays through an online portal. The essays will then be graded by “subject area experts” and the grades and comments will be returned to me – I am free to pass these on to students or use them as I like. For my trouble: “you would have a couple of essays graded for you. Also, Pearson will pay you $100.”

They will use the students’ work to “build the bank of student essays needed to develop the product.” The product is a “computer-assisted grading program that will support you and your students when assigning short writing assignments.”

What they are up to, one suspects, is developing a training corpus for machine learning algorithms. It’s a relatively straightforward classification problem. They don’t need to figure out what makes a good answer to a given essay question – if they have enough human-evaluated examples, they can train the machine to do just as well as the humans. Just as well, that is, as the “subject area experts” they hire.
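To make the point concrete, here is a minimal sketch of what such a supervised grading pipeline might look like. It trains a crude nearest-centroid classifier over bag-of-words counts of human-graded essays and grades a new essay by cosine similarity. Everything here – the toy corpus, the grade labels, the feature scheme – is invented for illustration; a real product would use far richer features and thousands of graded examples, but the structure is the same: human judgments in, a model that mimics those judgments out.

```python
from collections import Counter
import math

def features(text):
    """Bag-of-words counts -- a crude stand-in for real feature extraction."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(corpus):
    """corpus: list of (essay_text, grade) pairs from human graders.
    Builds one centroid of word counts per grade label."""
    centroids = {}
    for text, grade in corpus:
        centroids.setdefault(grade, Counter()).update(features(text))
    return centroids

def predict(centroids, essay):
    """Assign the grade whose centroid is most similar to the essay."""
    f = features(essay)
    return max(centroids, key=lambda g: cosine(centroids[g], f))

# Invented toy corpus; the real product would need thousands of graded essays.
corpus = [
    ("the theme of obsession drives the whole narrative arc", "A"),
    ("the narrative explores obsession and its consequences in depth", "A"),
    ("the book is about a whale and a boat", "C"),
    ("it is a story about a whale", "C"),
]
model = train(corpus)
print(predict(model, "obsession shapes the narrative"))  # prints "A"
```

The machine never needs a theory of good writing: given enough labeled examples, pattern-matching against the graders’ past decisions is all the “expertise” required.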

In my email response to the consultant I raised a different question: how much do they plan to compensate the students whose copyrighted intellectual property they are asking me to help them obtain for the development of a commercial product? I asked what advice their lawyers had given them regarding the commercial use of material that students are compelled to produce and submit as a requirement of a class.

Would you require your students to send their work to Pearson? Would you accept payment for doing so? Even if this is considered fair use under copyright law*, should institutions and instructors be in the business of building up Pearson’s content for a product that Pearson will then turn around and sell back to us?

Personally, I say no thanks. Seems to me just one more step toward making colleges mere franchises and storefronts for educational publishers. It’s too bad we are not collectively producing tools like this for the public benefit rather than being co-opted into contributing to the progressive privatization of pedagogy.

And I think I’ll start recommending that my students consider appending a CC BY-NC 4.0 license to work they are willing to share.

*A similar question has arisen in connection with Turnitin, a service that checks for plagiarism. That company has prevailed so far in lawsuits claiming it makes illegal use of copyrighted student material.

See Also