An astronomer, a novelist, a painter, a physicist/intelligence analyst, an anthropologist, an economist/law professor and I walk into a bar…. Well, actually it wasn't a bar and we weren't walking – we just sat down to breakfast.
One of our number mentions the elephant in the room – is what AI can do, or may someday be able to do, something we might categorize as creative? As new knowledge? Or is it only ever going to be able to recapitulate what humans have already produced? At best a tool that helps human genius, or even just human ingenuity, do more than it might otherwise be able to do?
One take at the table, if I understood it correctly, was that the “greatness” of a scientific theory or work of art (think Moby Dick as one example) is something that emerges in/over time as humans appreciate its significance, its meaning, what it says about the human condition. This reminds me of W. Iser’s reader-response theory: the meaning of a work is not IN the text, but rather an event in which the fixedness of the text interacts with the reader’s world of experience to yield meaning. Extending Iser a bit, we might note that the meaning is collective rather than merely subjective because it is constrained by the artifact of the text (and its relationship to other texts and language and its production by a person with experience in a shared world) and because the world/self/experience that the reader brings to the process is itself partly shared. These two sets of constraints embed the object in a rich web of shared meaning.
Continuing this line of thinking, we might posit that the artifact produced by a human has on or in it traces of the creator as an entity that is simultaneously biographically unique and a co-participant in the social, where “social” ranges from the shared experience of embodiment to the daily weather of micro interactions to the macro drama of history/culture short and history/culture long.
Point number one from this is the idea that the work has something special IN it that a machine could not put into something it created. [It can’t put it in because it is not human and I can’t get it out because it is not human.]
This raises two questions for me. How is that something special inserted or included in the artifact that I encounter? And do I experience that something special in a manner that transcends processing the sensory content (pixels, notes, narratives, etc.)?
Question 1. Even though it is encoded in the words or the brush strokes or the notes – all patterns that can be explicitly described and could in theory be learned – do I think that it, the human magic, could not be generated by a machine because it comes FROM something with human difference? That is, it’s the human character of the creator that generates the something special, and I do not believe machines can “be human,” so they cannot generate this something special and include it in their output. Even if you figure out what it was that Picasso “put into” Portrait of Dora Maar, you can’t just put that into another artifact and thereby transform it into “art.” And, further, even if you could study a lot of Picasso’s paintings and figure out what it was that generated the “artness” of the work, the next creative piece does not just recapitulate what came before; it extends it and creates new zones in meaning space.
Machines can’t be creative because they do not bring to the act of production the experience of being human.
What about on the reception side? Do I experience the something special in some manner that transcends the (e.g., digitizable) materiality of the artifact? I think I might do this by virtue of my taking the artifact as the product of another human mind. I apprehend it from the get-go as meaning-containing. This can be at the prosaic level of “what is the creator trying to say?” or the more lofty “what does this tell us about the human condition (regardless of whether or not the creator had fully appreciated it)?” Regardless of how one sees “the problem of other minds,” we can, perhaps, stipulate that taking an artifact as the product of another mind/self/world with properties like the one I am familiar with (my own) imbues it with “something special.”
But it’s very easy for humans to be wrong about such things; I can imbue something with meaning, hearing what its author is saying to me, even when that thing is not in fact a human creation. We anthropomorphize things all the time, and although we are taken aback when we discover that a thing with which we interacted assuming it to be the product of another mind is in fact not, I don’t think we want to characterize this as a simple error. To me it suggests that the reception is itself a human creative act. Echoes of the Thomas theorem: if people take something as real, it is thereby real in its consequences, and so it is real. I’m not going that far, but I do think this establishes the idea that the question of what we are to make of the output of an AI won’t be answered only by looking into the essences of the output itself and the processes that gave rise to it.
[Giant swath of 20th/21st century literary theory coming to mind here.]
I started with the title “We Breakfast” because the conversation left me pondering the question of how the “we” that talks about AI – about how it ought to be handled and treated, what it should be allowed to do, and what projects its enthusiasts should pursue – is organized. I think we almost always too blithely assume that the “we” in such talk (“we shouldn’t let this happen” or “we know that we value X”) is well constructed, at the ready as a collective intelligence and agency that’s free (free in the sense of not needing to be paid for). In fact, I think “we” is a gigantic and ongoing construction project that absorbs a large part of our energy budget and is mostly a rickety machine, with only a few very locally optimized or even reasonably well-running instances.
But more on all these things to come.