Authentic Interdisciplinarity

What if we took the production of technologically literate humanists and humanistically enlightened engineers seriously? 

Just a Thought

One month before I was born, C.P. Snow delivered a lecture in which he lamented the wide gulf between “two cultures” – science and the humanities.  Today it is easy to find enough scientists who are humanistically ignorant and humanists who are scientifically ignorant to conclude that the scene has not much changed in six decades.

To be fair, we talk a lot about how scientists and engineers need to learn about ethics and social impact and how everyone should learn more STEM (sometimes because JOBS! and sometimes because democratic participation in a tech-drenched world).   

But our methods for implementing these things are pretty trite.  The stereotypical approach to the former is to require an ethics or sociotechnical systems course in the undergraduate engineering curriculum, and for the latter we suggest people learn to code.

The ethics course approach is reminiscent of the same move in business schools a generation ago. The logic is akin to adding niacin to wheat to prevent pellagra. Teaching people to code as a way of generating some sort of tech literacy doesn’t even have that much logic behind it. While coding skills are a prerequisite for many, if not most, other tech courses, it’s not like our curricula actually contain many second courses that would be appropriate for non-tech students who want to be more technically literate.

And we have to admit that much of what we teach in response to concerns like those of C.P. Snow is more the product of university budgets, promotion regimes, faculty politics, and other non-intellectual and non-pedagogical concerns.

But what if we took interdisciplinarity seriously? Not Noah’s ark interdisciplinarity – where we bring one each of several different intellectual species into the same room or administrative unit and assume magic will happen – but actual interdisciplinarity where the thinkers and teachers actually crossed boundaries, held multiple passports, spoke more than one language.  What kind of programs could we offer to what kinds of audiences if that were our starting point?

A few years ago I developed a prototype of a course of that ilk. It was called “Introduction to Computational Reasoning” and it reflected my own multidisciplinary training as much as any principled pedagogical design.  My shorthand description was “the greatest hits of computer science taught as critical thinking.”  

I knew there were greatest hits because I had been using them over the course of my career as a social scientist teaching sociology, social theory, cartography, and public policy. Over and over again I found myself teaching problem decomposition, abstraction, flow charts, logic, iterative solutions, recursion, feedback loops, optimization, backtracking, encryption, compression, automation, pattern generation and pattern matching, problem size estimation, data structures, functional abstraction, system dynamics, asymptotic thinking, learning, and alignment. And this is just a partial list.  Each of these is a skill or perspective or tool that has utility outside the computer science or engineering major, but only a few are in the learning outcomes for that first programming course we typically offer non-majors.  Why not pop more of them out and share them?
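To make that concrete, here is the kind of tiny example I have in mind (a hypothetical illustration of my own, not a piece of any of those courses): recursion treated not as a programming trick but as a way of thinking about nested problems, here, totaling a budget made of sub-budgets that are themselves made of sub-budgets.

```python
# A budget is either a number (a single line item) or a dict of named sub-budgets.
# The recursive insight: the total is just the sum of the totals of the parts,
# and the parts have exactly the same shape as the whole.

def total(budget):
    if isinstance(budget, (int, float)):                      # base case: one line item
        return budget
    return sum(total(part) for part in budget.values())      # recursive case: sum the parts

university = {
    "engineering": {"salaries": 1_200_000,
                    "labs": {"equipment": 300_000, "materials": 45_000}},
    "humanities": {"salaries": 800_000, "journals": 15_000},
}

print(total(university))  # 2360000
```

The point for a non-major is not the Python; it is the habit of noticing that the parts of a problem have the same shape as the whole.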

Here’s an outline of my course:

  1. Think Slowly (primitive operations, sequence, choice and repetition, algorithms, abstraction, stepwise refinement)
  2. Flow and Modularity (flowcharts, abstraction and decomposition, black boxes, stepwise refinement)
  3. Numbers and Logic (binary, logic, gates, circuits, doing math with electricity, KMaps)
  4. Information Jigs (memory hardware, encoding, primitive data types, abstract data types, JSON)
  5. APIs (metaphors, servers and clients, CRUD, endpoints, documentation)
  6. Repetition (iteration, loops, stopping conditions, complexity, recursion, sorting, searching)
  7. Patterns (creating patterns with loops, weaving, tiling, dancing, regex, AI)
  8. Automation (machines, rule following vs. goal seeking, automatons, open/closed loop, feedback, PID control)
  9. How Machines Keep Secrets (cryptography, hashing, public/private keys, blockchain; a small taste follows this outline)
  10. How Machines See (neurons, perceptrons, NN, DNN, convolutions, filters, training/test, Teachable Machine)
  11. How Machines Learn (reinforcement learning, explore/exploit, policies)
  12. Alignment
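Here is the promised small taste of module 9, the kind of five-minute demonstration I have in mind (a minimal sketch using Python's standard hashlib; the example is my own, not taken from the actual course materials). A hash is a one-way fingerprint: easy to compute, practically impossible to reverse, and wildly sensitive to the smallest change in its input.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return the SHA-256 digest of a string as a hex fingerprint."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(fingerprint("Attack at dawn"))
print(fingerprint("Attack at dusk"))  # one word changed, and the digest is unrecognizably different
# Easy to compute forward, no practical way to run backward: that asymmetry is
# what lets machines "keep secrets" (password checks, integrity checks, blockchain links).
```

That asymmetry, cheap to verify and expensive to invert, is the load-bearing idea under password storage, integrity checking, and blockchain links, and it is the kind of idea a non-major can carry long after forgetting the syntax.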

This was a crazy, dense, overwhelming course both to teach and to take. But it was a favorite of mine to teach and a favorite of many a student. Master’s students who had been CS undergrads often said “I’ve seen this stuff before, but this was the first time I got to actually think about it.”

I am playing with rendering the course as a “book” (more likely a creative multimedia digital learning adventure experience of some sort).  I envision something like a curriculum of several courses or modules framed simultaneously as intense general interest courses that display a rigorous interdisciplinarity; the learning outcomes would be the actual insights, skills, problem solving techniques from CS along with tangible connections and applications to real world, non-CS problems.  These could be assembled into “course flights” – sets of three or four courses that built on and reinforced one another (in modern ed jargon, perhaps, a certificate or some such).

Maybe this only works (or maybe “works best”) when it can be taught (delivered!) by authentically interdisciplinary thinkers – people who earnestly see the connections and the out of domain utility of disciplinary ideas.  They will be able both to sell the particulars of the portable knowledge content in the courses AND to model the mind that is not constrained by disciplinary blinders.  But if we can train a generation of experts free of those blinders, we will have made the world a better place.

Learning from Every Keystroke

I suspect that this is a naive question already thought through by people much smarter than I, but here goes.

We’ve heard some concerns expressed lately about large language models training on texts and images on the internet whose authors/producers might not consent to such use. Some commentators focus on copyright but that might not really be the right angle given the policy purpose of copyright. But the angle is not so important for my question.

So, generation one, so to speak, of large language models has trained on a corpus drawn from the open internet. Some like to call this “everything humans have ever written,” though that’s surely an overstatement. And one application of LLMs that we have seen tried out is “fine tuning” on an organization’s data – perhaps all of a company’s products or its entire policy manual or its internal knowledge management resources (wikis, etc.).

But what happens when we try out training or fine tuning a model on every single keystroke within an organization? And maybe supplement that with a transcript of every single conversation in the organization. Now the model can learn not just from the “work product” of the organization’s employees, but also from the work process of those employees. It learns how problems are posed for an employee and how that person goes about figuring out how to solve them, as well as the solutions themselves.

Professionals are accustomed to their employers owning their work product. And what many knowledge professionals spend their days doing is analyzing problems and coming up with a process that turns the problematic situation into a routine one, something that can be handled by standard operating procedures. In practice, the professional moves on to the next challenge once their process has been successfully implemented.

But what happens when the model starts to understand how that’s done? When it figures out how people figure out things? And not just how I do it, but how all my colleagues do too. And it can assemble the best practices from among all of our practices. In other words, the model starts to execute on the very activity of organization, the very stuff of management. Does my sense of what I’m doing as a highly paid professional change when I come to recognize that not only is my work product owned by my employer – something I’m quite used to – but now too all the cognitive work I do to produce that product can be used to train a tool that my employer owns? Is this business as usual? Simply the invention of a better tool that enables higher productivity and yields a bigger pie for all to share? Or is something qualitatively different going on here?

Computational Thinking as a Graphic Novel: Next Iteration

ChatGPT and I worked on some additional format experiments yesterday. Here’s a next iteration. The platform here is slides.com, which is built on reveal.js. Advancing panel to panel uses the space bar, the arrow keys, or clicking the blue arrow at lower right. The embedded video is a mix of YouTube iframes and video uploaded to slides.com.

The Old Alignment Problem

Google the phrase “alignment problem” and the first result might well be the very excellent book of that title by Brian Christian. And the next one might be a definition such as “the challenge of steering AI systems toward human goals and values.”

AI alignment is one facet of an emerging field called “AI safety.” It’s controversial in some quarters, but it’s arguably among the most important issues at the nexus of technology and society these days.

If you read a little further you might encounter a thinker who seems to want to derail the conversation by asking which humans’ values we have in mind, suggesting that talk of generic human values usually involves assuming that some particular group’s values are everyone’s values and so maybe we have to talk about that first.

But even before getting to that conundrum, there is plenty to chew on: AI can be thought of as goal-seeking or objective-maximizing automation; but even if it could do those things perfectly, it turns out that specifying goals and objectives for machines can be a tricky business. You might know what you want and what you care about, but it is challenging to translate that for the machine. And, further, it turns out it is also hard for you to know what you care about and what you want. And if there is more than one of you or if you have some responsibility for other people, it is very hard to ascertain what all of you want and care about and so getting the machine on board with “human values” (no matter whose they are) is anything but straightforward.
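To make the “tricky business” concrete, here is a toy sketch of my own (not drawn from the alignment literature): we ask a planner for the fastest route across a small grid, and it happily tramples the flower bed, because “fastest” was the entire objective and the flowers never made it into the specification.

```python
from collections import deque

GRID_SIZE = 5
FLOWERS = {(1, 2), (2, 2), (3, 2)}   # things we care about but never told the machine about
START, GOAL = (2, 0), (2, 4)

def shortest_path(start, goal):
    """Breadth-first search: optimizes exactly what we asked for (fewest steps) and nothing else."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        r, c = path[-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < GRID_SIZE and 0 <= nc < GRID_SIZE and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])

route = shortest_path(START, GOAL)
print(route)
print("flowers trampled:", [cell for cell in route if cell in FLOWERS])
```

The fix is not a smarter search algorithm; it is a better objective, and deciding what belongs in the objective is exactly the hard part.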

But while AI safety is a relatively new field, the alignment problem is anything but new. Humans have been struggling with value alignment for pretty much as long as they have been around; the problem of human social organization is the problem of value alignment. From two hunter-gatherers struggling to cooperate to find food to one spouse sending the other to do the shopping to a corporation trying to get marketing and product to coordinate around the company’s mission, the challenge of steering other agents toward our goals and values has bedeviled us forever.

And that problem of “whose values” has been here the whole time. Families, friendships, communities, clans, companies, and countries always face the dual challenge of steering agents toward a set of goals and values AND figuring out what those goals and values are.

Collaborating with ChatGPT on writing my “computational thinking” book as a graphic novel

Over drinks at the faculty club last night I mentioned that I was trying to come up with an innovative medium for my book on computational thinking. Something interactive or multimedia or some combination that’s not been tried before, I said. My colleague, S., said “why not do it as a graphic novel?” I was intrigued, but I’m more an artist manqué than artist, and so it felt like a stretch. But an intriguing one.

So this morning I sat down with another colleague, ChatGPT, and asked whether it might be able to help. Ever eager, it said it could and showed me an example of its work.

This looked like fun, so I looked for a short passage from the book that we might experiment with. I prompted ChatGPT with the following. The “text” is actually a transcript of a video I did for a class sometime during the pandemic.

Excellent. I’m working on a book that is an introduction to computational reasoning. So it won’t be a graphic novel with a plot, per se, but an instructional, didactic document. I want to experiment with different approaches to telling the “story.” Let’s just see how it goes with this short passage. It’s from a chapter called “thinking slowly” that introduces algorithmic thinking and generating simple instructions based on a set of commands that a machine can follow. In this “scene” I’m musing about how one looks up a word in the dictionary. The didactic goal is to have the reader (student) really stop and think about the underlying process that one automatically and intuitively applies (it’s sort of a variation of binary search).

Here’s the existing text: Let’s stop and think: how DOES ONE look up a word in the dictionary? I need to look up the word “concatenate” so I open the dictionary randomly. The top word is “macroeconomic,” so concatenate is in front of that. So now I split the pages in front of this page in about half and examine the page. The top word here is “cyclamate” so concatenate is still in front of where I am. So I divide those pages in half, and now I’ve got “California Poppy.” My word is after this so I’m gonna go backwards a bit. Now I have “cast” so concatenate is still further back. Next I read “churl” which is still not far enough and then “commodity” so concatenate is still further on. But now I’ve got “Corinthians,” so I’ve gone too far. I go back a few pages and I’ve got “contemplative”; concatenate is still in front of that. A few more pages back, “congruently,” not quite. Now “Conemaugh” and “concerto” so concatenate should be in here somewhere. It is not on the right-side page because it goes from “concerto” to “concretize,” so my word is on the left between “con amore” and “concentrate,” so my word should be on this page. I just start at the top here with “con amore,” “con anima,” “conation,” “con brio,” “concanavalin”?! Now that’s a new word to me: “either of two crystalline globulins occurring in the jack bean.” Wow, who knew? And then there’s “concatenate”: “to link together in a series or chain.” There we are; job done. What can you come up with in the graphic novel format for this?

There was some back and forth and it never managed to get the captions and speech/thought bubbles quite right so I edited the text of those, but here’s what we got after about 45 minutes of work. Promising, to say the least.
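An aside for readers who want the process named: the dictionary hunt above is, give or take some human sloppiness, binary search. Here is a minimal sketch, assuming a sorted word list standing in for the physical dictionary.

```python
def look_up(words, target):
    """Binary search over a sorted list of words; returns the index or None."""
    low, high = 0, len(words) - 1
    while low <= high:
        mid = (low + high) // 2          # open the "dictionary" near the middle
        if words[mid] == target:
            return mid
        if target < words[mid]:          # our word comes earlier: search the front half
            high = mid - 1
        else:                            # our word comes later: search the back half
            low = mid + 1
    return None

pages = ["california poppy", "cast", "churl", "commodity",
         "concatenate", "corinthians", "cyclamate", "macroeconomic"]
print(look_up(pages, "concatenate"))  # 4
```

Each comparison throws away half of what remains, which is why even a very fat dictionary takes only a dozen or so page flips.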

Write Me a Skit

DR: Write a short skit in the style of monty python in which mao tse tung tries to talk an uber driver out of giving him a bad rating.

AI: (Mao Tse Tung is standing on the sidewalk, holding a smartphone and looking frustrated.)

Mao Tse Tung: I can’t believe it! I just got a one-star rating from my Uber driver!

(An Uber driver pulls up in a car and rolls down the window.)

Uber Driver: Hey, are you Mao Tse Tung?

Mao Tse Tung: Yes, that’s me.

Uber Driver: I’m your driver. Hop in.

Mao Tse Tung: Actually, I wanted to talk to you about your rating. Why did you give me a one-star rating?

Uber Driver: Well, you were a bit difficult to deal with. You kept insisting on taking the long way to your destination, and you kept trying to engage me in political debates.

Mao Tse Tung: But I’m Mao Tse Tung! I’m the founding father of the People’s Republic of China! Surely my political insights are worth something?

Uber Driver: Sorry, mate. I’m just an Uber driver. I don’t really care about politics. I just want to get you to your destination safely and efficiently.

Mao Tse Tung: (sighs) I suppose you’re right. I apologize for any inconvenience I may have caused.

Uber Driver: No worries. Just try to be a bit more reasonable next time, okay?

Mao Tse Tung: Of course. Thank you for understanding.

(Mao Tse Tung gets into the car and the Uber driver drives away.)

DR: what about that skit is in the style of monty python?

AI: There are a few elements of the skit that are in the style of Monty Python.

First, the skit uses absurd and unexpected humor to create a humorous and surprising situation. In the skit, Mao Tse Tung is upset about receiving a one-star rating from his Uber driver, and he tries to argue with the driver to get a better rating. This unexpected situation is similar to the type of humor that is often used in Monty Python sketches.

Second, the skit uses exaggerated and exaggerated characters and dialogue to create humorous effects. For example, the Uber driver is portrayed as a mundane and uninterested character, while Mao Tse Tung is portrayed as a pompous and self-important figure. This contrast between the characters is exaggerated for comedic effect, which is a common technique in Monty Python sketches.

Third, the skit has a playful and irreverent tone that is reminiscent of Monty Python’s comedy. Throughout the skit, the characters engage in humorous banter and the dialogue is filled with comedic jokes and references. This lighthearted and playful tone is characteristic of Monty Python’s comedy.

Overall, the skit uses humor, exaggeration, and a playful tone to create a humorous and absurd situation, which are all elements that are commonly found in Monty Python sketches.

DR: Any sense of which monty python members would play which part in this skit?

AI: It is difficult to say which Monty Python members would play which part in the skit, as it would ultimately depend on the actors’ individual abilities and the director’s vision for the sketch. However, some possible casting choices could be:

Mao Tse Tung: This role could potentially be played by John Cleese, who is known for his ability to portray pompous and self-important characters.
Uber Driver: This role could potentially be played by Eric Idle, who is known for his ability to portray mundane and uninterested characters.
Of course, these are just suggestions, and the actual casting decisions would depend on the director and the actors involved in the production.

Thinking about an Article on AI

In “Importance and limitations of AI ethics in contemporary society” (Humanities and Social Sciences Communications, August 2022), Tomas Hauer says AI research raises some thorny ethical issues.

  1. Treating software agents as persons “breaks down the entire existing legal notion of what defines personhood” (2.7). This is really just not true. The invention of the corporation has put these things into question for a long time and we have lots of ways of dealing with it. One might have opinions about whether those are good things or not but there is not much “this changes everything” here.
  2. “The ethical requirements for trustworthy AI should be incorporated into every step of the AI algorithm development process, from research, data collection, initial design phases, system testing, and deployment and use in practice” (3a.4). This is a proclamation, not an argument. What is it about AI and the way it is developed that suggests it ought to be regulated in a particular way?
  3. He next makes the point that some problems are too hard to actually solve. Well, OK. But this doesn’t indict any particular technology; it implies that some folks are promising too much and should be humbled (strawperson argument).
  4. At 4b.6 he suggests authorship of AI-produced artifacts is a problem. There is nothing special about computer-generated works here. If I create something as an employee, my company might own it; if I collaborate with others, I may or may not make them co-authors; if I create a film, complex negotiations among the agents of the producers, directors, actors, and technicians will determine how credit for creation is given.
  5. He raises the issue of liability for actions and the problem of the distribution of unavoidable harm that cannot be solved purely technically (5a.5–5a.6). Liability will be an issue; it always is. Autonomous or semi-autonomous devices may need to “decide” how to distribute unavoidable harm; we do this all the time. He says we “…cannot help but to turn to ethicists…” but the ethicists don’t have THE answer.
  6. He proposes, indirectly, that the more autonomous the product the less likely the manufacturer could be held responsible for harms the product might do. But then he says this simply cannot stand in the face of the principle of strict liability. But liability regimes are socially negotiated things. They are not laws of physics.
  7. Extended discussion of self-driving cars. Question of what kind of values to build in. But all of these are really quite standard policy choice questions.
  8. This piece, like so many others, takes a sort of “hair on fire” attitude and suggests that AI raises such thorny problems that maybe we should put the brakes on. But the context is one that’s ignorant of the way society actually structures its response to risk and harm and liability and responsibility and credit: not really understanding the technology, and not really understanding the social science and the science of institutions.

Automobile Ethics

Imagine you were around when cars first came on the scene. You were a person of vision and influence. You could see that these machines would change the world. And so you assembled other visionary and influential people and together you formulated the automobile industry code of ethics. At its initial meeting, the group coalesced around the idea that automobiles should be developed, deployed, and used with an ethical purpose and based on respect for fundamental rights, taking into account societal values and the ethical principles of beneficence, non-maleficence, human autonomy, justice, and explainability. The group agreed to come back in a year and develop an ethical and legal framework for automobiles.

We Breakfast

An astronomer, a novelist, a painter, a physicist/intelligence analyst, an anthropologist, an economist/law professor and I walk into a bar …. Well, actually not a bar and we weren’t walking – we just sat down to breakfast.

One of our number mentions the elephant in the room – is what AI can do, or may some day be able to do, something we might categorize as creative? As new knowledge? Or is it only ever going to be able to recapitulate what humans have already produced? At best being a tool that helps human genius, or even just human ingenuity, do more than it might otherwise be able to do?

One take at the table, if I understood it correctly, was that the “greatness” of a scientific theory or work of art (think Moby Dick as one example) is something that emerges in/over time as humans appreciate its significance, its meaning, what it says about the human condition. This reminds me of W. Iser’s reader-response theory: the meaning of a work is not IN the text, but rather an event in which the fixedness of the text interacts with the reader’s world of experience to yield meaning. Extending Iser a bit we might note that the meaning is collective rather than merely subjective because it is constrained by the artifact of the text (and its relationship to other texts and language and its production by a person with experience in a shared world) and because the world/self/experience that the reader brings to the process is itself partly shared. These two sets of constraints embed the object in a rich web of shared meaning.

Continuing this line of thinking, we might posit that the artifact produced by a human has on or in it traces of the creator as an entity that is simultaneously biographically unique and a co-participant in the social, where “social” ranges from the shared experience of embodiment to the daily weather of micro interactions to the macro drama of history/culture short and history/culture long.

Point number one from this is the idea that the work has something special IN it that a machine could not put into something it created. [It can’t put it in because it is not human and I can’t get it out because it is not human.]

This raises two questions for me. How is that something special inserted or included in the artifact that I encounter? And do I experience that something special in a manner that transcends processing the sensory content (pixels, notes, narratives, etc.)?

Question 1. Even though the something special is encoded in the words or the brush strokes or the notes – all patterns that can be explicitly described and could in theory be learned – do I think that it, the human magic, could not be generated by a machine because it comes FROM something with human difference? That is, it’s the human character of the creator that generates the something special, and I do not believe machines can “be human,” so they cannot generate this something special and include it in their output. Even if you figure out what it was that Picasso “put into” Portrait of Dora Maar, you can’t just put that into another artifact and thereby transform it into “art.” And, further, even if you could study a lot of Picasso’s paintings and figure out what it was that generated the “artness” of the work, the next creative piece does not just recapitulate what came before; it extends it and creates new zones in meaning space.

Machines can’t be creative because they do not bring to the act of production the experience of being human.

What about on the reception side? Do I experience the something special in some manner that transcends the (e.g., digitizable) materiality of the artifact? I think I might do this by virtue of my taking the artifact as the product of another human mind. I apprehend it from the get-go as meaning-containing. This can be at the prosaic level of “what is the creator trying to say?” or the more lofty “what does this tell us about the human condition (regardless of whether or not the creator had fully appreciated it)?” Regardless of how one sees “the problem of other minds,” we can, perhaps, stipulate that taking an artifact as the product of another mind/self/world with properties like the one I am familiar with (my own) imbues it with “something special.”

But it’s very easy for humans to be wrong about such things; I can imbue something with meaning, hearing what its author is saying to me, even when that thing is not in fact a human creation. We anthropomorphize things all the time, and although we are taken aback when we discover that a thing with which we interacted, assuming it to be the product of another mind, is in fact not, I don’t think we want to characterize this as a simple error. To me it suggests that the reception is itself a human creative act. Echoes of the Thomas theorem: if people take something as real and it is thereby real in its consequences, then it is real. I’m not going that far, but I do think this establishes the idea that the question of what we are to make of the output of an AI won’t be answered only by looking into the essences of the output itself and the processes that gave rise to it.

[Giant swath of 20th/21st century literary theory coming to mind here.]

I started with the title “We Breakfast” because the place where the conversation left me pondering was around the question of how the “we” that talks about AI and about how it ought to be handled and treated and what it should be allowed to do and what projects its enthusiasts should pursue is organized. I think we almost always too blithely project the idea of the reality of “we” (“we shouldn’t let this happen” or “we know that we value X”) as being well constructed, at the ready as a collective intelligence and agency that’s free (free in the sense of not needing to be paid for). In fact, I think “we” is a gigantic and ongoing construction project that absorbs a large part of our energy budget and mostly is a rickety machine with only very locally optimized or even reasonably well-running instances.

But more on all these things to come.

Another Angle on Hybrid

What do we think of when we think of hybrid (learning and teaching)? Some face-to-face teaching plus some online teaching? Some synchronous + some asynchronous? Flipping the classroom? Drosos and Guo (2021)* offer another perspective on a kind of teaching that can be included in the category. They show how what streamers who teach do can be seen as a form of cognitive apprenticeship. The authors do not explicitly talk about “hybrid,” but the practices they identify – real-time problem solving, improvised examples, insightful tangents, and high-level advice – are relevant to hybrid for two reasons. First, they are the kinds of things often cited as reasons why remote or asynchronous instruction is necessarily inferior (the claim being that they are absent). Second, they are useful challenges: how can these virtues be built into various hybrid scenarios?


*2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)