Defining human centered problem solving

I just got a text asking for language for a big grant proposal, one part of which involves bringing concepts and tools from human centered design to the realm of AI governance and regulation. At first I felt the urge to find the right passage in a reputable source (Horst Rittel? Herb Simon? Dick Buchanan?), but none of the ones I reviewed quite hit the nail on the head. So I figured I’d just write one. Here are the notes that came out of the exercise.

Human centered design is a discipline for tackling complex problems that keeps the actual humans who have the problem in the frame, builds into the process a recognition that humans and human institutions are doing the problem solving, and grapples with the fact that solutions have consequences for the rest of present and future humanity.

Human Centered Problem Solving complements deductive and inductive technological problem solving, whose virtues include being bound by disciplinary rules of relevance and by protocols that move from fixed definitions of problem spaces and solution spaces through a linear process of convergence on optimal solutions.

Human Centered Problem Solving:

  • deliberately curates encounters between multiple disciplinary expertises;
  • treats assumptions as assumptions, not givens;
  • choreographs an oscillation between convergent and divergent thinking;
  • produces prototypes that can tap the deep wisdom of the world and users instead of testing models in simulated, computed, or theoretical worlds;
  • proceeds iteratively, without prejudice against back-tracking, pivoting, or unreasonableness.

All of these are manifestations of a discipline for transcending the conventional problem of not seeing the forest for the trees: not losing sight of the real forest among all the parochial forests that spring into our field of view. We try not to be distracted by the forests of disciplinary imperialism, organizational self-interest, and myopic disciplinary aesthetics.

The point about prototyping is critical. Prototypes are like experiments in that they can be used to test an idea. But there’s a fundamental difference too. The classic experiment is run ceteris paribus – everything is held constant except one experimental variable. This mindset effectively backs the world into a corner in order to have it give up its secrets. It’s a very powerful technique when we are completely familiar with the room in which we have managed to trap the world. Human centered problem solving offers an alternative not for that situation but for the cases where a problem either has not reached that stage or simply is not that kind of problem.

The premise is the same: the world does know the answer; we just have to get it talking. Intentional prototyping, as a component of the discipline of human centered problem solving, exhorts us to recognize the world and potential users as better sources of insight than our mental simulations of that world and those users, and to seek feedback by letting them try things out as we go rather than just showing them later what we have done.

Conventional collaborative problem solving is analogous to conventional optimization algorithms: a solution space is defined and a “best” solution is zeroed in on. But machine learning algorithms also depend on cultivating an urge that runs contrary to “zeroing in.”

One parameter in a machine learning training regime (the learning rate, roughly speaking) controls how much “jostle” there is in the model between rounds of training. Too much and the solution never converges; too little and it gets stuck in bad places. The same dynamic plays out in an algorithm’s trade-off between exploitation and exploration. In biological evolution, too, optimal adaptation requires the right trade-off between genetic integrity and genetic experimentation.
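To make that dynamic concrete, here is a minimal sketch: gradient descent on a double-well function with a tunable amount of random “jostle” added to each step. The function, the learning rate, and the jostle levels are all invented for illustration; this is not a model of any real training regime.

    # Toy sketch of the "jostle" trade-off: noisy gradient descent on a
    # double-well function. Everything here is invented for illustration.
    import numpy as np

    def f(x):
        # Double well: mediocre local minimum near x = +0.96,
        # better global minimum near x = -1.04.
        return (x**2 - 1) ** 2 + 0.3 * x

    def grad(x):
        return 4 * x * (x**2 - 1) + 0.3

    def descend(jostle, lr=0.05, steps=3000, trials=200, seed=0):
        """Run many trials of gradient descent with random jostle per step."""
        rng = np.random.default_rng(seed)
        x = np.full(trials, 1.2)  # every run starts in the worse basin
        for _ in range(steps):
            x = x - lr * grad(x) + jostle * rng.standard_normal(trials)
            x = np.clip(x, -3.0, 3.0)  # crude safeguard against runaway steps
        return x

    for jostle in (0.0, 0.15, 1.0):
        final = descend(jostle)
        print(f"jostle={jostle:.2f}  reached better basin: {np.mean(final < 0):4.0%}  "
              f"mean loss: {f(final).mean():+.2f}  spread: {final.std():.2f}")

With zero jostle, every run converges quickly and gets stuck at the mediocre minimum; with a moderate jostle, most runs escape to the better minimum; with a large jostle, the runs never settle anywhere. The same knob goes by different names elsewhere: the temperature schedule in simulated annealing, minibatch noise in stochastic gradient descent, the mutation rate in evolutionary algorithms.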

Human Centered Problem Solving borrows ideas, methods, and practices from design that help us to build a team and a process that can transcend the hurdles that we put in our own way with the disciplinary expertise we bring to our collaborations.

So much of the blowback against the technologies of computational intelligence represents an uneasiness with a perceived absence of “the human” in the technological silos producing the great “intelligence” achievements of the day. “They ignore ‘fairness’!” “They elide ‘difference’!” In response the technologists say, “OK, we’ll model fairness. We’ll model difference.” Too often everyone is left thinking “but that’s not what we mean by fairness” and “that’s not what we mean by difference.” We think these conversations need to go beyond interest groups debating atop the Tower of Babel.

The practices and mindsets we borrow from design thinking allow us to build a network of new interdisciplinary collaboration that will be able to generate new insights precisely because it is trained on a new and better synthesis of its component disciplines.

Author: Dan Ryan

I'm currently an Academic Program Director at MinervaProject.com. I've been a professor at the University of Toronto, the University of Southern California, and Mills College, teaching things like human centered design, computational thinking, modeling for policy sciences, and social theory. I'm driven by the desire to figure out how to teach twice as many twice as well twice as easily.
