[Abstract image: two partial face profiles facing in opposite directions]

Alignment as a Vocation?

I just re-read Max Weber’s essay “Science as a Vocation” in the context of thinking about human and machine intelligence alignment. In particular, I’m thinking about the techniques humans have invented to ensure the alignment of expert intelligence. The broader context is a course that examines alignment of human intelligence, organizational intelligence, expert intelligence, and machine intelligence.

Weber’s essay is the text of a lecture he gave in Munich in 1917. His audience was students, including aspiring scientists. Weber acknowledged their search for values in troubling times, but warned them not to expect science to provide answers to ultimate questions. Just the same, he assured them, despite its goal of “disenchantment” of the world, a scientist can find inner satisfaction in a vocational dedication to rational clarity.

Weber is speaking to the role of the scientist/intellectual in modern society. He argues that science is rooted in rationalization and the progressive elimination of magical or religious worldviews—what he calls “disenchantment.” The ethical duty of the scientist, he says, is to state facts clearly and to distinguish between empirical claims and personal values. Science is always silent on moral ends and existential meaning – it can reveal how things work, but it can never tell you how you ought to live. The students to whom Weber was speaking wanted more from their teachers, but, he says, they need to recognize the difference between hunger for leaders and hunger for teachers. Scientists themselves, qua scientists, live as if everything depends on getting it right, even as they recognize that their work will be superseded by subsequent science. It is challenging to live up to this ethic given the uncertainty, competition, and structures within which one must work. The title “als Beruf” (as a calling or vocation) communicates that science must be a commitment despite the obstacles.

What can we glean about intelligence alignment from Weber’s essay?

The practice of science, as a vocation, does not seek to align with human ends; it methodically excludes them. Its only internal commitment is to methodical rational clarity. But this creates its own alignment problem: the scientist lives in a world full of incentives and values and ends that they cannot adjudicate qua scientist.

The project of disenchantment – science’s shedding of value and meaning – produces amoral authority that is technically rigorous but morally agnostic. By its very nature, it risks becoming misaligned with the society it serves; its findings can be wielded for any ends.

The ethical obligation of the scientist is to not smuggle values and beliefs into their work, or at least to recognize and acknowledge when this happens. So too with machine intelligence: we aspire to prevent values from seeping into data and design, but our capacity to see and govern that seepage is limited.

If Weber is correct, perhaps alignment can never be reduced to algorithm or architecture; an ethos that depends on aspirational self-reflection cannot be installed. Perhaps we must always build structures of control and alignment around intelligences and invest in building an ethic of responsibility into those who design, build, and deploy such systems.

Weber is a tensional thinker here: “[old gods] resume their eternal struggle with one another” (149.2). There is no finite algorithm that can “solve” alignment. The dialectical character of an “ethos” or “ethic” presents something that is undecidable, but which the professional must nonetheless take responsibility for deciding. We see a similar theme in his essay on politics – rationality will never deliver the answer to ultimate questions, but human agents cannot duck those questions. Ultimate accountability is key.

A key question for us is whether machine intelligence can adopt and practice the kind of epistemic stance that Weber demands of the scientist. Can a machine experience vocation? Could it adopt the stance that “the fate of [its] soul depends upon whether or not [it] makes the correct conjecture at this passage of the manuscript…” (135.2), as Weber describes the passion necessary to do science?

Can a model trained on the sediment of all human thought parse out, from the superposition of all that meaning, something that is not still a blend of hidden values? Can machines mimic the human capacity to conceptualize and strive for an ideal? Or is that drawing of the line a human prerogative? We cultivate that capacity in humans through scientific training (method and transparency as well as ethic) and group processes (peer norms), and these have parallels in the fine-tuning and post-training of machine intelligences. But maybe the takeaway from Weber’s essay is a humanistic one: alignment is not automatable.

One comment

  1. Perhaps the main difference is that science is descriptive, while AI is a technology. We organize the work of scientists to understand how infections work, not to come up with nice ethically convenient theories – viruses wouldn’t care.

    Machine intelligence is a technology that _should_ be aligned with human goals. The way cars are a technology that should have been aligned with human goals. It’s likely we’ll miss out on aligning AI as we missed out on aligning cars to our needs.

    Perhaps more accurately, we aligned cars with the wishes of the powerful, not the needs of all, and we’re on the way to doing the same with AI. Obvious points, perhaps.
