The Rubricization of Higher Education

The rubricization of education has always rubbed me the wrong way but I’ve never been able to put my finger on concrete flaws beyond the obvious. This past January I attended the AAC&U conference in San Francisco. A few more problems became clear.

There are three obvious methodological/measurement problems that have long stood out:

1. Almost every rubric I have ever seen exhibits gobs of multi-dimensionality in its skills/items/categories/rows. Another way to say this is that the rows typically pose double-barreled or multi-barreled questions to the evaluator. And even when the construct named in the row is simple, the descriptions of the different scale levels are multi-dimensional. Example:

| Category | Advanced (4) | Competent (3) | Developing (2) | Underdeveloped (1) |
|---|---|---|---|---|
| Structure | Sections fit together in logical sequence; claims, evidence, analysis, and conclusions distinguished; logic of argument telescoped and reviewed | | | |

One argument that this is not a problem is that the things listed here typically go together: they are all indicators of the same underlying skill. Maybe. But it is a stretch to think that all these skills fall neatly onto a simple four-level linear scale.

2. The second problem is the four-point scale itself. What evidence is there that "Advanced" structure represents twice as much structure (or twice as much skill) as "Developing"? This matters little when we simply look at the four levels, but the first thing folks with a little quantitative skill do is compute average ratings for a group of students on a skill rating like this.

Let us be clear: computing the average of a scale that has not been shown to have the arithmetic properties of what we call an interval scale PRODUCES MEANINGLESS RESULTS.
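The point about interval scales can be made concrete. Below is a small sketch with made-up ratings (all data hypothetical): two groups of students are scored on a four-level rubric, and we recode the levels with different numeric values that preserve the order of the labels. Which group has the higher average depends entirely on the arbitrary choice of coding.

```python
# Hypothetical ratings: which group "averages better" depends on the coding.
group_a = [4, 1, 1]   # one "Advanced", two "Underdeveloped"
group_b = [2, 2, 2]   # all "Developing"

def mean(scores, coding):
    """Average the ratings after recoding each level to a chosen number."""
    recoded = [coding[s] for s in scores]
    return sum(recoded) / len(recoded)

identity   = {1: 1, 2: 2, 3: 3, 4: 4}   # the usual 1-4 coding
stretched  = {1: 1, 2: 2, 3: 3, 4: 10}  # same order, wider gap at the top
compressed = {1: 0, 2: 3, 3: 4, 4: 5}   # same order, wider gap at the bottom

print(mean(group_a, identity),   mean(group_b, identity))    # 2.0 2.0  (a tie)
print(mean(group_a, stretched),  mean(group_b, stretched))   # 4.0 2.0  (A "wins")
print(mean(group_a, compressed), mean(group_b, compressed))  # ~1.67 3.0 (B "wins")
```

All three codings respect the ordering Underdeveloped < Developing < Competent < Advanced, yet the comparison of group means comes out three different ways. Unless the scale has demonstrated interval properties, the average tells you about the coding, not the students.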

3. The third problem with rubrics like this is that the items (rows) are not necessarily exhaustive or mutually exclusive. In other words, they do not always include all the components of learning that might (or should) be happening, and the individual items often tap the same underlying skill. The former is a substantive problem, to be solved by better conversations about the goals of education. The latter, though, leads to bad data. Suppose three items X, Y, and Z are listed in a rubric, and that the elaborate operationalizations of their levels involve underlying skills a, b, c, d, and e:

| Category | Advanced (4) | Competent (3) | Developing (2) | Underdeveloped (1) |
|---|---|---|---|---|
| X | Blah blah blah {a} | Blah blah blah {c} | | |
| Y | Blah blah blah {b} | Blah blah blah {c} | | |
| Z | Blah blah blah {d} | Blah blah blah {a} | Blah blah blah {e} | Blah blah blah {c} |

Here the curly brackets mark the underlying skill that each "blah blah blah" description refers to. In this rubric, skill a gets counted twice and skill c three times, while b, d, and e each appear only once. When the data are aggregated, success on a or c will easily mask lack of progress on b, d, or e.
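The masking effect is easy to see with numbers. The sketch below uses invented skill levels for one hypothetical student who is strong on some skills and weak on others, and scores each rubric row as the mean of the skills it taps (the row-to-skill mapping follows the table above):

```python
# Hypothetical student: strong on skills a, b, c; weak on d and e.
skills = {"a": 4, "b": 4, "c": 4, "d": 1, "e": 1}

# Which underlying skills each rubric row taps (from the X/Y/Z table).
rows = {
    "X": ["a", "c"],
    "Y": ["b", "c"],
    "Z": ["d", "a", "e", "c"],
}

# Naive row score: mean of the tapped skills.
row_scores = {name: sum(skills[s] for s in taps) / len(taps)
              for name, taps in rows.items()}
overall = sum(row_scores.values()) / len(row_scores)

print(row_scores)           # {'X': 4.0, 'Y': 4.0, 'Z': 2.5}
print(round(overall, 2))    # 3.5
```

Because a and c are counted repeatedly across rows, the student's overall score lands midway between "Competent" and "Advanced" even though two of the five underlying skills sit at the bottom of the scale. The aggregate number is not lying, exactly, but it is answering a different question than the one we think we are asking.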

4. But here is the most serious problem of rubricization: it drives out of the teaching and learning process any response to individual variation in understanding. The teacher's role of offering constructive criticism across the wide range of variability in learning is displaced by a fixed set of categories.

One great irony in this is that so many of the champions of this approach to educational reform are the very folks who preach about variability of learning styles.

Another is the high level of concern about students who “fall between the cracks.” Here we are developing a system with explicitly designed cracks between which they can fall.

Yet another is that a mantra of the rubric crowd is "evidence-based" and "data-driven" decisions. And yet the very devices that lie at the heart of the enterprise are custom-built to degrade information and produce misleading data.

The fundamental absence of critical thinking in the rubric/assessment literature, and the total lack of interest in critical discourse about these techniques, is the final irony.

One can conclude that what we have here is a bunch of middle-brow thinkers designing a system that will maximize the production of people like themselves and guarantee their own employment in the higher-education industry. If only there were some evidence that this is what the world will need in the 21st century.

Author: Dan Ryan

I'm currently an Academic Program Director. I've been a professor at the University of Toronto, the University of Southern California, and Mills College, teaching things like human-centered design, computational thinking, modeling for the policy sciences, and social theory. I'm driven by the desire to figure out how to teach twice as many students twice as well, twice as easily.
