Better Teaching Through a Financial Stake in the Outcome

In an Inside Higher Ed article this week (“Performance Pay for College Faculty”), K. Basu and P. Fain describe how the new contract signed between City Colleges of Chicago and a union representing 459 adult education instructors links pay raises to student outcomes.

Administrators lauded the move in part because it gets faculty “to take a financial stake in student success.” The details of the plan are not clear from the article, but the basic framework is to use student testing to determine annual bonus pay for groups of instructors working in various areas. That is, in this particular plan the incentive pay does not appear to operate at the level of individual instructors.

Still, should the rest of higher education be paying attention? Adult education at CCC is, after all, a markedly different beast from full-time liberal arts institutions, four-year state schools, or research universities. One reason we should is that it’s precisely this tendency to elide institutional differences that is a hallmark of the style of thought endemic among some higher education “reformers.” Those who think it’s a good idea for adult education institutions are likely to champion it elsewhere.

But most germane for the subject of this blog is the question of what data would inform such pay-for-performance decisions when they are proposed for other parts of American higher education. Likely it will be something that grows out of what we now know as learning assessment. I ask the reader: given what you have seen of the assessment of learning outcomes at your college, how would you feel about having your paycheck depend on it?

But, your opinion aside, there are several fundamental questions here. One is whether having a financial stake in the outcome makes you a more effective teacher. The sector where this incentive logic has been most extensively deployed is probably financial services, especially investment banking. How has that worked out for society? It would be easy to cook up scary stories of how this could distort the education process, but that’s not even necessary to debunk the idea. The amounts at play in the teacher-pay realm are so small that one can barely imagine even a nudge effect on how people approach their work.

But what about the data? Consider the prospect of assessment as we know it as input to ANY decision process, let alone personnel decisions. Anyone who has spent any time at all looking at how assessment is implemented knows that the error bars on any datum emerging from it dwarf the underlying measurement. The conceptual framework is thrown together on the basis of a dubious theoretical model of teaching and learning and of forced collaboration between instructors and assessment professionals. The process sacrifices methodological rigor to pragmatism, to a culture of presentation (vis-à-vis accreditation agencies), and to the design limitations of software systems, a tail that wags the dog of pedagogy and common sense. At every step of the process information is lost and distorted. Yet it seems that the more Byzantine the process, the more its champions believe its product is scientific fact.
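To make the “error bars dwarf the measurement” point concrete, here is a minimal sketch in Python with entirely invented numbers: suppose real differences in instructor effectiveness are small (a standard deviation of 0.1 on some score scale) while the assessment measurement noise is several times larger (0.5). We then award “bonuses” to the top quarter by measured score and ask how often the money lands on instructors who are genuinely in the top quarter.

```python
import random

random.seed(42)

N_INSTRUCTORS = 100   # hypothetical pool of instructors
TRUE_EFFECT_SD = 0.1  # assumed spread of genuine teaching effectiveness
NOISE_SD = 0.5        # assumed assessment measurement error, 5x larger
N_TRIALS = 1000
TOP_QUARTER = N_INSTRUCTORS // 4

matches = 0
for _ in range(N_TRIALS):
    true_effects = [random.gauss(0, TRUE_EFFECT_SD) for _ in range(N_INSTRUCTORS)]
    measured = [t + random.gauss(0, NOISE_SD) for t in true_effects]

    # Award "bonuses" to the top quarter by measured assessment score.
    by_measured = sorted(range(N_INSTRUCTORS), key=lambda i: measured[i], reverse=True)
    bonus = set(by_measured[:TOP_QUARTER])

    # Who is genuinely in the top quarter of teaching effectiveness?
    by_true = sorted(range(N_INSTRUCTORS), key=lambda i: true_effects[i], reverse=True)
    deserving = set(by_true[:TOP_QUARTER])

    matches += len(bonus & deserving)

print(f"Bonuses landing on genuinely top-quarter instructors: "
      f"{matches / (N_TRIALS * TOP_QUARTER):.0%}")
```

With these made-up numbers the overlap comes out only modestly better than the 25 percent a pure lottery would achieve, which is the practical meaning of basing decisions on data whose error bars swamp the signal.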

It could well be that the arrangement agreed to in Chicago will lead to instructors talking to one another about teaching, coordinating their classroom practices, and doing all sorts of other things that might improve their students’ achievement. But any such effect will likely be indirect, operating through the social organization of teachers (if I understood the article correctly, the good thing about the Chicago plan is that it rewards entire categories of instructors for aggregate improvement). To sell it as an individual incentive is silly and misleading. And, thinking more broadly about higher education, the notion that you can take the kind of discrimination you get from extremely fuzzy data, multiply it by tiny amounts of money, and produce positive change at the level of the individual instructor is probably best called Bad Management 101.