The Problem with Departmental Revenue/Cost (non)Analysis

Originally published June 2017.
All across the country, struggling colleges (and universities) are hiring one of several academic consulting firms to help them get a handle on their finances. The ACTUAL problem these institutions face is low enrollment, but it is experienced as “this place costs too much to run” (because tuition revenue is below expenses), and, like management everywhere, their minds turn to cutting labor costs. In the absence of a vision for what the academic program should look like (or in the presence of an unwillingness to put such a vision on the table), they turn to consultants to help them identify where to cut academic programs. One element informing decisions about academic restructuring in general, and instructional personnel in particular, is the so-called program cost structure analysis.

The basic logic of this analysis is to identify all the faculty FTE that staff courses in a given area, identify the compensation of those individuals, add in the cost of the program’s share of administrative support and operating budget, and then compare this with the “revenue” the unit generates by crediting a fraction of effective per-student tuition for each credit hour earned by students in the program’s courses. Then we either subtract one from the other or form a ratio and characterize the program as “in the black” or “in the red,” a “net revenue generator” or a “net cost center,” etc.
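
In pseudo-spreadsheet terms, the computation looks something like the sketch below. Every name and number here is a hypothetical illustration, not any particular consulting firm’s model.

    # A minimal sketch of the consultants' arithmetic. All names and
    # numbers are hypothetical illustrations, not any firm's model.
    def program_net_revenue(faculty_comp, admin_share, operating,
                            credit_hours, tuition_per_credit_hour):
        """Net revenue = attributed tuition minus attributed cost."""
        cost = faculty_comp + admin_share + operating
        revenue = credit_hours * tuition_per_credit_hour
        return revenue - cost

    # Example: a program teaching 2,400 credit hours at $450 of
    # effective tuition per credit hour, against $1.1M of costs.
    print(program_net_revenue(900_000, 120_000, 80_000, 2_400, 450))
    # -20000 -> the program is declared a "net cost center"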

Distortion and Bias on the Cost Side

When such analyses use actual faculty salaries rather than average faculty salaries, they bias the result by faculty seniority.  Since faculty are sometimes on leave, and since senior faculty retire and get replaced by new assistant professors, this introduces big year-to-year distortions that make comparisons problematic.

Suppose biology, for example, has had three senior retirements in recent years, all of whom have been replaced by new junior faculty. If we look at the department four years back, it looks very expensive; if we look at it today, it looks very inexpensive.

Now, some will answer this observation by saying you have to budget for the actuality of today. Point taken. But the stated purpose of this analysis is to understand the relationship between cost and demand.  We are trying to understand something about the liberal arts college of today. If we do the analysis and find that philosophy is more expensive per student than marketing, but the reason is that marketing is a brand-new department that only just hired its faculty last year, and we make strategic long-term decisions on the basis of this information, we are going to make mistakes.

The solution is simple: use a weighted average cost that takes into account the actual distribution of the college faculty across pay levels.  This permits program-to-program comparisons unbiased on the cost side of the equation.
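
A sketch of the fix, with hypothetical ranks and salary figures:

    # Cost every FTE at the college-wide weighted average rather than
    # at the individual incumbent's salary. Ranks and salaries invented.
    fte_by_rank = {"assistant": 40, "associate": 35, "full": 25}
    salary_by_rank = {"assistant": 70_000, "associate": 85_000, "full": 110_000}

    total_fte = sum(fte_by_rank.values())
    weighted_avg = sum(fte_by_rank[r] * salary_by_rank[r]
                       for r in fte_by_rank) / total_fte  # 85,250 here

    # Each program's cost is now its FTE count times the same rate, so a
    # recent retirement/replacement cycle no longer moves the needle.
    def program_cost(program_fte):
        return program_fte * weighted_avg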

Distortion and Bias on the Revenue Side

One piece of the demand and revenue side of the analysis is simply looking at student course registrations – how many students do we teach.

This is a fair measure, and it’s not hard to zero in on how many students each faculty member has to teach each year to “pay their salary.” When I did this computation a few years ago, it came in at around 95 per year.
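
The arithmetic is simple division. The numbers below are made up, chosen only so the answer lands in the same ballpark as the figure above:

    # Break-even teaching load, with made-up numbers.
    comp_plus_overhead = 95_000     # salary, benefits, share of support
    net_tuition_per_seat = 1_000    # effective tuition credited per registration

    print(comp_plus_overhead / net_tuition_per_seat)  # 95.0 students per year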

But using aggregate course registrations as a measure of student interest is problematic.  Many courses in the curriculum have numerous prerequisites, and many courses are mandated as part of various minors, majors, and general education schemes. And some courses are scheduled in a manner that reduces the number of potential enrollees (not necessarily out of poor scheduling strategy: languages may need to meet four times per week, and some courses have required labs that may need to take up an entire afternoon).

A course that has an absolute prerequisite will almost never have more students in it than the prerequisite course did. Departments and programs that are more hierarchical will offer more courses that are necessarily smaller.  Courses with no prerequisites have a natural advantage. English, for example, has dozens of courses with no prerequisites, or with only English 1 as a prerequisite, a course that every student is required to take.  This gives the English program a huge advantage over, say, biology or biochemistry.
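
A toy model makes the arithmetic of hierarchy visible. The 60% continuation rate below is invented for illustration:

    # How prerequisite chains cap enrollments, with an invented
    # continuation rate of 60% from each course to the next.
    def chain_ceilings(intro_enrollment, chain_length, continuation=0.6):
        """Upper bound on enrollment at each step of a prerequisite chain."""
        ceilings = [intro_enrollment]
        for _ in range(chain_length - 1):
            ceilings.append(ceilings[-1] * continuation)
        return ceilings

    print(chain_ceilings(100, 4))  # [100, 60.0, 36.0, 21.6]
    # A four-course hierarchy tops out near 22 seats in its capstone,
    # while a department of stand-alone courses can fill 100 every time.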

Programs that manage to control general education requirements and get more of their courses to count for GE will have enrollment numbers inflated over “actual student interest.”

The Upshot

The bottom line is that there are a number of structural distortions that make credit hours generated an invalid measure of student interest, especially in comparisons among close cases.

When both the numerator and the denominator of a metric are subject to biases moving in different directions, the metric is not a valid measure of what you think it measures. Employing such a metric for comparisons between programs, for developing curricular strategy, and for ending instructors’ careers is, at best, problematic.
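
A deliberately contrived comparison shows how the two biases compound. Every number here is invented:

    # Two programs with identical "true" economics, distorted in
    # opposite directions. All numbers are contrived for illustration.
    # Program A: senior faculty (high actual salaries), hierarchical
    # curriculum (prerequisites depress registrations).
    # Program B: junior faculty, no prerequisites, GE-listed courses.
    cost_per_student_A = (3 * 110_000) / 180  # inflated cost, deflated demand
    cost_per_student_B = (3 * 70_000) / 300   # deflated cost, inflated demand

    print(round(cost_per_student_A))  # 1833
    print(round(cost_per_student_B))  # 700
    # Costed at average salaries, with demand adjusted for curricular
    # structure, the two programs could be indistinguishable -- yet A
    # looks about 2.6x as expensive per student.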

Sometimes an analysis has a data problem (“garbage in, garbage out”) and that’s probably true here. But the far more serious problem lies in the methodology.

How to Fix

There really is no excuse for not using average faculty compensation, unless we do not care about chopping out a part of the curriculum simply because of when we hired the faculty who teach it. The other problem is much hairier.  The very nature of knowledge affects the results here, as do contemporary ideas about assessment that encourage a pedagogical trajectory from “introduction” through “practice” to “mastery.” Taking into account how different programs manifest these is not easy.  But failing to take them into account undercuts the believability of one’s results.

Bad Methods Yield Non-Actionable Answers

Originally published June 2017.

Having drunk the Kool-Aid of rubrics and assessment, many an untrained academic administrator epitomizes that old saw about knowing just enough to be dangerous. Suppose a manager wants to make a decision based on multiple criteria. An academic manager, for example, might consider

  • Employee Type
  • Organization Needs and Employee Expertise
  • Employee Productivity
  • Employee Versatility
  • Engagement in Critical Roles

The plan is to rate each employee on each dimension and then add up the ratings to yield a score that permits comparisons between employees for the purpose of deciding whom to retain.

The individual ratings will be some variation on High, Medium, Low.
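
In code, the plan amounts to something like the following sketch. The criterion names abbreviate the list above; the point values are hypothetical:

    # The rubric plan as naively implemented. Point values hypothetical.
    POINTS = {"high": 3, "medium": 2, "low": 1}

    def naive_score(ratings):
        """Sum the point values of a dict of criterion -> rating."""
        return sum(POINTS[r] for r in ratings.values())

    employee = {
        "type": "high",
        "needs_expertise_match": "medium",
        "productivity": "high",
        "versatility": "low",
        "critical_role": "medium",
    }
    print(naive_score(employee))  # 11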

The use of rubrics such as this is all the rage in higher education. Unfortunately, they are frequently deployed in a manner that reduces a complex, multidimensional judgment to a single number that does not mean what its users think it means. Consider some of the ways this happens.

Ratings Are Not Normalized

By having the top rating count 3 points in some categories and only 2 in others, we introduce a distortion into the final score. Type, match, and productivity “count” more than versatility and critical role.  If that’s intended, fine; but if not, it skews results.
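
If equal weighting is what’s intended, the fix is to put every category on the same footing before summing. A sketch, with hypothetical scales:

    # Normalize each category to [0, 1] by dividing by its maximum,
    # so a 3-point scale and a 2-point scale carry equal weight.
    # Scales are hypothetical.
    CATEGORY_MAX = {
        "type": 3, "needs_expertise_match": 3, "productivity": 3,
        "versatility": 2, "critical_role": 2,
    }

    def normalized_score(points):
        """points: dict of criterion -> raw points on that category's scale."""
        return sum(points[c] / CATEGORY_MAX[c] for c in points)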

Ordinal Scales Do Not Contain Distance Information

Any fool, as they say, knows that “high” is more than “medium” which is more than “low” and “low” is more than “none.”  When we have a scale that has this property we call it an “ordinal” scale; the elements of the scale can unambiguously be ordered from low to high.

What we do NOT know, though, is whether the “distance” between a high rating and a medium rating is equal to the distance between a medium rating and a low rating.

Although it is extremely common to look at an ordinal scale like “high, medium, and low” and assign 3 to high, 2 to medium, and 1 to low, this is a serious methodological error.  It invents information out of thin air and inserts it into the assessment. The ways in which this distorts the answers that emerge from the measurement cannot be determined without careful analysis. Just writing 3, 2, 1 next to words is not careful analysis.
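
A quick demonstration of how much gets invented: two codings that respect the very same ordering can reverse a comparison. The employees and ratings below are made up:

    # Two point-assignments that both respect high > medium > low,
    # yet rank the same two employees differently. Data is invented.
    coding_a = {"high": 3, "medium": 2, "low": 1}
    coding_b = {"high": 5, "medium": 2, "low": 1}  # also order-preserving

    alice = ["high", "low", "low"]          # one standout rating
    bob = ["medium", "medium", "medium"]    # consistently middling

    def total(ratings, coding):
        return sum(coding[r] for r in ratings)

    for coding in (coding_a, coding_b):
        print(total(alice, coding), total(bob, coding))
    # coding_a: Alice 5, Bob 6 -> Bob wins
    # coding_b: Alice 7, Bob 6 -> Alice wins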

Criteria Overlap Double Counts Things

Suppose some of the same underlying traits and behaviors contribute both to the needs/expertise match and to an employee’s versatility, and that such a trait is one of many we would like to consider in deciding whether to retain the employee. Since it has an impact on both factors, its presence effectively gets counted twice (as would its absence).

Unless we are very careful to make sure that each rating category is separate and distinct, a rubric like this introduces distortion into the final score by unintentionally overweighting some factors and underweighting others.
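
A small illustration, with invented traits and an invented mapping: when one underlying trait drives two of three criteria, a change in that trait moves the total twice as much as a change in any other.

    # One trait feeding two criteria is silently double-weighted.
    # Traits, mapping, and scores are invented.
    trait = {"breadth": 1.0, "output": 0.5}

    criteria = {
        "needs_expertise_match": trait["breadth"],  # driven by breadth
        "versatility":           trait["breadth"],  # breadth again
        "productivity":          trait["output"],
    }
    print(sum(criteria.values()))  # 2.5
    # Raising breadth by 1 raises the total by 2; raising output by 1
    # raises it by only 1. Breadth counts double without anyone deciding so.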

Sequence Matters

When using rubrics like this we sometimes hear that one or another criterion is only used after the others, or is used as a screen before the others. This too needs to be done thoughtfully and deliberately. It is not hard to show how different sequences of applying criteria can result in different outcomes.
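
A toy example, with invented data: the same two screens applied in different orders retain different people.

    # Applying the same two screens in different orders can keep
    # different finalists. Data is invented.
    people = [
        {"name": "A", "productivity": 3, "versatility": 1},
        {"name": "B", "productivity": 2, "versatility": 3},
    ]

    def screen(pool, key, keep_n=1):
        """Keep the top keep_n of pool, ranked by key."""
        return sorted(pool, key=lambda p: p[key], reverse=True)[:keep_n]

    print(screen(screen(people, "productivity"), "versatility"))  # keeps A
    print(screen(screen(people, "versatility"), "productivity"))  # keeps B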

Zero is Not Nothing

A final problem with scales like these is that even if the distance between the ratings were meaningful, it is not always the case that we have a well-defined “zero” rating.  Assigning zero to the lowest rating category is not the same as saying that those assigned to this category have none of whatever is being measured.

The problem this introduces is that a scale without a well-understood zero yields measurements that cannot meaningfully be multiplied and divided. This means that we cannot think in terms of average ratings as we often do.
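
A quick illustration, with invented codings: every coding below preserves the same ordering, yet the implied “ratios” are all different.

    # Without a true zero, ratios are artifacts of the coding choice.
    # Each (low, high) pair below preserves the order low < high.
    for low, high in [(1, 3), (11, 13), (0.5, 2.5)]:
        print(f"low={low}, high={high}: high/low = {high / low:.2f}")
    # 3.00, 1.18, 5.00 -- same ordinal data, three different "ratios".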

Rankings are Just Rankings

The upshot is that ordinal scales are just rankings, just orderings, and without a better-established underlying numerical scale, rankings are very hard to compare and combine in a manner that does not obscure more than it illuminates. Decisions based on naive uses of quantification are as likely as not to be wrong, influenced by extraneous and unacknowledged factors, or simply the random consequences of choices made along the way.

Managing the Wrong Problem

Originally published June 2017.

We have a revenue problem, not a cost problem.

Imagine an educational institution that finds itself running a budget deficit – projected revenues just do not balance projected costs. It’s a very familiar scene in higher education in 2017.

And so what happens?  The Board of Trustees says “balance that budget!” and the Administration hears “tighten your belt!”

Cost Cutting is Easy. Revenue Growth is Hard.

Why don’t we hear “strengthen your revenues”?  The answer is pretty simple: cost cutting is easier work.  Cutting costs means looking inward and relying on bureaucratic authority. One can tell one’s reports to cut costs by X% and then hold them accountable for results. They in turn tell their reports to do the same and wait for results.  And the work is done by poring over budget reports and having meetings with PowerPoint slides full of numbers.  The work flows down the bureaucracy. Bureaucracies are more comfortable when work flows down. This process is NOT rocket science.

On the other hand, to pay attention to and do something about revenues, people have to look outward, become informed about the outside world, take in new ideas, struggle to understand opportunities and communicate them to colleagues, do the very hard work of finding out what the world wants and telling the world what you can do.  This IS rocket sciencey.

What Usually Happens

By my estimation, it’s easy for a college, over the course of, say, two years, to deploy thousands of hours of its best people’s time and creativity talking about how to nibble away at the margins of the expense side of its budget.  A 20+ person budget committee will meet several times a month; C-suite folks and their staff will meet even more often; faculty meetings, committee meetings, and all-campus meetings are devoted to the task. Consultants are hired to crunch data; in-house people crunch the data again. It’s probably not too far off the mark to imagine the institution puts more energy into this than into anything else during this time.

Because.What.We.Are.Good.At

It’s not surprising, though, because most institutions have a management team that has been selected on the basis of their ability to manage the status quo, to keep things running as they are (perhaps with modest expansion and growth). The “technology” of innovation, growth, expansion, rethinking business models, being entrepreneurial, leveraging resources, finding efficiency, building strategic platforms on which new revenue streams can grow, all of these are beyond their ken. It is easy to predict that we will put all our energy into saving and so very little into earning.

And when we DO turn our attention away from cost-cutting, the furthest we usually get is to devote ourselves to RETENTION. We tell ourselves that each retained student is $15k of net tuition we will have next year that we might have lost. Retention attention activates our missionary zeal and provides a concrete focus for building programs and hiring staff. But we are inclined to measure neither the cost-effectiveness of these efforts nor their fundamental limitedness – perfect retention will only ever get you back to the already anemic enrollment you started with.  And when your best people are working on this, they are not working on growth.

There is No Smaller Right-Size

This is a very big problem. When most institutional energy and brainpower is devoted to cutting costs and stemming losses, very little is left over for actual expansion of the revenue pie.  Most colleges that are struggling will not achieve anything close to a sustainable business structure via cuts and retention. They have fundamental structural deficits related to their size, and there is no smaller size that works. All of the efforts at cost management and loss prevention are efforts at managing the wrong problem.

See also

Wedell-Wedellsborg, Thomas. 2017. “Are You Solving the Right Problem?” Harvard Business Review, January-February.