Curriculum Design Lab

By the time curriculum ideas become reality they’ve been so picked over by committees and assistant associate deans that their pedagogical coherence is reduced to something like “at this point nobody objects strongly to what’s left.” A few years back I had in mind to teach a course or workshop that would encourage individuals or small groups to take a sprint or hackathon approach to the first draft of a curriculum plan. Here’s an effort I put together, partly as an example of that genre, but also as a vision for a program I thought Mills College, where I taught at the time, could absolutely hit home runs with, given its staffing, location, and reputation. It was basically a port to the small liberal arts college context of the program I helped get off the ground at USC in 2014, the Iovine and Young Academy for Arts, Technology and the Business of Innovation.

Technology, Business, and Design

What. One way to define “Innovation” is as the development of creative and sustainable solutions to important problems. The Technology, Business, and Design, or TBD, program offers students rigorous training as innovators in the context of a liberal arts education. It hybridizes the college’s strengths in the arts, business, and technology to produce a new kind of academic program and a new genre of majors.

Who. The TBD program is not designed for students who are already at our college. Our goal is to attract the attention of students who would never have given us a second look. We expect to recruit some of them to this program, but we expect others will enroll at the college in other programs. Further, we expect that some students will “transfer” into other majors after the first two years. The program we want to build will put us on the recruitment map in new ways by being a radically forward-looking program that is not available anywhere else. It will be a program that builds on legacy strengths of the institution but goes quantitatively and qualitatively beyond just breathing new life into old programs.

Why. The purpose of the TBD degree is not to get art students to learn some technology or for science students to minor in business or for business students to learn to talk to coders and designers. We are not after interesting double majors or curious interdisciplinary majors. All that can already be done. We are going to invent something that can’t be done now, but that the world needs: a new kind of degree that is unabashedly both practical and profound, that yields graduates who can be described as creative critical thinkers, visionary pragmatists, technologists with a social conscience, radicals whose skill sets make them a danger to the status quo.

How. Unlike similar programs at larger institutions in schools of engineering or design, we envision a program in which we are teaching “innovation as a liberal art” – believing that the core learning goals of the liberal arts are highly resonant with the content of what might be called “innovation education.”

A Two-Level Curriculum

TBD is a cohort-based degree program. The integrated trans-disciplinary curriculum has the graduated profile of sequential majors but with combinatoric flexibility that will yield several tracks in the major.

The curriculum has two levels. The first two years are highly structured and culminate in a sophomore project. Those who successfully complete this “pre-diploma” will continue on to an upper-division program that allows for more in-depth study of the three component areas of the program (arts and design; business, organizations, and social science; technology and computer science), along with an intense capstone experience solving real-world problems.

The Pre-Diploma

The curriculum begins with three intense introductory courses. The pre-diploma program is built around the “ABC phase” in which students take an introductory course in each area (A=art/design, B=business/organization, C=computing/technology) during the first two semesters.

  • Culture, Commerce, and Innovation
  • Design, Visualization, and Prototyping
  • Physical Science and Coding I

Regardless of what strengths a student brings to the program, these intense introductory courses lay a disciplinary foundation for subsequent work.

The second phase involves three bridging courses. Starting in the spring of the first year, single-discipline courses are followed by courses that explicitly tie two of the areas together: technology and design; design and business; business and technology. These courses explicitly build on what was learned in the respective introductory courses.

  • Design and Social Innovation
  • Technology and Design
  • Organization and Technology

The third part of the pre-diploma is a seminar in which the three areas converge and a sophomore project in which students work in teams to take an idea from initial problem identification through prototype iteration and testing. The pre-diploma program is designed so that students can either continue on to the upper-division curriculum or opt out and pursue other majors.

The Upper Division Curriculum

The upper division of the program starts with an internship that is bookended by entry and exit seminars. The entry seminars will consist of skills and knowledge specific to the internships, professional skills, and priming exercises to maximize the pedagogical impact of the internship. The internship itself will start at mid-semester in the fall and continue to mid-semester in the spring. The exit seminar will consolidate lessons and skills learned in the internship and make connections with the student’s proposed final-year project.

Innovative Infrastructure: Shattering the Semester

Courses in the curriculum will be worth 1, 2, 3, or 4 credits, with credit value determined by the number of weeks the course or workshop meets. All classes will meet for the same amount of time each week. This will allow us to stagger course offerings; for example, during the first semester students will have two 3-credit courses and one 4-credit course, but one of the 3-credit courses will end 3.5 weeks before the end of the semester and the other will start 3.5 weeks into the semester, allowing students a little breathing room at either end of the semester. The program also includes digital skills workshops that last 7 weeks (2 credits) and a series of short 3.5-week workshops worth 1 credit each.
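To make the arithmetic concrete, here is a minimal sketch of that staggered first semester. The course-to-credit assignments and start weeks are hypothetical; it simply assumes a 14-week semester and the 3.5-weeks-per-credit pattern described above.

```python
# Sketch of the credit/length arithmetic: 3.5 weeks of meeting time per credit,
# so 1/2/3/4-credit offerings run 3.5/7/10.5/14 weeks of a 14-week semester.
# Course-to-credit assignments and start weeks below are hypothetical.

SEMESTER_WEEKS = 14
WEEKS_PER_CREDIT = 3.5

def weeks_for(credits):
    """Weeks a course meets, given its credit value."""
    return credits * WEEKS_PER_CREDIT

first_semester = [
    # (course, credits, start week)
    ("Culture, Commerce, and Innovation", 4, 0.0),        # runs the full semester
    ("Design, Visualization, and Prototyping", 3, 0.0),   # ends 3.5 weeks early
    ("Physical Science and Coding I", 3, 3.5),            # starts 3.5 weeks late
]

for course, credits, start in first_semester:
    end = start + weeks_for(credits)
    print(f"{course}: {credits} credits, weeks {start:g}-{end:g} of {SEMESTER_WEEKS}")
```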

Course Flights

The upper division consists of three “flights,” one in each focal area. A course flight is a sequence of two or more courses in which students experience cumulative skill and knowledge building and progressively higher levels of mastery. Students choose one “long flight” consisting of the introductory course plus three advanced courses, a medium flight with two advanced courses, and a short flight with a single advanced course.

A student could have a business focus and do the long flight in organization/business courses. A student with a technology focus might make technology her long flight, design her medium length flight, and business her short flight.

A third-year course called Problems and Solutions builds on these skills and knowledge. This leads to a fourth-year capstone and studio/garage/workshop/fieldwork project to which about half of the year’s time will be devoted. Students will be expected to tackle a meaningful problem, assembling a team and seeing it through from start to finish.

Every semester there will be a series of four guest speakers – innovators in all fields drawn from the greater Bay Area – who will meet with students in the program in sessions we call “Innovators Face to Face.” Some of these will be structured as presentations and conversations and some will be structured as critique visits for which students will prepare presentations and visitors will offer commentary, critique, and advice.

DigiTool Courses

Over the course of their first four semesters, students in the program will take a series of toolbox courses we are calling “digiTools.” Many of these will be training in software applications used throughout the curriculum.

The popUp Curriculum

Every semester will also feature a series of “popUp” workshops on topics that complement the other curricular offerings and allow instructors to do more in those classes because common topics are covered outside of their classroom time. The first set of popUps will be used to orient students to the program and to introduce tools that are used throughout the program.

Staffing

Revenue Sharing

As a first approximation, assume 25 students in the initial cohort with net tuition of $15k, of which 33% is allocated to institutional overhead. The remainder is apportioned in proportion to credit hours delivered. One sample scheme would draw on faculty FTE in several existing programs/departments.
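Here is a back-of-the-envelope sketch of that arithmetic; the department names and credit-hour counts are hypothetical placeholders, not a proposal.

```python
# Revenue-sharing sketch: 25 students x $15k net tuition, 33% off the top for
# institutional overhead, remainder apportioned by credit hours delivered.
# Department names and credit-hour counts are hypothetical placeholders.

students = 25
net_tuition = 15_000
overhead_rate = 0.33

gross = students * net_tuition        # $375,000
overhead = gross * overhead_rate      # $123,750
pool = gross - overhead               # $251,250 to share

credit_hours_delivered = {
    "Art/Design": 10,
    "Business/Economics": 8,
    "Computer Science": 8,
    "Sociology": 4,
}

total_hours = sum(credit_hours_delivered.values())
for dept, hours in credit_hours_delivered.items():
    share = pool * hours / total_hours
    print(f"{dept}: {hours} credit hours -> ${share:,.0f}")
print(f"Overhead: ${overhead:,.0f}; shared pool: ${pool:,.0f}")
```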

A Project Begins

A colleague today said how amazing the young people in her program are, how they almost don’t understand what some of the antinomies that vexed our generation were even about.  We mused that maybe they’d realize some of the human aspirations that had eluded us.  This potential, I think, poses an immense challenge to those of us who continue to collect paychecks for being this generation’s teachers.

One can hear some voices saying “we don’t need you to teach us anything” and certainly some of our colleagues advocate the laissez-faire response. Some folks might have read Tim Kreider’s March 2018 op-ed, “Go Ahead, Millennials, Destroy Us,” as advocating a stand-back-and-let-them-at-it approach, but I don’t think we get to do this as educators. As often as I’ve heard resistance to, and dismissal of, the educational status quo, every signal I’ve ever received from my students is captioned “teach me something, damn it.”

But what?  Another tired defense of the liberal arts won’t do.  Our education turned out to be a good fit for getting to the end of the last century and starting this one.  The challenge is to formulate the education that will turn out to have been a good fit for the next half-century. Some of it will be different, but not all. And therein lies the challenge.

And then an article I read in the New York Times about tech wunderkinds growing older (and acknowledging that they were also growing wiser) started me thinking.  These brash disruptors become parents and their perspective shifts. Just wait, I thought, until they have teenagers, experience the death of their own parents, become empty nesters, and on and on.

But no, let’s not wait. Let’s not be like the singer in that old song, “Ooh La La” who said

Poor young grandson, there’s nothing I can say
You’ll have to learn, just like me
And that’s the hardest way, ooh la la.

If we are not going to sit back and wait, what of the “received wisdom” should we be evangelizing?  Most of us academics just pick out our favorites or the parts that advance our careers.  But what would we teach if the criterion were “what do they really need to know about?” And their lives, not ours, depended on the answer.

Design Thinking for Higher Education: A Different Take on “Student-Centered”

In the spirit of eating my own dog food, I started asking what would happen if we brought a design thinking sensibility to higher education.  Really using it.

A main idea in human centered design is that good design emerges from deep understanding of a user’s needs.  This turns out to be a really hard thing to do.  It goes against all the urges of genius.  Almost everyone starts with a solution and then burns cognitive energy to defend and market the idea.  So it goes when I think people should read Moby Dick and then sit down to write out the reasons why. Or when I want to require two semesters of calculus and can present a strong argument as to why.

Now there are problems that can be solved this way.  Sometimes we are so thoroughly familiar with a problem arena (the study of mechanical engineering, say) that confidence in the thing we just know to be the solution may be warranted.

But the world of the 2020s, 2030s, and 2040s ain’t like that.  In fact, the world of 2018 and 2019 is not familiar enough either.

So here is the project: study the user; identify her needs; develop criteria for the solution; brainstorm; prototype.

In a sense this will amount to taking seriously and at face value that annoying challenge “how is any of this relevant to what I’m trying to do?” Except for two things: we want to start with the “trying to do” and we don’t want to be limited to “now.” We will use wisdom, dialog, listening, and researching so that we can begin with “a thing you will want to do” (or a hurdle you will face) and then we will dig through our mental archives and find the “this” and then we will figure out how to articulate the relevance in a manner that makes the sale.

Warm Up Exercise

Frankly, I think the project will be too hard.  To break into it I am going to propose a less pure version. In this version each contributor will identify a single work, author, concept, school of thought, finding, or experiment that bears a keen relevance to something our audience is, or will be, grappling with.  And then we pitch it, maybe TED-talk style, as a sort of curricular recommendation.

If done well, this could lead us toward a “student-centered” education worthy of the name.

Bad Methods Yield Non-Actionable Answers

Originally published June 2017

Having drunk the Kool-Aid of rubrics and assessment, many an untrained academic administrator epitomizes that old saw about knowing just enough to be dangerous. Suppose a manager wants to make a decision based on multiple criteria. An academic manager, for example, might consider:

  • Employee Type
  • Organization Needs and Employee Expertise
  • Employee Productivity
  • Employee Versatility
  • Engagement in Critical Roles

The plan is to rate each employee on each dimension and then add up the ratings to yield a score that will permit comparison between employees for the purpose of decisions about whether to retain the employee or not.

The individual ratings will be some variation on High, Medium, Low.

The use of rubrics such as this is all the rage in higher education. Unfortunately, they are frequently deployed in a manner that reduces, rather than improves, the quality of the decisions they are meant to inform.

Ratings Are Not Normalized

By having the top rating count 3 points in some categories and only 2 in others, we introduce a distortion into the final score. Type, match, and productivity “count” more than versatility and critical role.  If that’s intended, fine, but if not, it skews results.
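A small illustration with made-up numbers: two employees who are each rated “High” on three criteria and “Low” on the other two end up with different raw totals simply because of where the 3-point categories fall; rescaling each criterion to a common range removes the artifact.

```python
# Hypothetical rubric in which the top rating is worth 3 points on some
# criteria and only 2 on others.  Each employee is "High" on three criteria
# and "Low" on the other two, yet their raw totals differ.

max_points = {"type": 3, "match": 3, "productivity": 3,
              "versatility": 2, "critical_role": 2}
MIN_POINTS = 1   # "Low" is coded 1 on every criterion

emp_a = {"type": "High", "match": "High", "productivity": "High",
         "versatility": "Low", "critical_role": "Low"}
emp_b = {"type": "Low", "match": "Low", "productivity": "High",
         "versatility": "High", "critical_role": "High"}

def points(criterion, rating):
    return max_points[criterion] if rating == "High" else MIN_POINTS

def raw_total(ratings):
    return sum(points(c, r) for c, r in ratings.items())

def normalized_total(ratings):
    # Rescale every criterion to the same 0-1 range before summing.
    return sum((points(c, r) - MIN_POINTS) / (max_points[c] - MIN_POINTS)
               for c, r in ratings.items())

print(raw_total(emp_a), raw_total(emp_b))                 # 11 vs 9
print(normalized_total(emp_a), normalized_total(emp_b))   # 3.0 vs 3.0
```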

Ordinal Scales Do Not Contain Distance Information

Any fool, as they say, knows that “high” is more than “medium” which is more than “low” and “low” is more than “none.”  When we have a scale that has this property we call it an “ordinal” scale; the elements of the scale can unambiguously be ordered from low to high.

What we do NOT know, though, is whether the “distance” between a high rating and a medium rating is equal to the distance between a medium rating and a low rating.

Although it is extremely common to look at an ordinal scale like “high, medium, and low” and assign 3 to high, 2 to medium, and 1 to low, this is a serious methodological error.  It invents information out of thin air and inserts it into the assessment. The ways in which this distorts the answers that emerge from the measurement cannot be determined without careful analysis. Just writing 3, 2, 1 next to words is not careful analysis.
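Here is a tiny illustration with hypothetical ratings: two numeric codings that both respect the High > Medium > Low ordering can disagree about which employee comes out ahead once the numbers are summed.

```python
# Two numeric codings of the same ordinal ratings.  Both respect the
# High > Medium > Low ordering, but they assume different "distances"
# between categories -- information the ratings never contained.

coding_1 = {"High": 3, "Medium": 2, "Low": 1}
coding_2 = {"High": 10, "Medium": 3, "Low": 1}   # equally order-preserving

# Hypothetical employees, each rated on five criteria.
emp_x = ["High", "High", "Low", "Low", "Low"]
emp_y = ["Medium"] * 5

def total(ratings, coding):
    return sum(coding[r] for r in ratings)

print(total(emp_x, coding_1), total(emp_y, coding_1))   # 9 vs 10: Y "wins"
print(total(emp_x, coding_2), total(emp_y, coding_2))   # 23 vs 15: X "wins"
```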

Criteria Overlap Double Counts Things

Suppose some of the same underlying traits and behaviors contribute both to a needs/expertise match and to an employee’s versatility, and that this trait is one of many we would like to consider in deciding whether to retain the employee. Since it has an impact on both factors, its presence effectively gets counted twice (as would its absence). Unless we are very careful to be sure that each rating category is separate and distinct, a rubric like this introduces distortion into the final score by unintentionally overweighting some factors and underweighting others.
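A minimal sketch with hypothetical traits and criteria: an employee with one double-counted trait scores as well as an employee with two distinct ones.

```python
# Hypothetical overlap: one underlying trait ("broad technical skill") raises
# both the expertise-match rating and the versatility rating, so it is
# effectively counted twice in the total.

def rubric_total(traits):
    expertise_match = 3 if "broad technical skill" in traits else 1
    versatility     = 3 if "broad technical skill" in traits else 1   # same trait again
    productivity    = 3 if "meets deadlines" in traits else 1
    critical_role   = 3 if "mentors colleagues" in traits else 1
    return expertise_match + versatility + productivity + critical_role

one_trait  = {"broad technical skill"}                   # 3 + 3 + 1 + 1 = 8
two_traits = {"meets deadlines", "mentors colleagues"}   # 1 + 1 + 3 + 3 = 8

print(rubric_total(one_trait), rubric_total(two_traits))   # 8 vs 8
```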

Sequence Matters

When using rubrics like this we sometimes hear that one or another criterion is only used after the others, or is used as a screen before the others. This too needs to be done thoughtfully and deliberately. It is not hard to show how different sequences of applying criteria can result in different outcomes.
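One small illustration with made-up ratings: screening on a criterion first versus ranking on a summed score (with that same criterion as a tiebreaker) can retain different people from the same data.

```python
# Hypothetical example: the order in which criteria are applied changes
# who gets retained, even though the ratings themselves never change.

employees = {
    "A": {"critical_role": 1, "productivity": 3},
    "B": {"critical_role": 2, "productivity": 1},
}

# Procedure 1: screen on critical_role first, then choose by productivity.
top_role = max(e["critical_role"] for e in employees.values())
survivors = {n: e for n, e in employees.items() if e["critical_role"] == top_role}
keep_screen_first = max(survivors, key=lambda n: survivors[n]["productivity"])

# Procedure 2: rank on the summed score, with critical_role only as a tiebreaker.
keep_sum_first = max(employees,
                     key=lambda n: (sum(employees[n].values()),
                                    employees[n]["critical_role"]))

print(keep_screen_first, keep_sum_first)   # B A -- same data, different decision
```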

Zero is Not Nothing

A final problem with scales like these is that even if the distance between the ratings were meaningful, it is not always the case that we have a well-defined “zero” rating.  Assigning zero to the lowest rating category is not the same as saying that those assigned to this category have none of whatever is being measured.

The problem this introduces is that a scale without a well-understood zero point yields measurements that cannot meaningfully be multiplied and divided. This means that we cannot think in terms of average ratings the way we often do.
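One way to see the problem, using hypothetical codes: shifting every code by the same constant leaves the ordering untouched but changes any ratio of averages, so a claim like “A’s average rating is 17% higher than B’s” is an artifact of the coding, not a fact about the employees.

```python
# Without a meaningful zero, ratios of (average) ratings are artifacts of the
# arbitrary numeric codes, even though the ordering of employees is stable.

def avg(values):
    return sum(values) / len(values)

emp_a = [3, 3, 1]   # hypothetical coded ratings: High, High, Low
emp_b = [2, 2, 2]   # Medium, Medium, Medium

print(avg(emp_a) / avg(emp_b))    # ~1.17 -- "A's average is 17% higher"?

shift = 10          # recode Low=11, Medium=12, High=13: ordering unchanged
a_shifted = [x + shift for x in emp_a]
b_shifted = [x + shift for x in emp_b]
print(avg(a_shifted) / avg(b_shifted))   # ~1.03 -- the "advantage" mostly vanishes
```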

Rankings are Just Rankings

The upshot is that ordinal scales are just rankings, just orderings, and without a more firmly established underlying numerical scale, rankings are very hard to compare and combine in a manner that does not obscure more than it illuminates. Decisions based on naive uses of quantification are as likely as not to be wrong – influenced by extraneous and unacknowledged factors, or simply the result of random consequences of choices made along the way.

Managing the Wrong Problem

Originally published June 2017

We have a revenue problem, not a cost problem.

Imagine an educational institution that finds itself running a budget deficit – projected revenues just do not balance projected costs. It’s a very familiar scene in higher education in 2017.

And so what happens?  The Board of Trustees says “balance that budget!” and the Administration hears “tighten your belt!”

Cost Cutting is Easy. Revenue Growth is Hard.

Why don’t we hear “strengthen your revenues”?  The answer is pretty simple: cost cutting is easier work.  Cutting costs means looking inward and relying on bureaucratic authority. One can tell one’s reports to cut costs by X% and then hold them accountable for results. They in turn tell their reports to do the same and wait for results.  And the work is done by poring over budget reports and having meetings with PowerPoint slides full of numbers.  The work flows down the bureaucracy. Bureaucracies are more comfortable when work flows down. This process is NOT rocket science.

On the other hand, to pay attention to and do something about revenues, people have to look outward, become informed about the outside world, take in new ideas, struggle to understand opportunities and communicate them to colleagues, do the very hard work of finding out what the world wants and telling the world what you can do.  This IS rocket sciencey.

What Usually Happens

By my estimation, it’s easy for a college over the course of, say, two years to deploy thousands of hours of its best people’s time and creativity talking about how to nibble away at the margins of the expense side of its budget.  A 20+ person budget committee will meet several times a month, C-suite folks and their staff will meet even more often, faculty meetings, committee meetings, and all-campus meetings are devoted to the task. Consultants are hired to crunch data, in-house people crunch the data again. It’s probably not too far off the mark to imagine the institution puts more energy into this than anything else during this time.

Because.What.We.Are.Good.At

It’s not surprising, though, because most institutions have a management team that has been selected on the basis of their ability to manage the status quo, to keep things running as they are (perhaps with modest expansion and growth). The “technology” of innovation, growth, expansion, rethinking business models, being entrepreneurial, leveraging resources, finding efficiency, building strategic platforms on which new revenue streams can grow, all of these are beyond their ken. It is easy to predict that we will put all our energy into saving and so very little into earning.

And when we DO turn our attention away from cost-cutting, the furthest we usually get is to devote ourselves to RETENTION. We tell ourselves that each retained student is $15k net tuition we have next year that we might have lost. Retention attention activates our missionary zeal and provides concrete focus for building programs and hiring staff. But we are inclined to measure neither the cost effectiveness of these efforts nor their fundamental limitedness – perfect retention will only ever get you back to the already anemic enrollment you started with.  And when your best people are working on this, they are not working on growth.

There is No Smaller Right-Size

This is a very big problem. When most institutional energy and brainpower is devoted to cutting costs and stemming losses, very little is left over for actual expansion of the revenue pie.  Most colleges that are struggling will not achieve anything close to a sustainable business structure via cuts and retention. They have fundamental structural deficits related to their size, and there is not a smaller size that works. All of the efforts at cost management and loss prevention are efforts at managing the wrong problem.

See also

Wedell-Wedellsborg, Thomas. 2017. “Are You Solving the Right Problems?” Harvard Business Review, January–February.

Ten Reflections from the Fall Semester

Notes from this semester. Each semester I jot down observations about organizational practices, usually inspired by events at my place of employment.  Every now and then I try to distill them into advice for myself. Most are obvious, once articulated, but they come to notice, usually, because things happen just the other way round.

  1. Always treat the people you work with as if they are smart; explain why you take a stand or make a decision in a manner that demonstrates that you know they are smart, critical, and open to persuasion by evidence and argument. Set high standards for yourself. Your institutional work should be at least as smart as your scholarly work.
    • “it is better to be wrong than vague.” – Stinchcombe
    • If smart people are opposed to your idea, ask them to explain why. And listen. Remember, your goal is to get it right, not to get it your way.
  2. Do not put people in charge of cost cutting and budget reductions. Put them in charge of producing excellence within a budget constraint.
  3. Make sure everyone is able to say how many Xs one student leaving represents. How much will it cost to do the thing that reduces the chance a student will get fed up with things?
  4. If most of what a consultant tells you is what you want to hear (or already believe), fire her.
  5. Don’t build/design systems and policies around worst cases, least cooperative colleagues, people who just don’t get it, or individuals with extraordinarily hard-luck situations. Do not let people who deal with “problem students” suggest or make rules/policy.
  6. Be wise about what you must/should put up for a vote and what you should not. And if you don’t know how a vote will turn out, you are not prepared to put it up for a vote.  Do your homework, person by person.
  7. If a top reason for implementing a new academic program is because there’s lots of interest among current students, pause. Those students are already at your school. What you want are new programs that are attractive to people who previously would not have given you a second look.
  8. If you are really surprised by the reaction folks have to an announcement or decision then just start your analysis with the realization that YOU screwed up.
    • Related: don’t assume it was just about the messaging; you might actually be wrong, and you should want to know whether that’s the case.
  9. If your first impulse (or someone else’s) when asked to get something done is to form a committee, put someone else in charge of getting that thing done.
  10. Persuade/teach folks that teams and committees in organizations are not representative democracies. The team does not want your opinions, feelings, experiences, or beliefs; it wants you to enrich the team’s knowledge base by reporting on a part of the world you know something about.  And that usually means going and finding out, in a manner that is sensitive to your availability bias.  In the research phase, team members are the sense organs of the team. Be a good sense organ, not a jerking knee or pontificator or evangelist or naysayer.

"But even if they are not valid, they do tell you something…."

Remember, “validity” means “they measure what you think they measure.” “Data driven” can also mean driven right off the side of the road.

From Inside Higher Ed

Zero Correlation Between Evaluations and Learning

New study adds to evidence that student reviews of professors have limited validity.
September 21, 2016 By Colleen Flaherty

A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. (Here’s one addressing gender.) Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.
What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”
“Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,” reads the study, in press with Studies in Educational Evaluation. “Our up-to-date meta-analysis of all multi-section studies revealed no significant correlations between [evaluation] ratings and learning.”

House of Cards

A Facebook post called my attention to a neat little article about why swimming rules only recognize hundredths of seconds even though modern timing technology allows much more precise measurements. The gist is this: swimming rules recognize that construction technology limits the precision with which pools can be built to something like a few centimeters in a 50 meter long pool.  At top speed a swimmer moves about 2 millimeters in a thousandth of a second.  So, if you award places based on differences of thousandths of a second, you can’t know if you are rewarding faster swimming or the luck of swimming in a shorter lane.
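The back-of-the-envelope version, assuming a world-class 50-meter sprint takes roughly 21 seconds and that lane length can legitimately vary by a few centimeters:

```python
# Rough arithmetic behind the swimming example (all figures approximate).

pool_length_m = 50.0
sprint_time_s = 21.0          # roughly a world-class 50m freestyle
length_tolerance_m = 0.03     # roughly the allowable construction error

speed_m_per_s = pool_length_m / sprint_time_s             # ~2.4 m/s, i.e. ~2.4 mm per ms
tolerance_in_time = length_tolerance_m / speed_m_per_s    # ~0.013 s

print(f"speed: ~{speed_m_per_s:.1f} mm per millisecond of swimming")
print(f"a {length_tolerance_m*100:.0f} cm lane difference hides ~{tolerance_in_time*1000:.0f} ms")
```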

This observation points to the more general phenomena of false precision, misplaced concreteness (aka reification, hypostatization), and organizational irrationality rooted in sloppy and abusive quantification.

These are endemic in higher education.

Students graduate with a GPA and it’s taken as a real, meaningful thing. But if you look at what goes into it (exams designed more and less well, subjective letter grades on essays, variable “points off” for rule infractions, quirky weighting of assignments, arbitrary conversions of points to letter grades, curves, etc.), you’d have to allow for error bars the size of a city block.

Instructors fret about average scores on teaching evaluations.

“Data driven” policies are built around the analysis of tiny-N samples that are neither random nor representative.

Courses are fielded or not and faculty lines granted or not based on enrollment numbers with no awareness of the contribution of class scheduling, requirement finagling, course content overlap, perceptions of ease, and the wording of titles.

Budgets are built around seat-of-the-pants estimates and negotiated targets.

One could go on.

The bottom line is that decision makers need to recognize how all of these shaky numbers are aggregated to produce what they think are facts about the institution and its environment.  This suggests two imperatives. First, we should reduce individual cases of crap quantification.  Second, when we bring “facts” together (e.g., enrollment estimates and cost of instruction) we should adopt an “error bar” sensibility – in its simplest form, treat any number as being “likely between X and Y” – so that each next step is attended by an appropriate amount of uncertainty rather than an inappropriate amount of fantasized certainty.
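Here is a minimal sketch of that error-bar habit, with made-up figures: carry each quantity as a (low, high) range and combine the ranges instead of the point estimates.

```python
# "Error bar" sensibility: propagate (low, high) ranges instead of point
# estimates.  All figures below are made up for illustration.

def times(a, b):
    """Multiply two (low, high) ranges."""
    corners = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(corners), max(corners))

def minus(a, b):
    """Subtract range b from range a."""
    return (a[0] - b[1], a[1] - b[0])

enrollment = (180, 220)                        # "likely between 180 and 220 students"
net_tuition = (13_000, 17_000)                 # "likely between $13k and $17k each"
cost_of_instruction = (2_600_000, 3_200_000)

revenue = times(enrollment, net_tuition)
margin = minus(revenue, cost_of_instruction)

print(f"revenue: ${revenue[0]:,} to ${revenue[1]:,}")
print(f"margin:  ${margin[0]:,} to ${margin[1]:,}")   # may span a loss and a surplus
```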

"Free College" and the System of Higher Education

Finally, someone is writing about the consequences of “free” college for the system of higher education in the US.

From The Chronicle of Higher Education 1 August 2016

How Clinton’s ‘Free College’ Could Cause a Host of Problems

By Scott Carlson and Beckie Supiano

The policy proposals of presidential campaigns aren’t often burdened by details or even realism. A candidate’s ideas are supposed to represent vision, ambitions, principles — all while taking on the latest American anxiety.

These days, some of that anxiety concerns the cost of college and the notion that student debt burdens young people as they head out to get jobs, buy homes, and start families. Hillary Clinton’s answer is her “New College Compact,” which includes a plan — adapted from her tenacious primary opponent, Sen. Bernie Sanders — that would cover tuition for students from families earning up to $125,000 a year.

“College used to be pretty affordable,” says a fact sheet on Mrs. Clinton’s compact. “For millions of Americans, that’s not the case anymore.” Colleges’ systems of grants and other financial assistance are complicated, and “free tuition” is a lot easier to pitch than a plan to tweak the existing patchwork of aid. Simple messages tend to resonate best.

And this message is a particularly resonant one. Higher education is widely seen as a necessary step on the road to a middle-class lifestyle, and most policy makers agree that the country needs a more educated work force. But as more of the burden of paying for college shifts to students and their families, proposals like Mrs. Clinton’s make a powerful suggestion: that higher education is a public good, which deserves to be treated as such.

The plan is grand — and very likely dead on arrival in Washington. Although the notion of free college is popular among progressives and young people, conservatives — who will probably retain control of the House of Representatives and many state governments after November — have balked at the cost of various free-college plans. Even some left-leaning policy wonks have questioned whether the plan would drive up tuition, put new burdens on the tax system, or even undermine college access.

Let’s set aside for a moment the question of whether the plan could ever become reality and treat it as a thought experiment: If Mrs. Clinton’s plan passed, what would happen to the higher-ed landscape? Many of the specifics aren’t known yet. But one thing is clear: Policy makers could write a free-college plan that does significant harm and questionable good.

PRIVATE COLLEGES IN PERIL

First in line for harm, most experts agree, would be private colleges. Although many people (and some policy makers) picture elite, wealthy institutions at the mention of “private colleges,” the category also includes hundreds of small, remote institutions, with tiny endowments.
“These colleges are concentrated in rural areas in the Midwest and Northeast, where high-school populations have been fairly stagnant,” says Robert Kelchen, an assistant professor of higher education at Seton Hall University. What’s more, he says, high-school graduates are increasingly minority or first-generation college students with lower incomes. “Because of that, these students might be more price-sensitive and may be interested in going to a public college rather than a private college.”

There’s a big variable here: Mrs. Clinton’s free-college plan does not make clear whether students at private colleges could still get grants and loans from the federal government. And while free tuition would surely appeal to many families, students don’t choose colleges on price alone. They also care about finding a strong academic program and a good fit. Geography, too, is key: Most students go to college relatively close to home.

But if public colleges became free for those lower-income students, says Kent John Chabotar, a former president of Guilford College, “small private colleges without endowments in states with highly regarded public universities — particularly the flagship universities — would be in trouble.”

The private colleges would have to compete to attract students who would be less prepared for college and have lower expected family contributions. “You’re going to see a combination of dropping enrollments and skyrocketing tuition discounting, killing off the weaker, private, unendowed colleges,” Mr. Chabotar says.

PUBLIC COLLEGES PRESSURED

So let’s say that the migration happens, and a new crop of students chooses public institutions over the private ones. It’s unclear that regional public and community colleges have enough capacity to meet that demand.

Public two- and four-year colleges already enroll more than three-quarters of the nation’s undergraduates. Even if a college had been planning to grow when Mrs. Clinton’s policy took effect, government funding probably would not keep pace with its needs over time, says Donald Hossler, a senior scholar at the Center for Enrollment, Research, Policy & Practice at the University of Southern California.

Colleges, he says, would be expected to educate more people with fewer resources per student. The quality of public education could erode. When enrollment is high and funding is tight, it can be hard for students to get all the classes they need to graduate on time.

At flagships and other selective public colleges, the picture would be more complicated. Flagships already tend to enroll more relatively affluent students, whose socioeconomic advantages give them an edge in admissions. Unless the government were to give the flagships some incentive to grow, they’d have little reason to take on more students. That would mean even more competition for a fixed number of seats.

So while free in-state tuition might sound like a boon to low-income students, it doesn’t help them much if they can’t get into the public college they want to attend, says Donald E. Heller, provost and vice president for academic affairs at the University of San Francisco.

In fact, some experts worry that free tuition for most families could exacerbate existing inequalities and further stratify higher education. While poor students would attend crowded, lower-tier public colleges at no cost, affluent students could buy their way into elite colleges, public or private.

Flagships have long worked to bring in more revenue from sources beyond state appropriations, like tuition — by enrolling more out-of-state students, for instance. That’s unlikely to change. One big question is how much flexibility the institutions would retain in those efforts. What would students whose families make $125,000 or more be asked to pay?

If the policy applies to out-of-state students, that eliminates a source of additional revenue. But if it applies only to in-state students, enrolling out-of-staters with family incomes below $125,000 would get harder when those students could attend their in-state colleges free, says Robert K. Toutkoushian, a professor in the Institute of Higher Education at the University of Georgia.

BETTER STUDENT OUTCOMES?

Free in-state tuition might also change when some students enroll. Mrs. Clinton has proposed that the program start out covering families making $85,000 or less, with the cap rising $10,000 annually for the next four years, until all families making less than $125,000 are covered. A family making $104,000 in the first year of the program might hold off on sending their children to college for a couple of years, Mr. McPherson says.

You might think that a plan that saves students money, possibly reducing how much they must work outside of class, ought to help them graduate, Mr. Hillman says. But graduation rates are higher at private four-year colleges than at public ones. That probably can’t be chalked up entirely to the colleges themselves — the students who enroll matter, too — but it makes it harder to think of the plan as a boon to college completion.
In the end, the free-college proposal is about one thing: mitigating debt. “Every student should have the option to graduate from a public college or university in their state without taking on any student debt,” says Mrs. Clinton’s website.

Sure, students from families making up to $125,000 wouldn’t have to borrow for tuition, but that doesn’t mean they wouldn’t have to borrow. They would still have to pay their living expenses, which can be a bigger burden than tuition, especially for needy students. Studies have shown that students on a low hourly wage have a hard time covering those bills.

Barring sizable government investment, many students would still take out loans, a pattern already established in other countries that have tried “free college.” Even at the handful of wealthy American colleges that meet students’ full financial need — accounting for the full cost of attendance, without loans — some students still borrow.

Here’s one more unanswered question: Does “free” mean tuition alone, or does it include fees? That’s no small detail: If colleges can’t get more tuition out of most students, they might look to increase fees instead.

ECONOMIC ENGINES

Colleges are economic engines in their towns — machines that move money around, particularly in rural communities. In many parts of the Northeast, Rust Belt, Midwest, and beyond, small colleges are anchor institutions, helping to prop up communities that long ago lost the manufacturers and farmers that helped create them in the first place.

Let’s assume that students chase free tuition at the public colleges, abandoning fragile private colleges and leading to their closure. What would happen to a place like Rensselaer, Ind., home of Saint Joseph’s College?

Saint Joseph’s is a Roman Catholic institution with 2,000 students; 45 percent are first-generation students, most of whom would be covered by the Clinton plan. “If you take 45 percent of our population, and you allow them to go to Purdue or Indiana University or any of the state schools in Indiana for free, more than likely they are not going to be coming here,” says Robert A. Pastoor, the college’s president. “The viability of the institution is going to be seriously called into question.” Indiana has 31 private institutions, he adds, and many of them would find themselves in the same situation.

In a town of 6,000, the college employs about 250 people and is a significant economic engine. Students, parents, and alumni shop at the grocery store, eat at the restaurants, sleep in the hotels. Locals go to sports games, celebrate Mass in the college’s Romanesque chapel, and hold wedding receptions and meetings in college facilities.

“All of that would go away,” says Mr. Pastoor, “and there is nothing to take its place.”

Student Evaluations of Teaching: WHY is this still a thing?

My institution just created a data science major. But it doesn’t care about using data in honest and robust ways any more than other institutions do.

It’s gotten to the point that it’s intellectually embarrassing and ethically troubling that we are still using student evaluations of teaching (SET) in their current form for assessing instructor job performance. It is laughable that we do so with numbers computed to two decimal places. It is scandalous that we ignore the documented biases (most especially gender-based). But we do.

Why isn’t this an active conversation between faculty and administrators?  I certainly find teaching evaluations helpful – trying to understand why I got a 3.91 on course organization but a 4.32 on inspiring interest is a useful meditation on my teaching practice.  I have to remind myself that the numbers themselves do not mean much.

Telling me where my numbers stand vis-à-vis my colleagues or the college as a whole FEELS useful and informative, but is it? I THINK I must be doing a better job than a colleague who has scores in the 2.0 – 3.0 range. But doing a better job at what? If you think hard about it, all you can probably take to the bank is that I am better at getting more people to say “Excellent” in response to a particular question. The connection between THAT student behavior and the quality of my work is a loose one.

Maybe I am on solid ground when I compare my course organization score to my inspires interest score. MAYBE I am on solid ground when I compare my course organization score in one class to the same score in another the same semester or the same class in another year. I might, for example, think about changes I could make in how I organize a course and then see if that score moves next semester.

But getting seduced by the second decimal place is ludicrous and mad. Even fetishizing the first decimal place is folly. For that matter, even treating this as an average to begin with is bogus.

If you also use these numbers to decide whether to promote me, you’ve gone off into the twilight zone where the presence of numbers gives the illusion of facticity and objectivity. Might as well utter some incantations while you are at it.

Some new research adds another piece of evidence to the claim that the validity of the numbers in student evaluations of teachers is probably pretty low. Validity means “do they measure what you think they measure?” The answer here is that they do not. Instead, they measure things like “what gender is your instructor?” and “what kind of grade do you expect in this course?”

These researchers even found gender differences in ratings of objective practices like “how promptly were assignments graded,” and these persisted when students were misinformed about the gender of their instructors.

Let’s start implementing a policy we can have some respect for. No more averaging. No more use of numerical scores in personnel review. No more batteries of questions that ask more or less the same thing (thus distorting the positivity or negativity of the overall impression).

As John Oliver asks, “why is this still a thing?”

From the "Good Grief!" Department

U of California Criticized for Extending Transfer Deadline
December 14, 2015
The University of California announced early this month that transfer applicants to system campuses — who thought they had to finish applications by the end of November — could apply as late as Jan. 4. The university said it was acting because UC campuses recently committed to admitting more transfer applicants. For students who still want to apply, this is, of course, good news.
But the Los Angeles Times reported that many of those who met the standard deadline, and the counselors who helped them, are frustrated. Mihai Gherghina, who met the regular deadline, said, “They didn’t tell anyone about this extension until after the deadline. It’s unfair how some lazy people were given another chance.” Adding to the frustration: those who submitted their applications for the early deadline will receive no preference and will not be permitted to edit their applications between now and Jan. 4.