Know (and be smarter than) Your Enemy

This post is not specifically about assessment, but it relates to the larger conversation of which assessment is but one component: the future of American higher education.  Thanks to a tweet from Cedar Reiner for turning me onto it.

You’ve possibly already seen this D. Levy opinion piece in the Washington Post from March, “Do college professors work hard enough?” (or certainly other examples of the genre), an example of what “they” are saying and reading (spoiler: it’s the standard “we pay them 100 grand and they only work 15 hours a week” tirade).

It’s a tired bit of rhetoric, to be sure, but sung over and over like church hymns, it comes to define reality for a certain set.  That needs to be countered by smart talk widely repeated; smirking won’t do.  Here’s one reasoned rebuttal by Swarthmore’s Tim Burke that casts the problem in terms of the larger arc of private capture of value through de-professionalization: “The Last Enclosures.”

The real challenge here is that most representatives of “the other side” (e.g., administrators, trustees, legislators) have not actually thought things through carefully but have bought into a well-crafted rhetoric and catchy simplifications, while “our side” takes a fundamentally conservative approach (same as it ever was) and puts its fingers in its ears and goes “la la la la I cannot hear you….”  Higher education has a broken economic model, but too many of us are content to just demonize those with really bad ideas about how to fix it.  I agree with most of Burke’s critique, but I think we need to move beyond critique.  There is a romantic valor in identifying the corruption in the current wave of education reform, but it won’t be stopped by mere resistance.  Bad new ideas need to be defeated by good new ideas (as can be found in some of Burke’s other posts).

Is There a Right to Data Collection?

What’s more socially harmful: politicians not knowing what sound bite will play well, or voters being misled by scurrilous misinformation?

New Hampshire is one state where legislators listened when voters complained about “push-polling” — the practice of making campaign calls that masquerade as surveys or polls.  Perhaps the most infamous example is George Bush’s campaign calling South Carolinians to ask what they would think if John McCain had fathered an illegitimate black baby.

The gist of M. D. Shear’s article, “Law Has Polling Firms Leery of Work in New Hampshire” (NYT, 1 March 2012), is that pollsters and political consultants are whining that “legitimate” operations are getting gun-shy about polling in New Hampshire for fear of being fined.  Actual surveys won’t get done, they suggest, because poorly worded legislation creates too much legal liability for legitimate work.

They do not take issue with what the law requires, and some even call it well-intentioned.  Paragraph 16a of section 664 of Title 53 of New Hampshire statutes requires those who administer push-polls to identify themselves as doing so on behalf of a candidate or issue.  In other words, if that’s what you are up to, you need to say so.

The problem, they say, is that the law is poorly written — good intentions gone bad, they suggest.  So, what does the statute actually say?  Not so ambiguous, really.  It says if you call pretending to be taking a survey when really you are spreading information about opposition candidates, then you are push-polling:

XVII. “Push-polling” means:

  1. Calling voters on behalf of, in support of, or in opposition to, any candidate for public office by telephone; and
  2. Asking questions related to opposing candidates for public office which state, imply, or convey information about the candidates’ character, status, or political stance or record; and
  3. Conducting such calling in a manner which is likely to be construed by the voter to be a survey or poll to gather statistical data for entities or organizations which are acting independent of any particular political party, candidate, or interest group.
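Note the conjunctive structure of the definition: a call counts as push-polling only if all three conditions hold at once.  As a minimal sketch (my own illustration, not part of the statute or of Shear’s article), the test reads like a three-way AND:

```python
# Illustration only: the three conjunctive conditions of the NH definition.
# The argument names are my paraphrases of the statute, not statutory terms.
def is_push_poll(calls_for_or_against_candidate: bool,
                 asks_loaded_questions_about_opponents: bool,
                 masquerades_as_independent_survey: bool) -> bool:
    """A call is 'push-polling' only if ALL three conditions are met."""
    return (calls_for_or_against_candidate
            and asks_loaded_questions_about_opponents
            and masquerades_as_independent_survey)

# A blunt attack call that never pretends to be a survey fails condition 3:
print(is_push_poll(True, True, False))  # False
# The classic masquerade meets all three:
print(is_push_poll(True, True, True))   # True
```

The point of the sketch is that an honestly labeled campaign call, however nasty, falls outside the definition; only the survey disguise completes it.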

And so, the question arises: why aren’t pollsters themselves taking steps to stamp out the practice?  One supposes the answer is that they still want to use it, even if the “good guys” would not stoop to the level of sleaziness that Bush and Lee Atwater practiced.

Interestingly, one of the objections that the pollsters raised was that “complying with the law by announcing the candidate sponsoring the poll would corrupt the data being gathered.”  It’s interesting because they apparently don’t think the data are corrupted by constantly adjusted question wordings and techniques that amount to push-polling in all but name, even when they stay inside the New Hampshire law.

But this brings me to my real point.  As a practicing social scientist I am consistently disheartened and often angered at the abuse of survey research engaged in by political parties and organizations.   I receive “surveys” from the DNC, DCCC, Greenpeace, Sierra Club, MoveOn.org, etc. etc. that triply insult me:

  • They are, in fact, often push-polls (if gentle ones) whose real purpose is to inform and incite, not to collect data.
  • They are couched disingenuously in terms of providing me an opportunity for input, to have my voice heard.
  • As research instruments they are almost always C- or worse, violating the most basic tenets of survey construction.

Perhaps I should just humor them and wink, since we both know what’s really going on.  Sometimes the political actor in me is content to do so.  But at other times the information-order pollution that they represent really gets to me.  These things corrupt the data of other, legitimate research efforts.  If the results are used, they amplify the error in the information order.  These things undermine social information trust.  They cheapen the very idea of opinion research.  Imagine if a certain amount of what passes for clinical trials were really just PR for pharmaceutical companies.  Or imagine that the “high stakes testing” used to study the education system was really just a ploy to indoctrinate children.  Or that marine biologists were just sending a message to the mollusks they study.

As a consultant helping organizations do research I used to ask “are you trying to find out something or are you trying to show something?” To this we could add “or are you just putting on a show?”

There’s something disturbing when an industry like political polling can’t do better than suggest that the one state that has taken steps to address a real, democracy-threatening practice within that industry is somehow “the problem.”  A Republican pollster whined that the law has “a harmful effect on legitimate survey research and message testing that really impairs our ability to do credible polling,” as if we should care.  It doesn’t take a Ph.D. to see that a little ignorance on the part of politicians about attitudes in New Hampshire is a price well worth paying to stop a practice that corrupts public deliberation.

White House College Scorecard Suggestions

The White House asked for some feedback on their proposed “scorecard” for higher education cost and value which is intended “to make it easier for students and their families to identify and choose high-quality, affordable colleges that provide good value.” Below are their questions and my (quick, off the top of my head, answering-an-online-survey level of analysis) responses.

What information is absolutely critical in helping students and their families choose a college:

You shouldn’t be asking this question here. It’s a researchable, empirical question. First, on what basis DO people decide? Then, to what degree do they have the appropriate information to do so?

As someone who studies things like this, I don’t think the info as presented here will provide much added value or lead to better decisions.  In terms of presenting information, it would probably be better to summarize in simpler terms: “On metric one, college X is above/at/below average for its sector.”  But then don’t just stop there — we also need a global comparison, because people don’t get how the sectors vary.

Note that costs are in fact a distribution, and presenting the average cost after grants still leaves a family very much in the dark if they have no way to know where they’re likely to fall on that distribution.

Graduation rate does not suffer from this problem.

Percent of loan repayment is too crude to be useful.  It’s useful for a banker who may want to finance loans for a student at a given school, but it’s very unclear how this number helps a student or family shopping for a college.

Average loan amount is useful.

As important as earning potential is, it’s a really stupid number here.  Just do a tiny bit of due diligence and you’ll see screamingly wide variations across majors, careers, and even within majors.  Lawyers, for example, have a certain average starting salary, to be sure, but a really big range of variation around it.  Frankly, I think putting a single number or even a distribution of incomes next to the name of a school would be nothing but phony quantification.  Either that, or have a really big footnote explaining the statistical significance of differences in means.

What other information would be helpful:

Rather than average loan amount and discount rate, what would be useful would be ratios.  Tell me (1) the list-price cost of attendance is X; (2) the distribution of discounts is …; and (3) the range of debt at graduation is …

Interesting that you don’t really have any room for general comments on doing this at all. You are going to end up diverting an incredible amount of resources toward a project that will in all likelihood produce at best some only moderately useful numbers with huge error bars on them. You will feed into the illusion that choice produces improvement (can you cite any actual evidence?). And you will do absolutely nothing that actually lowers or controls costs, increases graduation rates or lowers indebtedness. In short, not a drop of innovation here. Lots of window dressing, but very little that deserves the name policy.

I’m left wondering why this administration is so confident that “better than the alternative” will continue to be a reason people like me support you.

Does the scorecard cause you to think about things you might not have otherwise considered when choosing a college:

Not in the slightest.  It makes me think that whoever made it up has never actually been through the process.  It reads more like it is informed by a need to respond to conservative activists who are trying to make hay about higher education.  As an Obama supporter and contributor, I have to admit it’s really a little bit embarrassing to read this as part of the administration’s policy proposal.  If you can’t do better than this, I wonder how bad it would really be to have a Republican in the WH as well as in control of Congress.

How should this version be modified for 2-year colleges:

Look, it’s pretty obvious that there are two issues with two-year colleges: (1) to what degree do they lead to successful and timely completion of a four-year degree, OR (2) to what degree do they yield serious, usable job training.

So, a start would be to provide the rate at which students who seek admission to a four-year institution actually graduate from one.  But it’s really easy to get garbage data on this if you don’t set up the categories and the tracking smartly.

On the job side, again, there are going to be really serious data-quality problems that will as likely as not make the information worthless (mostly because you are going to see massive variation from program to program WITHIN schools).  That said, let’s start with a simple “how many people are working in a full-time, non-temporary job in or related to the field of their AA degree within X years?”

How should comparison groups for colleges be made? What are important things to consider in grouping institutions together that serve similar students:

Catch-22 here.  You are asking people to choose — if you separate it out too well, the really important thing gets lost: we want people to better understand what the different “rungs” represent.  One of the big crimes in higher education is that crappy institutions with minimal value added get to promise people a college degree.  And if you only compare within groups, each one gets to, in a sense, set its own standards.  What you need is a tool that more clearly lets people see the payoff differences between the tiers (to the degree there are any).

The most important thing that you’ll probably leave out is the effect of what students bring to college on college outcomes.  There is huge naivete in the college-assessment world that college output has only to do with what the college did.  Gigantic effects of origins are still at work in higher education.  Just be sure your new tool doesn’t simply do more to perpetuate the myth.

What search and comparison features would you like the online tool to have:

Something that shows schools in context and behind that groups in context (where does this school sit within its group and where does its group sit in the larger picture).

What should we call this tool? Would a different name better explain the service being provided:

One name would be “Republican Higher Education Policy as Adopted by Obama Administration.”

Sociology of Information in the New York Times

Published: September 3, 2011
Why all the sharp swings in the stock market? To Robert J. Shiller, it’s a case of investors trying to guess what other investors are thinking….

Seeking not what is the case, but what others probably think is, or even what others think that others think is…

Published: September 2, 2011
When Rick Perry, the governor of Texas and a presidential hopeful, debates his rivals, his assertions on climate change, Social Security and health care could put him to the test….

Once it’s out there, it’s out there…

Published: August 29, 2011
The antisecrecy organization WikiLeaks published nearly 134,000 diplomatic cables, including many that name confidential sources….

Developing story — a leak, a revelation, or just a mistake?  (See also previous posts on Wikileaks.)

Sociology of Information Gaffes

Much has been made of VP candidate Joe Biden’s capacity to put his foot in his mouth.  In this morning’s paper, reporter John Broder (“Hanging On to Biden’s Every Word”) reviews the issue and highlights a few recent events.  In one of them, Biden either did not know or forgot an important bit of information about someone:

In Columbia, Mo., this week, Mr. Biden urged a paraplegic state official to stand up to be recognized. “Chuck, stand up, let the people see you,” Mr. Biden shouted to State Senator Chuck Graham, before realizing, to his horror, that Mr. Graham uses a wheelchair.

“Oh, God love ya,” Mr. Biden said. “What am I talking about?”

How is this kind of gaffe different from those which amount to inelegant diction or impolitic revelations?  The “offense” here is certainly not anti-disability bigotry or insensitivity, and the sociologist of information should not get distracted by (either Republican or disability-rights) activists who might want to make hay of the event.  Rather, it’s a failure to be aware of, or keep track of, a relevant piece of information about someone.  As such, it is, before all else, relationally revealing: a basic norm of relationships is to keep track of relevant information about the other.  When one utters the phrase “my friend,” even if it is ritualized political speech, it triggers some informational expectations.  When these aren’t met, we find it jarring or even offensive (consider the simple case of getting a form letter that mis-addresses you as Mr. or Ms. — it quickly becomes even junkier mail than it already was).

Normally, politicians can synthesize relationships such as “my friend…” because their handlers can remind them of information-you-ought-to-know-about-the-other as they make their way toward a handshake. Getting such things right may not mean anything in an objective sense, but in terms of the relational work it does, it can certainly be consequential.

The take-away is that relationships, even those created artificially for the purposes of the moment, always come with informational expectations and obligations.