How to Get a Job With a Philosophy Degree: Career Services and the Liberal Arts

From the New York Times Magazine.

Ostensibly a profile of Andy Chan, Wake Forest’s VP for “Personal and Career Development,” this article suggests a conversation about the role of career services in the context of liberal arts education. On the one side is the idea that pairing vigorous career services with liberal arts has three results: 1) students DO major in liberal arts subjects, 2) they get jobs, 3) donors (especially parents) love it. On the other is the concern that “[i]t reduces an education to the marketplace.”  The comments on the article make for interesting reading.

How to Get a Job With a Philosophy Degree


Published: September 13, 2013

On a Friday in late August, parents of freshmen starting at Wake Forest University, a small, prestigious liberal-arts school in Winston-Salem, N.C., attended orientation sessions that coached them on how to separate, discouraged them from contacting their children’s professors and assured them about student safety. Finally, as their portion of orientation drew to a close, the parents joined their students in learning the school song and then were instructed to form a huge ring around the collective freshman class, in a show of support. 
For years, most liberal-arts schools seemed to put career-services offices “somewhere just below parking” as a matter of administrative priority, in the words of Wake Forest’s president, Nathan Hatch. But increasingly, even elite, decidedly non-career-oriented schools are starting to promote their career services during the freshman year, in response to fears about the economy, an ongoing discussion about college accountability and, in no small part, the concerns of parents, many of whom want to ensure a return on their exorbitant investment.

See Also

Website of the Office of Career and Personal Development at Wake Forest

Know (and be smarter than) Your Enemy

This post is not specifically about assessment, but it relates to the larger conversation of which assessment is but one component: the future of American higher education.  Thanks to a tweet from Cedar Reiner for turning me on to it.

You’ve possibly already seen this D. Levy opinion piece in the Washington Post from March (or certainly other examples of the genre), an example of what “they” are saying and reading (spoiler: it’s the standard “we pay them 100 grand and they only work 15 hours a week” tirade): “Do college professors work hard enough?”

It’s a tired bit of rhetoric, to be sure, but sung over and over like church hymns, it comes to define reality for a certain set.  That needs to be countered by smart talk widely repeated; smirking won’t do.  Here’s one reasoned rebuttal by Swarthmore’s Tim Burke that casts the problem in terms of the larger arc of private capture of value through de-professionalization: “The Last Enclosures.”

The real challenge here is that most representatives of “the other side” (e.g., administrators, trustees, legislators) have not actually thought things through carefully but have bought into a well-crafted rhetoric and catchy simplifications, while “our side” takes a fundamentally conservative approach (same as it ever was) and puts its fingers in its ears and goes “la la la la, I cannot hear you….”  Higher education has a broken economic model, but too many of us are content to just demonize those with really bad ideas about how to fix it.  I agree with most of Burke’s critique, but I think we need to move beyond critique.  There is a romantic valor in identifying the corruption in the current wave of education reform, but it won’t be stopped by mere resistance.  Bad new ideas need to be defeated by good new ideas (as can be found in some of Burke’s other posts).

What If Administrator Pay Were Tied to Student Learning Outcomes?

The recent negotiation in Chicago (“Performance Pay for College Faculty”) of a tie between student performance and college instructor pay brought this accolade from an administrator: it gets faculty “to take a financial stake in student success.”

It got me wondering why we don’t hear more about directly tying administrator pay to student success.  If we did, I’ll bet the students would have a lot more success.  At least, that’s what the data released to the public (and Board of Trustees) would show.  There’d be far less of a crisis in higher education.

Thought experiment. What would happen if we were to tie administrator pay to student success, much the way corporate CEOs have their pay packages designed, especially for administrators of large multi-campus systems?

Prediction 1.  The immediate response to the very proposal would be “oh, no, you can’t do that because we do not have the same kind of authority to hire and fire and reward and punish that a corporate CEO has.”  But think about this…

  1. Private sector management has a lot less flexibility than those looking in from the outside think.  Almost all of the organizational impediments to simple, rational management are endemic to all organizations.
  2. Leadership is not primarily about picking the members of your team. It’s about what you manage to get the team you have to accomplish.
  3. Educational administrators do not start the job ignorant of how these educational institutions work. It is tremendously disingenuous to say “if only I had a different set of tools.”  People who do not think they can manage with the tools available and within the culture as it exists should not take these jobs in the first place.
  4. This, it turns out, is what some people mean when they say that schools should be run like a business. The first impulse of unsuccessful leaders is to blame the led. The second one is to engage in organizational sector envy: “if I had the tools they have over in X industry….”  What this ignores is the obvious evidence that others DO succeed in your industry with your tools.  And plenty of leaders “over there” fail too.  It is not the tools’ fault.

Prediction 2.  Learning would be redefined in terms of things produced by inputs administrators had more control over.  And resources would flow in that direction too.

Prediction 3. Administrators would get panicky when they looked at the rubrics in the assessment plans they exhort faculty to participate in and that are included in reports they have signed off on for accreditation agencies.  They’d suddenly start hearing the critics who raise questions about methodologies.  They would start to demand that smart ideas should drive the process and that computer systems should accommodate good ideas rather than being a reason for implementing bad ones.

Prediction 4. In some cases it would motivate individuals to start really thinking “will this promote real learning for students?” each time they make a decision.  And they’d look carefully at all that assessment data they’ve had the faculty produce and mutter, “damned if I know.”

Prediction 5. Someone will argue that the question is moot because administrators are already held responsible for institutional learning outcomes.   Someone else will say “Plus ça change, plus c’est la même chose.”

Better Teaching Through a Financial Stake in the Outcome

In an Inside Higher Ed article this week (“Performance Pay for College Faculty”) K Basu and P Fain describe how the new contract signed between City Colleges of Chicago and a union representing 459 adult education instructors links pay raises to student outcomes.

Administrators lauded the move in part because it gets faculty “to take a financial stake in student success.” The details of the plan are not clear from the article, but the basic framework is to use student testing to determine annual bonus pay for groups of instructors working in various areas. That is, in this particular plan it does not sound like the incentive pay is at the level of individual instructors.

Still, should the rest of higher education be paying attention? Adult education at CCC is, after all, a markedly different beast from full-time liberal arts institutions, four-year state schools, or research universities. One reason we should is that it’s precisely this tendency to elide institutional differences that is one of the hallmarks of the style of thought endemic among some higher education “reformers.” Those who think it’s a good idea for adult education institutions are likely to champion it elsewhere.

But most germane to the subject of this blog is the question of what data would inform such pay-for-performance decisions when they are proposed for other parts of American higher education. Likely it will be something that grows out of what we now know as learning assessment. I ask the reader: given what you have seen of the assessment of learning outcomes at your college, how do you feel about having decisions about your paycheck based upon it?

But, your opinion aside, there are several fundamental questions here. One is whether you become a more effective teacher by having a financial stake in the outcome. The industry where this incentive logic has been most extensively deployed is probably the financial services industry, especially investment banking.  How has that worked for society?  It would be easy to cook up scary stories of how this could distort the education process, but that’s not even necessary to debunk the idea.  The amounts at play in the teacher pay realm are so small that one can barely imagine even a nudge effect on how people approach their work.

But what about the data?  Consider the prospect of assessment as we know it as input to ANY decision process, let alone personnel decisions.  Anyone who has spent any time at all looking at how assessment is implemented knows that the error bars on any datum emerging from it dwarf the underlying measurement. The conceptual framework is thrown together on the basis of a dubious theoretical model of teaching and learning and a forced collaboration between instructors and assessment professionals.  The process sacrifices methodological rigor in the name of pragmatism, a culture of presentation (vis-à-vis accreditation agencies), and the design limitations of software systems, a tail that wags the dog of pedagogy and common sense.  At every step of the process information is lost and distorted. But it seems that the more Byzantine the process, the more its champions think they have scientific fact as its product.

It could well be that the arrangement agreed to in Chicago will lead to instructors talking to one another about teaching, coordinating their classroom practices, and all sorts of other things that might improve the achievements of their students.  But it will likely be a rather indirect effect via the social organization of teachers (if I understood the article, the good thing about the Chicago plan is that it rewards entire categories of instructors for the aggregate improvement).  To sell it at the level of individual incentive is silly and misleading.  And, if we think more broadly about higher education, the notion that you can take the kinds of discrimination you get from extremely fuzzy data and multiply it by tiny amounts of money to produce positive change at the level of the individual instructor is probably best called bad management 101.
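The “error bars dwarf the measurement” worry, combined with the tiny sums at stake, can be put in back-of-envelope terms. The sketch below, in Python, uses entirely invented numbers (the rubric spread, the section size, the claimed gain); it only illustrates the arithmetic of comparing a standard error to the sort of effect a bonus formula would have to detect.

```python
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean of n scores with per-score spread sd."""
    return sd / math.sqrt(n)

# Invented illustration: a rubric scored 1-4, with a per-student spread
# of about 0.8 points, averaged over a section of 25 students.
se = standard_error(sd=0.8, n=25)

# A year-over-year "improvement" of 0.05 rubric points -- the sort of
# margin an incentive-pay formula might end up rewarding.
claimed_gain = 0.05

print(f"standard error of section mean: {se:.2f}")   # 0.16
print(f"claimed gain: {claimed_gain:.2f}")           # 0.05
print(f"noise exceeds signal: {se > claimed_gain}")  # True
```

On these made-up (but not implausible) numbers, the statistical noise in a section average is several times larger than the gain being rewarded, which is the arithmetic behind the “bad management 101” verdict below.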

Peer to Peer Education: Can Students Teach One Another?

One of society’s major “information institutions” is, of course, the university (and colleges, too). In these institutions information is generated, classified, evaluated, sanctioned, organized, and systematically disseminated.

There are lots of interesting experiments going on in and around the university connected with its various fundamental information functions (e.g., wikibooks, OpenCourseWare, and, of course, all manner of distance learning). Each of these experiments plays with changing how we think about one piece of the education equation.

I’ve just come across one that takes the university itself out of the picture: The Peer 2 Peer University (P2PU). P2PU is structured as an online community of open study groups whose members engage one another in short university-level courses. Their model is to connect open educational resources and small groups of motivated learners. P2PU supports the endeavor with a course infrastructure that facilitates course design by an “organizer,” interaction among participants, access to materials, and methods for recognition of students’ and tutors’ work. Initially focused on more technical skills, the organization seems very committed to making sure that P2PU is an ongoing, distributed research project on the topic of new ways to organize learning.

The video below is a bit amateurish on the production side, but gives some idea of the why and the how behind P2PU. The project also maintains a wiki that gives you a sense of how they do what they do.

Peer 2 Peer University 2010 from P2P University on Vimeo.

Information and Educational Assessment I

In a letter to the NYT about an article on radiation overdoses, George Lantos writes:

My stroke neurologists and I have decided that if treatment does not yet depend on the results, these tests should not be done outside the context of a clinical trial, no matter how beautiful and informative the images are. At our center, we have therefore not jumped on the bandwagon of routine CT perfusion tests in the setting of acute stroke, possibly sparing our patients the complications mentioned.

This raises an important, if nearly banal, point: if you don’t have an action decision that depends on a piece of information, don’t spend resources (or run risks) to obtain the information.  The exception, as he suggests, is when you are doing basic science of some sort.

Now consider, for a moment, the practice of “assessment” in contemporary higher education.  An industry has built up around the idea of measuring educational outcomes in which a phenomenal amount of energy (and grief) is invested to produce information that is (1) of dubious validity and (2) does not, in general, have a well articulated relationship to decisions.

Now the folks who work in the assessment industry are all about “evidence based change,” but they naively expect that they can, a priori, figure out what information will be useful for this purpose.

They fetishize the idea of “closing the loop” — bringing assessment information to bear on curriculum decisions and practices — but they confuse the means and the ends.  To show that we are really doing assessment we have to find a decision that can be based on the information that has been collected.  Not quite the “garbage can model of decision-making,” but close.

Perhaps a better approach (and one that would demonstrate an appreciation of basic critical thinking skills) to improving higher education would be to START by identifying opportunities for making decisions about how things are done, THEN figuring out what information would allow us to make the right decision, and THEN how we would best collect said information.  Such an approach would involve actually understanding both the educational process and the way educational organizations work.  My impression is that it is precisely a lack of understanding of, and interest in, these things on the part of the assessment crowd that leads them to get the whole thing backwards.  Only time will tell whether these scientists manqués manage to mediocritize higher education or not.

The Liberal Arts and Time’s Arrow

“Liberal Arts Education” as a concept is unfortunately dominated by its own legacy.

When most of us think about the liberal arts our thoughts tend to look backwards.  Some of us fondly recall our own liberal arts educations and the value we perceive it to have had for us.  Or we think about what we’ve been teaching for years and years.  Or we hearken back to the invention of the modern liberal arts in the late nineteenth century or to the classical liberal arts of the middle ages.

If you listen carefully, you can almost hear us thinking, “If it was good enough then, it’s good enough now…”

But it’s easy to miss something important.  To understand what a liberal arts education is we should not simply look at the lists in the course catalogs of bygone eras.  Instead we should look functionally at how those lists fit into their time.  Generically, the liberal arts are a collection of intellectual disciplines appropriate to the training of generalists in their time: subjects whose mastery provides a foundation, a launching platform, for the leaders of an age.

I think that, often, both those who feel an imperative to discard the liberal arts and those who feel the imperative to preserve them come at the question with the wrong idea.

A higher education system that well serves the society that supports it will have a diverse array of parts. Some parts need to be tuned to producing experts at delivering current practice in the professions. Some parts need to be highly specialized, training people to be experts at producing the things of today and solving immediate problems to create the things of tomorrow. And some parts need to prepare people to solve the problems we don’t yet know that we have. And we need to train people who can move back and forth among the various experts, who can consolidate their work into emergent solutions for emerging problems. And we need people who have broad capacities to examine and understand the very system in which all the above perform. And some parts of it train people broadly prior to their becoming one of those specialists so that the narrowness of their training does not become a liability.

The mistake that the discarders make is to see the importance of the practically trained and the expertly trained as telling us that we do not need the more generally trained.

The mistake of the defenders is to think that yesterday will always tell us how to train those generalists of tomorrow.

Our challenge as educators is to look forward to figure out what the liberal arts for the 21st century should look like. It’s not an easy task, to be sure. But the first right step to take is to be sure we are facing in the correct direction.

Let’s Take It Seriously

Let’s take assessment and accountability seriously AS AN INSTITUTION. There is a tendency to equate assessment with measuring what professors do to/with students. The buzz word is “accountability” and there’s this unspoken assumption that the locus of lack of accountability in higher education is the faculty. I think that assumption is wrong.

We should broaden the concept of assessment to the whole institution. Course instructors get feedback on an almost daily basis — students do or don’t show up for class; instructors face 20 to 100 faces projecting boredom or engagement several times per week; students write papers and exams that speak volumes about whether they are learning anything; advisees tell faculty about how good their colleagues are. By contrast, the rest of the institution has little, if any, opportunity for feedback. It’s important: one substandard administrative act can affect the entire faculty, so even small things can have a big negative effect on learning outcomes.

In the name of accountability throughout the institution I propose something simple, but concrete: every form or memo should have a “feedback button” on it. Clicking on this button will allow “users” anonymously to offer suggestions or criticism. These should be recorded in a blog format — that is, they accumulate and are open to view. At the end of each year, the accountable officer would be required in her or his annual report to tally these comments and respond to them, indicating what was learned, what changes have been made or why changes were not made.

The important component of this is that the comments are PUBLIC so that constituents can see what others are saying. Each “user” can see whether her ideas are commonly held or idiosyncratic and the community can know what kind of feedback an office is receiving and judge its responsiveness accordingly.

Why anonymous? This is feedback, not evaluation. This information cannot be used to penalize or injure anyone. The office has the opportunity to respond either immediately or in an annual report. Crank comments will be weeded out by sheer numbers and by users who will contradict them. In the other direction, it is clear that honest feedback can be compromised by concerns about retribution, formal or informal. Further analysis along these lines would only strengthen the case that comments should be (at least optionally) anonymous.

We should note that we already do all of this in principle — many offices around campus have some version of a “suggestion box.” What is missing is (1) systematic and consistent implementation so that users get accustomed to the process of providing feedback, and (2) a protocol for using the feedback to enrich the community knowledge pool and to build it into an actual accountability structure.
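The protocol described above (anonymous submission, public blog-style accumulation, an annual tally and response) implies a very small data model. Here is one hypothetical sketch in Python; every name in it (FeedbackLog, annual_tally, the office and form names) is invented for illustration, not a description of any existing system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Comment:
    form_id: str   # which form or memo carries the "feedback button"
    text: str      # anonymous: deliberately no author field at all
    posted: date

@dataclass
class FeedbackLog:
    """Public, blog-style accumulation of comments for one office."""
    office: str
    comments: list[Comment] = field(default_factory=list)

    def submit(self, form_id: str, text: str, when: date) -> None:
        # Comments accumulate and remain open to view.
        self.comments.append(Comment(form_id, text, when))

    def annual_tally(self, year: int) -> dict[str, int]:
        """Counts per form, for the accountable officer's annual report."""
        tally: dict[str, int] = {}
        for c in self.comments:
            if c.posted.year == year:
                tally[c.form_id] = tally.get(c.form_id, 0) + 1
        return tally

log = FeedbackLog("Registrar")
log.submit("add-drop-form", "Deadline field is ambiguous", date(2013, 9, 3))
log.submit("add-drop-form", "Why is this not online?", date(2013, 10, 1))
log.submit("transcript-request", "Took three weeks", date(2013, 11, 5))
print(log.annual_tally(2013))  # {'add-drop-form': 2, 'transcript-request': 1}
```

The point of the sketch is how little machinery the proposal actually requires; the hard part is the protocol, not the software.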

The last paragraph makes the connection to a sociology of information. Information asymmetries (as when the recipient knows what the aggregate opinion is, but the “public” does not) and the atomization of polities (this is what happens when opinion collection is done in a way that minimizes interactions among the opinion holders — cf. Walmart not wanting employees to discuss working conditions — preventing the formation of open, collective knowledge*) are a genuine obstacle to organizational improvement. Many, many private organizations have learned this; it’s not entirely surprising that colleges and universities are the last to get on board.

* as opposed, say, to things that might be called “open secrets”

Validity and Such

An AACU blogpost referred me to the National Institute for Learning Outcomes Assessment website, which referred me to an ETS website about the Measure of Academic Proficiency and Progress (MAPP), where I would be able to read an article titled “Validity of the Measure of Academic Proficiency and Progress.”

And here’s the upshot of that article: The MAPP is basically the same as the test it replaced and research on that test showed

…that the higher scores of juniors and seniors could be explained almost entirely by their completion of more of the core curriculum, and that completion of advanced courses beyond the core curriculum had relatively little impact on Academic Profile scores. An earlier study (ETS, 1990) showed that Academic Profile scores increased as grade point average, class level and amount of core curriculum completed increased.

In other words, the test is a good measure of whether students took more GenEd courses. And we suppose that in GenEd courses students are acquiring GenEd skills. And so these tests are measures of the GenEd skills we want students to learn.

A tad circular? What exactly is the information value added by this test?
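The circularity can be made concrete with a toy computation. In the sketch below (Python, all numbers invented), the test score is constructed to be a pure function of GenEd credits completed; “validating” the test by correlating scores with credits then returns a perfect correlation by construction, and tells us nothing about skills.

```python
# Toy illustration of the circularity: if the score is driven almost
# entirely by core-curriculum credits completed, then validating the
# test against credits completed adds no information.
credits = [0, 10, 20, 30, 40]                # invented credit counts
scores  = [400 + 3 * c for c in credits]     # score as a function of credits

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson_r(credits, scores))  # 1.0: a perfect "validation"
```

A real dataset would of course have noise, but the logic is the same: a high correlation between score and credits completed is exactly what you would see whether or not the test measures any skill at all.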