Closing the Loop in Practice: Does Assessment Get Assessment?

At a liberal arts college with which I am familiar, the administration recently distributed “syllabus guidelines” with 34 items for inclusion on course syllabi. Faculty leaders balked and asked for clarification: which of the 34 items were mandates (and from whom, on what authority) and which were someone’s “good idea”? The response was that guidelines are merely guidelines and that most of the items were indeed good ideas. Most were.

A subsequent examination of a sample of syllabi revealed that most did not contain all 34 items. More specifically, several items that are apparently important for accreditation purposes were not universally included.

The semester has begun. The syllabi are printed. The administration disseminated the guidelines, so its obligation is fulfilled. If faculty choose not to comply, that’s their decision. Overall, the situation is alarming because the school could appear non-compliant to its accreditors. And it’s the faculty’s fault. And folks are wondering how to fix it.

THIS COULD BE TURNED INTO SOMETHING POSITIVE, a shining example of assessment, closing the loop, and evidence-based change.

But first, WAIT A MINUTE! Do faculty get to say, “We told the students what to do; if they don’t comply and don’t learn, it’s not our fault”? Of course not. If students aren’t learning, faculty are doing something wrong. Lack of learning = feedback, and feedback must lead to change.

Here we have a case of an institution ignoring unambiguous feedback. The feedback is simple: promulgating a list of 34 things one should do on a syllabus does not produce uniform inclusion of the small handful of things that truly matter. That’s it; that’s what the evidence tells you. It doesn’t tell you faculty are bad; it tells you that this method of changing what syllabi look like was ineffective.

Never mind that any good teacher knows that you cannot motivate change with a list of 34 fixes.

The correct response? Close the loop: listen, learn, change the way syllabus guidelines are handled.

The unfortunate thing here is that the folks who know (faculty) brought this immediately to the attention of the folks in charge. Faculty noted that the list was too long, its provenance ambiguous, its authority unclear, its applicability variable, its tone insulting. A solution was suggested. All this was met with, basically, a brush-off: they’re just guidelines, not requirements; what’s the big deal?

And, it turns out, that is precisely how faculty understood them. No need for alarm. Some adjusted their syllabi to incorporate some of the suggestions. But apparently, the faculty didn’t all implement the few guidelines that really do matter (to someone). Arrrrrrrgh.

And now for a little forward-looking fantasy of what the outcome of this situation COULD be.

Since administrations and the assessment industry are apparently NOT really ready to adopt the underlying premise of assessment — pay attention to feedback and change accordingly — the faculty will.

From now on, only the faculty will disseminate syllabus guidelines. They will clearly distinguish among legally mandated content, accreditation-relevant items, college-specific customs and standards, and good pedagogical practice in general. They will invite all parties who become aware of syllabus-related mandates (or new good ideas) to communicate them to the faculty’s educational policy committee for consideration for inclusion in the next semester’s guidelines.

Those guidelines will explicitly articulate general goals (exactly which ones remains to be determined): syllabi should be interesting documents that are useful to students and that permit colleagues to get a sense of what a course is about and at what level it is taught. The guidelines will offer suggestions of particular features, boilerplate, and examples that might be useful, along with fully explained required items. They will include an array of sample syllabi that demonstrate the variety of forms meeting these standards. And all suggestions will be referenced where possible, with each requirement documented in terms of the authority that makes it an obligation.

For assessment purposes, the faculty will adapt* any externally supplied “rubrics” to their own intellectually and pedagogically defensible standards and practices, and encourage their colleagues to make use of these college-specific tools in developing their syllabi.

Educators really committed to the stated goals of assessment would see in this affair an opportunity for an achievement they could boast about. Those committed to one-directional, top-down, assessor-centered, non-interactive, deaf-to-feedback approaches will see in it only faculty reluctance to get with the program.

One lesson learned here is that institutional processes need adjustment. The faculty and administrative time, the emotional energy, and the frustration and mistrust that this little affair has engendered add up to a phenomenal waste of precious institutional resources. Alas, accountability for THIS is unlikely ever to be reckoned.

* For the assessment sticklers who think twiddling with a rubric undermines its comparability with external standards: worry not! The validity of these things is so much in doubt, and the scaling so arbitrary, that the improved fit to institutionally unique values and practices will far outweigh any disadvantages of departing from mindless standardization.

Author: Dan Ryan

I'm currently an Academic Program Director. I've been a professor at the University of Toronto, the University of Southern California, and Mills College, teaching things like human-centered design, computational thinking, modeling for the policy sciences, and social theory. I'm driven by the desire to figure out how to teach twice as many students twice as well, twice as easily.
