Facilitating Course Level R&D in an Educational Bureaucracy

My institution developed an online tool for proposing new courses or revising existing ones.  It is called the Course Approval and Revision Process (CARP).  The software presents eight screens on which the instructor enters information that goes into a course database.  The sections are (a rough sketch of the underlying record follows the list):

  1. Basic Course Info.
  2. Related Courses
  3. Instruction Schedule
  4. Grading & Restrictions
  5. Learning Goals
  6. Measures of Student Learning
  7. Assessment Plan
  8. Library Resources (Optional)
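
For concreteness, here is a minimal sketch of the record those eight screens might be filling in.  The field names, types, and defaults are my guesses, not CARP's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CourseProposal:
    """Hypothetical record behind CARP's eight screens (all field names are guesses)."""
    # 1. Basic Course Info
    title: str
    description: str
    # 2. Related Courses
    prerequisites: List[str] = field(default_factory=list)
    cross_listings: List[str] = field(default_factory=list)
    # 3. Instruction Schedule
    instruction_types: List[str] = field(default_factory=list)  # e.g. "lecture", "lab"
    frequency: str = "every fall"
    # 4. Grading & Restrictions
    grading_basis: str = "letter"
    enrollment_restrictions: Optional[str] = None
    # 5. Learning Goals
    learning_goals: List[str] = field(default_factory=list)
    # 6. Measures of Student Learning
    measurable_criteria: List[str] = field(default_factory=list)
    # 7. Assessment Plan
    assessment_plan: str = ""
    # 8. Library Resources (Optional)
    library_resources: Optional[str] = None
```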

Who or what is this operation for?  It is built around the needs of a committee (the Educational Policy Subcommittee of the Faculty Executive Committee) that reviews courses before they are set before the faculty for approval.  A few other entities have also found themselves “represented” by the software.  These include the learning assessment operation run by the Office of Institutional Research and the General Education committee.

Fundamental Design Flaw

This system is designed to serve the bureaucracy.  For it to function, instructors have to be coerced into transforming what they do when thinking about a course into a form that CARP can digest.
The right design would be a tool that instructors actually find useful and from which CARP could transparently extract what it needs.
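
A sketch of that inversion, using a hypothetical InstructorCourse structure and a made-up export_for_carp function (nothing here is CARP's real interface): the instructor works in whatever structure suits the course, and a thin adapter derives the committee's view from it.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InstructorCourse:
    """The course as the instructor actually works on it (hypothetical structure)."""
    title: str
    description: str
    weekly_plan: List[str] = field(default_factory=list)  # what the instructor cares about
    assignments: List[str] = field(default_factory=list)
    goals: List[str] = field(default_factory=list)

def export_for_carp(course: InstructorCourse) -> Dict[str, object]:
    """Pull out the slice the approval process needs from the working materials.

    The instructor never fills out a separate form; the committee's view is derived.
    """
    return {
        "title": course.title,
        "description": course.description,
        "learning_goals": course.goals,
        "measures_of_student_learning": course.assignments,
    }
```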

How It Works in Practice

Instructors need to specify things like course title and description, what sorts of instruction it will include, how often it will be offered, whether there are pre-requisites, and how it will be graded.  These constitute items 1 through 4 above.  Then, items 5, 6, and 7 are about assessment.  The program picks up learning goals from the institution and from the department’s records in the assessment database.  That’s actually pretty useful as it means each person who proposes a new course gets to see how lame the existing learning goals are.  Unfortunately, since the goal is to get the course approved, the result is usually just that the instructor picks a few.
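
A sketch of that goal pickup, with invented institutional and departmental goal lists standing in for the real assessment database:

```python
# Invented stand-ins for the two sources of goals in the assessment database.
INSTITUTIONAL_GOALS = ["Written communication", "Quantitative reasoning"]
DEPARTMENT_GOALS = {"SOC": ["Apply sociological theory", "Design a small study"]}

def goals_offered(department: str) -> list:
    """What the proposer sees: institutional goals plus the department's own."""
    return INSTITUTIONAL_GOALS + DEPARTMENT_GOALS.get(department, [])

def instructor_picks(offered: list, checked: list) -> list:
    """The usual outcome: the instructor just checks a few boxes."""
    return [offered[i] for i in checked]

offered = goals_offered("SOC")
print(instructor_picks(offered, [0, 2]))  # pick the first and third goals and move on
```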

Then, for each goal checked, the instructor has to specify “measurable criteria.”  Fortunately, the software does not yet have any AI components that can reject work in progress; one can move on by entering “TBA” as long as one repeats it enough times to hit the minimum text length requirement.
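
As far as one can tell from the outside, length is the only gate.  A sketch with an invented 200-character threshold shows why repetition works:

```python
MIN_CRITERIA_LENGTH = 200  # invented threshold; the real number is unknown

def criteria_accepted(text: str) -> bool:
    """The only check the form appears to apply: is the entry long enough?"""
    return len(text.strip()) >= MIN_CRITERIA_LENGTH

print(criteria_accepted("TBA"))        # False: too short
print(criteria_accepted("TBA " * 60))  # True: padding beats the gate
```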

Then you have to enter an “assessment plan.”  This means describing the assignments that will be used to measure student learning and then a plan for assessing that learning.  The difference between these is pretty vague, but they do have examples.  Here is what you learn about assignments:

The purpose of assessment is to improve learning. The focus of the learning is the established goals. The assessment plan is the way in which the course will be measured to determine the level of learning in the course relevant to the goals and criteria that are established for the course in order to improve the learning in the course. What evidence is collected (e.g., papers, exams, presentations, etc.)? How is the evidence evaluated against the goals for the course (e.g. rubrics, performance evaluations, etc.)? How will the evaluations be recorded (e.g., scores tabulated, written summaries, etc.)? On what basis will conclusions be drawn about the learning taking place? What will be done with the findings?

And here is an example of the assessment plan.  Or, no, we will not quote it here; it is so lame we can just describe it: final papers will be rated on a four-point scale for each learning goal.  The aggregate results will be discussed at a department meeting and plans will be made to improve.

That last step is what the assessment business calls “closing the loop.”
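
The whole plan amounts to a few lines of arithmetic.  A sketch with invented scores, assuming each final paper gets a 1-to-4 rating on each learning goal:

```python
from statistics import mean

# Invented ratings: each final paper scored 1-4 on each learning goal.
ratings = {
    "Written communication": [3, 4, 2, 3],
    "Apply sociological theory": [2, 2, 3, 2],
}

# The report that "closes the loop": one average per goal for the department meeting.
for goal, scores in ratings.items():
    print(f"{goal}: mean {mean(scores):.1f} on a 4-point scale (n={len(scores)})")
```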

But what if this process were actually designed, start to finish, with actual teachers in mind?  What would it look like?  That’s where we will go next.

Course Level R&D

Where do new (innovative, exciting) courses come from?  How do we encourage the continuous improvement of existing courses?  What do we even mean by that?
The improvement of an existing course might mean:
  • adjusting the topics covered by the course so as to be more up to date, more useful to students, better coordinated with other courses (reducing duplication or improving sequencing)
  • making it more attractive to students to increase enrollment
  • making it more fun for those who take it
  • making it more effective for those who take it
  • making it easier to teach
Some of the ways these things could be achieved (using the latest teaching techniques, using more technology) are sometimes mistaken for course improvements in and of themselves, but we are interested in them only as possible means to the above ends.
The usual sources of the kinds of changes we have in mind are imitation, inspiration, and mandate.  An instructor adopts what sounds like a good practice from a colleague, suddenly gets an idea for a cool thing to do in class, or is told by a chair or dean to start doing X.
As Bill McKay once intoned, “There’s got to be a better way!”
And as teachers by vocation we should be really interested in exploring these.  

An App Proposal

What if there were an application environment that would allow an instructor to organize her materials, present them to students, track their performance, perform experiments, engage in continuous course revision, and collaborate with others teaching the same course at her institution, a related course at another institution, this course’s pre-requisites, or the courses for which this course is a pre-requisite, AND have a current version of the course always on file with the central administration?

Features

  1. Standard course management system (CMS)
  2. The “Lecture Studio” – a platform for lecture/presentation preparation
    1. Standard lectures and slide shows but also video/audio, clickers, etc.
    2. Record a screen capture
    3. Record a video
    4. Record audio
    5. Slide library
    6. Graphics library
  3. Exercise/problem bank
    1. Templates for problem production (allows show/don’t show answer)
      1. Graphic production tools and data driven problems
    2. Shared problems
    3. Item testing facility
  4. Community with others teaching same or similar course
  5. Syllabus management system with revision and forking capability (GitHub for courses)
  6. Course module management: break courses into free-standing modules with clear pre-requisites and “antigens” that help determine where they will fit (see the sketch after this list).
  7. Integrated web bookmark organizing (social bookmarking by others teaching same course)
  8. Integrated custom textbook capability
  9. Integrated with library databases (contract with content providers to make journal articles available on course by course basis)
  10. Integration with your Zotero bibliography (and your associated PDF library), making it easy to include correct bibliographic information in course materials AND to assemble readers.
  11. Narrative evaluation diary
  12. Maybe throw in a good calendar function too, one that can track hours spent on teaching.
  13. Social media window?  YouTube channel?
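
To make item 6 concrete: a minimal sketch, with invented module names, in which each module declares its pre-requisites and a standard topological sort proposes an order in which the modules fit.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Invented modules for a hypothetical course, each mapped to its pre-requisite modules.
modules = {
    "descriptive statistics": set(),
    "probability basics": set(),
    "sampling": {"probability basics"},
    "regression": {"descriptive statistics", "probability basics"},
    "causal inference": {"regression", "sampling"},
}

# One valid teaching order that respects every declared pre-requisite.
order = list(TopologicalSorter(modules).static_order())
print(" -> ".join(order))
```
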
The first stage of development would be an orderly clustering of existing sources, along with organizational tools that help someone deploying them keep things coordinated and organized while minimizing transaction costs and duplication of effort.
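
A first-stage sketch of that clustering, with made-up resource entries: just a shared index that groups what already exists by topic so it can be found rather than rebuilt.

```python
from collections import defaultdict

# Made-up entries: (course, topic, kind, location) for materials that already exist.
existing_resources = [
    ("SOC 101", "sampling", "slides", "cms://soc101/week3.pdf"),
    ("SOC 101", "sampling", "reading", "zotero://item/ABC123"),
    ("SOC 290", "sampling", "problem set", "cms://soc290/ps2.pdf"),
]

index = defaultdict(list)
for course, topic, kind, location in existing_resources:
    index[topic].append((course, kind, location))

# Anyone teaching "sampling" sees everything already built, across courses.
for course, kind, location in index["sampling"]:
    print(f"{course}: {kind} at {location}")
```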

Author: Dan Ryan

I'm currently an Academic Program Director at MinervaProject.com. I've been a professor at University of Toronto, University of Southern California, and Mills College teaching things like human centered design, computational thinking, modeling for policy sciences, and social theory. I'm driven by the desire to figure out how to teach twice as many twice as well twice as easily.
