Another Angle on Hybrid

What do we think of when we think of hybrid learning and teaching? Some face-to-face teaching plus some online teaching? Some synchronous plus some asynchronous? Flipping the classroom? Drosos and Guo (2021)* offer another perspective on a kind of teaching that belongs in the category. They show how what streamers do when they teach can be seen as a form of cognitive apprenticeship. The authors do not explicitly talk about “hybrid,” but the practices they identify – real-time problem solving, improvised examples, insightful tangents, and high-level advice – are relevant to hybrid for two reasons. First, they are the kinds of things often cited as reasons why remote or asynchronous instruction is necessarily inferior (the claim being that they are absent there). Second, they pose useful challenges: how can these virtues be built into various hybrid scenarios?


*2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)

Is it real? Can we win? Is it worth doing?

Originally posted 16 June 2017

I saw the best minds of my institution distracted by madness, meeting endlessly in vain, poring themselves over balance sheets all day looking for the do-able fix…

I’ve watched and listened for the last several years as my institution thrashed and muttered things about reinventing itself and formulating a new business model and becoming financially sustainable.

Most disturbing has been what strikes me as an almost fanatical commitment to not running the college like a business, out of a fear of “running the college like a business.”

What I mean by that is that people who are rightfully concerned about those who would turn education into a financialized commodity, an institution that serves corporate overlords, and all the rest have chased away ordinary business and organizational management sense, driving higher education toward amateurish and disastrous practices.

I’ve spent the last few years looking for translatable lessons that might be useful for those of us who actually understand education to build our institutions into robust, successful, and sustainable enterprises so that we can hold off the small-minded interests that would be delighted to take advantage of our incompetence.


In a 2007 Harvard Business Review article, George Day describes a powerful method for assessing risk and reward tradeoffs in innovation. I think Day’s ideas can be adapted to the situation of a small college like ours, and I think a lot of the decisions and discussions that have occurred over the last several years, and especially in the last few months, demonstrate the pathology of an organization moving in precisely the opposite direction from these ideas.

Day is writing about how an organization should evaluate innovation opportunities. He advocates for any potential innovation to be evaluated by asking “Is it real? Can we win? Is it worth doing?”

These questions can be adapted to the evaluation of ongoing operations, innovations, and remedial moves taken in response to emergency conditions.

Let’s start with “is it real?”

The question refers both to products and markets. In our case the two are closely related but should be assessed separately.

When we say “product” we mainly mean the programs we offer – majors, degrees, courses, credentials. When we say “market” we mean both the market of people who want to buy our product – enroll – and a post-graduation market for people with the credentials we offer.

Is the product real? In industry this means “does the technology to build this thing actually exist?” In higher education we have to ask whether there are courses, or whether courses could be designed, that would add up to the credential or program we are contemplating. We have to ask whether this is the kind of thing one can do in four years, or two years, or alongside the rest of one’s education. Is it coherent? Legible?

Is the market real? Is there actually a desire or need for what we are thinking of doing? Can the student for whom it is perfect actually purchase it? Is the market big enough for us to get this off the ground? Will the potential “customer” actually buy it at the price we will need to charge?

Can we win?

Again, the question has two sides: the “product” and the “company.”

Can the product win? Is this thing we want to offer better than the alternatives? How established are the alternatives?  If the people who would want our thing are currently using something else, why would they switch? Can we survive expected responses from the competition?

Can the organization win?  Do we have superior faculty and staff who can work on this? Do we have the necessary experience and skills to do this well at the necessary scale and over the necessary time frame?  Are there effective internal champions to create and sustain interest and enthusiasm?  Do we really understand the market and have the capacity to listen to its signals?

Is it worth doing?

Is this move likely to be profitable? “Profitable” is a simple idea – do returns exceed expenses – but we need to think carefully about what goes into the calculation. How much capital will we have to invest, and when? What marketing expenditures are necessary to give the idea a chance? What future development and revision will starting this commit us to? What are we doing now that we will do less of as we divert personnel and resources to this new endeavor?
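To make the “do returns exceed expenses” question concrete, here is a minimal back-of-the-envelope sketch. Everything in it is hypothetical: the program, the dollar figures, the five-year horizon, and the discount rate are assumptions for illustration, not data about any real initiative.

```python
# Hypothetical check: do discounted returns exceed discounted expenses
# for a proposed new program? All numbers below are invented.

DISCOUNT_RATE = 0.05  # assumed annual discount rate (opportunity cost of capital)

# Per-year projections for an imagined program (index 0 = launch year).
# "expenses" bundles startup capital, marketing, and ongoing staffing;
# "returns" is projected net tuition revenue attributable to the program.
years = [
    {"expenses": 400_000, "returns": 0},        # build-out and marketing, no students yet
    {"expenses": 250_000, "returns": 180_000},  # first small cohort
    {"expenses": 260_000, "returns": 320_000},
    {"expenses": 270_000, "returns": 420_000},
    {"expenses": 275_000, "returns": 450_000},
]

def net_present_value(projections, rate):
    """Sum of (returns - expenses) for each year, discounted back to year 0."""
    return sum(
        (year["returns"] - year["expenses"]) / (1 + rate) ** t
        for t, year in enumerate(projections)
    )

npv = net_present_value(years, DISCOUNT_RATE)
print(f"Net present value over {len(years)} years: ${npv:,.0f}")
print("Worth doing on financial grounds?", "maybe" if npv > 0 else "probably not")
```

The point of the sketch is not the particular numbers but what the question forces into view: timing of investment, ongoing costs, and the opportunity cost of the people and money diverted from whatever we would otherwise be doing.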

Does this project make strategic sense? Does this new program or change fit with our organizational growth strategy? Or is it taking us off in a direction that will distract us from what we are trying to do?

An extremely important part of thinking about whether something makes strategic sense is whether the project will generate a platform on which other things can be built. Does the initiative allow us to develop policies and practices that can be used for other things? Are we building up skills and experiences among our faculty and staff that we can use to build and enhance other programs?

The “is-it-real-can-we-win-is-it-worth-doing” filter is not a magic bullet, but it is an example of some adaptable wisdom that could make a gigantic difference in the ways faculty and administrators think about change and renewal.

Journalism and Research Again

Lots of Twitter and blog activity in response to the NYT article about the Chetty, Friedman, and Rockoff research paper on the effects of teachers on students’ lives.

No small amount of the commentary is about how, when journalists pick “interesting” bits out of research reports to construct a “story,” they often create big distortions in the social knowledge base.

So what can reporters do when trying to explain the significance of new research, without getting trapped by a poorly-supported sound bite?

Sherman Dorn has an excellent post on the case, “When reporters use (s)extrapolation as sound bites,” that ends with some advice:

  1. “If a claim could be removed from the paper without affecting the other parts, it is more likely to be a poorly-justified (s)implification/(s)extrapolation than something that connects tightly with the rest of the paper.”
  2. “If a claim is several orders of magnitude larger than the data used for the paper (e.g., taking data on a few schools or a district to make claims about state policy or lifetime income), don’t just reprint it. Give readers a way to understand the likelihood of that claim being unjustified (s)extrapolation.”
  3. “More generally, if a claim sounds like something from Freakonomics, hunt for a researcher who has a critical view before putting it in a story.”

See also Matthew Di Carlo on ShankerBlog, Bruce Baker on SchoolFinance 101, and Cedar Reiner on Cedar’s Digest.

On Getting Scooped: TEOTUAWKI*

They closed the comments on Mark Taylor’s op-ed “End the University as We Know It” before I could get one in, so I’m posting it here. I often tell my students it’s a good thing when you find out that your favorite idea has already been written down by some famous philosopher, since that means you were on the right track. I am telling myself today that seeing something you have been thinking about show up on the NYT op-ed page is similar.

Like many of the other comments, mine amounts to a mixed review. Taylor, to my mind, conflates several issues relevant to the state of today’s universities and thereby reduces the punch of the piece. I’m with him on the university as an enterprise in which it is far too easy to continue business as usual. But he may be overgeneralizing from his experience in a religion department in some of his other criticisms.

The point with which I resonate most strongly is that our course/major offerings are too department/discipline-bound and insufficiently dynamic and responsive to the needs of the world around us. In this blog and elsewhere I’ve argued for uncoupling undergraduate majors and departments as a solution. His suggestion that departments be abolished and programs reformulated around topics and then “sunsetted” every seven years is provocative and worth thinking about. I think, though, that it’s not quite the right approach. Why seven years? How worthwhile is it to rebuild the administrative apparatus on a regular basis?

I think something more dynamic and dialectical is called for: maybe keep departments but abolish majors (or maybe leave the majors but create a set of collaborative and cross-disciplinary majors on the fly). We faculty should have to reformulate “majors.” We might, for example, cook up a program in sustainability or in institutional disruption or innovation. We’d have to really think about what portfolio of the courses we currently offer could be mixed with courses we ought to be offering to put together a curriculum with both currency and staying power. We’d force ourselves to get beyond the usual departmentally self-serving horse-trading (I’ll require one of your courses if you require one of mine). We’d stop thinking of requirements in terms of the education we wish we could have had 20 years ago and start thinking about what insights are likely to be the building blocks of how people will be thinking and solving problems 20 years from now.

Most importantly, we would have to “sell” these programs to undergraduates on the basis of evidence and argument about why a particular curriculum constitutes good preparation for the world to come. And we’d develop the majors knowing that this would be our ongoing task. We would have to be not only inventors of new majors but innovators who learned from what we were doing, making the operation a going concern within a dynamic structure not protected from change by decades of disciplinary convention. In short, we’d build curricular R&D into the very fabric of the university, bringing to higher education a function it has really never had.

*A variation on one of my favorite acronyms “TEOTWAWKI” = the end of the world as we know it.