Research on Effectiveness (sic) of Lecturing

A paper (pdf below) published this week in the Proceedings of the National Academy of Sciences reports a meta-analysis of some 225 studies of the effectiveness of lecturing vs. various forms of “active learning” in STEM fields. The upshot is that active learning is associated with a 6% improvement in exam scores and that lecturing yields roughly a 50% increase in the likelihood of failing the course. Interestingly, the effect was stronger in classes of fewer than 50 students.

From the National Science Foundation
Press Release 14-064
Enough with the lecturing

May 12, 2014
A significantly greater number of students fail science, engineering and math courses that are taught lecture-style than fail in classes incorporating so-called active learning that expects them to participate in discussions and problem-solving beyond what they’ve memorized.

Active learning also improves exam performance, in some cases enough to change grades by half a letter or more, so a B-plus, for example, becomes an A-minus.

Those findings are from the largest and most comprehensive analysis ever published of studies comparing lecturing to active learning in undergraduate education, said Scott Freeman, a University of Washington principal lecturer in biology. He’s lead author of a paper in the Proceedings of the National Academy of Sciences the week of May 12.

Freeman and his co-authors based their findings on 225 studies of undergraduate education across all of the “STEM” areas: science, technology, engineering and mathematics. Many of the studies analyzed were funded by the National Science Foundation (NSF).

The researchers found that 55 percent more students fail lecture-based courses than classes with at least some active learning. Two previous studies looked only at subsets of the STEM areas, and none before had considered failure rates.

On average across all the studies, a little more than one-third of students in traditional lecture classes failed; that is, they either withdrew or got Fs or Ds, which generally means they were ineligible to take more advanced courses. On average with active learning, a little more than one-fifth of students failed.

“If you have a course with 100 students signed up, about 34 fail if they get lectured to but only 22 fail if they do active learning, according to our analysis,” Freeman said. “There are hundreds of thousands of students taking STEM courses in U.S. colleges every year, so we’re talking about tens of thousands of students who could stay in STEM majors instead of flunking out every year.”

This could go a long way toward meeting national calls like the one from the President’s Council of Advisors on Science and Technology (PCAST) saying the U.S. needs a million more STEM majors in the future, Freeman said.

“Freeman’s study reinforces the conclusion of PCAST that widespread implementation of these evidence-based practices will increase retention and persistence in STEM fields and further supports the findings of the National Research Council’s Discipline-based Education Research report, funded by NSF,” said Susan Singer, who leads NSF’s Division of Undergraduate Education.

It is encouraging news as NSF convenes an interagency team to implement the undergraduate goals of the Federal STEM Education 5-year Strategic Plan. One of the four goals is to “Identify and broaden implementation of evidence-based instructional practices and innovations to improve undergraduate learning and retention in STEM and develop national architecture to improve empirical understanding of how these changes relate to key student outcomes.”

Attempts by college faculty to use active learning, long popular in K-12 classrooms, started taking off in the mid-1990s, Freeman said, though lecturing still dominates.

“We’ve got to stop killing student performance and interest in science by lecturing and instead help them think like scientists,” he said.

For the paper, more than 640 studies comparing traditional lecturing with some kind of active learning were examined by Freeman, Mary Pat Wenderoth and their other co-authors, Sarah Eddy, Miles McDonough, Nnadozie Okoroafor and Hannah Jordt, all with the UW biology department, and Michelle Smith of the University of Maine, whose research was funded by NSF. The studies, conducted at four-year and community colleges, mainly in the U.S., appeared in STEM education journals, databases, dissertations and conference proceedings.

Some 225 of those studies met the standards for inclusion in the analysis, including assurances that the groups of students being compared were equally qualified and able, that the instructors or groups of instructors were the same, and that the exams given to measure performance were either exactly alike or drew their questions from the same pool each time.

The data were analyzed using meta-analysis, an approach long used in fields such as biomedicine to determine the effectiveness of a treatment based on studies with a variety of patient groups, providers and ways of administering the therapy or drugs.
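The core idea is to pool each study’s effect size into a single estimate, weighting more precise studies more heavily. The paper’s actual statistical model is more involved than this, but purely as an illustration of the inverse-variance pooling behind a fixed-effect meta-analysis, here is a minimal sketch; the effect sizes and standard errors below are made up, not taken from the paper.

```python
import math

# (effect size, standard error) for a few hypothetical studies --
# illustrative numbers only, not data from Freeman et al.
studies = [(0.30, 0.10), (0.55, 0.20), (0.45, 0.15), (0.20, 0.25)]

# Weight each study by the inverse of its variance (1 / se^2),
# so more precise studies count more toward the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect size: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```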

Regarding grade improvement, the findings showed that exam scores increased by an average of 6 percent, enough to raise a student’s grade by about half a letter, for example from a B+ to an A-.

If the failure rates of 34 percent for lecturing and 22 percent in classes with some active learning were applied to the 7 million U.S. undergraduates who say they want to pursue STEM majors, some 2.38 million students would fail lecture-style courses vs. 1.54 million with active learning. That’s 840,000 additional students failing under lecturing, a failure rate 55 percent higher than under active learning.
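For readers who want to verify that arithmetic, here is the back-of-the-envelope calculation; every input is a figure quoted above, not raw data from the study.

```python
# Back-of-the-envelope check of the press release's extrapolation.
undergrads = 7_000_000        # U.S. undergraduates intending STEM majors
fail_lecture = 0.34           # average failure rate under traditional lecturing
fail_active = 0.22           # average failure rate with some active learning

fails_lecture = undergrads * fail_lecture      # 2,380,000
fails_active = undergrads * fail_active        # 1,540,000
extra_failures = fails_lecture - fails_active  # 840,000

# The "55 percent more students fail" figure is the relative difference
# between the two failure rates: (0.34 - 0.22) / 0.22 ~= 0.545.
relative_increase = (fail_lecture - fail_active) / fail_active

print(f"{fails_lecture:,.0f} vs. {fails_active:,.0f} failures "
      f"({extra_failures:,.0f} more, a {relative_increase:.0%} higher rate)")
```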

“That 840,000 students is a large portion of the million additional STEM majors the president’s council called for,” Freeman said.

-NSF-


See Also
Bajak, Aleszu. “Lectures Aren’t Just Boring, They’re Ineffective, Too, Study Finds.” Science Insider.

Freeman, Scott, et al. “Active Learning Increases Student Performance in Science, Engineering, and Mathematics.” PNAS (2014). PDF: https://dl.dropboxusercontent.com/u/13897743/PNAS-2014-Freeman-1319030111.pdf

Single Sex Education, Science, and Belief

One’s scientific “spidey sense” should tingle when a report on research talks about “opponents of” and “proponents of,” but just the same, with grains and dashes of salt taken as needed, this article brings us some updates on the conversation about research into the advantages of single sex education (mostly, here, in the pre-college context). 

Those of us who teach in single-sex environments know it makes a difference, but we should admit that our convictions are convictions, not knowledge, and we should be open to conversations not just about whether it makes a difference but about how it might be making a difference.

“Why is there such disagreement over the benefits of single-sex education? Methodology is the key sticking point.”

“Last month a meta-analysis of 184 studies covering 1.6 million students from 21 countries indicated that any purported benefits to single-sex education over coeducation, when looking at well-designed, controlled studies, are nonexistent to minimal.”

“The methodology is challenging.”


Journalism and Research Again

Lots of Twitter and blog activity in response to the NYT article about the Chetty, Friedman, and Rockoff research paper on the effects of teachers on students’ lives.

No small amount of the commentary is about how, when journalists pick “interesting” bits out of research reports to construct a “story,” they often create big distortions in the social knowledge base.

So what can reporters do when trying to explain the significance of new research without getting trapped by a poorly supported sound bite?

Sherman Dorn has an excellent post on the case, “When reporters use (s)extrapolation as sound bites,” that ends with some advice:

  1. “If a claim could be removed from the paper without affecting the other parts, it is more likely to be a poorly-justified (s)implification/(s)extrapolation than something that connects tightly with the rest of the paper.”
  2. “If a claim is several orders of magnitude larger than the data used for the paper (e.g., taking data on a few schools or a district to make claims about state policy or lifetime income), don’t just reprint it. Give readers a way to understand the likelihood of that claim being unjustified (s)extrapolation.”
  3. “More generally, if a claim sounds like something from Freakonomics, hunt for a researcher who has a critical view before putting it in a story.”

See also Matthew Di Carlo on the Shanker Blog, Bruce Baker on School Finance 101, and Cedar Riener on Cedar’s Digest.