"But even if they are not valid, they do tell you something…."

Remember, “validity” means “they measure what you think they measure.” “Data-driven” can also mean driven right off the side of the road.

From Inside Higher Ed

Zero Correlation Between Evaluations and Learning

New study adds to evidence that student reviews of professors have limited validity.
September 21, 2016, by Colleen Flaherty


A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. (Here’s one addressing gender.) Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.

What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”

“Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,” reads the study, in press with Studies in Educational Evaluation. “Our up-to-date meta-analysis of all multi-section studies revealed no significant correlations between [evaluation] ratings and learning.”
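The mechanism the study points to, small samples plus a publication filter, is easy to see in a toy simulation. The sketch below is not the study’s analysis: it assumes a true SET–learning correlation of exactly zero, invented study sizes (10 sections vs. 100), and a deliberately crude rule that only correlations above 0.3 “count” as publishable.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_studies(n_sections, n_studies, true_r=0.0):
    """Each simulated multi-section study correlates mean SET rating with
    mean learning (say, scores on a common final) across its sections."""
    observed = np.empty(n_studies)
    for i in range(n_studies):
        ratings = rng.standard_normal(n_sections)
        # Generate "learning" with the stipulated true correlation (zero here).
        learning = (true_r * ratings
                    + np.sqrt(1.0 - true_r**2) * rng.standard_normal(n_sections))
        observed[i] = np.corrcoef(ratings, learning)[0, 1]
    return observed

# Hypothetical study sizes: "small" = 10 sections, "large" = 100 sections.
small = simulate_studies(n_sections=10, n_studies=5000)
large = simulate_studies(n_sections=100, n_studies=5000)

# Crude stand-in for publication bias: only studies reporting a
# "noteworthy" positive correlation (r > 0.3) get written up.
published_small = small[small > 0.3]
published_large = large[large > 0.3]

print(f"small studies, all runs:  mean r = {small.mean():+.3f}")
print(f"small studies, published: mean r = {published_small.mean():+.3f} "
      f"({len(published_small)} of {len(small)})")
print(f"large studies, all runs:  mean r = {large.mean():+.3f}")
print(f"large studies, published: {len(published_large)} of {len(large)} clear the filter")
```

The particular numbers are all made up; the point is that sampling noise alone, run through a publication filter, yields a small-study literature full of “large and moderate” correlations around a true effect of nothing.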

Validity and Such

An AAC&U blog post referred me to the National Institute for Learning Outcomes Assessment website, which referred me to an ETS website about the Measure of Academic Proficiency and Progress (MAPP), where I could read an article titled “Validity of the Measure of Academic Proficiency and Progress.”

And here’s the upshot of that article: The MAPP is basically the same as the test it replaced, the Academic Profile, and research on that test showed

…that the higher scores of juniors and seniors could be explained almost entirely by their completion of more of the core curriculum, and that completion of advanced courses beyond the core curriculum had relatively little impact on Academic Profile scores. An earlier study (ETS, 1990) showed that Academic Profile scores increased as grade point average, class level and amount of core curriculum completed increased.

In other words, the test is a good measure of whether students took more GenEd courses. And we suppose that in GenEd courses students are acquiring GenEd skills. And so these tests are measures of the GenEd skills we want students to learn.
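To put a toy number on that circularity, here is a minimal sketch under the quoted assumption that scores are explained almost entirely by core-curriculum completion. Every quantity in it is invented (the 420-point baseline, 5 points per course, the noise term); it illustrates only that, under that assumption, the registrar’s course count already carries nearly all the information in the score.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students = 2000

# Hypothetical transcripts: core/GenEd courses completed, 0 through 12.
core_courses = rng.integers(0, 13, size=n_students)

# Stipulate, per the ETS finding, that the score is "explained almost
# entirely" by core-curriculum completion plus a little noise.
score = 420 + 5.0 * core_courses + rng.normal(0, 4, size=n_students)

r = np.corrcoef(core_courses, score)[0, 1]
print(f"correlation(score, courses completed) = {r:.2f}")
print(f"score variance explained by course count alone = {r**2:.0%}")
# At roughly 95%, the test mostly re-reports what is already sitting
# in the registrar's database.
```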

A tad circular? What exactly is the information value added by this test?