Two stories in NYT today about data crunching. One on mapping neuron connections in mice to understand how brains work. The other on using statistics to detect possible cheating on standardized tests.
The brain research takes thin slices of brain tissue and maps connections between neurons in a really BIG (a petabyte per cubic millimeter) data-mining operation. The research is in its infancy, but eventually one can imagine having a full circuit diagram of a brain. Interesting implications, possibly not grasped by either the researchers or the article's author:
Neuroscientists say that a connectome could give them myriad insights about the brain’s function and prove particularly useful in the exploration of mental illness. For the first time, researchers and doctors might be able to determine how someone was wired — quite literally — and compare that picture with “regular” brains.
Experts quoted in the article debate whether the research is promising enough to spend millions on. But this comment about defining normal or regular brains is not one of the concerns they mention. What are the informational implications of having a data set that describes the connections of a “normal” person?
The second article, “Cheaters Find an Adversary in Technology,” reads as a shameful bit of commercial promotion masquerading as journalism, but it does usefully illuminate the worldview of the standardized-test industry. The story is about a company that uses statistics to detect cheaters. Its algorithms are designed to flag things like similar patterns of wrong answers, changed answers, and big improvements in test scores. But if a group of students all misunderstood something in the same way, that would look like cheating. A test taker who “saw the light” partway through and went back to change several answers would look like a cheater. And the thing we most attempt to do in school, teach people things, would, if successful, lead to big improvements in test scores. That too, according to the experts, would look like cheating.
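To make the point concrete, here is a toy sketch of the kind of statistic the article describes: flagging pairs of test takers whose wrong answers coincide suspiciously often. This is an illustration, not the company's actual method; the names, response strings, and the threshold are all hypothetical. Note that it flags exactly the case the essay worries about, two students who share the same misunderstanding.

```python
# Toy illustration (NOT the profiled company's algorithm): flag pairs of
# test takers who gave the same wrong answer on many questions.
# All data and the threshold below are hypothetical.

from itertools import combinations

ANSWER_KEY = "BACDABCD"

# Hypothetical responses, one character per question.
responses = {
    "s1": "BACDABCD",   # all correct
    "s2": "CACDABAD",   # wrong on Q1 and Q7
    "s3": "CACDABAD",   # identical wrong answers to s2
    "s4": "BACDACCD",   # one unrelated error
}

def shared_wrong(a: str, b: str, key: str) -> int:
    """Count questions where both test takers gave the same WRONG answer."""
    return sum(1 for x, y, k in zip(a, b, key) if x == y != k)

def flag_pairs(resps: dict, key: str, threshold: int = 2) -> list:
    """Return pairs whose count of identical wrong answers meets the threshold."""
    flagged = []
    for (n1, r1), (n2, r2) in combinations(resps.items(), 2):
        if shared_wrong(r1, r2, key) >= threshold:
            flagged.append((n1, n2))
    return flagged

print(flag_pairs(responses, ANSWER_KEY))  # → [('s2', 's3')]
```

The statistic cannot distinguish copying from a shared misconception, say, two students taught the same wrong rule by the same teacher, which is exactly why treating it as evidence of cheating is troubling.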
There is an arrogance about testers that consistently rankles: the gentleman profiled calls himself “an icon” (unselfconsciously, notes the journalist), as if those who have never heard of him were simply poorly informed. And their self-promotion as agents of fairness and meritocracy (recall The Big Test) is simple hypocrisy. More problematic, though, is the influence on teaching, learning, and scholarship of a regime that bases its authority and legitimacy on science and objectivity, but that shrouds itself in secrecy and lives OFF rather than FOR education.
Why these two articles together? They suggest a sort of pincer maneuver against “the human,” based in information: on one flank, structure, reducing the normal brain to a (particular) giant matrix of ones and zeros; on the other, behavior, treating statistically unusual patterns of activity as morally suspect. “Super Crunching” may be a way of the future, but one might lament the likelihood that it is THE way of the future, crowding out or delegitimizing other forms of inquiry into the human condition. Together, these two articles suggest the imperative of an affirmative complement to our fascination with what we CAN do with information.
Source Mentions and Allusions
- Ayres, Ian. 2008. Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart.
- Foucault, Michel. 1995 (1975). Discipline and Punish: The Birth of the Prison.
- Gabriel, Trip. 2010. “Cheaters Find an Adversary in Technology.” New York Times, December 27, 2010.
- Swedberg, Richard. 2000. Max Weber and the Idea of Economic Sociology.
- Vance, Ashlee. 2010. “In Pursuit of a Mind Map, Slice by Slice.” New York Times, December 27, 2010.