This is your Background Check on Steroids

An article, “Social Media History Becomes a New Job Hurdle,” by Jennifer Preston in yesterday’s NYT is obvious fodder for the sociology of information.  It’s primarily about Social Intelligence, a web start-up that assembles dossiers on potential employees for its clients by “scraping” the internet.

Issues that show up here:

  • the federal government (FTC) looked into whether the company’s practices might violate the Fair Credit Reporting Act (FCRA), but determined it was in compliance
  • “privacy advocates” are said to be concerned that the service might encourage employers to consider information not relevant to job performance (why not fair employment advocates? — later in the article we do find mention of the Equal Employment Opportunity Commission)
  • what do we make of the statement: “Things that you can’t ask in an interview are the same things you can’t research”?
  • since this is really just an extension of the idea of the “background check” — can we think a little more systematically about that as a general idea prior to getting mired in details of internet presence searches?

Perhaps more alarming than the mere question of information surfacing was the suggestion by the company’s founder, Max Drucker, about how a given bit of scraped information might be interpreted.  To wit, he mentioned that the fact that a person had joined a particular Facebook group might “mean you don’t like people who don’t speak English.”  According to the reporter, he posed this question rhetorically: “Does that mean…?”  This little bit of indirect marketing via fear mongering adds another layer to what we need to look at: what sort of information processing (including interpretation and assessment) is necessary in a world where larger and larger amounts of information are available (cf. the CIA’s problem of turning acquired information into intelligence via analysis).

Drucker characterized the company’s goal as “to conduct pre-employment screenings that would help companies meet their obligation to conduct fair and consistent hiring practices while protecting the privacy of job candidates.”  This raises another interesting question: if an agent has a mandated responsibility for some level of due diligence and information is, technically, available, will a company necessarily sprout up to collect and provide this information?  Where would feasibility, cost, and the uncertainty of interpretation enter the equation?  Can the employer, for example, err on the side of caution and exclude the individual who joined the Facebook group because that fact MIGHT mean something the employer could be liable for not having discovered?  Will another company emerge to help assess the likelihood of false positives or false negatives?  What if it is only a matter of what the company wants in terms of its corporate culture?  Can we calculate the cost (perhaps in terms of loss of human capital, recruitment costs, etc.) of such technically assisted vigilance?

Author: Dan Ryan

I'm currently an Academic Program Director. I've been a professor at the University of Toronto, the University of Southern California, and Mills College, teaching things like human-centered design, computational thinking, modeling for the policy sciences, and social theory. I'm driven by the desire to figure out how to teach twice as many, twice as well, twice as easily.
