GPS, Orwell, and the 4th Amendment

The 9/11 anniversary reminds us, among other things, of the questions of government surveillance that have arisen in the last decade, some related to terrorism, some reflecting challenges raised by new technologies, and many at the intersection of the two.

This fall, the US Supreme Court will consider whether law enforcement should be able to attach a GPS tracking device to a vehicle without a warrant. Adam Liptak reports on the issue in “Court Case Asks if ‘Big Brother’ Is Spelled GPS” in today’s New York Times. Lower courts have ruled in different directions on the question.

One way to think about it is in terms of aggregating information and whether there’s an emergent property that changes how we would classify obtaining, possessing, or using information. Consider, for example, one’s daily round. Leave the house at 7:30, stop for coffee, pick up the dry-cleaning, get stuck in traffic, arrive at work, park in the lot over behind the pine trees, etc. All of these are done in public with no expectation of privacy. And then it all happens again tomorrow, and tomorrow and tomorrow. Except the dry-cleaning stop is only made on Mondays and every other Friday there’s a stop at a bar on the edge of downtown. If a GPS device is attached to your car, the separate public facts of any given daily round — the sequence and full set of which perhaps only you know — are assembled as a unit of information. And, if the GPS is there for a month, the overall, boring, day-in-day-out pattern, the regular exceptions, and the truly unique exceptions are all part of the information bundle available “out there.”

 Even if all of the component information is about mundane, innocent, non-embarrassing activities, indeed has all the properties that would exclude it from your understanding of “private” information, does your willingness to do these things in public view aggregate to willingness for information about them to be aggregated into a tracking record?
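To make the aggregation point concrete, here is a minimal sketch, with invented data rather than anything drawn from the case, of how individually public location fixes, once logged, yield both the routine and the exceptions almost for free:

```python
# A minimal sketch, with invented data: each (day, place) pair stands in for
# a timestamped GPS fix that is individually "public" and unremarkable.
from collections import Counter

fixes = [
    ("Mon", "coffee shop"), ("Mon", "dry cleaner"), ("Mon", "office lot"),
    ("Tue", "coffee shop"), ("Tue", "office lot"),
    ("Fri", "coffee shop"), ("Fri", "bar on the edge of downtown"),
    # ... imagine a month of this
]

visit_counts = Counter(place for _, place in fixes)
routine = {p for p, n in visit_counts.items() if n > 1}      # the day-in-day-out pattern
exceptions = {p for p, n in visit_counts.items() if n == 1}  # the rare stops

print("routine:", routine)
print("exceptions:", exceptions)
```

Nothing in the sketch is clever; that is the point. Once the fixes exist as a log, the pattern and the exceptions fall out of a few lines of counting.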

See also
UNITED STATES v. GARCIA No. 06-2741.
New York Times. Articles on Surveillance of Citizens by Government
New York Times. Articles on Global Positioning System

This Is Your Background Check on Steroids

An article, “Social Media History Becomes a New Job Hurdle,” by Jennifer Preston in yesterday’s NYT is obvious fodder for the sociology of information.  It’s primarily about Social Intelligence, a web start-up that puts together dossiers on potential employees for its clients by “scraping” the internet.

Issues that show up here:

  • the federal government (FTC) was looking into whether the company’s practices might violate the Fair Credit Reporting Act (FCRA), but determined it was in compliance
  • “privacy advocates” are said to be concerned that it might encourage employers to consider information not relevant to job performance (why not fair employment advocates? — later in the article we do find mention of the Equal Employment Opportunity Commission)
  • what do we make of the statement: “Things that you can’t ask in an interview are the same things you can’t research”?
  • since this is really just an extension of the idea of the “background check” — can we think a little more systematically about that as a general idea prior to getting mired in details of internet presence searches?

Perhaps more alarming than the mere question of information surfacing was the suggestion by the company’s founder, Max Drucker, about how a given bit of scraped information might be interpreted.  To wit, he suggested that the fact that a person had joined a particular Facebook group might “mean you don’t like people who don’t speak English.”  According to the reporter, he posed this question rhetorically: “Does that mean…?”  This little bit of indirect marketing via fear mongering adds another layer to what we need to look at: what sort of information processing (including interpretation and assessment) is necessary in a world where larger and larger amounts of information are available (cf. the CIA’s problem of turning acquired information into intelligence via analysis).

Drucker characterized the company’s goal as “to conduct pre-employment screenings that would help companies meet their obligation to conduct fair and consistent hiring practices while protecting the privacy of job candidates.”  This raises another interesting question: if an agent has a mandated responsibility for some level of due diligence and information is, technically, available, will a company necessarily sprout up to collect and provide this information?  Where would feasibility, cost, and the uncertainty of interpretation enter the equation?  Can the employer, for example, err on the side of caution and exclude the individual who joined the Facebook group because that fact MIGHT mean something that the employer could be liable for not having discovered?  Will another company emerge that helps to assess the likelihood of false positives or false negatives?  What if it is only a matter of what the company wants in terms of its corporate culture?  Can we calculate the cost (perhaps in terms of loss of human capital, recruitment costs, etc.) of such technically assisted vigilance?
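One way to see why the false-positive question matters is a back-of-the-envelope calculation. The numbers below are entirely invented, but they illustrate how a reasonably “accurate” flag, applied to a rare trait, mostly flags people who do not have it:

```python
# Back-of-the-envelope Bayes calculation; every number here is invented.
# P(problem | flagged) = P(flag | problem) * P(problem) / P(flag)

p_problem = 0.02          # assume 2% of candidates actually have the screened-for trait
p_flag_if_problem = 0.90  # assume the scraper flags 90% of them
p_flag_if_clean = 0.10    # but also flags 10% of everyone else

p_flag = p_flag_if_problem * p_problem + p_flag_if_clean * (1 - p_problem)
p_problem_if_flag = p_flag_if_problem * p_problem / p_flag

print(f"P(flagged) = {p_flag:.3f}")                       # ~0.116
print(f"P(problem | flagged) = {p_problem_if_flag:.2f}")  # ~0.16
# With these made-up numbers, roughly five of every six flagged candidates
# are false positives, which is exactly the "MIGHT mean something" trap above.
```

The employer who errs on the side of caution is, under assumptions like these, mostly excluding people about whom the flag means nothing at all.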

No Such Thing as Evanescent Data

Pretty good coverage of the “iPhone keeps track of where you’ve been” story in today’s NYT (“Inquiries Grow Over Apple’s Data Collection Practices”) and in David Pogue’s column yesterday (“Your iPhone Is Tracking You. So What?”). Not surprisingly, devices that have GPS capability (or even just cell tower triangulation capability) write the information down. Given how cheap and plentiful memory is, it’s not surprising that they do so in ink.

This raises a generic issue: evanescent data (information that is detected, perhaps “acted” upon, and then discarded) will become increasingly rare.  We should not be surprised that our machines rarely allow information to evaporate, and it is important to note that this is not the same as saying that any particular big brother (or sister) is watching.  Like their human counterparts, a machine that can “pay attention” is likely to remember — if my iPhone always knows where it is, why wouldn’t it remember where it’s been?

It’s the opposite of provenience that matters — not where the information came from but where it might go.  Behavior always leaves traces — what varies is the degree to which the trace can be tied to its “author” and how easy or difficult it is to collect the traces and observe or extract the patterns they may contain.  These reports suggest that the data has always been there, but was relatively difficult to access.  It’s only recently, ironically thanks to the work of the computer scientists who “outed” Apple, that there is an easy way to get at the information.
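For the technically inclined, here is a rough sketch of why detecting tends to shade into remembering. The table and column names are hypothetical (this is not Apple’s actual schema), but the point is how little it takes to keep the log and, once the log is accessible, how little it takes to extract a pattern from it:

```python
# A minimal sketch; the table and column names are hypothetical, not Apple's schema.
import sqlite3, time

db = sqlite3.connect("location_log.db")
db.execute("CREATE TABLE IF NOT EXISTS fixes (ts REAL, lat REAL, lon REAL)")

def record_fix(lat, lon):
    """The device already 'knows' this; remembering it costs one INSERT."""
    db.execute("INSERT INTO fixes VALUES (?, ?, ?)", (time.time(), lat, lon))
    db.commit()

record_fix(42.3751, -71.1056)  # one example fix

# Once the log is accessible, extracting a pattern is a single query:
# bucket the fixes into roughly 1 km grid cells and rank them by visit count.
rows = db.execute(
    "SELECT ROUND(lat, 2), ROUND(lon, 2), COUNT(*) AS n "
    "FROM fixes GROUP BY ROUND(lat, 2), ROUND(lon, 2) ORDER BY n DESC"
).fetchall()
print(rows)  # the places where the phone has "been", most-visited first
```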

Setting aside the issue of nefarious intentions, we are reminded of the time-space work of human geographers such as Nigel Thrift and Tommy Carlstein, who have done small-scale studies of the space-time movements of people in local communities since the 1980s. And, too, we are reminded of the 2008 controversy stirred up when some scientists studying social networks used anonymized cell phone data on 100,000 users in an unnamed country.

Of course, the tracking of one’s device is not the same as the tracking of oneself.  We can imagine iPhones that travel the world like the garden gnome in Amélie, and people being proud not just of their own travels but of where their phone has been.

See also

  1. Technologically Induced Social Alzheimers
  2. Information Rot

From Information Superhighway to Information Metrosystem

The new FTC report on consumer privacy has an interesting graphic in an appendix. It purports to be a model of the “Personal Data Ecosystem.” It’s interesting as an attempt to portray a four-mode network: individuals, data collectors, data brokers, and data users. The iconography here seems to be derived from classic designs of subway and underground maps.

From http://www.ftc.gov/os/2010/12/101201privacyreport.pdf.

The genre mixing in the diagram invites, on the one hand, a critical look at where the FTC is coming from in the report (which, in my limited experience of digesting FTC output, looks relatively well done) and, on the other, points toward a need to better conceptualize the various components and categories.

Under “collectors,” for example, we have public, internet, medical, financial and insurance, telecommunications and mobile, and retail. The next level (brokers) includes affiliates, information brokers, websites, media archives, credit bureaus, healthcare analytics, ad networks and analytics, catalog coops, and list brokers. Finally, on the info users front we have employers, banks, marketers, media, government, lawyers and private investigators, individuals, law enforcement, and product and service delivery.

It’s a provocative diagram that helps to focus our attention on the conceptual complexity of “personal information” in an information economy/society. More on this to follow.
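As a thought experiment, one could encode the diagram’s structure as a typed, four-mode graph. The node categories below come from the report’s appendix; the particular edges are my own illustrative guesses, not a transcription of the FTC figure:

```python
# The four node categories come from the report's appendix; the edges below
# are illustrative guesses, not a transcription of the FTC figure.
from collections import defaultdict

nodes = {
    "individual": {"consumer"},
    "collector":  {"retail", "medical", "telecommunications and mobile"},
    "broker":     {"credit bureaus", "ad networks and analytics", "list brokers"},
    "user":       {"employers", "marketers", "law enforcement"},
}

edges = [  # (from, to): one possible flow of personal data
    ("consumer", "retail"),
    ("consumer", "telecommunications and mobile"),
    ("retail", "ad networks and analytics"),
    ("ad networks and analytics", "marketers"),
    ("telecommunications and mobile", "credit bureaus"),
    ("credit bureaus", "employers"),
]

flows = defaultdict(set)
for src, dst in edges:
    flows[src].add(dst)

# Trace everywhere data originating with the consumer can end up.
frontier, reachable = {"consumer"}, set()
while frontier:
    nxt = set().union(*(flows[n] for n in frontier)) - reachable
    reachable |= nxt
    frontier = nxt

mode_of = {name: mode for mode, names in nodes.items() for name in names}
print("reachable from the consumer:")
for n in sorted(reachable):
    print(f"  {n}  ({mode_of[n]} mode)")
```

Even this toy version makes the diagram’s point: once the categories are wired together, the interesting question is how far information can travel from the individual.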

Surveillance Raised to the Second Power

The following article appeared about a week ago over the AP business wire. It turns out that parents who “spy” on their children may be unwittingly helping corporations to spy on them, too. It’s very valuable to folks in marketing to know what kids are talking about. If you believe the companies that make and sell the child surveillance software to parents, the information being collected is not associated with the kids’ names, but it is tagged with information about the kid (ironically, often entered by the parent when s/he sets the software up in the first place).

One easy take-away is the idea that norms about spying on kids are highly dependent on who is doing the spying and why. If you have legal custody of the kid and you are trying to protect her from predators, spy away. If you are a commercial entity that wants to listen in on the kids’ chats, you’re crossing the line.

A bunch of sociology of information questions emerge in what looks, in the article, like a real mishmash of thinking about this phenomenon. We see talk of “targeting children” (by marketers), “putting the children’s information at risk” (not really sure what that means), legal issues of collecting data from kids and having parents’ permission implied if software is installed, and so on. What doesn’t get thematized is that this is yet another example of trading a service for your information. In pure economic terms it can be written off as an exchange that, if people engage in it, must reflect an equivalence in value (as in, “it’s worth it to me to play this game at the cost of letting the provider observe what kind of music I like”). In fact, though, I suspect that these dimensions of value are more orthogonal than is being pretended. It works because of multiple sleights of hand — one isn’t really sure what information one is giving up or what is happening to it, or one doesn’t get to evaluate those questions until after certain commitments have been made, or it’s just plain too complicated to find out.

Look for another post soon about Facebook applications and quizzes and the kinds of information give-aways and grab-ups that they involve.

Web-monitoring software gathers data on kid chats

By DEBORAH YAO, AP Business Writer – Fri Sep 4, 2009 5:16 PM EDT

Parents who install a leading brand of software to monitor their kids’ online activities may be unwittingly allowing the company to read their children’s chat messages — and sell the marketing data gathered.

Software sold under the Sentry and FamilySafe brands can read private chats conducted through Yahoo, MSN, AOL and other services, and send back data on what kids are saying about such things as movies, music or video games. The information is then offered to businesses seeking ways to tailor their marketing messages to kids.

“This scares me more than anything I have seen using monitoring technology,” said Parry Aftab, a child-safety advocate. “You don’t put children’s personal information at risk.” [Read More…]

Notification and the Public Sphere

Working today on the outline for a chapter on “notification and the public sphere.”  In previous chapters the focus was on notification and the maintenance of relationships among individuals. In this chapter I look at the broader distribution of information in society and the institutions that give rise to it.

The raw material I am working with runs the gamut from sunshine and freedom-of-information laws, mandatory disclosure regulations, discovery in legal contexts, state-mandated notification, and truth and reconciliation commissions to emergency warning systems, diplomatic protocol, gag rules, and privacy standards. Generically, I’m thinking of these as “information institutions.”

This is admittedly a big bucket of diverse phenomena; today’s work was a first stab at grouping and categorizing and discovering underlying dimensions that organize these things as manifestations of basic informational forms.

Here are my preliminary categories.

Sunshine, Stickers, Labels, and Report Cards. Laws and rules that say that the state, as well as private and public actors, cannot keep (all) secrets. Some of these are things like sunshine laws that promote accountability or combat corruption; others are disclosure rules that address information asymmetry between producers and consumers or between service providers and the public. This category resonates with the “is more information always better” posts that have appeared here previously.

Structured Honesty: Social Organization of Informational Equality. Being able to say “I don’t have to tell you” is an important manifestation of inequality with both material and symbolic consequences. In various forms, the capacity to maintain some control over the disposition of some information is widely recognized as a key component of autonomous personhood. This category includes institutions that collectively enforce (true) information sharing — from legal rules of discovery to truth commissions. It is, I think, distinct from the previous and next categories, but I’m still working on a rigorous way to distinguish them. The “democracy and the information order” posts that have appeared previously would fall into this category (6 August 2008, 20 September 2007, 22 May 2007, 11 March 2007).

The Social Organization of Omniscience (includes warning systems). These can be distinguished from the disclosure examples because in those cases one entity either has the information and just needs to be compelled to release it, or has/controls access to the information and needs to be compelled to collect and release it. By contrast, this category includes cases where either the information is dispersed and we organize a means to detect, aggregate, and channel it, or where a special channel is set aside so that one type of information (perhaps a rare one) can take precedence. Examples: ER doctors who must report abuse or abortion providers who must provide parental notification for minors, emergency warning systems (tornado, hurricane, tsunami), airport announcements that recruit everyone as a lookout for unattended bags (see also the post on children as spies).

Protocol. In diplomacy, for example, protocol strongly regulates who speaks with whom. As in computer communication protocols, these institutions allow us to tie systems together.

Socially Sanctioned Non-Telling. This is almost the opposite of the first category (leaving an interesting space in between) — secrets that are socially organized. Gag rules and sealed agreements, trade secrets, intellectual property regimes, government classification systems (top secret, etc.), official secrets acts, privacy standards.