synaposophia

homologies between diverging traditions of thought

Friday, November 12, 2004

Primate Privacy, Social Security

Danny Weitzner spoke on privacy at SIMS recently. He advocated regulating not just the collection of personally identifiable information (PII), but also its use, specifically the ways information from various sources gets combined to infer things about people.
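
To make that concrete for myself, here's a toy sketch in Python of the kind of cross-source inference he was describing. The datasets, names, and field values are all made up for illustration; the point is just the pattern of joining "harmless" records on shared quasi-identifiers.

```python
# Hypothetical sketch: two innocuous-looking datasets, each fine on its
# own, linked on quasi-identifiers to infer something neither one states.

# A store's loyalty-card records: no names attached, so "anonymous".
purchases = [
    {"zip": "94704", "birth_year": 1971, "gender": "F", "item": "prenatal vitamins"},
    {"zip": "94110", "birth_year": 1958, "gender": "M", "item": "garden hose"},
]

# A public roll that pairs names with the same quasi-identifiers.
public_roll = [
    {"name": "A. Example", "zip": "94704", "birth_year": 1971, "gender": "F"},
    {"name": "B. Example", "zip": "94110", "birth_year": 1958, "gender": "M"},
]

def link(records, roll):
    """Join the two sources on (zip, birth_year, gender)."""
    index = {(p["zip"], p["birth_year"], p["gender"]): p["name"] for p in roll}
    for r in records:
        name = index.get((r["zip"], r["birth_year"], r["gender"]))
        if name:
            # Neither dataset by itself connects a name to a purchase;
            # the combination does.
            yield name, r["item"]

for name, item in link(purchases, public_roll):
    print(f"{name} bought {item}")
```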

While listening to him, I was struck by how technically our conversations about security and privacy define both the problems and the success of the solutions, while seeming to ignore the fact that these concepts, and the need for them, arise from human social interactions. So we talk about identity theft as a way of losing money, as something VISA wants to help avoid, and as a technical issue that all information managers should have some savvy about (hence my presence at the talk), but we don't talk about what it means to a social primate that all these little pieces of information about me are floating out there.

As a human in a social world, I choose to reveal those pieces of info about myself, depending on my relationship to you. Over time, through observation and proximity, and through trust or the lack of it, you gain little pieces of information about me. That history provides one way to characterize our relationship. You, and perhaps my larger social networks, are capable of combining the various pieces of info about me, making inferences, and essentially creating the social, larger-than-me parts of my identity and my reputation.

Danny talked about the increasing power of computers to make inferences across our pieces of PII, but what seems to be left out of the conversation are the implications for me as a social person. What does it mean for me that these impersonal computers are performing these inferences about me? That corporations and other entities are understanding me in ways that used to be the prerogative of my social circles? If we'd thought more about this, could we have predicted phishing? Could we predict the next human-mediated security risk? Can we advocate social-human-aware privacy and security solutions? Can an understanding of the human social uses of security and privacy lead to additional useful descriptions of the harms that computerized inferences can cause? If regulations are written about usage of and inferences from PII, would a social perspective allow them to be written to last beyond today's technology? If my identity is larger than my physical body, could I use my social networks in authenticating myself?
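
Just to play with that last question, here's a made-up sketch of what "my friends vouch for me" authentication could look like. The names and the three-of-five threshold are purely illustrative, my own toy example rather than anything Danny proposed.

```python
# Hypothetical sketch: authenticating someone by asking their own social
# circle to vouch for them, instead of relying only on a secret they hold.

TRUSTED_CIRCLE = {"alice", "bob", "carol", "dave", "erin"}  # people I trust
VOUCHES_REQUIRED = 3  # any 3 of the 5 must confirm it's really me

def vouched_in(vouchers):
    """Accept the identity claim if enough of my trusted contacts
    confirm it, mirroring how reputation works in a social group."""
    valid = set(vouchers) & TRUSTED_CIRCLE
    return len(valid) >= VOUCHES_REQUIRED

print(vouched_in({"alice", "carol", "erin"}))   # True: three friends vouch
print(vouched_in({"mallory", "trent", "bob"}))  # False: only one real friend
```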

Now the conversation about privacy & security issues starts to sound interesting!
