Wednesday, 27 February 2013

Checklists for Privacy Audits

Much of my work consists of privacy audits of information systems. These vary in nature and context: from a quick check to a full dissection, and from client applications to service back-ends, internal systems, cloud deployments etc. Over the years we've made progress in standardising the review, both in process and in content, along with developing various tools and ontologies for information classification, requirements etc.

One of the most challenging tasks, however, has been to externalise this knowledge. In a small team knowledge is often implicit, but for many reasons - including certification (ISO 9000 etc.) and training new staff - we must document our procedures in a form where they can be repeated and understood by others without sacrificing the quality of the review; that is, we must externalise what is embedded in our collective team unconscious. Making this simple is remarkably hard - we've certainly been through our fair share of templates, documents, process diagrams etc.

Areas such as air traffic control and piloting are well known for their command-response checklist techniques, and we've taken much inspiration from these. However, the key inspiration for our current ideas actually came from the WHO's Surgical Safety Checklist, which aims to provide a simple, high-level checklist for surgical procedures without compromising the context in which the checklist is employed. This latter aspect rings especially true for us: our context of review changes dramatically between audits, just as individual surgeries differ from one another. Here's our version:


We've remained true to the WHO's version (why change something that works?) with the three high-level stages, two of which emphasise the necessity of getting the scope, context and reviews right. The middle section - the operation, if you like - is preceded by a time-out in which we ensure that the members of the privacy audit team work properly together and have a common understanding of what is required. Note the emphasis at the end on evaluating the review and the effectiveness of our procedures - this is critical in improving how our work is executed.
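The three-stage structure described above can be sketched as data - a minimal Python sketch, where the stage names follow the WHO pattern but every individual item is illustrative, not drawn from our actual checklist:

```python
# Three-stage audit checklist, mirroring the WHO sign-in / time-out /
# sign-out pattern. All item wordings below are invented examples.
AUDIT_CHECKLIST = {
    "sign_in": [   # before the review: scope and context
        "Scope of the audit agreed and recorded",
        "System context and information flows identified",
        "Required documentation received from the system owner",
    ],
    "time_out": [  # before the 'operation': team alignment
        "All team members introduced with roles confirmed",
        "Common understanding of requirements established",
        "Review plan and responsibilities confirmed",
    ],
    "sign_out": [  # after the review: evaluation
        "Findings and recommendations recorded",
        "Effectiveness of the review procedure evaluated",
        "Lessons learned fed back into the process",
    ],
}

def incomplete_items(completed):
    """Return every checklist item not yet marked complete."""
    return [item
            for stage in AUDIT_CHECKLIST.values()
            for item in stage
            if item not in completed]
```

The point of keeping it this simple is the same as the WHO's: the checklist carries the structure, while the judgement about each item stays with the team.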

An accompanying manual, in the same style as the WHO's Surgical Safety Checklist Implementation Manual, is under construction and in test. With this we aim to address not just the process but also the roles and responsibilities of the case coordinators, the "operating team" (including its individual members), management and, importantly, the R&D team or owner of the system being investigated. Never forget the customer, especially as privacy reviews tend to be quite deep and invasive in time, information needs and recommendations. Many development teams see such reviews as an ineffective use of their time, taking them away from their primary job of developing software.

Now, there are many existing checklists for privacy - some detailed, some more process oriented, some technical, some more legal in nature - but the issue is consolidating and standardising them without compromising the contexts in which they are executed. This, I expect, is the guiding principle behind the WHO's checklist, which itself has been proven to work effectively.

It is my belief that the best practices from safety-critical system development need to be deployed in the systems we are building through development techniques, methods and processes. I consider good information system management (through privacy principles) to be safety-critical in many senses.

* * *

Compulsory Reading: Atul Gawande. The Checklist Manifesto.

Thursday, 21 February 2013

Some thoughts on Gamification


It seems everyone is talking about gamification: the idea that any service can be turned into a game, the reasoning being that through reward and bargaining mechanisms a user will interact more with a given service in pursuit of presumably greater rewards.

Although gamification seems to be the current zeitgeist, the idea is relatively old; what is new is making it explicit in the customer-service interaction and inherent in the development process of applications and services.

Many of the underlying concepts of gamification come from game and economic theory, such as that extensively researched and developed by Nash and von Neumann. It is the principle behind store cards, air miles, nearly every loyalty scheme and, most recently (in internet terms), the idea behind the "free" service.

Although gamification is only now coming to the fore, it is already relatively well established in many areas, including privacy, if not as explicitly as currently being suggested. For example, you get your social networking (or blogging provision!) for free by giving up your privacy - you get a free service (and advertisements) and the service provider gets your behavioural profile.

We can simply demonstrate this aspect of gamification in privacy by constructing a small normal-form model using game-theoretic techniques. Consider the payoff between a customer and some service - if the customer registers with that service then they obtain an enhanced service:

Customer \ Service | Basic | Enhanced
--- | --- | ---
Anonymous | Random advertisements; no personalisation; no identification of the user; no targeting; no profiling | 0
Pseudo-anonymous | Random advertisements; some session provisioning; pseudo-anonymous tracking; limited (non-traceable) profiling | 0
Registered/Identified | 0 | Targeted advertising; full user experience; customisable; profiled

The difficulty here is assigning values to the sets of features available versus the data collected. In the above example, disabling cookies probably has a better payoff than leaving them enabled, primarily at the expense of the quality of data collected by the service versus the consumer's privacy. If we consider what the enhanced service might provide, the fact that the consumer is being profiled may be outweighed by the advantages of the enhanced service. Certainly, in the above, it is in the service's interest to ensure that the benefit to the consumer outweighs any privacy (or other) concerns.
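As a toy illustration of that value-assignment problem, here is a sketch of the normal-form model in Python. Every number below is invented purely for the example - choosing realistic payoffs is exactly the difficulty just described:

```python
# Normal-form payoffs as (customer, service) pairs. A (0, 0) entry
# marks a combination that is not offered, matching the 0s in the
# table above. All non-zero numbers are invented for illustration.
PAYOFFS = {
    ("anonymous",        "basic"):    (1, 1),
    ("anonymous",        "enhanced"): (0, 0),
    ("pseudo-anonymous", "basic"):    (2, 2),
    ("pseudo-anonymous", "enhanced"): (0, 0),
    ("registered",       "basic"):    (0, 0),
    ("registered",       "enhanced"): (3, 4),
}

def best_strategy(player):
    """Strategy pair maximising the given player's payoff
    (player 0 = customer, player 1 = service)."""
    return max(PAYOFFS, key=lambda pair: PAYOFFS[pair][player])
```

With these made-up numbers both players prefer the registered/enhanced cell - which is precisely the outcome a service provider tries to engineer by making the enhanced offering attractive enough.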

Payoff matrices for services such as Facebook, Google, Skype, Amazon etc. can be similarly constructed. It might just be that a well-defined payoff is part of what contributes to those services' popularity when put in the context of privacy.

Things get more interesting when services are combined. For example, would letting Amazon provide Ikea with your purchase details be a step too far for personal privacy? Maybe Amazon should consider teaming up with Ikea... 10% discount vouchers on bookcases for every 20 books bought... as long as you tell Amazon your Ikea loyalty card number. Just no recommendations for books on interior decorating, please!

To further strengthen the gaming link, the customer in the above is also presented with a challenge in optimising the reward: what is the cheapest way of getting a cheaper bookcase?
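The customer's side of that hypothetical promotion can be phrased as a tiny optimisation. A sketch, with the threshold and discount taken from the made-up offer above:

```python
# Hypothetical promotion: one 10% bookcase voucher per 20 books bought.
# The customer's 'game': reach the threshold with the cheapest books.
def cheapest_discount(book_prices, bookcase_price,
                      books_per_voucher=20, discount=0.10):
    """Return the cheapest qualifying spend and the voucher saving,
    or None if there aren't enough candidate books to qualify."""
    if len(book_prices) < books_per_voucher:
        return None
    spend = sum(sorted(book_prices)[:books_per_voucher])
    return {"qualifying_spend": spend,
            "voucher_saving": discount * bookcase_price}
```

Of course, buying twenty cheap books solely to save a tenth of a bookcase's price is rarely rational - which is the service's side of the game: set the threshold so the customer hands over purchase data they would mostly have generated anyway.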

It might also just be that gamification becomes one of the drivers behind improving the quality of Big Data. The more "points" you score using a service, the better the quality of the profile and the better the service. The question then becomes: are we as individuals (or even as collectives) that interesting, or, for that matter, are we as consumers that discerning about the content from service providers and the back-and-forth bargaining? Do we care enough to be interested in playing the service-improvement game?

Will gamification improve the experience for the consumer and turn us all into marketers of our own information? Will recasting your service as a game lead to greater popularity, more consumers and success?

What interests me, assuming the answers above are positive, is how to do this by embedding the economic principles of gamification inherently into the system design.


Tuesday, 5 February 2013

Deconstructing Privacy

Some very constructive comments came in after my previous posting on the naivety of privacy - thanks to all who participated. To address the problem that we are often talking at cross purposes and without any common frame of reference, we first need to look at the terms in which we frame privacy [of information systems].

Typically we see that privacy is addressed or framed in seven broad areas:


Each of these areas certainly overlaps with the others, but we have difficulty switching between these frames. For example, it is often assumed that if we have great system security then privacy is of little concern, because we've addressed the problem of data leakage; however, we haven't addressed the problem of data content, because that is largely irrelevant to security. Similarly, if we have great access control, do we not need to worry about the data getting into the wrong hands? Or, if we've presented the user with the necessary consents, is all fine?

If we first deconstruct each area and examine how each views privacy, then attempt a cross-referencing exercise between them, we might actually have a basis for constructing at least a framework for a common terminology and semantics.