Tuesday, 18 August 2015

On Being Privacy Risk Averse

Being risk averse in [IT] system development isn't always a bad idea - consider mainframe technologies, which are built so that no single failure brings the whole system down, or the decision not to use the latest, greatest JavaScript library for your mission-critical web development...

Risk management in privacy has come to the fore of late, especially with the publication of the NIST risk management standard. So today's conversation about being risk averse, and how one assesses risk in privacy, was extremely interesting.

Consider this:

Collecting personal data (or PII) is a risky activity and therefore must be minimised as much as possible.

The definition of personal data is very weak, so it is safest to treat almost everything as personal data, in case it is cross-referenced with other data (which would make it personal data).


Don't collect anything. Ever.

While extreme, it shows how a misplaced understanding of many aspects - including what risk is and the nature of information (personal data) - can lead to extreme situations and conclusions.

While NIST is absolutely correct in its assessment that we need proper risk management procedures, how these relate to requirements, information types and the rest of the privacy ontology is as yet very, very weak.

In fact, terms such as personal data and PII do not come even close to being in any form usable for risk management - for this we need to go much deeper into the nature of information. For example, instead of "personal data" we could use classifications of information type, and a mapping from different kinds of data (of these types) to risk metrics (note the plural). An overall risk value can then be more accurately calculated - or at least be calculated on the basis of what information we actually have.
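To make the idea concrete, here is a minimal sketch of such a mapping. The information-type categories, the metric names and all the numeric values are purely illustrative assumptions - they are not taken from the NIST standard or from the book - but they show how a risk value can be computed from the data actually held rather than from a blanket "personal data" label.

```python
from enum import Enum

# Hypothetical information-type classification (illustrative only).
class InfoType(Enum):
    IDENTIFIER = "identifier"          # e.g. email address, passport number
    QUASI_IDENTIFIER = "quasi"         # e.g. postcode, date of birth
    BEHAVIOURAL = "behavioural"        # e.g. page views, clickstream

# Each information type maps to several risk metrics (note the plural),
# here on an assumed 0..1 scale.
RISK_METRICS = {
    InfoType.IDENTIFIER:       {"reidentification": 0.9, "sensitivity": 0.6},
    InfoType.QUASI_IDENTIFIER: {"reidentification": 0.5, "sensitivity": 0.4},
    InfoType.BEHAVIOURAL:      {"reidentification": 0.2, "sensitivity": 0.3},
}

def overall_risk(collected: list[InfoType]) -> float:
    """Crude aggregate: take the worst (maximum) value of each metric
    over the data we actually hold, then average across the metrics."""
    if not collected:
        return 0.0  # collecting nothing really does carry no risk
    metric_names = ["reidentification", "sensitivity"]
    worst_per_metric = [
        max(RISK_METRICS[t][m] for t in collected) for m in metric_names
    ]
    return sum(worst_per_metric) / len(worst_per_metric)

# A system holding quasi-identifiers and behavioural data, but no
# direct identifiers, scores lower than one holding identifiers.
print(overall_risk([InfoType.QUASI_IDENTIFIER, InfoType.BEHAVIOURAL]))
print(overall_risk([InfoType.IDENTIFIER]))
```

The aggregation function here (max per metric, then mean) is just one arbitrary choice; the point is that the calculation is grounded in a typed inventory of the data held, so "don't collect anything, ever" stops being the only defensible answer.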

You can read more about this approach to privacy engineering in the book: Privacy Engineering - a dataflow and ontological approach.
