Part of the ISO 27000 family of security standards directly addresses the need for a classification system to mark documents and data sets. How you set up your security classification is left open, though in practice we generally see four categories:
For each level we are required to define what handling and storage procedures apply. For example:
- May be distributed freely, for example by placing on public websites, social media, etc. Documents and data sets can be stored unencrypted.
- Only for distribution within the company. May not be stored on non-company machines or placed on any removable media (memory sticks, CDs, etc.). Documents and data sets can be stored unencrypted unless they contain PII.
- Only for distribution to a specifically designated set of people. May not be stored on non-company machines or placed on any removable media (memory sticks, CDs, etc.). Documents and data sets must be stored encrypted and disposed of according to DoD standards.
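The handling rules above can be expressed as data rather than prose, which makes them enforceable in code. A minimal sketch follows; the level names (`public`, `internal`, `restricted`) and policy fields are illustrative placeholders, not names drawn from any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandlingPolicy:
    external_distribution: bool  # may leave the company?
    removable_media: bool        # allowed on memory sticks, CDs, etc.?
    encryption_required: bool    # must be encrypted at rest?
    secure_disposal: bool        # requires DoD-style disposal?

# Hypothetical mapping of the three handling levels described above.
POLICIES = {
    "public":     HandlingPolicy(True,  True,  False, False),
    "internal":   HandlingPolicy(False, False, False, False),  # encrypt if PII
    "restricted": HandlingPolicy(False, False, True,  True),
}
```

A component handling a document can then look up its level and refuse, for instance, to write to removable media when `removable_media` is false.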
A good maxim here is: Unclassified is the TOP level of security.
Sounds strange? Consider: until a document or data set is formally classified, how should it be handled? The only safe answer is to treat it as if it required the strictest handling.
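The maxim can be made concrete with a small default-handling function. This is a sketch under the assumption of three named levels in ascending strictness; the names are illustrative:

```python
from typing import Optional

# Ascending strictness; the last entry is the strictest handling level.
LEVELS = ["public", "internal", "restricted"]

def effective_level(label: Optional[str]) -> str:
    """Unclassified (missing or unknown label) gets the TOP level."""
    if label is None or label not in LEVELS:
        return LEVELS[-1]  # strictest handling by default
    return label
```

With this rule in place, a mislabeled or never-labeled document can only ever be over-protected, never under-protected.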
Note in the above that we refer to the information classification of any data within a data set to further refine the encryption requirements for classified information. No classification system described earlier exists alone; ultimately they are all grounded in something in the security classification system. For example, we can construct rules such as:
- Location & Time => Confidential
- User Provenance => Confidential
- Operating System Provenance => Public
Applying these rules, we find that this channel and the handling of the data by the receiving component should conform to our confidential-class requirements. A question here: does a picture together with location and time constitute PII? Rules to the rescue again:
- User Provenance & ID => PII
- http or https => ID, Time
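Rules of this kind can be evaluated mechanically: each rule maps a set of data attributes to a classification, and a channel takes the strictest classification its attributes trigger. A minimal sketch, with attribute and level names following the examples above (the function and dictionary names are my own):

```python
# Higher number = stricter handling.
STRICTNESS = {"public": 0, "confidential": 1}

# (trigger attributes, resulting classification), per the rules above.
RULES = [
    ({"location", "time"},   "confidential"),
    ({"user_provenance"},    "confidential"),
    ({"os_provenance"},      "public"),
]

def classify(attributes: set) -> str:
    """Return the strictest classification triggered by the attributes."""
    matched = [level for trigger, level in RULES if trigger <= attributes]
    if not matched:
        return "confidential"  # no rule fired: treat as the strict level
    return max(matched, key=STRICTNESS.get)

def contains_pii(attributes: set) -> bool:
    """User provenance combined with an ID constitutes PII."""
    return {"user_provenance", "id"} <= attributes
```

For example, `classify({"location", "time", "os_provenance"})` triggers both a `public` and a `confidential` rule, and the stricter one wins.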
The observant might notice that ensuring the protection of data as we have defined above is not possible for some social media services. This provides a further constraint and a point at which to make an informed business decision. In this case, anything that ends up at the target of this channel is Public by default, so we have to ensure that the source of the information, the user, understands that even though their data is treated as confidential throughout our part of the system, the end-point does not conform to this enhanced protection. It then becomes the user's choice whether they trust the target.
We have an excellent example here of the information contamination discussed in the previous article: we need to consider the social media service a "sterile" area while our data channel contains "dirty" or "contaminated" information. Someone needs to decide whether this transfer can take place and in what form; invariably this is the source of the information, that is, the user of the application.
Does this mean that we could then reduce the protection levels on our side? Probably not, at least from the point of view that we wish to retain the user's trust: we do our best to protect their data, and we take on the responsible position of informing the user of what happens to their data once it is outside our control.