Zero? Trust? - Part 1

Zero Trust. It's a phrase flying around the InfoSec world with great aplomb, decorated with the baubles of its adoption at tech giants such as Google, and wrapped up in scary stats and examples of how organisations' pockets are being emptied by the cost of hacks and breaches.

But what is Zero Trust? It's neatly defined here:

https://www.csoonline.com/article/3247848/what-is-zero-trust-a-model-for-more-effective-security.html

"Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside its perimeters and instead must verify anything and everything trying to connect to its systems before granting access."

A pretty clear definition, but one which already alludes to some problems.
  1. The perimeter is back! Oh, how we lauded the demise of the perimeter all those years ago. Our walled gardens, our fortresses, our firewalls... inside is safe but outside be dragons? We'll look at this one in part 2.
  2. Automatically trust. None of us automatically trusts anything, if you think about it. Either consciously or subconsciously, we are constantly evaluating the people and situations we come into contact with, or find ourselves having to deal with. None of us can say "I trust" as a blanket statement. Consider the example below from the Jericho Forum's Trust EcoSystem paper (2014) [1]:
Trust as a Noun and as a Verb.

Trust, when used as a noun, implies reliance and respectability. It is, however, a quality that is not easily measured/quantifiable. Verbs by comparison are readily made more precise (i.e., qualified) by using other parts of speech to make complete thoughts, which can then be constructed into complete sentences comprising subjects, objects, and modifying clauses – all combining to provide clearer definitions including quantifiable measures. This can be used to control and so refine the scope of the action specified by the verb.

For example:

  • I trust
  • I trust my son
  • I trust my son to drive
  • I trust my son to drive my car
  • I trust my son to drive my car tonight from 6 to 9 pm
  • I trust my son to drive my car tonight from 6 to 9 pm for up to ten kilometers
  • I trust my son to drive my car tonight from 6 to 9 pm for up to ten kilometers to play in a local club football match

Let's explore this example some more:

I am a system storing confidential data.
  • I trust my users (A single subject)
  • I trust my users who provide a valid password (the subject, plus an object)
  • I trust my users who provide a valid password, a valid second factor (a subject and two, distinct objects)
  • I trust my users who provide a valid password, a valid second factor generated from a cryptographically secure hardware token (a subject and two, distinct objects with a modifier applied to one)
  • I trust my users who provide a valid password, a valid second factor generated from a cryptographically secure hardware token, to access data for which they have a valid need (a subject and four distinct objects with two modifiers applied)
  • I trust my users who provide a valid password, a valid second factor generated from a cryptographically secure hardware token, to access data for which they have a valid need, as enforced by an ACL (a subject and five, distinct objects with two modifiers applied)
We could go even further with this example: who maintains the ACL? Do I trust them? Why do I trust them?... you get the idea.
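
To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names) of how each added object and modifier turns a blanket "I trust my users" into a qualified statement a system can actually evaluate:

```python
# A minimal sketch of the qualified trust statement above, expressed as
# checks a system could actually evaluate. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    password_valid: bool        # "a valid password"
    second_factor_valid: bool   # "a valid second factor"
    token_is_hardware: bool     # modifier: "cryptographically secure hardware token"
    has_valid_need: bool        # "data for which they have a valid need"
    acl_permits: bool           # "as enforced by an ACL"

def should_grant(request: AccessRequest) -> bool:
    """Grant access only when every object and modifier in the trust
    statement is satisfied -- nothing is trusted automatically."""
    return all([
        request.password_valid,
        request.second_factor_valid,
        request.token_is_hardware,
        request.has_valid_need,
        request.acl_permits,
    ])

# Example: a valid password and a software OTP, but no hardware token,
# does not satisfy this policy.
print(should_grant(AccessRequest(True, True, False, True, True)))  # False
```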

Let's make a graph of the example above, and introduce (albeit somewhat superficially) the ideas of a Trust Taxonomy and Chain of Trust from the aforementioned Jericho paper:
[Diagram: a Chain of Trust graph for the example above]
The diagram is somewhat interpretive of the concepts described in the paper, but I feel it illustrates my point nicely. We, our systems, and our data all trust something to some extent, which in turn will most likely trust something else.

Nothing in the above diagram feels particularly "automatic" - for example, anyone familiar with two-factor authentication in the enterprise will know of the technical pain and sometimes intricate integrations required to adopt it. The concern is that some components in the Chain of Trust could be invisible, assumed, or untested. Is your most sensitive data able to validate that those who access it are using a secure password? Should it? Probably not; it trusts your authentication system to provide that assurance. And what does your authentication system trust?
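
One way to surface those invisible or assumed links is to treat the Chain of Trust as a simple graph and walk it. The sketch below (component names are purely illustrative) records, for each component, what it trusts and whether that trust has actually been validated:

```python
# A rough sketch of a Chain of Trust as a graph. Component names are
# illustrative; each entry records what a component trusts and whether
# that trust has been validated rather than assumed.
from typing import Dict, List, Tuple

# component -> list of (trusted component, validated?)
chain_of_trust: Dict[str, List[Tuple[str, bool]]] = {
    "confidential_data":      [("authentication_system", True), ("acl", True)],
    "authentication_system":  [("password_policy", True), ("hardware_token_vendor", False)],
    "acl":                    [("acl_administrator", False)],
    "password_policy":        [],
    "hardware_token_vendor":  [],
    "acl_administrator":      [],
}

def unvalidated_links(start: str) -> List[Tuple[str, str]]:
    """Walk the chain from a starting component and return every trust
    relationship that is assumed rather than validated."""
    found, stack, seen = [], [start], set()
    while stack:
        component = stack.pop()
        if component in seen:
            continue
        seen.add(component)
        for trusted, validated in chain_of_trust.get(component, []):
            if not validated:
                found.append((component, trusted))
            stack.append(trusted)
    return found

# Which links in the data's chain are merely assumed?
print(unvalidated_links("confidential_data"))
# [('acl', 'acl_administrator'), ('authentication_system', 'hardware_token_vendor')]
```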

Takeaways:
  1. Understand every component in our Chain of Trust, what each component in turn trusts, and what modifiers may act on that component and its trusted components.
  2. Accept that the business impact, expected C.I.A. levels and Data Classification levels applied to data are critically important inputs when deciding what, and how much, the data should trust.
  3. Equip our data to recognise its own value, and what components of the Chain of Trust must be present and validated in order to allow access (a rough sketch of this idea follows below).
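
As a closing sketch (again, all names are hypothetical), that third takeaway might look something like data that carries its own classification and the set of Chain of Trust components it requires to be validated before access is allowed:

```python
# A sketch of data that knows its own value and the Chain of Trust
# components it requires before allowing access. Names are hypothetical.
from dataclasses import dataclass
from typing import Set

@dataclass
class DataAsset:
    classification: str            # e.g. "confidential"
    required_components: Set[str]  # trust components that must be validated

    def allow_access(self, validated_components: Set[str]) -> bool:
        """Allow access only when every required component of the Chain of
        Trust has been presented and validated."""
        return self.required_components <= validated_components

payroll = DataAsset(
    classification="confidential",
    required_components={"password", "hardware_token_2fa", "acl_check"},
)

# A request validated only by a password and a software OTP is refused.
print(payroll.allow_access({"password", "software_otp_2fa"}))                  # False
print(payroll.allow_access({"password", "hardware_token_2fa", "acl_check"}))   # True
```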
In the next post, we'll look at an example of how a Chain of Trust works in a Microsoft Office 365 deployment.

[1] https://publications.opengroup.org/white-papers/security/jericho-forum









