I'm not sure I see the difference between zero trust networking and authorize everything. Seems like they are saying pretty much the same thing, differing in positive vs negative assertions.
The idea many vendors have is an agent on both the client and the server that scrutinizes every connection, at both ends, based on policy and/or entitlement to specific resources. For an enterprise this means not having to worry as much about locking down the internal network, since it is no longer the only boundary. In practice this is a very tall order: all your internal systems, legacy systems, cloud vendors, etc. have to be on board. What you actually end up with is a hybrid solution, with all the attendant complexity.
Good point. I would also argue that the hybrid approach should be seen as having multiple lines of defense. A corporate network should have an outer boundary that is hard-ish to penetrate. Personal devices should either require approval or use some kind of VPN to access any part of this network. Inside, the network should be divided into subnets with their own border protections, in case an attacker penetrates the outer shell/userland. For all subnets, zero trust should be implemented where possible, even for communication within a single subnet. Finally, monitoring should be in place to listen for unusual activity, whether that is unusual or suspicious traffic, unexpected changes in the configurations of nodes on the network, etc.
Zero trust means "don't authn/authz based on IP address, authn/authz at the application level where you should have been doing that in the first place."
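For a concrete contrast, here is a minimal Python sketch of moving the decision from network location to a per-request credential. The key and names are invented for illustration; a real system would use mTLS or a token service, not a shared HMAC secret:

```python
import hmac
import hashlib

SECRET = b"demo-secret"  # hypothetical shared key, for illustration only

def allowed_by_ip(client_ip: str) -> bool:
    # The "trusted network" anti-pattern: the request is trusted
    # solely because it originates from an internal address range.
    return client_ip.startswith("10.")

def allowed_by_token(user: str, tag: str) -> bool:
    # App-level authn/authz: every request carries a credential that is
    # verified by the application, regardless of the source network.
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# An attacker inside the perimeter passes the IP check,
# but without a valid credential the token check fails.
alice_tag = hmac.new(SECRET, b"alice", hashlib.sha256).hexdigest()
```

The point of the sketch: the first check is a property of where you are, the second a property of who you are.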
How long has authn/authz been in your lexicon? Took me a second, but I rather like the abbreviation, even if the letters omitted differ between the two.
If you are to provide access to an app or network, you need to trust (something) under some circumstances.
> provide access to a network
That's your first mistake. You should be providing access to software, not networks. The idea of a "trusted" network that applications or users can communicate on is the fundamental idea that zero-trust architecture aims to get rid of.
> the context back then was don't trust an endpoint just because it is on your network.
The majority of the world is still in this context...
Although "authorize everything" can also be confusing - so, what, you just let anything query your system?
That seems appropriate in many contexts. The originator of the "beyond corp" meme has all their critical systems just hanging out there on port 443. You could just download all their source code, if you were authorized. The point is if you want the extra layer of security, the belt of physical or VPN access to go along with the "authorize everything" suspenders, you can just put it on top. But you don't assume that the network security actually works, you don't allow its existence to degrade the rest of your security story.
> But you don't assume that the network security actually works, you don't allow its existence to degrade the rest of your security story.
If you implement security at the network layer, what’s to stop app developers from kicking the can down the road rather than implement app security?
This is a valid solution - open everything up to the public internet, but gate everything on the domain behind authn (with the application performing authz only if authn is successful). Proxies can also handle the authn part and ensure the app isn't touched without authn, see: https://teams.cloudflare.com/
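The proxy-gates-authn pattern might look like the following toy sketch (names, key, and header are made up, not Cloudflare's actual mechanism): the gate runs at the proxy, and the application behind it is never touched unless authn succeeds.

```python
import hmac
import hashlib

PROXY_KEY = b"proxy-demo-key"  # hypothetical signing key, for illustration

def mint_session(user: str) -> str:
    """What the auth proxy issues after a successful login."""
    mac = hmac.new(PROXY_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{mac}"

def gate(headers: dict):
    """Runs at the proxy, before any request reaches the app."""
    cookie = headers.get("X-Session", "")
    user, _, mac = cookie.partition(":")
    good = hmac.new(PROXY_KEY, user.encode(), hashlib.sha256).hexdigest()
    if user and hmac.compare_digest(good, mac):
        return ("forward", user)  # pass to the app, which still performs authz
    return ("reject", None)       # 401 before the app ever sees the request
```

The design point is the split: the proxy guarantees authn happened, and the application only has to decide what the authenticated user may do.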
Usually there is an "after authentication" implied.
"Authenticate everyone, authorize everything"
Agree, not a perfect term. You certainly make a good point and in fact the strongest zero trust approaches don't even let you on the network (even at layer 3) until after you are authorized.
Err, no - the strongest zero trust approaches never let you “on the network” - authorized or not.
They are designed to connect you to an application, you should never see the network.
I've met customers who put anything under that umbrella. And they want to use PaaS/SaaS products - like, you need to encrypt table names, or some unrelated data… Some companies just use that to protect their …
> the context back then was don't trust an endpoint just because it is on your network.
I remember working for a major tech company that had contracts in big critical industries (think oil, aviation, trains, etc.). The whole R&D department pretty much rested on one Brent, because no effort had been made to upgrade their systems for 15+ years.
tl;dr: I asked him: hey, maybe we shouldn't trust devs' computers just because they are "on the network". He just shrugged and said: "if we're compromised we'll have other things to worry about anyway".
While zero trust as a goal is impossible, it still is the guiding principle that everyone should follow; especially if you have a Brent.
What was its original coining? PC Booting?
This was a really great episode. I'm glad you took the time to get someone from the gov to talk about this on the podcast.
Well, you have insurance for ransomware, and it leads to the bad guys specifically targeting companies with such insurance, because they are more likely to pay the ransom.
Exactly. By making the insurance company responsible for both the security and the payout, this would turn into a direct war between the insurance companies and the attackers. The incentives would now be on the insurance company to take the measures necessary to protect against the ransomware in the first place.
Now, if society feels that too many ransoms are being paid (due to externalities, such as loss of confidential information, service quality, etc.), this might also make it easier to implement additional countermeasures. In particular, I think it would make sense to demand a fine or tax any time a company pays such a ransom.
Let's say the government demanded a tax equal to the size of each ransom paid. The insurance/security company would have an increased incentive to protect against ransomware, and the attackers would understand that the break-even ransom - the point beyond which it is preferable for the target not to pay - drops to roughly half of what it would be without the tax.
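The break-even arithmetic can be made explicit with a toy model (ignoring externalities and assuming a perfectly rational target):

```python
def target_pays(ransom: float, expected_loss: float, tax_rate: float = 1.0) -> bool:
    # The victim pays only if ransom plus tax is cheaper than eating the loss.
    # tax_rate = 1.0 models a tax equal to the ransom itself.
    return ransom * (1 + tax_rate) < expected_loss

# With the 100% tax, the largest ransom a rational target will still pay
# falls to half the expected loss, so attackers must demand less.
```

So a ransom of 40 against an expected loss of 100 still gets paid (total cost 80), but a ransom of 60 no longer does (total cost 120), whereas without the tax it would have been.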
And you end up with companies having wide security gaps because the Sec and Actuary teams won't agree about anything to actually get on with implementation...
Those insurance companies would then go out of business quickly, and be replaced by organizations that were able to handle this.
This is a benefit of having actual security be core business for such a company. While for many companies, security is a small part of their business, and not critical for their long term performance, a dedicated insurance/security company would HAVE to be good at both to stay competitive.
There is maybe no less promising blockchain application than identity. It solves none of the real problems, and introduces its own.
Arguably, identity is the only application for which blockchains are well suited, precisely because identity itself is an artifact of consensus. I'll share the argument here in case you haven't read this particular driver for it yet, and it needs scrutiny.
The reasoning goes that in government you have licenses and birth certificates, SINs and ID cards, and depending on the legal regime there are limitations on the legal uses of each of those. First, banks could theoretically offer ID because they have more information about you, but then they take on risk and liability in its usage. Second, each transaction that requires identity only needs a certain level of assurance. What these orgs have agreed to is that they will contribute their user identity attributes to a blockchain to support an identifying consensus, without holding any risk or having any responsibility for the usage, while providing a high degree of institutional assurance that their contribution is verified and supported by KYC, pattern of life, anti-fraud, etc. In this case, identity is an epiphenomenal artifact of the consensus.
Where I might agree is that I don't think strong identity itself is valuable, it in fact destroys value, and the consensus just implements the impunity and social anti-pattern of a one-way committee decision - and I think it's an imposition that doesn't help anyone except make them more exposed to remote enforcement of fines and petty policies. e.g. Yay, now you can get parking tickets in foreign countries, and all the use cases are variations of that.
However, if you want recourse and sanctions of individuals that are portable without integrating with central national identity schemes, a blockchain is the way to do that. That's the most succinct version I think there is. The irony may be that civil liberties may be the strongest argument against blockchains because they are the cryptological implementation of a committee or star chamber when their consensus represents a monopoly or "radical monopoly."
The problem is that 'identity' is context-specific. You're right that it is an artifact of consensus - but it is actually consensus within different groups, for different reasons, and at different points in time.
An interesting study on identity is the (book/play/film) Les Misérables. At various points in the story, the protagonist is a prisoner (#24601), a Frenchman (Jean Valjean), a factory owner (Monsieur Madeleine), and a community leader (Monsieur le Maire). The main antagonist in the story, Javert, is so strongly biased in his rigid system of classifying people, that he is unable to accept Valjean's identity as anything other than a criminal - prisoner 24601. The internal dissonance this creates when Valjean continues to defy this categorization eventually results in Javert's demise.
At the beginning of the story, the protagonist's identity _is_ prisoner 24601. At the end of the story, it is not. Identity changes with time, context and circumstances, and how that identity is asserted will also vary from group to group.
One of the problems with MDM software is that corps want you to log in and use your personal phone, I guess to save costs, and to make it easy for you to do work outside your regular business hours.
If a company asked me to use MDM software and set themselves up as a device owner on a phone I purchased and use every day, my answer is: hell no.
If they want that, they can buy me a phone, and pay for the mobile/data plan. I've worked places that have done this, having 2 phones is a pain, but you only use the corp one at work or if you're oncall.
BYOD support without having everything managed by the company is a pain point Apple and Google are trying to solve.
For Chrome, it shows a very intrusive popup whenever you log into an extra Google account, to get you to use a different profile. If you say yes, that new profile is governed by the administrators without them gaining access to the entire Chrome browser.
For Android, there are 'Work Profiles'; however, I haven't tried this, and I wouldn't be surprised if it breaks fundamental parts of Android and/or is disabled on certain OEM Android builds.
For iOS, User Enrollment is a thing.
The main problem I see with these solutions is that they add a lot of complexity to MDM configurations, so chances are the organization will either go without MDM or ask you to set up your device under full MDM. Under the second scenario I would suggest purchasing an extra phone just for work - this also helps with the possibility of an internal investigation, or even a subpoena, asking for access to any phone with work data on it, as chances are they won't limit searches and data exports to data stored in your work profile.
At a previous company I declined the "perk" of the company paying for my phone plan, because it required giving them control over it. I was mostly worried about losing my phone number accidentally upon parting ways.
Nowadays, Android can have a Work profile that your company can control (and wipe, for example) that doesn't affect your personal stuff. It's actually convenient because you have a separate instance of Chrome, which is a good workaround for mobile Chrome not supporting multiple profiles inside the app like the desktop version does.
I just like that there will eventually be a big shiny C-level-friendly website that I can show my bosses, that they can show their bosses, so we might actually get funding to start working on Zero Trust. Nobody cared about BeyondCorp but they might care about NIST.
Is your PowerPoint™ broken, son?
People get into all sorts of trouble trying to reason axiomatically about "Zero Trust". It's definitely a problem with the term, and a strength of "BeyondCorp"; BeyondCorp can only mean the one set of things, because it's meaningless outside of Google's branding. But everyone feels like they can work out what "Zero Trust" should mean. So the first thing you have to do is, you have to rewire your brain to read "Zero Trust" as the marketing term of art that it is.
The OMB ZT stuff is a reaction to USG breaches, and I think in particular the OPM hack. There's a "before" state and a desired "after" state.
In the "before" state, you're one of the 2.1 million federal employees, and you start your day by inserting a PIV card into a reader, and with that, you're given access to an intranet that in turn gives you access to a bananas number of different things that nobody can keep track of or secure.
In the "after" state, each service is responsible for establishing its own tight trust boundaries, and instead of providing a network dial tone that people mistakenly assume is a proxy for trust, the USG infrastructure provides you with end-to-end authentication for requests regardless of the network you're using.
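That "after" state might be sketched as per-request authentication - a toy HMAC request-signing scheme here, purely illustrative; real deployments would use mTLS or signed tokens, and the key names are invented:

```python
import hmac
import hashlib

DEMO_KEY = b"per-service-key"  # hypothetical per-service credential

def sign_request(key: bytes, method: str, path: str, body: bytes) -> str:
    # Every request carries its own proof of identity; nothing is
    # inferred from the network it arrived on.
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(key: bytes, method: str, path: str, body: bytes, sig: str) -> bool:
    # The service verifies end-to-end, whether the request came over
    # the intranet, a coffee-shop Wi-Fi, or the open Internet.
    return hmac.compare_digest(sign_request(key, method, path, body), sig)

sig = sign_request(DEMO_KEY, "GET", "/files", b"")
```

The signature binds identity to the specific request, so a credential replayed against a different path or body fails verification.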
As far as OMB and NIST talking about ZT goes, the major problem you're trying to solve is that there are a zillion federal agencies --- way more than you think there are, like you know that there's a Department of the Interior, but also under Interior there's a Susquehanna River Basin Commission with 100 employees, and there are other agencies that have like 4-5 employees. And what you're trying to do is provide a security strategy and a toolbox and a set of best/worst practices that you can apply across all of these organizations, to replace what I understand to be the status quo ante strategy of "stick it on the VPN, pretend we've kept it off the Internet, and call it a day".
The other important subtext to all of this is that there's a huge give and take between USG and industry, where USG tends to take its lead from what's happening in industry, but it also participates in the industry in that it is one of the largest customers for technology products, so the industry is intensely interested in what it does. So when USG decides to demand "Zero Trust" for its agencies, and sets out a standard set of requirements for ZT, industry goes nuts making sure their products are responsive to that standard.
The good thing here is that the OMB memo is smart, and ZT as construed by the current administration's IT people is a pretty good baseline security strategy, so in this one instance the USG is being a force for good, in that it's aligning a lot of industry work around a strategy that people should be seriously considering adopting anyways. And I think there's pretty broad recognition/agreement about that in the "security community" (hate that term), so when USG (here: NIST) does some big new thing about ZT, it gets a lot of positive attention.
... is how I understand all of this.
I'm kind of unclear how the device part is supposed to work. Let's assume the work laptop is fully locked down, and employees' personal laptops are completely compromised with each keystroke sent directly to ransomware rings. Are you supposed to block your employees from logging into your SaaS apps and internal web apps from their personal devices? What's the mechanism for that?
You generally run an agent on the client machine that verifies machine identity and configuration as part of authentication. Beyond Identity is an example.
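Conceptually, the agent reports device posture and the identity provider checks it against policy before completing authentication. A toy sketch follows; the field names and policy are invented for illustration, not any vendor's actual protocol:

```python
# Hypothetical policy: disk must be encrypted and OS patch level recent enough.
POLICY = {"disk_encrypted": True, "min_os_patch_level": 2024}

def posture_ok(report: dict) -> bool:
    # Authentication succeeds only if the machine's reported configuration
    # satisfies policy, in addition to the user's own credentials.
    return (report.get("disk_encrypted") is True
            and report.get("os_patch_level", 0) >= POLICY["min_os_patch_level"])
```

A fully compromised personal laptop would simply fail (or fake - hence the sibling comment's point about secure boot) this check, and the SaaS login is refused regardless of the user's password being correct.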
Agents on the client can’t really be trusted unless there’s a secure boot and only authorised software is running - at which point it’s not really a personal device any more.
Avoid unrelated controversies, generic tangents, and internet tropes.
I don't blame the parent for paranoia. NIST has been involved in standardizing protocols/algorithms with backdoors. It is not a controversy, it is documented - although they did withdraw the standard when it became public news.
It's not whether it's a controversy, it's that it's a generic, tropey and unsubstantive comment. 'X sux' is a bad comment even if X really sux.
Well, parent has actually expressed my reaction. The article seems to be a puff site, with no details about what they mean by "Zero Trust". I didn't follow any of the links (I don't think I should have to).
If NIST has something to tell me, then they need to tell it to me straight, not cloaked in partner sites and snazzy graphics.
You can write a comment about things you don't like in the thing posted but not a reflexive trope response that can appear unchanged in anything with 'NIST' in it, which is what the guideline is about.
You're a Google search away from the entire industry explaining to you what they think "Zero Trust" is --- the term predates the OMB memo.
Your best first read is the Google BeyondCorp paper.
I can understand not trusting NIST's approach to crypto standards, given their history with the NSA. However what in this set of documents do you find untrustworthy?