BLOG | OFFICE OF THE CTO

What Hollywood Taught Me about Zero Trust

Ken Arora
Published May 05, 2022


If I ever—in some alternate reality or fantasy future—have the opportunity to design Starfleet’s computer systems, one thing I would most certainly ensure is that the weapons systems were not connected to the life support subroutines. Or, were I the commander of an alien invasion force tasked with taking over the Earth—a planet inhabited by a completely different species, mind you—I would insist on biometric authentication rather than a passcode or token. And, finally, if any of my officers or spacecraft were to, against all odds, miraculously “escape” their captors, I’d certainly first check to make sure they weren’t carrying any Trojan horses.

So, what does this have to do with zero trust? As you’ve probably guessed by now, Hollywood loves storylines that play out the epic consequences that result from forgoing a few ounces of healthy up-front paranoia. And, from my perspective as a cybersecurity practitioner, that same mindset—maintaining a healthy paranoia—is at the heart of what zero trust is really about.

So, why am I choosing to focus on zero trust, specifically? My motivation comes from a trend in how the term “zero trust” is being used today. Here is another film-production anecdote, this one from the late ’80s, when Hollywood was transitioning from legacy analog technologies to digital standards for audio, video, and post-production editing. At that time and place, many of the less technical members of the movie-making community didn’t understand what “digital” actually meant, nor did they really care to; instead, the term “digital” was, to them, effectively synonymous with “best-in-class.” As a result—and much to the chagrin of my techie friends who worked with them—producers and directors would ask whether the lighting or the set construction was “digital,” when what they really meant was: “Is this the best lighting design, or the best set construction?” Coming back to today, I too often hear “zero trust” used within the CSO community in much the same way movie producers used “digital” in 1990.

Separately, I was recently introduced to Simon Sinek’s “Start with Why” framework. That framework, brewed together with memories of how Hollywood thought about the early days of “digital” and of how films built stories around security (mal-)practices, helped distill a number of thoughts I had around zero trust. At the core of zero trust is the moral of the Hollywood storylines I opened with: forgoing a few ounces of thoughtful cyber-prevention in the design and operation of a critical system’s security will result in pounds of later compromise and pain. Analogously, at the central “why” level of the framework, zero trust can be articulated as the following set of beliefs:

A. Always explicitly verify the ‘who’: That is, the actor that is attempting to interact with your system.

B. Default to the least privilege required: Once identity is established, grant that actor only as much privilege as is required to interact with the system for the specific business transaction being performed, with the requisite privileges enumerated by the design.

C. Continuously monitor and (re)assess: Identity verification and privilege rights should not be static, one-time decisions; instead, they must be continuously assessed and reassessed.

D. And, still, assume you’ve been compromised: Finally, despite doing the above three items, presume that a sophisticated adversary has gotten past the defenses. The system must therefore also include a means of identifying and isolating any compromised elements or identities, and a strategy for containing and/or remediating their impact on the system.

Simply: Don’t trust implicitly; instead, always verify. And trust only as much as needed. And continuously evaluate. And don’t assume you’ll catch them all. That’s the ‘why’ of zero trust.

[Figure: Zero Trust]

Of course, ‘why’ is only part of the story. The ‘how’—that is, the techniques and tools used to embody the mindset that the ‘why’ engenders—is another lens that’s relevant to the practitioner, and it falls out as a consequence of the aforementioned beliefs. Here again, I’ll be specific, phrasing the ‘how’ in terms of the tools today’s cybersecurity practitioners have available:

  1. Authentication: Any actor interacting with the protected system must attest to having some identity, or, in some cases, a tuple of identities—such as an identity for the human or automated system, along with an identity for the device or platform the human/system is on, and perhaps even an identity for the browser or tool used to facilitate the access. The zero trust mindset implies that any such attestation must be verified, or “authenticated,” by one or more means: a shared secret, a token or certificate, and, in more modern systems, observation and verification of that actor’s pattern of behavior.
     
  2. Access Control: Once identity is established, that identity should be assigned a level of trust, embodied by the access-control rights granted to it. The access-control policy should follow the principle of least privilege, where the only rights granted are the minimal set required for the actor to perform its role within the system. Ideal access-control implementations should allow fine-grained specification of the rights granted, such as: role <X> is allowed access to APIs <1>, <3>, and <4>, and read privileges on objects of class <Y> and <Z> (see the policy sketch after this list). A best practice to note is that complex access-control scenarios targeting application resources should be abstracted behind APIs, rather than implemented by granting direct access to objects, files, and network resources.
     
  3. Visibility: Moving on to the “monitor” part of the mindset—a prerequisite for “continuous reassessment”—the system must be capable of ongoing, real-time visibility into each actor’s behavior within the system. In this context, the statement “if you didn’t see it, it didn’t happen” is axiomatic. In addition, the collected telemetry must not only be visible but also consumable, in the sense that it must exist within a framework that enables the sharing and contextualization of what is reported. This allows data from multiple sources to be meaningfully combined and correlated, enabling more robust, higher-efficacy risk assessment.
     
  4. Contextual Analysis, ML-Assisted: The motivation for the aforementioned visibility is to be able to execute on the “continuously reassess” principle. In implementation, this precept requires not only visibility but analysis—typically across multiple data sources (hence the sharing-friendly framework mentioned earlier) and in near real time. To do so, continuous assessment will often require the assistance of machine-learning systems to detect actors behaving anomalously and thereby identify possible system compromise. Finally, a robust analytics engine should be capable of providing a more nuanced answer than a simple binary yes/no—ideally, a risk assessment along with an associated confidence score.
     
  5. Automated Risk-Aware Remediation: Lastly, because part of the belief system is that some sophisticated adversaries will still manage to infiltrate the system, the system must be able to act: to monitor more deeply and, when needed, to contain and/or block such actions or actors. The system’s response—ranging from mere logging, to deeper inspection, to blocking the attempted action, or even to deceiving the suspected bad actor—must be considered in the higher-level business context. The likelihood and impact of false positives and negatives, along with the business risk of the action, are part of those considerations. For example, blocking the browsing of a product catalog might be appropriate only if there were very high confidence that the actor is a malicious site-scraper, whereas requiring additional authentication may be appropriate for a banking transaction at a milder degree of confidence (see the remediation sketch after this list). Finally, because of the sophistication and speed of modern cyberattacks, the operational remediation action must be automatable, with the human-specified policy described in terms of intent-driven goals.
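
To make the least-privilege principle in item 2 concrete, here is a minimal sketch of a role-based, deny-by-default access policy. The role names, API paths, and object classes are hypothetical, and a real deployment would more likely express this in the policy language of its API gateway or authorization service than in application code.

```python
# Minimal sketch of a least-privilege, deny-by-default access policy.
# Role names, API paths, and object classes are illustrative assumptions.

ACCESS_POLICY = {
    "catalog-reader": {
        "apis": {"GET /catalog", "GET /catalog/{id}"},  # only the APIs this role needs
        "read_classes": {"Product"},                    # read-only object classes
        "write_classes": set(),                         # no write privileges at all
    },
    "order-service": {
        "apis": {"POST /orders", "GET /orders/{id}"},
        "read_classes": {"Product", "Order"},
        "write_classes": {"Order"},
    },
}

def is_allowed(role: str, api: str) -> bool:
    """Grant only what the role's policy explicitly enumerates; deny everything else."""
    policy = ACCESS_POLICY.get(role)
    return policy is not None and api in policy["apis"]

# A catalog reader may browse, but may not create orders.
assert is_allowed("catalog-reader", "GET /catalog")
assert not is_allowed("catalog-reader", "POST /orders")
```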

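To illustrate items 4 and 5, here is a minimal sketch of an intent-driven, risk-aware remediation decision: the analytics engine supplies a risk assessment and a confidence score, and the chosen response depends on the business context of the transaction. The transaction types, thresholds, and actions are illustrative assumptions, not a prescribed policy.

```python
# Minimal sketch of risk-aware remediation driven by a risk score and a
# confidence score. Transaction types, thresholds, and actions are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Assessment:
    risk: float        # 0.0 (benign) .. 1.0 (malicious), from the analytics engine
    confidence: float  # 0.0 .. 1.0: how sure the engine is of that risk estimate

def remediate(tx_type: str, a: Assessment) -> str:
    """Choose a response based on both the assessment and the business context."""
    if tx_type == "catalog-browse":
        # Low business risk: block only on very high confidence of scraping.
        if a.risk > 0.9 and a.confidence > 0.9:
            return "block"
        return "log"
    if tx_type == "bank-transfer":
        # High business risk: escalate even on milder suspicion.
        if a.risk > 0.8 and a.confidence > 0.7:
            return "block"
        if a.risk > 0.4:
            return "step-up-authentication"
        return "allow-and-monitor"
    return "log"  # default: observe rather than silently trust

print(remediate("catalog-browse", Assessment(risk=0.95, confidence=0.95)))  # block
print(remediate("bank-transfer", Assessment(risk=0.50, confidence=0.60)))   # step-up-authentication
```
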
The final aspect of the “why, how, what” framework is the “what”—namely, the goals that can be achieved, and the classes of attacks that can be prevented or mitigated, using the above tools and techniques. A full taxonomy of the complete set of cyberattacks will be a topic for a future article; however, as a preview of coming attractions, the “why” and “how” described here do address the spectrum of sophisticated “advanced threats.” As one example, the zero trust mindset can address ransomware threats, even those initiated by “trusted” software components (a.k.a. “supply chain attacks”). Specifically, the principle of least privilege, embodied in the access-control policy, should be used to limit file read/write permissions to only those actors that truly require them, preventing the wholesale encryption of file resources. Further, should some actor—perhaps an existing software component with file write permissions that has been compromised via the aforementioned supply-chain attack vector—attempt high-rate encryption of many files, continuous reassessment and analysis should detect the anomalous behavior in short order by noting the span of files accessed and the rate at which they are accessed, as sketched below. The detection, coupled with automated mitigation, can then be used to quickly block such activity.
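
Here is a minimal sketch of that span-and-rate heuristic: flag any actor that writes to an unusually wide set of files at an unusually high rate within a short window. The window length, thresholds, and event fields are illustrative assumptions; in practice such a signal would feed the analytics and automated-remediation pipeline described above.

```python
# Minimal sketch of span-and-rate anomaly detection for file writes.
# The window length and thresholds are illustrative assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_DISTINCT_FILES = 200     # "span": distinct files written within the window
MAX_WRITES_PER_WINDOW = 500  # "rate": write operations within the window

_events = defaultdict(deque)  # actor -> deque of (timestamp, file path)

def record_file_write(actor: str, path: str, now: float | None = None) -> bool:
    """Record one write event; return True if the actor now looks anomalous."""
    now = time.time() if now is None else now
    q = _events[actor]
    q.append((now, path))
    # Drop events that have aged out of the sliding window.
    while q and now - q[0][0] > WINDOW_SECONDS:
        q.popleft()
    distinct_files = {p for _, p in q}
    return len(distinct_files) > MAX_DISTINCT_FILES or len(q) > MAX_WRITES_PER_WINDOW

# Example: a burst of writes across many files trips the detector.
suspicious = any(
    record_file_write("backup-agent", f"/data/file-{i}.db", now=1000.0 + i * 0.01)
    for i in range(250)
)
print(suspicious)  # True
```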

So, going back to the alternate worlds I started off with... If all of Starfleet’s computer subsystems operated on the principle of least privilege, the API that launches photon torpedoes would not be invokable by the gravity control subsystem. And the alien mothership’s controls would not only perform biometric-based MFA, but would also assume that breaches will occur—and therefore continuously monitor and reassess, detecting the anomaly of a fighter drone flying through the ship and mitigating the threat if that anomalous drone headed toward the engine core. Those few key ounces of prevention would avoid a great deal of consequent drama—bad for Hollywood, but good for cybersecurity practitioners.

To learn more about the framework encompassing the broad concepts around zero trust, in relation to the existing business backdrop, and the security mindset that application business leaders should embrace, read our whitepaper Zero Trust Security: Why Zero Trust Matters (and for more than just access).