BLOG

Container Security Basics: Pipeline Security

  Jordan Zebor

  Lori MacVittie

Published July 17, 2019

If you’re just jumping into this series, you may want to start at the beginning: 
Container Security Basics: Introduction

In an era of application capital, the CI/CD pipeline is a critical component upon which rests the speed and security of the applications it builds and delivers.

The pipeline is a complex system of tools, integrated via APIs and invoked by scripts or plugins, that digitally represents the process you follow to build and ultimately deliver containerized applications. In practice, that means there are multiple points at which a bad actor could compromise the system. While that may sound far-fetched, remember that a good number of organizations are taking advantage of cloud today, which necessarily means remote access.

That means there are two pieces to pipeline security: first, security of the pipeline itself; second, security in the pipeline.

Security of the Pipeline 

1. Authentication is not Optional
If you’re using tools and services as part of your pipeline in the cloud or as a hosted service, they are open to attack. If you enable access for remote developers or operations to internally hosted tools and services, they are open to attack. Before you dismiss that concern, let’s not forget the number of breaches in past years as a result of opening up systems to external access. The best posture to assume when it comes to pipeline security is that yes, it is accessible to bad actors.

In turn, that means the first order of business is to secure access. Strong authentication is not optional, and access control is highly recommended to finely tune access to critical systems – even if you think those systems are only accessible internally. There are other systems on your network through which bad actors can gain access.

Once you’ve tackled authentication, next is authorization. Authorization specifies what an authenticated user can do within the system. It’s important to distinguish privileges based on roles because the fewer credentials with access to critical components, the better off you’ll be.

Every component in the pipeline should require authentication and authorization. That means everything from repositories to the tools used to build applications and containers.
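
As a rough sketch of what that looks like in practice, the script below pulls an artifact manifest from an internal repository using a token injected by the CI system rather than relying on anonymous access. The repository URL and environment variable name are placeholders, not a prescription for any particular tool.

```python
import os
import sys

import requests  # pip install requests

# Hypothetical internal artifact repository -- substitute your own endpoint.
REPO_URL = "https://artifacts.internal.example.com/api/v1/images/myapp/manifest"


def fetch_manifest() -> dict:
    # The token is injected by the CI system at run time; the script never
    # falls back to anonymous access if it is missing.
    token = os.environ.get("PIPELINE_REPO_TOKEN")
    if not token:
        sys.exit("PIPELINE_REPO_TOKEN is not set; refusing anonymous access")

    resp = requests.get(
        REPO_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()  # fail this pipeline step on 401/403 instead of continuing
    return resp.json()


if __name__ == "__main__":
    manifest = fetch_manifest()
    print(f"fetched manifest with {len(manifest.get('layers', []))} layers")
```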

2. Securing Pipeline Components  
It is sadly not unusual to hear reports that attackers have discovered vast troves of treasure (credentials and other secrets) by scanning public repositories. See number one for a reminder to require authentication on repositories.

The reality of today’s pipelines is that they are an integrated chain of tools, each of which should require authentication. Because of that, credentials and secrets (keys) often wind up stored in scripts that invoke the tools and services that make up the pipeline. This is a Very Bad Thing™, particularly when coupled with poor authentication practices on repositories where those scripts might be stored.

Credentials are critical assets that need to be protected. Be aware of where they reside, where they’re stored, and how they’re secured. A helpful tool for managing “secrets sprawl” is HashiCorp Vault, which stores secrets securely in a central location.
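
As an illustration of what that looks like from a pipeline script, the sketch below uses Vault’s Python client (hvac) to fetch a registry credential at run time instead of hard-coding it. The Vault address, token source, and secret path are assumptions made for the example.

```python
import os

import hvac  # pip install hvac

# Connect with a short-lived token supplied by the CI system at run time.
# VAULT_ADDR and VAULT_TOKEN are assumed to be injected by the pipeline.
client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

# Read the registry credential from Vault's KV v2 secrets engine instead of
# storing it in the build script or a repository. The path is a placeholder.
secret = client.secrets.kv.v2.read_secret_version(path="ci/registry")
registry_password = secret["data"]["data"]["password"]

# Hand the credential to the next pipeline step (e.g., a registry login)
# without writing it to disk or committing it to source control.
print("registry credential retrieved; length:", len(registry_password))
```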

Security in the Pipeline

3. Maintain a Bill of Materials
Before you can secure a system – or component of the system – you have to know it exists. It is important to maintain a comprehensive bill of materials to ensure you know what is in your environment. More to the point, you need to ensure you know what should be in your environment – and conversely, what shouldn’t.

A well-maintained inventory can assist in protecting against attempts to insert tainted or compromised components into the pipeline. Standardizing on a single, base container or at least a manageable handful of containers can dramatically improve your ability to detect potential issues. Deviations, then, should set off an alert that can be investigated. For example, comparing hashes and, if available, validating digital signatures of container images is an important step. If you’re pulling from a remote repository, there is a chance it’s been compromised.

That was the case when Docker Hub experienced a breach that exposed credentials and tokens for 190,000 of its users. Using that information, attackers could have compromised images that later gave them access to other systems.

Don’t stop at container images. Remember that externally sourced application components are subject to compromise. A concrete example is the insertion of cryptocurrency-stealing code into the Node.js npm package event-stream.

Standardizing on a maintained inventory of images and components also streamlines remediation when – not if – vulnerabilities crop up. Updating a single base OS layer is much easier to manage than trying to update a disparate set of containers.
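
One way to put that inventory to work is a small check in the pipeline that compares the digests of locally pulled images against the approved digests recorded in the bill of materials, and treats any deviation as an alert. The sketch below assumes a local Docker daemon; the image name and digest are placeholders.

```python
import json
import subprocess
import sys

# A minimal bill of materials: image name -> approved content digest.
# In practice this would live in version control and be reviewed like code.
APPROVED_DIGESTS = {
    "myapp-base": "sha256:0123456789abcdef",  # placeholder digest for illustration
}


def local_digest(image: str) -> str:
    # Ask the local Docker daemon which digest the image was pulled by.
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{json .RepoDigests}}", image],
        capture_output=True, text=True, check=True,
    )
    digests = json.loads(out.stdout)  # e.g. ["registry.example.com/myapp-base@sha256:..."]
    return digests[0].split("@", 1)[1] if digests else ""


if __name__ == "__main__":
    failures = []
    for image, approved in APPROVED_DIGESTS.items():
        actual = local_digest(image)
        if actual != approved:
            failures.append(f"{image}: expected {approved}, got {actual or 'no digest'}")
    if failures:
        # A deviation from the bill of materials is an alert to investigate, not a warning to ignore.
        sys.exit("unapproved image content detected:\n" + "\n".join(failures))
    print("all images match the bill of materials")
```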

4. Scan and Remediate
There are myriad ways in which container environments can be compromised or wind up vulnerable to attack. Most often we think of vulnerabilities in the software inside a container image. While you should pay attention to those – scan and update/patch – there are also configuration-based issues that are less often caught and addressed, even though many are preventable with the right settings. Your pipeline should include tools capable of detecting compromise or alerting on insecure configurations.

Aqua Security offers tools that can assist in scanning and evaluating containers and configurations alike. Kube-bench and kube-hunter are great resources for identifying common (but critical) configuration mistakes in Kubernetes environments.
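
To make “configuration mistakes” concrete, here is a tiny sketch of the kind of check such tools perform: scanning a Kubernetes manifest for containers that run privileged or are not forced to run as non-root. It is an illustration of the idea, not a substitute for purpose-built scanners; the manifest path is a placeholder.

```python
import sys

import yaml  # pip install pyyaml

MANIFEST = "deployment.yaml"  # placeholder path for illustration


def risky_containers(manifest_path: str) -> list:
    """Flag containers with obviously insecure settings in a Deployment manifest."""
    findings = []
    with open(manifest_path) as fh:
        for doc in yaml.safe_load_all(fh):
            if not doc or doc.get("kind") != "Deployment":
                continue
            pod_spec = doc["spec"]["template"]["spec"]
            for container in pod_spec.get("containers", []):
                sc = container.get("securityContext") or {}
                if sc.get("privileged"):
                    findings.append(f"{container['name']}: runs privileged")
                if not sc.get("runAsNonRoot"):
                    findings.append(f"{container['name']}: not forced to run as non-root")
    return findings


if __name__ == "__main__":
    issues = risky_containers(MANIFEST)
    for issue in issues:
        print("insecure configuration:", issue)
    if issues:
        sys.exit(1)  # fail the pipeline step so the configuration gets fixed
```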
 

All the scans in the world won’t help prevent compromise or breach if you don’t act on the findings. Tripwire’s 2019 State of Container Security found that nearly one in five (17%) organizations were aware of vulnerabilities in container images – but deployed them anyway. Given that, it’s probably not a surprise that the survey also found more than half (60%) of organizations had experienced a container-related security incident in the previous twelve months.

This point cannot be made often enough or loudly enough: if you integrate security scans into your pipeline (and you should), it is important to follow up on the findings. Running a scan does nothing to improve security if you don’t remediate.

We repeat: running a scan does nothing to improve security if you don’t remediate.
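
In pipeline terms, “remediate” means findings block the build until they are addressed. The sketch below assumes a scanner that writes its findings to a JSON report with a severity field; the file name and report format are placeholders to adapt to whatever scanner you use.

```python
import json
import sys

REPORT_FILE = "scan-report.json"            # placeholder report path
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}  # severities that stop the pipeline


def gate_on_findings(report_path: str) -> None:
    # Assumed report shape: {"findings": [{"id": ..., "severity": ...}, ...]}
    with open(report_path) as fh:
        findings = json.load(fh).get("findings", [])

    blocking = [f for f in findings if f.get("severity", "").upper() in BLOCKING_SEVERITIES]
    for finding in blocking:
        print(f"blocking finding: {finding.get('id', 'unknown')} ({finding['severity']})")

    if blocking:
        # Fail the pipeline so the image cannot be deployed until it is remediated.
        sys.exit(f"{len(blocking)} blocking findings; remediate before deploying")
    print("no blocking findings; continuing pipeline")


if __name__ == "__main__":
    gate_on_findings(REPORT_FILE)
```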

Pipeline security is a multi-faceted and often challenging problem. The pipeline should be treated like any other set of applications because ultimately, that’s what it is, albeit an operational one. Don’t ignore security of the pipeline while you’re integrating security into the pipeline.

Read the next blog in the series: 
Container Security Basics: Orchestration