Background
If you haven’t heard of DevOps, that’s alright; we’re going to fix that. DevOps has gained a lot of traction for its ability to rapidly produce results that matter. The purpose of DevOps is to integrate software development and IT operations practices, which historically operated as separate teams. By integrating the two, you get a much shorter development lifecycle, often with a better-quality product. If you are familiar with software development methodologies, DevOps builds on many of the same principles as agile.
A Continuous Integration/Continuous Delivery (CI/CD) pipeline is a core concept in DevOps. The idea is to have a workflow that is as efficient and repeatable as possible. This is accomplished with the various tools available today, including, but not limited to, Chef, Ansible, GitHub, Docker, Jenkins, and Jira. Some are hosted services that are ready to use online, while others require installation. These tools can be integrated with one another to provide the automation needed for a CI/CD pipeline.
Continuous Integration
Continuous Integration has multiple phases, including the build and test phases, both of which are crucial to the success of the CI/CD pipeline. In the build phase, developers merge their code changes, often into a central repository such as GitHub or Bitbucket, where the code is compiled and prepared for the next phase: the test phase. The test phase ensures that the code you just updated still behaves the way it did when the previous tests were run. These tests can vary in complexity, but are often kept simple; adding too many components to a single test makes it harder to debug when something goes wrong. Continuous Integration ensures not only that code is updated frequently, but also that it is tested just as frequently.
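To make the "keep tests simple" point concrete, here is a minimal sketch of the kind of focused check a CI test phase might run. The function under test, parse_version, is hypothetical and included only so the example is self-contained.

```python
# A small, focused unit test of the kind a CI test phase would run.
# parse_version is a hypothetical function used only for illustration.

def parse_version(tag: str) -> tuple:
    """Parse a 'v1.2.3'-style release tag into a (major, minor, patch) tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def test_parse_version():
    # One behavior per test keeps failures easy to diagnose.
    assert parse_version("v1.2.3") == (1, 2, 3)

def test_parse_version_without_prefix():
    assert parse_version("10.0.1") == (10, 0, 1)

if __name__ == "__main__":
    test_parse_version()
    test_parse_version_without_prefix()
    print("all tests passed")
```

Because each test asserts exactly one behavior, a red build points straight at the change that broke it.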
Continuous Delivery
The "CD" part of the CI/CD pipeline can refer to either Continuous Delivery or Continuous Deployment. For our purposes, we will be discussing Continuous Delivery, as I feel it is a better option than Continuous Deployment. Continuous Delivery is the next step after Continuous Integration has finished without any issues: it automates the deployment of code that has passed all tests. That deployment remains at your discretion, however, as a manual approval step is required to trigger it. With Continuous Deployment, by contrast, code is deployed automatically as soon as it has been tested successfully.
Security Concerns
As with anything, the CI/CD pipeline has its flaws, some of which are security risks. Some risks are higher than others, but when you’re talking about code used by hundreds or even thousands of people, a small risk can turn into a huge problem. Over time, additional practices have been adopted and tools have become more robust, allowing security risks to be locked down further. Even so, common security risks are overlooked or left unaddressed because “that’s the way we’ve always done things.” The five security concerns we will discuss are container hardening, managing credentials, access controls and auditing, hardening the host system, and monitoring.
If you have ever worked with virtualization, then you have probably also worked with containers. Containers are a lightweight alternative to a full virtual machine, requiring far fewer resources and much less software. Containers are already fairly secure because of how little software they need to function and how network communication is mediated between the host and the container. That said, there are still security risks in containers that can be addressed. Containers are very useful in the CI/CD pipeline because they can be provisioned and deprovisioned rapidly for testing, all while providing a consistent environment. Although containers generally don’t have a long lifespan in the CI/CD pipeline, there are a few things you can do to enhance their security, including, but not limited to:
Container hardening – Only install what is necessary and remove things that aren’t
Image scanning – Scan all container images before conducting any tests, and possibly after as well
Access control – Limit who and what systems have access to the container images
Network inspection – Capture traffic going to and from all containers to assess any anomalies
Event logging – Log any and all events, such as authentication, errors, and possibly commands
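The first item, container hardening, can be partly automated. Below is a minimal sketch, not a complete linter, that flags two common Dockerfile issues: an unpinned base image and a container that runs as root. The sample Dockerfile is invented for illustration.

```python
# Sketch of a pre-build hardening check for a Dockerfile. It flags
# unpinned base images (":latest" or no tag) and the absence of a
# USER instruction (meaning the container runs as root).

def audit_dockerfile(text: str) -> list:
    findings = []
    has_user = False
    for line in text.splitlines():
        line = line.strip()
        if line.upper().startswith("FROM"):
            image = line.split()[1]
            if ":" not in image or image.endswith(":latest"):
                findings.append(f"unpinned base image: {image}")
        elif line.upper().startswith("USER"):
            has_user = True
    if not has_user:
        findings.append("no USER instruction; container will run as root")
    return findings

# Example Dockerfile with both problems present:
dockerfile = """\
FROM python:latest
RUN pip install flask
"""
print(audit_dockerfile(dockerfile))
```

A check like this can run as an early pipeline stage, failing the build before an image is ever built or scanned.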
Passwords are probably one of the hardest things to secure while keeping your CI/CD pipeline automated; implementing security may mean sacrificing some of the convenience of a non-secured pipeline. Passwords are often needed to access subsystems the application uses, such as a database, and they have a habit of turning up in application code or plain-text files on a system, causing havoc should anyone find them. A better approach is to use some sort of credential manager, which removes the need to store passwords locally and will likely hash and/or encrypt them. You can also implement password rotation so that passwords are not reused.
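As a rough illustration of the idea, the sketch below resolves a database password at runtime rather than hardcoding it. The environment lookup is a stand-in: in a real pipeline, get_secret would call a credential manager’s API (Vault, AWS Secrets Manager, and similar tools exist for this), so the value never lands in source control or on disk.

```python
# Sketch: resolve secrets at runtime instead of hardcoding them.
# The environment variable is a stand-in for a credential manager call.

import os

def get_secret(name: str) -> str:
    """Fetch a secret injected by the CI runner for the job's lifetime.

    In production this would query a credential manager; the environment
    lookup here keeps the example self-contained.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not provisioned")
    return value

# Normally the CI runner sets this; we set it here so the sketch runs.
os.environ["DB_PASSWORD"] = "example-only"
password = get_secret("DB_PASSWORD")
```

The key property is that the secret exists only for the duration of the job, which also makes rotation painless: the pipeline simply picks up the new value on its next run.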
Excessive access is another security risk that can poke a hole in your CI/CD pipeline. You not only have to think about access from user to tool, but also from tool to tool. Auditing can greatly aid in uncovering access controls that are too permissive, helping you understand who is authenticating and how much access they are granted. For instance, anonymous access should never be allowed, as it lets anyone authenticate and reach data you may not want exposed. Not every tool offers anonymous access, but some do, and those should be evaluated carefully.
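An audit pass over authentication events can be quite small. The sketch below counts who is authenticating and flags any anonymous logins; the "timestamp user action" log format and the sample entries are assumptions made for illustration.

```python
# Sketch: audit authentication log lines for anonymous access.
# The "timestamp user action" format is an assumed, simplified schema.

from collections import Counter

def audit_auth_log(lines):
    users = Counter()
    anonymous = []
    for line in lines:
        timestamp, user, action = line.split(maxsplit=2)
        users[user] += 1
        if user.lower() == "anonymous":
            anonymous.append(line)
    return users, anonymous

log = [
    "2024-01-05T10:01:00Z alice login",
    "2024-01-05T10:02:30Z anonymous read artifact-repo",
    "2024-01-05T10:04:12Z bob login",
]
users, anonymous = audit_auth_log(log)
print(users)      # who is authenticating, and how often
print(anonymous)  # events that should never appear at all
```

Run regularly, a report like this surfaces both the anonymous access that should be disabled outright and the accounts whose activity deserves a closer look.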
Hardening the host system is a long-established practice, but it still falls short in some instances. Patching is probably one of the easiest ways to harden the host system: it ensures known vulnerabilities are remediated so you don’t have to worry about them in your pipeline. The specifics will obviously vary between operating systems and business requirements, and each tool or operating system will likely have its own documentation on hardening best practices worth consulting.
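Many hardening baselines boil down to checking configuration values against a policy. As a minimal sketch, the check below compares two OpenSSH server settings against commonly recommended values; the policy dictionary and sample config are assumptions, and a real baseline would cover far more settings.

```python
# Sketch: verify a couple of host-hardening settings in an sshd config
# against a policy. The expected values reflect common hardening
# guidance; adjust them to your own requirements.

EXPECTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
}

def check_sshd_config(text: str) -> list:
    findings = []
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        settings[key.lower()] = value.strip().lower()
    for key, expected in EXPECTED.items():
        actual = settings.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

config = "PermitRootLogin yes\nPasswordAuthentication no\n"
print(check_sshd_config(config))
```

Checks like this pair naturally with configuration-management tools such as Ansible or Chef, which can both detect and remediate drift from the baseline.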
Lastly, monitoring can be applied to the entire CI/CD pipeline to check for anomalies and alert on inconsistencies. Without monitoring, you have no real-time insight into the data your pipeline produces, which limits your visibility, and security risks can arise from that lack of visibility because there is no way to spot anomalies. Splunk is one tool that can assist here, providing real-time feedback on various stages of the application lifecycle. With the proper searches and dashboards, a monitoring tool can be a huge security benefit.
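To show the shape of such an alert, here is a minimal threshold check over pipeline events, the kind of logic a monitoring tool like Splunk would express as a scheduled search. The event format, field names, and threshold are assumptions for illustration.

```python
# Sketch: alert when one source produces an anomalous number of failed
# authentication events. Event schema and threshold are assumed values.

from collections import Counter

THRESHOLD = 3  # failed authentications per source before alerting

def detect_bursts(events):
    """Return sources whose failed-auth count meets the alert threshold."""
    failures = Counter(
        event["source"] for event in events if event["type"] == "auth_failure"
    )
    return {src: n for src, n in failures.items() if n >= THRESHOLD}

events = [
    {"type": "auth_failure", "source": "10.0.0.7"},
    {"type": "auth_failure", "source": "10.0.0.7"},
    {"type": "auth_failure", "source": "10.0.0.7"},
    {"type": "auth_success", "source": "10.0.0.8"},
]
print(detect_bursts(events))  # {'10.0.0.7': 3}
```

The value of wiring this into the pipeline itself is turnaround time: an alert fires while the suspicious activity is happening, not when someone reads the logs next week.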
Wrapping Things Up
Security holes in your CI/CD pipeline can lead to a data leak, financial loss, and more, so it is best to think about security from the start. That may not be possible if your pipeline is already built and just needs some hardening; no matter the case, there are always things you can do to make the process more secure. We’ve looked at only a few examples here, but there is much more to be said about security in the CI/CD pipeline. With so many tools and integrations, the possibilities for hardening are nearly endless.
Still have questions, or want to discuss DevOps and automation? Set up a meeting with us to see how we can help.