Today's market expects software to be delivered at an ever-increasing pace. To make this possible, developers are increasingly adopting practices such as CI/CD, the scenario addressed below. The first concept is Continuous Integration (CI): an effort by teams to build a structure that allows software to be built and tested automatically. The second concept we need to be clear about is Continuous Delivery (CD), where we look for an automated process of building, configuring, and packaging the code to be delivered. These two concepts rely on practices and tools used together to give development teams the advantages and agility that today's development processes require. In this article, we will discuss implementing security in this pipeline and show that the correct and secure use of CI/CD-based structures can deliver much more than process agility: it can deliver more secure code.

Understanding the pipeline and ensuring pipeline security

The CI/CD pipeline is one of the key points in a development process and, as such, must be protected from compromise, e.g. by preventing the distribution of altered code and/or malicious code fragments through a trusted structure. Once we realize that all of our code passes through this structure, we see that securing it is fundamental for the entire process to be trustworthy and actually deliver the code we expect. But pay attention: this is not necessarily a simple process. In general, we need to think about protecting our process, or pipeline, on three distinct fronts that work together to secure the pipeline. First, we need to think about protecting the pipeline itself.

Access control

At this first point, focused on protecting the pipeline itself, we need to understand which basic security practices should be addressed.
We can start with tighter control over who may or may not access the pipeline. By strictly controlling who can push code changes to the repositories used as our storage base, we ensure the first layer of protection for the pipeline. We cannot set aside other points, such as the security of our developers' connections, or even the security of the equipment the team uses. Everything needs to be evaluated as a possible point of compromise. A methodology widely used in code design also fits here: applying Threat Modeling concepts to identify which points in the process are most fragile is a good alternative. Looking from another perspective, we need to ensure the safety of what passes through the pipeline. At this point, we need to guarantee that whatever is sent, stored, and moved within the CI/CD pipeline is safe and constantly reviewed. This goal is achieved when our process includes controls that validate the code through static analysis tools and through code review, the latter preferably done manually. The vulnerabilities found can then be fixed. The third and final point is to ensure security through an automated process, which helps guarantee that security happens with as little intervention as possible. Ensuring security through automation is also an attempt to take the human factor, often the source of a failure, out of the equation. Let's try to understand each of these points in depth. Let's start by explaining a little about what a pipeline is. In general, a pipeline is a structured process that carries out a set of actions and tasks. For us, a pipeline is a structured process that ensures that software initially created on a developer's machine can be delivered to production in an automated way.
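The flow just described can be reduced to a minimal model. The sketch below is purely illustrative: the stage names follow the text, but the single check and the package format are assumptions, not tied to any real CI/CD tool.

```python
# Minimal sketch of the flow described above: code leaves the developer's
# machine, passes through CI checks, and is packaged by CD for production.
# The check and the package format here are hypothetical.

def continuous_integration(code: str) -> str:
    """CI stage: run automated checks on freshly pushed code."""
    if "eval(" in code:  # toy static check standing in for real test suites
        raise ValueError("CI failed: dangerous construct found")
    return code

def continuous_delivery(code: str) -> dict:
    """CD stage: build, configure, and package the validated code."""
    return {"artifact": code, "version": "1.0.0"}

def pipeline(code: str) -> dict:
    """Code only reaches delivery after passing integration."""
    return continuous_delivery(continuous_integration(code))
```

Calling `pipeline("print('hello')")` produces the package, while code that fails the CI check never reaches the delivery stage.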
Using simple concepts, we can represent what a pipeline really is in the image below. In general, the code is initially written on the developer's machine; after coding, it is sent to a code repository. Upon reaching the repository, we have the first concept, Continuous Integration: a set of tools and actions performed on the code to ensure the first layer of protection by running the first code tests. The code is then sent to a second stage, Continuous Delivery, responsible for "packaging" the software through other tools before it is put into production. Of course, there is more to this process; the intention here is just to give an overview of a CI/CD process and how a pipeline works. With that understood, we can move on to how to get more security for our pipeline.

Secure by Design

When we think about software security, there is a source we always turn to: OWASP has a concept of secure by design, and its document lists 10 principles that must be followed to deliver secure code. In our case, we can borrow these concepts and use them as starting points to make our pipeline more secure. Of course we won't be able to use them all, but we can take some of them for our purpose. Below are the 4 concepts we will borrow, and over the course of the article we will apply each of them to make our pipeline safer. Imagine a development pipeline like the one in the image below: in a default configuration, we can see numerous points where a user has permissions that are wrong for the process we want to protect. We therefore need to understand the best flow and how it can be evaluated so that our process follows the concept of least privilege.
If we think about reducing the attack surface and look at the image of our pipeline, we can see that there is no direct reason for the developer to have write access to the image repository or even the cluster; in this example we work with containers as the final result. With this simple change, we can also guarantee that everyone with access to the pipeline has a well-defined separation of duties. This gives us a little more security and a process more like the one in the image below. Even though our process is much better now, we must remember that our pipeline is still a big target, because it holds access to code, credentials, and other information of great interest to attackers. In the second image we can see a considerable reduction in permissions, and the red dotted lines now mark the logical boundaries between separated duties. We still need to think about how to improve the security of the pipeline further. At this point, we can adopt the concept of defense in depth, creating layers of security that further limit the access to, or even the use of, a given asset. In the case of our pipeline, we can imagine the authentication process being performed through a two-factor authentication (2FA) mechanism, where even after entering the correct password, the developer must present a second factor, perhaps a token, for the system to validate access. With these simple actions, we can maintain and increase the security of the development pipeline. Of course, there are other concepts and other ways to ensure pipeline security; this is just one example of how we can borrow from other security areas the concepts needed to secure a process.
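The separation of duties above can be expressed as a simple deny-by-default permission matrix. The roles and resources below are hypothetical stand-ins for the actors in the images, not a real access-control format.

```python
# Least privilege in miniature: each role holds only the permissions its
# duties require. Roles and resources are illustrative.

PERMISSIONS = {
    "developer":  {"code-repo": {"read", "write"}},
    "ci-service": {"code-repo": {"read"}, "image-repo": {"write"}},
    "cd-service": {"image-repo": {"read"}, "cluster": {"write"}},
}

def allowed(role: str, resource: str, action: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())
```

With this matrix, the developer keeps write access to the code repository, but any attempt by that role to write to the image repository or the cluster is refused by default.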
Securing the pipeline

Pipeline security is a matter of implementing good security rules, and we will try to address each of the practices we can use for it. Below, we present the security points in the order most commonly found in a CI/CD process, though there may be some variation depending on your own process.

The developer

As a first step, let's try to ensure security at the very start of the process. Our evaluation point will be the developer. Remember that we're not going to talk here about how the developer should log on to the system or the machine; that has been covered before. At this point, we want to start dealing with software security, and the developer should be the first point of contact. For this, imagine that security is considered directly in the working tool: we can use IDE (integrated development environment) plugins so that the code already goes through a validation process as it is written. Most current IDEs and verification tools can attach a plugin that alerts the developer to possible flaws or coding best practices, which removes much of the subjectivity of the developer's security understanding from the equation. Even if some developers don't see it as necessary, these plugins can serve as the first line of defense for the software, and they help save time in a process already required to be very agile.

Code review

Keeping code secure is a process that should be embraced by everyone, and this includes, where possible, reviewing code in pairs. After all, this increases the chances of finding vulnerabilities and also increases the developers' knowledge of code security principles.
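At their core, many of the automated checks such plugins run are pattern-based. The sketch below is a toy version of one such check; the rule and the pattern are assumptions, far simpler than any real analyzer, and only show where this kind of validation fits.

```python
import re

# Toy static check in the spirit of the IDE plugins discussed above:
# flag lines that look like hardcoded credentials.

SECRET_PATTERN = re.compile(
    r"(password|passwd|api_key|secret)\s*=\s*['\"][^'\"]+['\"]",
    re.IGNORECASE,
)

def find_hardcoded_secrets(source: str) -> list:
    """Return the line numbers where a credential-looking assignment appears."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if SECRET_PATTERN.search(line)
    ]
```

For example, `find_hardcoded_secrets("host = 'db'\npassword = 'hunter2'")` flags line 2, the kind of early warning a plugin would surface while the developer is still typing.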
A good source of research is a set of OWASP documents called the "Cheat Sheet Series". These documents help identify good coding practices and thus improve the security of the code. A good example of what to expect from this set is the document on authentication; it is worth reading to realize how much there is to evaluate in a seemingly simple process.

Static and dynamic code testing

Within a CI/CD process, we need to ensure that code security is considered at every moment. One of the most discussed points around testing is scalability. Here, we can say that static and dynamic testing tools give the process the agility needed today. However, these tools should undergo validation to ensure their results are as reliable as possible. We understand the importance of the tools, but we do not believe they are the "silver bullets" of code security. The tools should be seen as support for the manual code review process, which is much more likely to find accurate results. This does not mean the tools are inaccurate or inefficient, but subtleties in some flaws will only be perceived by an experienced professional focused on evaluating every angle of a possible error.

Unit and functional testing

The way we carry out our tests today does not deliver much to the process; we need to improve it. Developers or testers should assess whether their test scripts are prepared to test at least the most common vulnerabilities, which can be based on the OWASP Top Ten 2017. Just like static and dynamic tests, unit tests need to be designed to execute quickly, leaving longer tests to run at previously planned times and on a regular basis.
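A quick-running unit test can also assert what the code must refuse, not only what it must accept. In this sketch the `login` function and its credential store are stand-ins, and the checks mirror what a framework such as pytest or unittest would discover and run automatically.

```python
# Unit tests that also cover negative cases: alongside asserting what login
# should accept, we assert what it must NOT allow. `login` is a stand-in
# for a real authentication routine.

def login(username: str, password: str) -> bool:
    users = {"alice": "s3cret"}  # hypothetical credential store
    return users.get(username) == password

def test_accepts_valid_credentials():
    assert login("alice", "s3cret")

def test_rejects_wrong_password():
    assert not login("alice", "wrong")

def test_rejects_unknown_user():
    assert not login("mallory", "s3cret")

def test_rejects_empty_credentials():
    assert not login("", "")

# Run the checks directly; a test framework would discover them instead.
for test in (test_accepts_valid_credentials, test_rejects_wrong_password,
             test_rejects_unknown_user, test_rejects_empty_credentials):
    test()
```

Tests like these stay fast enough to run on every commit, keeping the heavier dynamic scans for the planned, regular runs mentioned above.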
Today there are several tools that can perform these tasks, and it is up to those responsible to configure and use them correctly. We cannot forget functional tests, which are traditionally performed to identify and validate what applications should do: they are written to check whether what is required is what the software does. However, we must also observe that there are things the software should not do, and we need to test those too; these are called the negative requirements of the software. So when user use cases are created, we should also build the reverse use cases: results that must not appear as the outcome of a software action. A possible example can be seen in this OWASP document. These tests can be automated by a series of tools that can be integrated into our CI/CD process, which guarantees agility and automation. Traditional testing emphasizes what a program should do, and functional test cases are usually written in that format (positive requirements). In security testing, however, negative requirements, which describe what a system must not allow, matter much more. The agile concept of user stories, describing what users can do, has an inverse in the form of bad user stories, which are useful for formulating security test cases. With this, a development process is better prepared to guarantee the security of your software. However, this is not a static and finite prescription; it should be understood as a starting point, not the final map of process security.

Concluding

We know that integrating security into automated DevOps processes facilitates and speeds up the entire development process and, consequently, the delivery of secure software.
We realize that the DevOps model is constantly and rapidly evolving, with new products and tools arriving all the time. We hope to have sparked some ideas on how to improve the security of your pipeline, whether or not you use the newest tools, but above all by understanding the concepts behind each of the security points presented here.