Peter Drucker once said, “That which is not measured, is not improved.” He was right: what we cannot measure, we cannot improve, nor can we even tell whether it is working.
When we apply that same thought to secure development processes, we realize that few companies really understand what is going on in theirs. At most, they have a sense of the number of vulnerabilities in their code, but that is the symptom, not the cause.
So, within this whole scenario, it is inevitable that we run into the question: how do we measure things in a secure development process?
In this article, we will highlight some points that we believe can reveal possible causes of failure within the process. This article is not meant to become a checklist of points that must be used; we simply want to start a discussion and show that we need to talk about this topic.
What is the importance of metrics?
It is natural that, when we see a process or activity running and apparently working, we do not worry much about its data or numbers.
However, if we stop to reflect a little, we will realize that these numbers give us an opportunity to improve. We may also discover that an activity we imagined to be positive is, in fact, having no effect on the process as a whole.
In application security we have two maturity models, OWASP SAMM and BSIMM, that look at this point and make clear the importance of measuring processes and their results in order to understand whether they have been effective within the whole.
Metrics help us better understand what is happening with our process, how it works, and whether it is evolving.
Metrics that clearly show what has been happening with the process and its products can directly support decision making in several ways.
AppSec metrics in the decision-making process
Metrics can also help security professionals inform and influence decision making in the company by providing tangible benchmarks that anchor the discussion.
Besides, when we back up requests for new allocations or investments with numbers that actually show the status of the process, those arguments become much stronger.
Even if you are convinced this is important and are already taking some measurements in your process, measuring for the sake of measuring, without a goal for each metric, does not help either; on the contrary, it can lead to ineffective decisions.
We need to build into our process a set of metrics that make sense for our goals and that effectively provide the information our decisions depend on.
But what would these metrics be, and how would we choose them? There is no fixed list, and there should not be: processes take on many characteristics of the companies that use them, so a fixed model of metrics would not be prudent. We can, however, work on concepts while looking at the steps of a secure development process.
Where to observe these metrics?
First, before we even decide which metrics to work with in our process, we need to understand the basic flow of working with metrics.
As shown in the image above, we must first understand, discover, and define what our metrics will be. This matters for reaching our goal, because there is no point in having metrics that will not give us the right answers.
A second phase, and one of the most important, is data acquisition. A metric is only as good as the quality of its data: with wrong or poor data, we get a poor metric that will not be effective.
Finally, after identifying and choosing our metrics and acquiring good data, we need to understand and analyze that data, generating information that can be used clearly and positively in the process.
If we can understand and execute these three phases, we can start generating metrics for our process.
In addition, we can get help from an OWASP document which, even though it was published in 2013, still raises good questions and can help us identify which metrics to collect and where.
In this document, we can see that, in OWASP’s understanding, Application Security, Application Security Risk, and the SDLC itself are important areas to observe.
As we can see, there are several points to observe. In this article, however, the focus is on SDLC process metrics. Even focusing on a software development process, we have to remember that this process is made up of a set of structures that support it: the infrastructure, the software itself, and the steps of the development process. We need to think about all of these points.
But from the metrics point of view, how can they help us? Broadly, we can list several areas where metrics matter: decision making, quality assurance processes that seek to improve code quality, or more technical concerns, where metrics can support monitoring.
Metrics in the SDLC process
The following sections list some controls that can be put in place for each step of an SDLC, but remember that these metrics must be aligned with their purpose. The ones we suggest may or may not fit your model.
The training phase, introduced by Microsoft’s Security Development Lifecycle (SDL), has as its initial objective building a team that is constantly up to date and able to develop its skills, keeping applications aligned with best practices.
So it is not strange to imagine metrics related to knowledge and its acquisition.
As a baseline, we can have metrics aimed at understanding and mapping the training needs of the process participants across their various domains.
In a development process, whenever we talk about training, one of the first thoughts is of the developer. However, we must remember that the development process involves many more professionals than just developers. Building a skill map can be a good start, and from it we can derive some points.
So, within this first point, we have to think at least about training for developers, architects, and the testing and incident response teams.
We can then suggest metrics such as:
- Percentage of people in the process who have been trained.
- Time elapsed between one training session and the next.
- Number of training sessions per year.
Remember that metrics should support the decision-making process. If you have a structure based on data analysis, you can then cross-reference, for example, the number of people trained with the number of vulnerabilities, or check whether a trained professional produces fewer vulnerabilities than one who has not been trained yet. There are many possibilities once we start analyzing data sets together.
What we have to keep in mind is that we need to define a goal for these and other metrics, and they should tell us something genuinely relevant about the process. A metric that is poorly designed and makes no sense just becomes another piece of data that consumes your time.
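As a minimal sketch of the training metrics above, assuming a simple in-memory team roster (the field names `trained` and `vulns_introduced` are illustrative, not taken from any specific tool):

```python
# Illustrative roster: training status and vulnerabilities attributed
# to each person. In practice this data would come from your LMS and
# vulnerability tracker.
team = [
    {"name": "ana",   "trained": True,  "vulns_introduced": 2},
    {"name": "bruno", "trained": True,  "vulns_introduced": 1},
    {"name": "carla", "trained": False, "vulns_introduced": 5},
]

def pct_trained(team):
    """Percentage of people in the process who have been trained."""
    return 100.0 * sum(p["trained"] for p in team) / len(team)

def avg_vulns(team, trained):
    """Average vulnerabilities introduced, split by training status."""
    group = [p["vulns_introduced"] for p in team if p["trained"] == trained]
    return sum(group) / len(group)

print(f"trained: {pct_trained(team):.0f}%")
print(f"avg vulns (trained):   {avg_vulns(team, True):.1f}")
print(f"avg vulns (untrained): {avg_vulns(team, False):.1f}")
```

The same split lets you cross-reference training with vulnerability counts, as suggested above, before investing in a more elaborate analytics setup.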
In the requirements phase, we need to define all the requirements the application will need, among them the security requirements.
We know that, even in more mature organizations, 100% coverage of security requirements is very hard to achieve, but we have to try to get as many of our applications covered by them as possible.
This way, we gain a better view of our security coverage and of what still needs work as we raise awareness of secure methodologies in development processes.
A relevant metric for this phase could be:
- Percentage of applications with defined security requirements.
Likewise, we can correlate these values with other data points, for example, whether applications strongly grounded in security requirements have more or fewer identified vulnerabilities. This can show us the importance of paying more attention to the requirements phase.
Correlating data will be important in several phases of the process; notably, it can be a strong ally in demonstrating the value of phases that are usually given less attention within the process.
The design phase focuses on processes and activities related to how the organization sets goals and creates software in its development projects. In general, this includes requirements gathering, high-level architecture specification, and the detailed design of the solution.
Within the design phase, we need to understand and define the macro functions the software should address. In addition, we also need to define which security requirements will be used in the solution.
This is a very important phase, because it is where we start thinking about our security structure and how we can protect our software. During design, the solution should be confronted with threat modeling, so that threats that could put the application at risk can be properly identified and addressed.
Metrics that can be considered in this phase:
- How many applications have a defined risk profile.
- How many applications have had threat modeling performed.
- How many applications have security requirements mapped to their functional requirements.
- How many applications have had their design validated against a best-practice model.
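The “how many applications” counts above reduce to simple tallies over an application inventory. A sketch, assuming an illustrative inventory whose field names (`risk_profile`, `threat_model`, `design_reviewed`) are ours, not from any particular tool:

```python
# Illustrative application inventory with design-phase control flags.
portfolio = [
    {"app": "billing",  "risk_profile": True,  "threat_model": True,  "design_reviewed": True},
    {"app": "frontend", "risk_profile": True,  "threat_model": False, "design_reviewed": False},
    {"app": "reports",  "risk_profile": False, "threat_model": False, "design_reviewed": False},
]

def count_with(portfolio, flag):
    """Number of applications for which the given control is in place."""
    return sum(app[flag] for app in portfolio)

for flag in ("risk_profile", "threat_model", "design_reviewed"):
    print(f"{flag}: {count_with(portfolio, flag)}/{len(portfolio)}")
```

Keeping the raw flags per application (rather than only the totals) is what later allows the correlations discussed earlier, such as comparing vulnerability counts between applications with and without threat modeling.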
As we have said a few times in this article, these metrics are not definitive; they are here to stimulate thinking about the need for metrics, and we believe many of them can and will be adapted to the needs of each reality.
The implementation phase covers the activities related to building and deploying software components, and how we observe the problems that arise with those components.
The activities in this phase have the greatest impact on the daily work of development teams, and the main objective of this phase is to deliver software with as few defects as possible.
Initially, part of this phase focuses on pursuing full automation. In the pipeline, automated builds can include security checks performed automatically by tools such as SAST and DAST.
Then there is the large number of software dependencies in modern applications. Our goal at this point is to identify those dependencies and track their security status.
It is also important at this stage to control how code is delivered, and nothing beats automating the whole process, preventing human errors from harming it. This also helps with separating the duties of the teams involved.
Thus, in the implementation phase we also seek to identify, record, and analyze the defects that appear in delivered code.
With all these steps in mind, we can suggest some metrics for this phase:
- Percentage of your projects that are automated.
- Number of vulnerabilities found by the tools per 1000 lines of code.
- Percentage of false positives in the tool reports.
- Percentage of failures that are recurrent.
In this way, we believe one would have a good view of what may be happening while an application is being developed. In addition, when we cross-reference this data with the training data, we can verify whether the trainings being conducted are effective.
The validation phase focuses on processes and activities related to how the artifacts produced during development are verified. Quality testing processes and other code verification activities naturally belong here.
We can therefore see validation as the step that checks much of what was defined in the previous steps: the requirements, for example, and whether the threats identified during threat modeling are really absent.
When evaluating architecture, we have to keep in mind that both the application architecture and the infrastructure will be evaluated and tested.
The validation phase is thus crucial before an application goes into operation. After all, it is here that we can identify flaws that may have slipped past the previous phases and that could put the application at risk.
Thus, we can suggest some metrics:
- Number of validated controls per application;
- Percentage of application abuse cases covered;
- Number of threats identified in threat modeling;
- Percentage of the software covered by tests;
- Number of vulnerabilities confirmed in manual code review;
- Number of vulnerabilities identified in manual penetration testing.
The operations phase comprises the activities needed to ensure that confidentiality, integrity, and availability are maintained throughout the operational life of an application and all its associated data.
As the maturity of these activities increases, the organization gains greater assurance that it is resilient to events that might interrupt or reduce its operational capacity.
It is not unusual at this stage to find incidents related to the operation of the application. Often, though, these incidents could have been avoided: in many cases the logs already held good indications of behavior outside the normal pattern of the application or its infrastructure.
For operations to run as well as possible at this stage, some points must be observed: for example, hardening the configurations of the systems and structures that support the application, or adopting better practices for facing the incidents that happen in our structure.
Keeping an application running is a complicated process, full of details that need to be easily measurable, and metrics help us discover what is normal in the operation of the application.
Creating a profile of normality helps identify points and/or events that were not expected for the application. To do this, we can create some metrics to help us at this point.
We can suggest as metrics for this phase:
- Number of incidents occurring in a 3-month period.
- Time elapsed between the discovery of an incident and its resolution.
- Number of applications covered by an incident response plan.
- Number of outdated systems.
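The first two operations metrics above can be sketched as a small computation over incident records; the records and field names here are illustrative assumptions, not a real incident-tracker schema:

```python
from datetime import datetime, timedelta

# Illustrative incident log: discovery and resolution timestamps.
incidents = [
    {"discovered": datetime(2023, 6, 1, 9),  "resolved": datetime(2023, 6, 1, 17)},
    {"discovered": datetime(2023, 7, 10, 8), "resolved": datetime(2023, 7, 12, 8)},
]

def incidents_in_window(incidents, end, days=90):
    """Number of incidents discovered in the trailing window (default: ~3 months)."""
    start = end - timedelta(days=days)
    return sum(start <= i["discovered"] <= end for i in incidents)

def mean_time_to_resolve(incidents):
    """Average time elapsed between discovery and resolution."""
    deltas = [i["resolved"] - i["discovered"] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

print(incidents_in_window(incidents, datetime(2023, 8, 1)))  # 2
print(mean_time_to_resolve(incidents))                       # 1 day, 4:00:00
```

Watching the mean time to resolve over successive windows, rather than as a single number, is what turns it into the normality profile discussed above.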
Although creating metrics is a time-consuming and labor-intensive process, it is one that bears fruit for application security management. But as we said at the beginning, the metrics presented here are meant for critical thinking and evaluation, so you can assess whether they fit your reality, because there is no one-size-fits-all recipe.
Having laid out our vision of metrics within the software development process, we believe we have kicked off the discussion of this topic.
Building a metrics program is not trivial; it takes real effort and knowledge of your objectives and of the processes being developed. But the results can give your organization the certainty that the path has been mapped, and that there is now a metrics structure with which to evaluate whether the path is being followed.
Within a metrics management process, the biggest gain is seeing the whole picture and, with it, making better decisions.
We want to help in this process, and we would always like to hear how and where our readers start their planning, whenever they can share their points with us.