Application Security

Security Testing – applying it to the pipeline

In the first part of our article, we talked about the basic concepts of security testing. In this second part, we will look more directly at each of the tests that we understand to be necessary within a development pipeline.

What we have to keep in mind is that these two articles are not the final word, nor should they be followed as a ready-made test checklist; we want to bring the subject up for reflection and perhaps offer a basis for understanding.

Let’s continue with our theme.

As we mentioned in the previous article, tests should be run throughout the development flow. For that, we need to understand the importance and efficiency of each of them within our process.

As a basis, we first need to understand our process, and we recommend that you map yours out, because a visual representation makes it easier to understand the whole flow and to see where and how to place test steps in the pipeline.

Think about your tool set

If you came to this article from the first part, you saw that there is no "silver bullet" and that nothing should be delegated entirely to tools. But we also know that tools are a fundamental part of the development process.

Without the tools, it becomes more difficult to scale a testing process.

What we understand and suggest is that a testing process should strike a balance between the use of tools and validations performed by experienced analysts.

In this sense, we do not want to comment on tool A or B; we want to offer a more agnostic view. All tools have strengths and weaknesses, and these should be evaluated and understood when choosing one.

Knowing how many false positives and false negatives a tool produces is important. After all, this will shape your perception of the degree of trust that should be placed in each tool.

Another important aspect of tools is how they integrate and communicate with each other. At some point you will have several of them generating data and information, and your testing process must be structured so that this output is visualized and analyzed correctly, without creating additional friction.

So, in addition to your testing tools, it is important that you assess how they will generate and deliver the information.
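As a rough illustration of the point, here is a minimal sketch in Python of what normalizing findings from different tools into one shared model can look like. Everything in it is hypothetical: the field names, severity vocabularies, and sample findings are invented for the example, not taken from any specific product:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Tool-agnostic representation of a single finding."""
    tool: str
    title: str
    severity: str   # normalized to LOW / MEDIUM / HIGH
    location: str

# Hypothetical severity vocabularies used by two different tools.
SEVERITY_MAP = {
    "info": "LOW", "minor": "LOW",
    "moderate": "MEDIUM", "warning": "MEDIUM",
    "major": "HIGH", "critical": "HIGH",
}

def normalize(tool_name, raw_findings):
    """Map one tool's raw JSON findings onto the shared model."""
    return [
        Finding(
            tool=tool_name,
            title=item.get("title", "untitled"),
            severity=SEVERITY_MAP.get(item.get("severity", "").lower(), "MEDIUM"),
            location=item.get("location", "unknown"),
        )
        for item in raw_findings
    ]

# Example: two tools reporting on the same codebase in different dialects.
sast_raw = [{"title": "SQL injection", "severity": "critical", "location": "api/users.py:42"}]
dast_raw = [{"title": "Reflected XSS", "severity": "moderate", "location": "/search?q="}]

inventory = normalize("sast-tool", sast_raw) + normalize("dast-tool", dast_raw)
for finding in inventory:
    print(finding)
```

Once everything speaks the same model, deduplication, dashboards, and prioritization become possible; without it, each new tool adds another dialect for the team to read.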

An Example

At Conviso, we developed a platform called Conviso Platform. In the testing process, it can act as a large integration hub, allowing companies that already have testing tools to centralize the data and information they generate in a single platform.

However, it is worth saying more, because we can't approach Conviso Platform only as an integration platform. It is a Continuous Application Security platform, operating across all areas of the secure development process and giving participants full visibility and control over these steps.

It's worth a parenthesis here: when we think about processes, we have to think about how to measure them, their efficiency, and how we can improve. Without this, we risk missing many opportunities for improvement. So we need to create our metrics.

Here again Conviso Platform helps: the platform presents a series of indicators that provide an overview of our analyses and our customers, allowing us to maintain greater control.

Measuring is important, because only then can we improve the processes within our structure. I believe that in this short space I have managed to convey what we mean by thinking about your set of tools: there is no point in having a set of tools if we don't know how they behave or how to extract the best from each one. Remember: software is already complex enough by nature; we need to simplify the process around it.

Security Testing

Let’s talk a little about security testing and its use within the development process.

DevSecOps adopts the ideas of DevOps and adds the missing component: security.

The image below helps us understand how it all fits together. However, it is important to remember that, just like application security, DevSecOps is not a product: it is a culture, and it needs many adjustments to work properly.

Therefore, we propose that security testing within a DevSecOps framework should be a set of tests performed by both tools and experienced analysts, who will validate and help refine the vulnerabilities identified.

When they happen

Security testing runs throughout the secure development process. It is important to understand that each type of test has its moment and its importance.

We need to know our development process to further improve the security of our code, and the image below will help us to better understand this idea:

Failing to perform security testing, or failing to understand what the culture change that DevSecOps brings means in practice, can introduce a major problem into your development structure, causing your team to fail to deliver secure code.

Testing Techniques

Threat Modeling

The idea behind the threat modeling process is to try to understand all the threats your application may face, and then design a solution whose building principles already mitigate many of the real threats the application may suffer.

To talk about modeling, we have to understand that although it can be an extremely technical exercise, it is not necessary to be an application security expert to run it within the testing process.

Today we have two great methodologies that we can use to understand and create our modeling. 

The first, and one of the oldest, is the approach improved by Microsoft back in the 1990s. It made threat modeling a more structured process that could facilitate the development of application security.

Within the Microsoft model, there are 5 basic concepts that must be observed: the definition of requirements, the creation of the application diagram, the identification of threats, the mitigation of threats, and the validation of the mitigations.

The great thing about Microsoft's improved modeling is that it brings a set of concepts and methods that give us a path to follow, making modeling a structured process and leaving no room for it to be developed without a basic structure.

This is what made modeling easy. Let's look at the four steps that Microsoft generally identifies to complete a threat model.

The four steps

The first step is asset identification. Here, assets are all the valuable components of the solution, such as data flows, connections, databases, APIs, and so on.

Then, going beyond asset identification, we need to understand what the application does. To do this, identify the use cases you deem pertinent, so that you and other people on the team can use them to understand and search for possible threats to the application.

The third step is when you break the software down into its basic structures, allowing you to gain insight into how it may be affected by a vulnerability or even a threat.

The fourth step is when, based on all the data we already have, we start looking for threats that our application may face. 

At this point, we can use a component of threat modeling called STRIDE, an acronym that helps us structure the search for these threats, making the job easier and more targeted.

STRIDE tries to identify threats related to:

Spoofing: attempting to assume or impersonate another user's identity.

Tampering: attempting to modify data or code.

Repudiation: a user attempting to deny an action they performed.

Information Disclosure: the possibility of information leakage.

Denial of Service: attempting to make a service unavailable.

Elevation of Privilege: the possibility of elevating a user's privileges.

By following this acronym, it is possible to have a direction and identify the threats that can impact the application.

Having created this threat list, it is now possible to document the threats identified using the STRIDE method. In this documentation, try to address each threat by visualizing its target and placing that target at the center of the documentation.

A suggestion is to do as shown in the image below:
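Besides a diagram, the documentation can also be kept in a simple structured form. Below is a minimal sketch in Python of one way to record a component and its STRIDE threats; the component, threats, and mitigations shown are hypothetical examples, not a prescribed format:

```python
from dataclasses import dataclass, field

# The six STRIDE categories described above.
STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
)

@dataclass
class Threat:
    category: str          # must be one of the STRIDE categories
    description: str
    mitigation: str = ""   # filled in as the design evolves

    def __post_init__(self):
        if self.category not in STRIDE:
            raise ValueError(f"unknown STRIDE category: {self.category}")

@dataclass
class Component:
    """A target asset placed at the center of the documentation."""
    name: str
    threats: list = field(default_factory=list)

# Hypothetical example: a login endpoint.
login = Component("POST /login")
login.threats.append(Threat("Spoofing",
                            "Credential stuffing with leaked passwords",
                            "Rate limiting + MFA"))
login.threats.append(Threat("Information Disclosure",
                            "Verbose error message reveals valid usernames",
                            "Generic error messages"))

for t in login.threats:
    print(f"[{t.category}] {t.description} -> {t.mitigation or 'OPEN'}")
```

The format matters less than the habit: every threat gets a category, a description, and an explicit mitigation status that the whole team can read.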

Finally, we can classify the threats identified and work on their classification structure. First think about the risk each threat may bring to your application and how it may affect your business; this will make it easier to choose the set of threats to work on first.

The DREAD method

To facilitate this process, the threat modeling approach also brings us a method called DREAD. It helps put into perspective which threats are most relevant to our assessment, and it works as follows:

Each DREAD dimension is rated High (3), Medium (2), or Low (1):

D: Damage
R: Reproducibility
E: Exploitability
A: Affected Users
D: Discoverability
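To make the ranking concrete, here is a minimal worked example in Python, assuming the simple additive scoring implied by the table (each dimension rated 1 to 3, summed into a total). The threats reuse the hypothetical login example above:

```python
# Each DREAD dimension is rated High (3), Medium (2), or Low (1);
# the sum gives a simple total used to rank threats against each other.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    assert all(r in (1, 2, 3) for r in ratings), "ratings must be 1, 2 or 3"
    return sum(ratings)

# Hypothetical threats carried over from the STRIDE exercise.
threats = {
    "Credential stuffing on /login": dread_score(3, 3, 2, 3, 3),          # 14
    "Username enumeration via error message": dread_score(1, 3, 3, 2, 3), # 12
}

# Work on the highest-scoring threats first.
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:2d}  {name}")
```

The value of the method is less the absolute number than the shared, repeatable criteria: two analysts scoring the same threat should land close to each other.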

Having built this, we can now proceed with the development of our application, but with more information and a path to follow to avoid vulnerabilities that could affect it.

The model is also useful in the testing phase: it gives us a "checklist" that makes it easier to build test scripts, since we already know some vulnerabilities that should be tested for. If a threat has been mitigated correctly, it should not appear in the tests.

However, Microsoft's methodology is not the only one. Other methodologies can be used, such as OWASP's, or even OCTAVE and PASTA. There is no reason to insist on one or the other: they are all very good, and what matters is achieving a positive result for your process.

Try it and see which one best fits your process and structure.

Code Review

The code review process is understood as the manual review of code, with the goal of finding possible flaws or even vulnerabilities.

The use of code review causes great discussion between security and development teams: if a tool is already performing static analysis of the code, why is a manual review, widely understood to be non-scalable, still necessary?

Well, as we've said, tools are important and help a lot in the code security process. However, they can't be treated as solely responsible for that security.

Understanding how they work is important. Very roughly speaking, the tools search for code that matches a signature, so they miss many nuances, which can easily result in false negatives.

Manual reviews are key to finding the errors that human creativity can create, or to identifying flaws in business logic, which tools would not normally pick up.
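To make this concrete, here is a small hypothetical example in Python of a business-logic flaw that a signature-based tool typically lets pass but a reviewer should catch. The query is parameterized, so there is no injection signature to match; the bug is that ownership is never checked:

```python
import sqlite3

def get_invoice(db, current_user_id, invoice_id):
    row = db.execute(
        "SELECT owner_id, total FROM invoices WHERE id = ?", (invoice_id,)
    ).fetchone()
    # BUG: an insecure direct object reference. Any authenticated user can
    # read any invoice by guessing its id, because ownership is not enforced.
    return row

def get_invoice_reviewed(db, current_user_id, invoice_id):
    row = db.execute(
        "SELECT owner_id, total FROM invoices WHERE id = ?", (invoice_id,)
    ).fetchone()
    # The fix a manual review would demand: enforce the business rule.
    if row is None or row[0] != current_user_id:
        raise PermissionError("invoice does not belong to the current user")
    return row

# Minimal demo data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (id INTEGER, owner_id INTEGER, total REAL)")
db.execute("INSERT INTO invoices VALUES (1, 42, 99.90)")

print(get_invoice(db, current_user_id=7, invoice_id=1))   # leaks user 42's data
try:
    get_invoice_reviewed(db, current_user_id=7, invoice_id=1)
except PermissionError as exc:
    print("blocked:", exc)
```

Nothing in the vulnerable version is syntactically "dangerous"; only someone who knows the business rule can see that it is missing.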

One of the best sources for those who want to understand and improve their code review process is the OWASP Code Review Guide.

The definition from the document itself makes clear what a code review is:

“It is the process of auditing the source code of an application to verify that the proper security and logical controls are present, that they work as intended, and that they have been invoked in the right places.”

So, as we can see, it is an important process for ensuring the security of the application. To reinforce this view, here is an image showing how the OWASP Top 10 vulnerabilities are best identified; for most of them, manual testing is more efficient.

We can close this topic with another quote from the OWASP Code Review Guide:

“Organizations with a proper code review function integrated into the software development lifecycle (SDLC) produced remarkably better code from a security standpoint.”

At Conviso, we strongly believe that acting early and continuously in software development is the best way to make applications more secure.

Static and Dynamic Testing

Testing is a very important process within secure development; without it, we risk the quality and security of our code.

For this, we can make use of tools that run these tests automatically within our process.

SAST and DAST tools are important allies in this code security process; we have already written about them on our blog.

Static Testing (SAST)

Static Application Security Testing (SAST) tools, when executed, analyze the application code in several ways, such as expression matching, execution-flow analysis, and data-flow analysis.

To identify possible vulnerabilities, SAST tools use the presence or absence of specific code or data manipulation to determine whether or not vulnerabilities exist. This is one of their great advantages, since it allows a test operation to scale, but it also opens the door to a series of failures: reporting false positives or, worse, letting false negatives pass.
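A toy illustration of the signature idea, and of where false positives and false negatives come from. This is a deliberately naive sketch in Python, not how any real SAST product works:

```python
import re

# A toy "signature": flag any call to eval(), a classic dangerous sink.
SIGNATURE = re.compile(r"\beval\s*\(")

samples = {
    "true positive":  "result = eval(user_input)",
    "false positive": "# reminder: never call eval(data) on user input",
    "false negative": 'f = getattr(__builtins__, "ev" + "al"); f(user_input)',
}

for label, line in samples.items():
    flagged = bool(SIGNATURE.search(line))
    print(f"{label:14s} flagged={flagged}  | {line}")
```

The harmless comment is flagged while the obfuscated call slips through. Real tools mitigate this with the flow and data-flow analysis mentioned above, but the trade-off never disappears entirely.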

But let's not be unfair: these are very important tools when we know how they work and what they can help us with.

It is common to see SAST tools with lower false negative rates, but in contrast their false positive rates are higher than those of DAST tools, which further reinforces the need to revalidate their results.

Another point to note is the support for specific languages and language versions: when new versions of a language are released, some delay before the tools can validate them should be expected.

SAST tools usually work with two types of code:

  1. Source code analysis, which works with uncompiled code and configuration files:

This type of analysis is limited to packages where the code is "open", so it may miss some vulnerabilities that are better identified in compiled code. But this type of SAST validation is ideal for the earliest moments of the code, such as in the developer's IDE. It can also identify insecure code or quality problems, such as duplicated or unused code.

  2. Binary code analysis, which works with compiled byte or binary code:

Although it can't be used until the code is actually compiled, it can be applied when the original source code is not available, as in the case of purchased software.

This type of testing may be ideal for validating obfuscated code, but it should take other factors into account, for example: compiler optimizations, third-party libraries, and code injection (for example, through mobile application packaging).

Dynamic Testing (DAST)

In a slightly different way, DAST tools test for vulnerabilities by simulating interaction with the application. This happens at runtime, and the goal is to identify whether any vulnerability can be successfully exploited.

In DAST tools we find some of the features of SAST, used to improve the effectiveness of the results; one example is the testing of JavaScript dependencies discovered in the code.

Unlike what we saw with SAST, DAST tools tend to have fewer false positives, but their false negative rates are higher. This is also worrying and must be kept in mind when using these tools.

DAST tools can be divided into two categories:

  1. API and web application testing

In these cases, the tool combines signature-based checks with patterns for known classes of exploitation, such as XSS or injection attacks.

These tools are generally designed to identify web applications within network structures and then run tests strongly oriented toward web applications. Their main functionality is to search for the most common vulnerabilities, precisely because they depend on a signature validation process.
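As a rough sketch of what such a signature-based check looks like, here is a naive reflected-XSS probe in Python using the requests library. The target URL and parameter are hypothetical, real DAST tools are far more sophisticated, and probes like this should only ever be run against systems you are authorized to test:

```python
import requests

# Hypothetical target; replace with an application you are allowed to test.
TARGET = "http://localhost:8080/search"
PROBE = "<script>alert('xss-probe')</script>"

def check_reflected_xss(url, param):
    """Send a marker payload and see whether it comes back unencoded."""
    resp = requests.get(url, params={param: PROBE}, timeout=10)
    # Naive signature: the raw payload echoed in the response body suggests
    # missing output encoding, i.e. a candidate reflected XSS.
    return PROBE in resp.text

if __name__ == "__main__":
    if check_reflected_xss(TARGET, "q"):
        print("possible reflected XSS: payload echoed without encoding")
    else:
        print("payload not reflected verbatim")
```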

  2. Dynamic testing and “fuzzing” for non-web applications

This type of test manipulates network protocols or other data sources, such as files, to look for exploitable vulnerabilities in the application code that handles that data or those protocols. These tools may or may not be specific to certain protocols, such as the web, and generally aim to provide random, standards-based testing to maximize coverage.
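A minimal sketch of that mutation-based idea in Python, against a toy parser of our own invention; real fuzzers add coverage feedback, input corpora, and much smarter mutation strategies:

```python
import random

def parse_record(data):
    """Toy parser under test: expects b'name:length:payload'."""
    name, length, payload = data.split(b":", 2)
    return name, payload[: int(length)]

def mutate(seed):
    """Flip a few random bytes of a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

random.seed(1)
seed = b"user:5:hello"
crashes = 0
for _ in range(10_000):
    try:
        parse_record(mutate(seed))
    except ValueError:
        pass                  # cleanly rejected input: the parser coped
    except Exception:         # anything else is a finding worth triaging
        crashes += 1
print(f"unexpected exceptions: {crashes}")
```

For this tiny parser the counter will likely stay at zero; pointed at a real parser, this loop is exactly where unexpected crashes surface.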

Intrusion Testing (Pentest)

Intrusion testing is one of the most common tests when we think about testing our applications.

Although there are 3 types of intrusion tests, which we explain in more detail in this article, the most common for application testing is the black box. Intrusion tests are essentially tests performed remotely to validate the security controls of an application.

Once again we can turn to OWASP for material to help us with these tasks; reading and understanding the WSTG (Web Security Testing Guide) is essential.

This test aims to understand how the application would behave under a real attack from a real attacker, and what pentest teams try to achieve is the discovery of vulnerabilities that were not identified during the stages of building the code.

Intrusion testing is an important validation tool; after all, in theory, the tests executed earlier should already have identified the vulnerabilities the code may suffer from.

It is not uncommon to find companies that use intrusion tests as the first tests performed on their applications, often run by automated tools. Certainly these tools have their place in the process, but we strongly believe that nothing replaces the creativity and cunning of an experienced analyst. Therefore, we reinforce the need for this test to be done by an analyst.

In an article written by Gary McGraw and others, we find some points that reinforce our understanding of the importance of testing within the SDLC, and even of intrusion testing itself.

In one section of the text, we have:

“Organizations that fail to integrate security throughout the development process often find that their software suffers from systemic faults both at the design level and in the implementation (in other words, the system has both security flaws and security bugs)”.

And in another section we can see what I think is most relevant to our topic, as he puts it:

“In practice, a penetration test can only identify a small representative sample of all possible security risks in a system. If a software development organization focuses solely on a small (and limited) list of issues, it ends up mitigating only a subset of the security risks present (and possibly not even those that present the greatest risk)”.

As we said, intrusion testing is the most common type of testing we see among customers, but it is also rarely the best planned. When well planned and positioned, it can be an extremely important tool and can even help maintain a safer application.

Conclusion

Now I want to make a slightly different conclusion: I want to suggest some ways we can help when it comes to testing.

But always remember that what we have put here is not, and cannot be understood as, a complete list of the tests that should be performed. These are important, but far from the only ones.

Within Conviso Platform we can perform automated tests using the platform's own DAST and SAST technologies. This makes things easier because they are already internally integrated, delivering the information directly to our users.

However, if a client already has a tool and doesn't want to stop using it, Conviso Platform allows integration with the main tools on the market, delivering the scan results in our platform in a centralized and organized way.

These tests can be run manually, that is, on demand by the user, or scheduled so that the development team is not interrupted, ensuring that the tests are performed within your CI/CD pipeline.

It is common to find clients who already have one or more tools carrying a large part of their analysis, and these need not be set aside if they are interested in Conviso Platform.

In the image above we can see only some of these possible integrations.

Conviso Platform was designed to integrate with several market tools, receiving their analyses and placing all of them within a single strategic view for the manager and the team.

This kind of ease helps teams that are already used to their tools and understand that abandoning those products would be an obstacle to adopting Conviso Platform.

If you have several tools in your structure and they generate a lot of information, it's easy to understand why so many teams get lost in their vulnerability management and remediation process.

Ensuring a single view is one of the goals of Conviso Platform: we want everyone on the team to have an overview of what's happening in the structure and how it can affect everyone.

The Conviso Platform Dashboard is built to deliver the most relevant information about your analyses to users. However, we are always evaluating whether the way we deliver information is the most efficient, which is why the Dashboard regularly goes through reviews to improve how we present it.

We believe that information only has value when it delivers a gain to everyone facing their problems; information has to be relevant to the context.
