When we talk about system hardening, we are referring to the analysis of the systems that will host the application, looking for services, default configurations, open logical ports, and anything else that application does not need.
Whenever we discuss web application security with our customers, we make it very clear that there is no web application security unless it is supported by a well-configured and protected system.
Performing hardening means seeking to reduce the attack surface!
The attack surface of a web application is the combination of all vulnerabilities and other attack vectors present in the application and in the infrastructure that supports it.
This includes not only outdated systems and firmware but also configurations that were implemented incorrectly and can therefore put the application at risk.
Beyond these points, the attack surface also includes default users and passwords left hard-coded in the application, as well as improperly implemented encryption.
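As an illustration of this kind of exposure, a quick search for hard-coded credentials can be sketched in shell. The patterns and the sample file below are assumptions for the example, not an exhaustive rule set; point the search at your real code base instead of `src/`:

```shell
# Create a small sample source tree (illustrative only).
mkdir -p src
cat > src/config.py <<'EOF'
db_user = "admin"
db_password = "s3cret"
EOF

# Flag assignments to names that commonly hold credentials.
grep -rnE '(password|passwd|secret|api_key)[[:space:]]*=' src/
```

On a real project, this kind of check belongs in the CI pipeline, alongside dedicated secret scanners.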
Reducing the attack surface minimizes the risk of malware and other security threats, but it also brings a number of other benefits.
Hardened systems are easier to maintain because they have fewer active components.
In addition, the hardening process improves the performance of both the application and the system itself, since unnecessary functionality that could drain valuable resources has been eliminated.
The system hardening process only brings benefits to your application, which is why it is one of the most important points DevOps teams must observe when building the environment that will host your application.
Some ways to perform hardening
Hardening systems is not only good practice; in some areas it can be a regulatory requirement, always with the aim of minimizing security risks and ensuring information security.
For example, if your system processes medical patient data, it may be subject to data protection requirements under laws such as the GDPR or Brazil's LGPD.
Another example is a system that processes credit card payments. In this case, your system will have to conform to the controls defined in PCI DSS.
As we can see, introducing a new system into our infrastructure should not be understood as simply initializing a system; several aspects can strongly impact the product.
DevOps teams should always be aware of cases where there are regulations or even contractual requirements.
Furthermore, several organizations create and publish their own standards and/or procedures that companies can adopt, thereby demonstrating to customers and partners that they are willing to invest in security.
Some examples can be seen in the documentation produced by the Center for Internet Security (CIS), the International Organization for Standardization (ISO), and the National Institute of Standards and Technology (NIST).
Some leading software vendors also provide their own product-specific hardening guides.
Have a Checklist
Whenever we perform an action that can be repetitive, such as validating or executing the hardening of some service, it is advisable to have that action organized and validated in some way.
With that in mind, we advise building a checklist of all the steps needed to execute the hardening.
Your checklist will vary depending on the infrastructure, applications, and security configuration.
An application deployed on cloud-based infrastructure will require very different actions from one on a fully physical infrastructure, but the objectives are the same.
To create your list, we suggest starting by building an inventory of all relevant assets, both software and hardware.
To complete this first inventory, look for externally exposed attack surfaces, which can be identified through specialized audits.
In addition, it is advisable to perform penetration tests, vulnerability scans, and other methods that can help identify weaknesses in your external-facing infrastructure.
While surveying your web applications, you may identify applications that should already have been disabled, or that present serious flaws that would further increase system risk.
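The inventory step above can be sketched as a small script. The report name and the chosen sections are assumptions for a typical Linux host; on a real system, extend it with package listings (`dpkg -l`, `rpm -qa`) and application-specific checks:

```shell
# Hypothetical inventory sketch: collect basic host facts into a report file.
REPORT=hardening-inventory.txt
{
  echo "== Host =="
  uname -a
  echo "== Listening TCP sockets =="
  ss -tln 2>/dev/null || netstat -tln 2>/dev/null || echo "(no socket tool available)"
  echo "== Running processes =="
  ps -e 2>/dev/null || echo "(ps not available)"
} > "$REPORT"
echo "Report written to $REPORT"
```

Keeping a dated copy of each report makes it easy to diff the environment between hardening runs.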
Evaluate the system and user accounts
Today we still find environments where the systems supporting the web application ship with default users, complete with default passwords and permissions.
This is one of the first points to address in order to improve the security of an application: remove any user that is not necessary for running the application.
This type of action should be done regardless of the physical or logical structure you are currently evaluating.
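A minimal sketch of this user audit, assuming a Unix-like system with a passwd-format account database. A sample file is used here so the example is self-contained; on a real host, point `awk` at `/etc/passwd` instead:

```shell
# Sample passwd-format file (illustrative accounts only).
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
ftp:x:14:50:FTP default user:/var/ftp:/bin/sh
EOF

# Field 7 is the login shell; flag any account not locked to nologin/false.
awk -F: '$7 !~ /(nologin|false)$/ {print $1 " -> " $7}' sample_passwd
```

Each flagged account should either be removed or have its shell locked if the application does not need it.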
What we want at this point is to improve and tighten data access control, which is still one of the biggest security problems in applications and systems.
This concept should apply to all levels of software and hardware, because its main objective is to prevent improper access to systems and data.
At this point, the first control to put in place is a default "deny all" rule: all access is denied by default, and only the access each user needs to work correctly in the system is granted.
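As an illustration, a default-deny policy at the network level might look like the following nftables rule set; the allowed port (SSH on 22) is an assumption to adapt to your environment:

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;

    # allow return traffic and the loopback interface
    ct state established,related accept
    iif "lo" accept

    # explicitly allow only what is needed (here: SSH)
    tcp dport 22 accept
  }
}
```

The same "deny by default, allow by exception" principle applies equally to file permissions and application-level roles.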
After this validation, define password policy criteria to enforce strong passwords and password rotation as necessary.
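On a Linux system, such a policy can be sketched with aging settings in `/etc/login.defs` and strength settings for `pam_pwquality`; the values below are illustrative, not a recommendation:

```
# /etc/login.defs (password aging)
PASS_MAX_DAYS   90   # force rotation every 90 days
PASS_MIN_DAYS   1    # prevent immediate re-change
PASS_WARN_AGE   7    # warn users a week before expiry

# /etc/security/pwquality.conf (password strength)
minlen = 12          # minimum password length
dcredit = -1         # require at least one digit
```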
Within your infrastructure, always seek to impose data centralization policies that facilitate protection and management, and don't forget to protect your backup files with encryption.
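A minimal sketch of backup encryption, assuming OpenSSL 1.1.1 or later is available. The file names and passphrase are placeholders; in practice the passphrase should come from a secrets store, never the command line:

```shell
# Create a stand-in for a real backup archive (illustrative only).
echo "example backup data" > backup.tar

# Encrypt with a passphrase-derived key.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass pass:CHANGE_ME -in backup.tar -out backup.tar.enc

# Verify the round trip: decrypt and compare with the original.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:CHANGE_ME -in backup.tar.enc -out restored.tar
cmp backup.tar restored.tar && echo "backup encrypted and verified"
```

Verifying that a backup can actually be restored is as important as encrypting it.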
Look at the network of servers
Securing the server is the main aspect of protecting a web application.
However, this does not mean that only the servers hosting the web applications will be protected.
Protection should extend to all servers that support the entire solution, and this includes database and file servers, cloud storage systems, and interfaces to any external system.
The first step is to remove or disable software and/or services that are not required to support the application, including services such as file sharing and FTP.
Keep the number of ways to access the systems small; prefer more secure channels, such as connections over protocols like SSH, and disable web-based administrative interfaces when possible.
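A sketch of these two steps, assuming a systemd-based Linux host running OpenSSH; the service names are examples only:

```
# Disable services the application does not need (names are examples):
#   systemctl disable --now vsftpd
#   systemctl disable --now smbd

# /etc/ssh/sshd_config (tighten remote access):
PermitRootLogin no          # no direct root logins
PasswordAuthentication no   # require key-based authentication
X11Forwarding no            # drop forwarding the application does not use
```

Remember to reload `sshd` after changing its configuration, and to test the new settings from a second session before closing the current one.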
Ensuring network security is critical in system hardening.
Ensuring that a flaw is fixed as soon as possible can be the difference between keeping your system secure and having it compromised.
Therefore, apply the latest security patches after testing them outside the production environment.
To make update tasks as efficient as possible, consider automating the update process whenever you can and generating alerts for outdated products.
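On Debian-based systems, for example, this can be sketched with the unattended-upgrades package; the file path follows the Debian packaging:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   # refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     # apply pending upgrades daily
```

Equivalent mechanisms exist for other distributions, such as dnf-automatic on Red Hat-based systems.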
How to keep your environment secure
Once everything is done, the question arises: "But how do we keep everything up to date and secure?"
The system hardening process is not static and should not be executed just once; on the contrary, it is a dynamic and continuous process.
The first run of your system hardening should produce the procedure that will serve as a model, a baseline guide for subsequent runs.
After that, any and all changes should be evaluated and put through the hardening process again, ensuring that every assessment was made and the necessary procedures were followed.
What we have to remember is that the security landscape is constantly changing: new threats appear every day, and we must always be aware of these changes.
If we understand that the threats appearing daily are only some of the threats our system faces, we will always seek to improve our capacity for protection.
To ensure the security of the system, plan to use vulnerability-scanning tools, always combined with web penetration tests performed by experienced, qualified professionals; do not bet all your chips on tools that only deliver reports without any refined analysis of the results.