Application Security

The application security challenges developers face when using artificial intelligence

As artificial intelligence (AI) becomes increasingly present in our daily lives, it is crucial to consider the potential risks and benefits of these new technologies. One example is ChatGPT, an AI chatbot that gained massive popularity in a short period, surpassing one million users.

However, the use of AI also presents challenges for cybersecurity and AppSec. Using AI to generate code and functionality, for example, can introduce unintended security flaws that malicious actors can exploit.

Developers therefore need to be aware of these risks to keep their applications secure when using this type of resource. With that in mind, in this article we take a closer look at the role of AI tools in the development process and the precautions developers should keep in mind.

Is Artificial Intelligence here to stay?

In recent years, we have seen a wave of Artificial Intelligence (AI) tools and software emerge on the market: Lensa AI, which generates stylized images from users’ photos; GitHub Copilot, which uses natural language to suggest snippets of code to programmers as they work on their projects; and, most recently, ChatGPT, which draws on large volumes of data to provide answers in natural language.

There is no doubt that these tools have demonstrated exceptional results that can improve production efficiency and quality.

However, the use of AI also presents challenges regarding ethics and privacy. An example of this was the case of Lensa AI and DALL-E 2, where digital artists discovered that their work had been used to train these models without their consent.

Some privacy concerns raised by experts include the possibility of GitHub Copilot suggesting code that contains sensitive information or intellectual property, as well as the collection of user source code data by OpenAI and Microsoft.

Although the use of AI presents challenges, there is a general consensus that the technology has the potential to revolutionize the way we develop digital products, which is perhaps the greatest value of this tool. AI can be a valuable tool for improving the efficiency and quality of many processes and products, from producing physical goods to creating sophisticated software.

It is worth remembering that AI is not a magic solution that can be applied to any problem. It has its limitations and like any tool, it must be used consciously.

What is ChatGPT?

ChatGPT is an artificial intelligence model developed by OpenAI, a San Francisco-based AI research company co-founded by Elon Musk. It works like a virtual assistant, providing a chat interface through which users can interact with a machine in natural language.

This tool has the ability to generate a natural and fluid conversation, which surprises many! It is trained to understand natural human language and generate thoughtful human-like prose upon receiving a request.

While ChatGPT brings numerous advantages, such as the ability to provide compelling and useful responses in real time, there are also concerns regarding its implications. Some critics argue that using ChatGPT can lead to job losses and reduced human interaction, potentially damaging social connections and relationships.

It is important to note that the model was trained on data only up to 2021, which means its answers reflect information available up to that point.

Benefits of using AI chatbots for developers

These tools can be powerful, helping developers with everything from learning to code to solving complex problems. And, because they are so intuitive to use, they can benefit professionals at every level of experience.

One of the main advantages is their ability to explain code in an easy, accessible way, allowing developers to better understand how solutions work and learn from them.

They can also help developers save time and effort by providing suggestions and working code samples. If you’re stuck on a specific programming problem, the tool can suggest a solution or help you identify the error in your code.

Another advantage is their ability to adapt to developers’ knowledge and skills. They can be used by novice and experienced programmers alike, adjusting to the user’s experience level and providing relevant information accordingly.


However, as with all tools, ChatGPT has limitations. In some cases, its answers can be confusing or inaccurate, causing more problems than they solve. Additionally, some developer communities have banned AI-generated responses, as is the case with Stack Overflow.

Ultimately, ChatGPT can be a great asset to developers, but as with all tools, it’s important to use it with skepticism, especially for Application Security issues.

Cautions developers need to take when using AI chatbots

The tool is capable of generating incorrect information with complete confidence and can be used to aid in the creation of malware or to generate it from scratch to suit specific scenarios. AI chatbots can also be used to generate spam and phishing emails, as well as to steal data and create botnets to carry out distributed denial of service (DDoS) attacks.

But when we deal specifically with Application Security, what considerations do we need to be aware of?

Sensitive data protection – be careful what code you share!

When using an AI to help you work on your project, it is important to ensure that sensitive data such as login information and passwords are not inadvertently shared with the AI.
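As a minimal sketch of this precaution (the patterns and function name here are illustrative, not from any specific tool), secrets can be scrubbed from a snippet before it is pasted into a chat:

```python
import re

# Illustrative patterns for common secret formats. A real project should use
# a dedicated secret scanner; this only shows the idea of redacting first.
SECRET_PATTERNS = [
    re.compile(r'(?i)(password|passwd|secret|token|api[_-]?key)\s*[:=]\s*[\'"][^\'"]+[\'"]'),
    re.compile(r'AKIA[0-9A-Z]{16}'),  # shape of an AWS access key ID
]

def redact(snippet: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        snippet = pattern.sub('[REDACTED]', snippet)
    return snippet

code = 'db_password = "hunter2"\nregion = "us-east-1"'
print(redact(code))  # the password assignment is masked, the region is kept
```

A check like this is no substitute for care, but it reduces the chance of a credential leaving your machine by accident.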

Reliable source verification – don’t trust everything it says! 

It is important to verify the source and reliability of the information provided by the AI, especially when dealing with security issues or making critical business decisions.

Check and review the generated code – it may solve your problem while quietly introducing others.

If you ask the AI to generate code or solve a problem in your code, the material it produces may contain security vulnerabilities. Be aware of this and perform a secure code review before implementing it.
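As an example of what such a review might catch (the snippet below is our own illustration, not output from any specific chatbot), a generated query that concatenates user input is vulnerable to SQL injection, while the reviewed version uses a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable pattern an assistant might suggest: user input concatenated
# straight into the SQL string, allowing injection.
def find_user_unsafe(name):
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

# Reviewed version: a parameterized query keeps the input as data, not SQL.
def find_user_safe(name):
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # the injection matches every row
print(find_user_safe(payload))    # the same payload matches nothing
```

Both functions “work” for well-behaved input, which is exactly why this class of flaw slips through when generated code is accepted without review.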

AI is not a replacement for AST (Application Security Testing) or Secure Code Review – it can check your code, but not all of it. Although it is an interesting tool for validating secure code, it does not replace dedicated security tools and analyses that are always up to date and have full access to your repository.

Learn security concepts – don’t trade up-to-date OWASP material and documentation for outdated smart chats. If you have a security question about your code, the chat can be a support, but the final word should come from serious, recognized documentation such as OWASP’s.

Another important tip is to avoid using AI chatbots in sensitive or critical scenarios, such as in projects involving financial data or personal user information.

Furthermore, it is critical to always be up-to-date on new vulnerabilities and threats associated with these AI chatbots. With technology evolving rapidly, it’s important to stay on top of the latest security trends and best practices, thereby ensuring a more proactive security posture.

Application Security Benefits of Using AI Chatbots for Developers

Artificial intelligence offers some positive use cases in application security, especially when it comes to code development and review.

Some tech users have reported that ChatGPT has been helpful in a variety of tasks, including code review, patching tips for pentest reports, automating actions that require basic coding/scripting, program debugging, and more.

Training the AI to prioritize security: By using AI to generate code, you can train the model to prioritize security. This can be done by feeding the model with data and examples that emphasize the importance of security and the identification of vulnerabilities.

Support for code review: these tools can help you identify security issues as well as syntax and semantic errors in code, and suggest corrections and improvements.

Understanding pentest reports and findings from security tools: it can help in reviewing and analyzing security test reports and findings identified by other tools, making fixes easier through specific questions to the chat.

Automation of security actions: With basic knowledge of coding/scripting, AI can assist in automating tasks that require repetition of commands.
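As an illustration of that kind of automation (the header list and function below are our own example, not taken from the article), a repetitive manual check — such as verifying that HTTP responses carry the expected security headers — can be scripted once and reused:

```python
# Security headers we expect every response to include. The exact set is a
# policy decision; this list is only an example.
REQUIRED_HEADERS = {
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
}

def missing_headers(response_headers: dict) -> set:
    """Return the required security headers absent from a response."""
    return REQUIRED_HEADERS - set(response_headers)

# Example: a response with only one of the required headers present.
headers = {"Content-Type": "text/html", "X-Content-Type-Options": "nosniff"}
print(sorted(missing_headers(headers)))
```

Fed a list of endpoints and their response headers, a loop over this function replaces dozens of manual checks with one command.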


Conscious and smart use

It is important for developers to keep in mind that application security is a shared responsibility. 

Therefore, it is critical that all team members are aware of the risks involved in using ChatGPT and adopt the necessary security measures to protect the project as a whole.

Ultimately, AI tools have great potential to transform developers’ daily lives, but one must be aware of the risks involved. By adopting good practices and maintaining a proactive posture in the face of risks, it is possible to use ChatGPT and other chatbots with more security and confidence in your projects.

Rodrigo Maues Rocha – Security Analyst
Gabriel Galdino – Developer Advocate
Tiago Zaniquelli – Security Analyst
