Secure application development and your secure SDLC program.
By Carl.net on Saturday, August 3, 2024, 16:38 - SDLC
Application security is the holy grail of security, and you need a mature program to ensure your applications ship with as few security issues as possible. This article lists the security components your application development program must include so that you can create secure applications.
Application security is the holy grail of security. Much like Tim the Enchanter from Monty Python and the Holy Grail, people become overly excited about the topic, and no one entirely agrees on how to reach the elusive goal. I have spent the last 22 years worrying about application development and how to drive security into everything my organization does. Being at a global consulting company, that means tens of thousands of applications: systems in airplanes, applications supporting hospitals and financial institutions, all the way down to kernel-level code. It is safe to say that unless you are Amish, code of some sort has a constant effect on your life, and the security of that code has a significant impact on how well it serves you. The importance of secure coding practices cannot be overstated; they are the key to the safety and reliability of the code that shapes our daily lives.
As a consultant at heart, I always like to give you my credentials to speak on a topic so you can weigh what I have to say against my experience and knowledge. To start with, I am a terrible coder. People who have read my code probably wonder why I do not take up something safe like underwater basket weaving. However, I can usually read and figure out code, and even better, from a security standpoint, I am very good at breaking things. To be frank, my inability to write good code probably makes me better at spotting bad code because I feel a certain kinship with it. Over the last fifteen years, I have also had to clean up the mess after someone else wrote insecure code, so I have been very motivated to ensure people write good, secure code. My experience driving good coding practices now spans fifteen years and tens of thousands of application development projects. I am not just a disciple of secure coding practices; I have become an evangelist of the religion of secure coding.
No two coding languages are exactly the same, and generally, the older a language is, the easier it is for a coder to write insecure code in it. Keep in mind as well that the lower the level at which a language operates, the more room the coder has to do things that lead to insecure code. Every language has things it does well and, on the flip side, things it does poorly. On the plus side, however, most of the things you can do to improve the security of the code your teams write are language agnostic. Regardless of the language you use, you have the power to implement secure coding practices and ensure the safety of your applications.
Here are the twenty-three things organizations should consider as part of their secure application development program to improve the security of their code and applications:
1. Policy - Secure coding starts with policy. People are selective about what they do and will skip the things they know they can get away with skipping. It is part of life. If you have twenty things to do and time to complete only ten of them, you will do the ones you must, and the others will get ignored. A policy helps with this issue; in the policy, you set out the organization's requirements for code. With a good policy, there is no guessing and no excuse when the policy has not been followed. A good policy also lets external parties understand your stance on secure coding and what you plan to do to turn theory into reality. Do not skimp on the policy, and make sure to update and communicate it regularly.
2. Procedures - Procedures follow policy. You need well-thought-out, well-designed procedures that give the steps required to comply with your policy. Procedures should make following the policy easy for your coders. One client I was trying to help had a process where development teams had to meet with the security team to learn which procedures they needed to follow; the security team would then verbally tell the development team what to do. This is 100% the wrong way to play this game. Any coder should be able to access and understand the procedures required by the policy and should only need to speak with someone in security as an exception. And every time there is an exception, the policies and procedures should be reassessed to see whether they need improvement. Make consuming the policy and procedures easy. The security team must make writing secure code easy, not hard.
3. Training / Education - Well-trained developers write better and more secure code. The time you spend upfront training your developers to write secure code pays off in the end when there are fewer things to chase and fix. "Do it once, and do it right" is what the best application developer I know told me; their code was compact, beautiful to look at, and had few bugs. Over the years, the quality and security of the code produced at my organization have improved considerably: incidents due to poorly written code have dropped significantly. I attribute a good portion of that to better training for our developers before they write a single line of code. Our rule is simple: if you have not been trained in secure coding, you do not write code. The reduction in incidents shows how well this rule works.
4. Secure coding checklists - People like easy, and checklists are nuggets of easy. For whatever languages your developers use, you should have a secure coding checklist built from your policy, your procedures, and the expertise of your best coders. Better still, once a month, review the checklists with everyone on the team and ask for examples of where they were used and whether any updates might make sense. Team review and discussion of the checklist will drive home its importance.
5. Requirements - Business analysts love requirements; if you let them run wild, they will create a truly impressive number of them. One thing you should have is a base set of security requirements that gets bolted onto your project-specific requirements. This ensures that security is accounted for not only in your application but also when the test scripts and instrumentation are built. Your security requirements should reinforce your secure coding checklists.
6. Frameworks - Frameworks like the OWASP Top 10 are gold when it comes to improving the security of your application. Failing to account for everything the OWASP Top 10 covers is a monumental mistake. As an FYI, there are more OWASP lists than just the standard top ten; incorporate them into your coding program as required and, at minimum, use the training OWASP provides on the topic. If you have another base or additive framework, use that too, but never forget the basics. While we are at it, you should also incorporate the SANS/CWE Top 25. There is considerable overlap with OWASP, so combining them is not particularly difficult.
7. Threat modeling - Threat modeling, when done right, is a game your developers get to play while thinking like a hacker. To put it bluntly, you are paying them to play. Developers like this, and you should, too. When the process is complete, you will have a list of threats to your application that can be used to create reasonable countermeasures. No matter the model you use, your list will not be perfect, but it should be complete enough to give you a fighting chance against the bad people who will attack your application. Choose a model and work the model. And if you want to know which model is best, experience has taught me that it is the model you will actually use.
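To make the "game" concrete, here is a minimal sketch of how a team might capture the output of a STRIDE-style modeling session as structured data. The application, components, and threats are purely illustrative assumptions on my part, not a prescribed format.

```python
from dataclasses import dataclass, field

# One STRIDE category per letter: Spoofing, Tampering, Repudiation,
# Information disclosure, Denial of service, Elevation of privilege.
STRIDE = ["spoofing", "tampering", "repudiation",
          "information_disclosure", "denial_of_service",
          "elevation_of_privilege"]

@dataclass
class Threat:
    component: str                # e.g. "login form" (illustrative)
    category: str                 # one of the STRIDE categories
    description: str
    countermeasure: str = "TBD"   # filled in after the session

@dataclass
class ThreatModel:
    application: str
    threats: list[Threat] = field(default_factory=list)

    def add(self, component: str, category: str, description: str) -> None:
        assert category in STRIDE, f"unknown category: {category}"
        self.threats.append(Threat(component, category, description))

# Example session output (hypothetical application and components).
model = ThreatModel("payments-portal")
model.add("login form", "spoofing", "Credential stuffing with leaked passwords")
model.add("report export", "information_disclosure", "Export leaks other tenants' data")

for t in model.threats:
    print(f"[{t.category}] {t.component}: {t.description} -> {t.countermeasure}")
```

The value is not in the tooling; a spreadsheet works just as well. The value is that every threat ends up written down with an owner for the countermeasure.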
8. Privacy and Security by Design - When your teams sit down to envision your application, security and privacy should be at the core of their thinking. For example, they should not design an interface that allows users to input arbitrary data into the system. Instead, they should design an interface that accepts only the expected data, collects no more than is needed, and assumes the user will try to abuse it. If the data is sensitive, the methods for collecting, transporting, and storing it should be secured. Secure by design considers security at every step, not as an afterthought.
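As a small illustration of "accept only the expected data," here is a hedged sketch of allow-list input validation. The field names and rules are hypothetical; in practice they come from your own requirements.

```python
import re

# Allow-list rules for a hypothetical signup form: anything that does not
# match is rejected outright rather than "cleaned up" after the fact.
ACCOUNT_ID_RE = re.compile(r"^[A-Z]{2}\d{8}$")        # e.g. "AB12345678"
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z '\-]{0,63}$")

class ValidationError(ValueError):
    pass

def validate_signup(form: dict) -> dict:
    """Return only the expected, validated fields; reject everything else."""
    unexpected = set(form) - {"account_id", "display_name"}
    if unexpected:
        raise ValidationError(f"unexpected fields: {sorted(unexpected)}")
    account_id = str(form.get("account_id", ""))
    display_name = str(form.get("display_name", ""))
    if not ACCOUNT_ID_RE.match(account_id):
        raise ValidationError("account_id is not in the expected format")
    if not NAME_RE.match(display_name):
        raise ValidationError("display_name contains unexpected characters")
    return {"account_id": account_id, "display_name": display_name}

# Abusive input is rejected instead of being passed downstream.
try:
    validate_signup({"account_id": "AB12345678",
                     "display_name": "Robert'); DROP TABLE users;--"})
except ValidationError as exc:
    print("rejected:", exc)
```

The design choice worth noting is the allow-list: you describe what good input looks like and reject everything else, rather than trying to enumerate every bad thing an attacker might send.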
9. Automated code analysis - Automated code analysis tools (SAST/DAST) have become standard in many large and medium organizations. There are generally two types. Static code analysis tools look at the code while it is not being executed and, depending on the tool, can be used while the developer is writing the code or on specific triggers. Dynamic code analysis looks at the application during execution and, in many cases, is the more challenging technology to adopt. Both types should be part of your secure coding toolbox. I will not call out any specific product, as I believe you should choose one based on your needs, not on something someone else likes. Also, since static code analysis is easier to work into your processes, consider adopting it first and then figure out how to adopt dynamic code analysis tools. Further, ensure your static code analysis tooling includes looking for secrets in your code. Passwords, API tokens, and cryptographic keys should not be sitting around in your code waiting to be stolen.
There is an important issue with some code analysis tools: many operate as a service rather than as something you run on-site. This SaaS model means your code leaves your premises, reducing your control over it. In many cases this will not be a concern, but for some organizations, that loss of control over their code is not an acceptable risk.
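To show what "looking for secrets" means in practice, here is a minimal sketch of a pattern-based secret scan. Real SAST and secret-scanning products are far more sophisticated; the patterns, file scope, and exit-code convention below are illustrative assumptions only.

```python
import re
import sys
from pathlib import Path

# A few illustrative signatures for likely hardcoded credentials. Real tools
# ship hundreds of rules plus entropy checks to cut down false positives.
SECRET_PATTERNS = {
    "AWS access key id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]{4,}['\"]", re.IGNORECASE),
}

def scan_tree(root: Path) -> int:
    findings = 0
    for path in root.rglob("*.py"):      # scope is an assumption; scan what you ship
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings += 1
                    print(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    sys.exit(1 if scan_tree(root) else 0)   # fail the build if anything is found
```

Wiring even a crude check like this into the build pipeline means a leaked key stops the merge instead of ending up in production history.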
10. Code review - Code review is a best practice for writing quality code and is also helpful for writing secure code. The value of a code review from a security standpoint is directly related to the experience of the person doing the review. I have seen a highly experienced developer spot security issues in code that no automated tool would catch. I have also seen organizations use their most junior developers to review code and catch nothing of security concern. That said, doing code review right is expensive, so many organizations rely on automated tools. If your code has to be secure, you will need to spend the time and money to have your best developers do the review.
11. Automated testing - Everyone loves automated testing, and for good reason. Computers are very good at doing repetitive tasks. Thus, once you figure out your testing scripts based on what you know, you can configure testing to happen continuously. The problem with relying on automated testing is that it can only test for what you or the test software knows about. The hackers will test things you never thought about. And with AI-enhanced fuzzing, the tools and the hackers are improving daily. Upgrade your testing to keep up, but do not rely on it entirely. Think of automated testing as your friend but not one you would leave your kids with.
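As one concrete example of letting the computer do the repetition, here is a sketch of a property-based test using the third-party Hypothesis library (my choice of tool for the sketch, not one the article prescribes). Instead of a handful of hand-picked inputs, the framework generates many inputs and hunts for one that breaks the stated property, a lightweight cousin of fuzzing.

```python
# pip install hypothesis pytest   (third-party tools assumed for this sketch)
import re
from hypothesis import given, strategies as st

def sanitize_filename(name: str) -> str:
    """Illustrative function under test: keep only safe filename characters."""
    cleaned = re.sub(r"[^A-Za-z0-9._-]", "_", name)[:255]
    if cleaned.strip(".") == "":   # reject empty or dot-only names like ".."
        return "unnamed"
    return cleaned

@given(st.text())
def test_sanitized_name_is_always_safe(name):
    # The property: whatever the input, the output never contains path
    # separators, never escapes the directory, and is never empty.
    result = sanitize_filename(name)
    assert "/" not in result and "\\" not in result
    assert result not in ("", ".", "..")
    assert 1 <= len(result) <= 255

# Run with: pytest -q
```

The point of the property style is that you state what must always be true rather than listing inputs you happened to think of, which is exactly the gap the paragraph above warns about.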
12. External testing - If your application exposes resources to a network, you must also perform network-based testing. You can use automated tools regularly, but you also need to hire a team to perform penetration testing at a regular interval. Swap out your testing team or vendor occasionally to see what another set of security testers can find. Failing to do external testing, and failing to rotate your testers, will cause you to miss vulnerabilities the hackers will use against you.
13. Admin access - Developers love to code with administrator or other elevated access; things mostly work, so initial development is easier. You must always provide for separation of duties when writing and promoting code. This rule has zero exceptions; anyone who breaks it will eventually pay the price. Applications that are not specifically administration applications should never run with elevated privileges. When someone finds a bug in your application, they will use the access it has to get to more things; if your application has limited access, their access will also be limited. The discipline of writing the code that will become your application under limited access forces developers to use more secure methods to do what is required. Also, knowing they will not be the ones promoting the code to production ensures they document what the code needs in order to run so it can be assessed.
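Here is a minimal, Unix-only sketch of what "never run with elevated privileges" can look like at process start: the service refuses to keep root and drops to a dedicated service account. The account name is hypothetical, and in a real deployment you would more likely enforce this through the service manager or container runtime rather than in application code.

```python
import os
import pwd
import sys

SERVICE_USER = "svc-app"   # hypothetical unprivileged service account

def drop_privileges() -> None:
    """If started as root, switch to the service account; never stay root."""
    if os.geteuid() != 0:
        return                      # already unprivileged, nothing to do
    try:
        user = pwd.getpwnam(SERVICE_USER)
    except KeyError:
        sys.exit(f"service account {SERVICE_USER!r} does not exist; refusing to run as root")
    os.setgroups([])                # clear supplementary groups
    os.setgid(user.pw_gid)          # group first, then user
    os.setuid(user.pw_uid)
    if os.geteuid() == 0:
        sys.exit("privilege drop failed; refusing to run as root")

if __name__ == "__main__":
    drop_privileges()
    print(f"running as uid={os.getuid()} gid={os.getgid()}")
```

The detail that matters is the failure mode: when the drop cannot happen, the process exits rather than limping along with more access than it needs.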
14. Secure architecture - A poorly designed architecture will ruin many of the positive things you have done. For example, I have seen a large server-based application written as a monolithic application. Forget debugging the application; try updating a part of it. Always use good design and architecture. Good architecture starts with separating the components of your application into logical parts that can then be arranged in the location that provides the best security. The tried and true architecture style of having a front end, a back end, and a segmented data layer has stood the test of time. From a security standpoint, there is never a reason for an end user to be able to access your back end and data layers. Organizations developing in the cloud often forget this and expose portions of their applications to the internet that should never see the light of day. Start with the truism of "if hackers cannot get to it, they cannot attack it" and work forward from there. The net is that you should consider security in your architecture (think back to secure by design) and ensure that you are doing things that make sense from a security standpoint. After you have done everything else, every application should have a security review where the architecture of the application is reviewed for issues by your security team. Do not waste all the good work you have put into writing secure code by running it in an insecure architecture.
15. Cloud architectures - Developing applications in the cloud is very similar to writing for your own hosted infrastructure. The most significant difference is your level of control over the infrastructure components you consume versus hosting them yourself. The other significant difference is that you must consider which services your cloud provider offers and how those services are exposed to you and administered. Obviously, if you base your application on one provider's cloud offering versus another's, the specifics of your chosen provider will drive some of your architecture choices. Finally, multitenancy is a concern, but most cloud providers have worked to mitigate the issue by using encryption everywhere. This is not a complete solution, but it helps significantly. Also, if you are developing SaaS-type environments, changes your provider makes can have huge effects on your security, so do not assume that just because something works one way today, it will work the same way tomorrow.
16. Software bill of materials - Large applications are rarely built solely by your own developers. It can happen, but unless you are coding in Assembler using a text editor, there is a good chance you will incorporate external libraries into your code. Each component you use to assemble your software needs to be identified and tracked. Before you even begin to code, you must understand the licenses attached to the components you want to include. If you do not, you may violate a component's license or expose yourself to a situation where the original license gives the license holder rights to your new code. Modern IDEs make incorporating external components very easy, so you must educate your developers about the issue and provide strong guidance on how and when external components may be used. Work specifically with your more junior developers, as many will have come from environments where licenses were never discussed. Each application's bill of materials must also be reassessed at least yearly to catch any license changes or security updates to the external components that need to be incorporated into your applications.
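As a starting point, here is a minimal sketch of inventorying the third-party Python packages installed in an environment, using only the standard library. A real SBOM program would emit a standard format such as CycloneDX or SPDX and track far more metadata; this only shows how little effort the very first step takes.

```python
import csv
import sys
from importlib.metadata import distributions

def write_inventory(path: str) -> None:
    """Dump installed distributions with version and declared license to CSV."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["name", "version", "license"])
        for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
            meta = dist.metadata
            writer.writerow([meta["Name"], dist.version,
                             meta.get("License", "") or "UNKNOWN"])

if __name__ == "__main__":
    write_inventory(sys.argv[1] if len(sys.argv) > 1 else "inventory.csv")
    print("inventory written")
```

Every "UNKNOWN" in that license column is a conversation you need to have before the component ships, which is exactly the point of building the bill of materials early.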
17. Third-party software assessment - As part of building your bill of materials or application inventory, you must assess each external component for security. All code has bugs, and the third-party code you want to use was likely not written to your standards. Even if it was, there is a good chance the methods hackers currently use differ from those in play when the code was written, so the software needs to be reassessed. Assess all third-party code for bugs, back doors, malware, and general security, and prohibit third-party software from being used until this process has been completed.
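One inexpensive piece of that assessment is checking components against a public vulnerability database. The sketch below queries the OSV.dev API for a single PyPI package and version using only the standard library; the package shown is just an example, and the request shape reflects the OSV query API as I understand it, so verify it against the current documentation before relying on it.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the list of known vulnerabilities OSV reports for one component."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body.get("vulns", [])

if __name__ == "__main__":
    # Hypothetical example: an old release of a popular library.
    for vuln in known_vulnerabilities("requests", "2.19.0"):
        print(vuln.get("id"), "-", vuln.get("summary", "(no summary)"))
```

A known-vulnerability lookup is the floor, not the ceiling: it tells you about problems someone else already found, not about back doors or malware, which still require the deeper review described above.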
18. Application inventory - If you walk into most organizations and ask them to show you an inventory of the applications they have written and use, you will in many cases get strong deer-in-the-headlights looks. Let's ignore shadow IT and concentrate on official IT, where the applications were built, are owned by at least one department, and are officially sanctioned. Most organizations cannot give you a list of all their applications along with who owns and maintains each one, what language it is written in, where it is in its lifecycle, what it does, where it runs, and what data it houses or has access to. If they have an application scanning program, they should also track the last time the code was run through the scanning process and updated to match current attacker knowledge. All of your organization's older applications are ticking time bombs waiting to be hacked and used against you. Further, privacy laws change, so how each application stores and uses data must be reviewed regularly to make sure you remain in compliance.
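The exact fields will vary by organization, but as a hedged sketch, an inventory record along these lines answers most of the questions above; every field name here is an assumption, not a standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ApplicationRecord:
    """One row in an application inventory (illustrative fields only)."""
    name: str
    owner: str                     # accountable department or person
    maintainer: str
    language: str
    lifecycle: str                 # e.g. "development", "production", "sunset"
    description: str
    hosting: str                   # e.g. "on-prem", "aws", "saas"
    data_classification: str       # e.g. "public", "internal", "regulated"
    last_security_scan: Optional[date] = None
    last_privacy_review: Optional[date] = None

    def is_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag applications whose scan or privacy review is stale or missing."""
        for checked in (self.last_security_scan, self.last_privacy_review):
            if checked is None or (today - checked).days > max_age_days:
                return True
        return False

# Example usage with a made-up application.
record = ApplicationRecord("payments-portal", "Finance IT", "Team Blue", "Java",
                           "production", "Customer invoicing", "aws", "regulated",
                           last_security_scan=date(2023, 5, 1))
print(record.is_overdue(date(2024, 8, 3)))   # True: scan is stale, review missing
```

Once records like this exist, "which applications have not been scanned in a year" becomes a query instead of an archaeology project.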
19. Final testing - Think CrowdStrike. I am not picking on CrowdStrike for their poor testing practices; I am using them as a very good bad example. If their description of how they test is accurate, they were actually doing a "reasonable" job, but they forgot the last step. Testing does not stop when your scripts have run clean. That just means the things you thought to test all worked, or your tests failed in some way you did not detect. The last step is a real-world test: put the new code into the real world and make sure your initial set of test users reports all clear before you release your shiny new code to the world. If it passes the real-world test, you have a good chance of things going well. Someone will still find a bug in your code they can use against you, but at least you will not blow things up right out of the gate.
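A staged rollout gate is one way to make the "real-world test" mechanical rather than a judgment call. The sketch below compares a canary group's error rate to the baseline before allowing a full release; the thresholds and metric names are assumptions, not a recommendation from any particular vendor.

```python
from dataclasses import dataclass

@dataclass
class RolloutStats:
    requests: int
    errors: int

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0

def canary_is_healthy(baseline: RolloutStats, canary: RolloutStats,
                      min_requests: int = 1_000,
                      max_relative_increase: float = 1.5) -> bool:
    """Promote only if the canary saw real traffic and did not get meaningfully worse."""
    if canary.requests < min_requests:
        return False                      # not enough real-world exposure yet
    allowed = max(baseline.error_rate * max_relative_increase, 0.001)
    return canary.error_rate <= allowed

# Example: baseline 0.2% errors, canary 2% errors -> do not promote.
baseline = RolloutStats(requests=50_000, errors=100)
canary = RolloutStats(requests=2_000, errors=40)
print("promote" if canary_is_healthy(baseline, canary) else "hold and investigate")
```

The minimum-traffic check matters as much as the error comparison: a canary that has not seen real users yet tells you nothing, no matter how clean it looks.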
20. Code security - Developers are used to checking their code into repositories and will often want to show off their masterpieces to other developers. They can, by accident or intent, export the code you have paid for to external repositories. That code often has your business processes as its genesis, making reverse engineering your business secrets relatively easy. So you must educate developers, especially new developers, that the code they write for you belongs to the organization and must be kept secure and never exported to external locations.
21. Code submission tracking - With modern code repositories and coding practices, tracking who submitted what is easy and should be automatic. Ensure that it is, and that people use the correct methods to submit their code. If you find troublesome code, the idea is not to punish the person who wrote it but to educate them so they do not keep making the same mistake.
22. People security - Your people are the most significant threat to secure code. A person who is angry at your organization, or just having a bad day, can write any number of things into your code that will hurt you later. Great coders can create issues that will not show up for months or years. Know your people, treat them well, and know who you are hiring. When someone leaves, have a senior person review all of their code from at least the last six months.
23. WAFs and API security gateways - Sometimes, things will just go wrong. One of your developers might have built something into the code that allows for easy testing but would be a nightmare if let out into the wild, or they will make a mistake that was not caught earlier. There are lots of reasons, but things will go wrong. Web Application Firewalls and API security gateways are not part of your SDLC; they are what you rely on after everything else has gone wrong. They will block some of the issues left in your code and buy you the time to find the problems and apply the needed fixes. These tools fall into the "always have a backup plan for your backup plan" category.
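To give a feel for the kind of last-line blocking these tools do, here is a deliberately tiny WSGI middleware sketch that rejects requests matching a few crude attack patterns. A real WAF or API gateway does vastly more (request normalization, managed rule sets, rate limiting, virtual patching); the patterns here are illustrative assumptions, not a substitute for one.

```python
import re
from wsgiref.simple_server import make_server

# Crude illustrative signatures; commercial WAFs maintain large, tuned rule
# sets and decode/normalize requests before matching, which this does not.
BLOCK_PATTERNS = [
    re.compile(r"\.\./"),                              # path traversal attempts
    re.compile(r"<script\b", re.IGNORECASE),           # trivial reflected-XSS probe
    re.compile(r"\bunion[\s+]+select\b", re.IGNORECASE),  # trivial SQLi probe
]

class NaiveWAF:
    """Wrap a WSGI app and reject obviously hostile paths and query strings."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        candidate = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
        if any(p.search(candidate) for p in BLOCK_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"request blocked"]
        return self.app(environ, start_response)

def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

if __name__ == "__main__":
    # e.g. curl "http://localhost:8000/?q=union+select+1" returns 403.
    make_server("localhost", 8000, NaiveWAF(hello_app)).serve_forever()
```

Treat this the way the paragraph above treats the real products: a safety net that buys time to fix the code, never a reason to skip fixing it.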
After discussing the components of a secure application development program, the question arises of which of these things you need and in what order. As for which of these you need, the answer is all of them. Each component is a building block that interlocks with its neighbors, giving you a strong structure. In an already operating organization, you will likely have many of these components in place and will need to improve them and add the rest. I advise working on the easy ones first unless a risk is driving one of the harder ones. But set a time limit to complete everything, as people are already attacking your applications.
© 2024 Carl Almond