Security enables business or mission operations.
Full stop.
So the desired confidentiality, integrity, or availability of data is purely a business- or mission-driven decision. This makes it difficult to write “generic” security requirements that are applicable to all situations.
For example, while you would want to restrict who can see social security numbers using an application, at least one person needs to view them to do anything with them. Defining who that person is cannot really come from the security team.
With that said, over my career I have encountered a variety of situations in which development teams designed a product or service in a way that was, to me, clearly insecure. And sure enough, when I pointed out something I didn’t like, they would usually reply that product management (representing the business) hadn’t explicitly written a requirement for whatever security “feature” I was asking for.
Thus, over time, I compiled a list of standing requirements related to security (and other things) to prevent just this type of situation. While the list below is not comprehensive, it should give you a good base from which to begin when starting your own list.
The goal should not be to say “gotcha” to your engineers, but rather to ensure a common understanding of best practices when it comes to security.
Conduct a threat model as part of any feature design and before implementation work.
At a minimum, any new feature should undergo a Mozilla Rapid Risk Assessment (RRA). This is a very quick and lightweight process that can drive risk-based decision-making and should be table stakes for any new feature design.
Much more detailed threat models might be appropriate for features touching or allowing access to sensitive data or business critical operations.
Unless specifically noted in the feature requirements, all Application Programming Interfaces (APIs) must require authentication.
There are certainly cases where having public APIs with no authentication requirements makes sense, but you should default the other way when lacking a clear business reason.
I have seen APIs deployed that would allow for access to sensitive information where no one thought to add authentication; it’s conceivable (but not especially reassuring) that someone might forget to add it if it’s not part of a checklist.
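As a sketch of what “authenticated by default” can look like in code, here is a minimal Python decorator. All the names here (`api_endpoint`, the `request` dictionary shape, `AuthError`) are illustrative assumptions rather than any real framework’s API; the point is the default-deny posture.

```python
from functools import wraps

class AuthError(Exception):
    """Raised when a request lacks valid credentials."""

def api_endpoint(func=None, *, public=False):
    """Make authentication the default for every endpoint. An endpoint is
    reachable without credentials only when explicitly marked public=True,
    which forces an intentional, reviewable decision."""
    def decorate(f):
        @wraps(f)
        def wrapper(request, *args, **kwargs):
            # Default-deny: reject unless the request carries an identity
            # or the endpoint was deliberately declared public.
            if not public and not request.get("authenticated_user"):
                raise AuthError("authentication required")
            return f(request, *args, **kwargs)
        return wrapper
    return decorate if func is None else decorate(func)

@api_endpoint                   # secure by default
def get_profile(request):
    return f"profile of {request['authenticated_user']}"

@api_endpoint(public=True)      # public access is an explicit, visible choice
def health(request):
    return "ok"
```

With this structure, forgetting to think about authentication fails closed rather than open.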
No functionality may in any way override or allow for users to exceed the allowable bounds of existing and/or documented permissions management.
This one might seem like a no-brainer, but I have frequently seen situations arise where undocumented functionality allows users to do more in an application than was specified by the functional requirements. This often happens with legacy features that are deprecated but not fully removed from the code.
An example is a debug flag accidentally left behind. A user who is not properly authenticated should not be able to do anything he normally couldn’t just by appending the URL parameter ?debug=true. This flaw is so common that there is a dedicated Common Weakness Enumeration (CWE) for it.
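To make the point concrete, here is a hedged sketch of gating debug output on server-side configuration rather than anything the client sends. The `APP_DEBUG` variable and handler are invented for illustration.

```python
import os

# Debug behavior comes from server-side configuration only,
# never from anything the client controls. APP_DEBUG is an illustrative name.
DEBUG = os.environ.get("APP_DEBUG") == "1"

def handle_request(params: dict, debug: bool = DEBUG) -> dict:
    """Hypothetical handler: a client-supplied ?debug=true is simply ignored."""
    response = {"status": "ok"}
    if debug:  # checks the server-side flag, not params.get("debug")
        response["trace"] = "internal diagnostics would go here"
    return response
```

Because the flag never reads from `params`, passing `?debug=true` changes nothing for an ordinary user.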
Administrative users must be able to restrict file transfers facilitated by the application by size, frequency, and file type.
Many file upload features I have seen don’t have any throttling on the size of the file that can be sent, which means that anyone who can upload something can essentially conduct a resource exhaustion attack against the server by uploading huge files continuously.
Similarly, you also need to restrict the frequency of upload so that a malicious actor cannot write a script to upload files below the size limit in a rapid manner.
Finally, you will want to limit the file type to prevent the upload of executables or other potentially dangerous items. And no, you cannot just do a regex on the filename to check for this, as Chinese Advanced Persistent Threat (APT) actors are known to camouflage filenames.
No functionality may capture, collect, display, or transmit more information regarding users, their behavior, or the internal functioning of the software than is absolutely necessary to meet the explicitly identified business requirements.
The best way to prevent sensitive data from getting hacked is to never collect it in the first place. While there may be use cases for collecting internet protocol (IP) addresses, browser fingerprints, activity history, or other telemetry, think carefully about why you are doing so.
Similarly, don’t collect unnecessary identifiers from users. While I understand personalization can be important for B2C products, do you really need to collect a name and email address for a business app? Just collect the latter and make that the username. Names are “personal data” according to the General Data Protection Regulation (GDPR) and you can be fined for mishandling them. So if data serves no business purpose, don’t collect it.
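One simple enforcement mechanism is an allowlist applied at the point of collection. This sketch (field names assumed for illustration) drops everything without an identified business purpose before it is ever stored.

```python
# Allowlist of fields with an explicitly identified business purpose;
# everything else is discarded at the point of collection.
REQUIRED_FIELDS = {"email"}

def minimize(signup_form: dict) -> dict:
    """Keep only the fields the business requirement actually needs."""
    return {k: v for k, v in signup_form.items() if k in REQUIRED_FIELDS}
```

Data that is never collected cannot be breached, subpoenaed, or mishandled under the GDPR.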
All connections between clients/servers must terminate immediately upon expiration or revocation of the relevant certificate(s).
It should seem obvious that a connection between a client and server should only remain active when both of them have valid certificates. But for long-running connections these may not be checked frequently and can expire or even be revoked before the next handshake.
Thus, you should make sure to have a periodic check of every certificate to make sure it has not been revoked (higher priority, because this suggests the certificate was malicious) or expired (lower priority) in the interim.
While there are potential performance and privacy problems with revocation checks, techniques like Online Certificate Status Protocol (OCSP) stapling can help get around them.
The application must not accept or integrate with third-party software without explicitly identifying what security measures the application has taken to validate said software and which measures are the user’s responsibility.
Plug-ins, extensions, and other third-party code enhance and improve the functionality of existing products, so it can make business sense to support them.
With that said, they can be an excellent vector for the insertion of malicious code into your or a customer’s network.
Thus, you should be very clear about what (if any) measures you take to validate third-party software that works with your application. The Apple App Store sits at one end of the spectrum, with intense security and privacy reviews; Microsoft Excel is almost entirely at the other: aside from some light (and optional) security checks, you can basically upload anything you want.
Neither approach is necessarily correct, but you should be explicit (preferably contractually) with users about where your responsibility ends and theirs begins.
Unless granted an exception by product management, all calls to third-party libraries shall only refer to the latest version (rather than a specific version) in order to ensure that dependencies remain evergreen and minimize the number of known vulnerabilities present at any given time.
Dependency “pinning” is a common practice where developers use a specific, designated version of an open source or other library when building their code.
This has the benefit of preventing breaking changes from automatically being introduced into the application, but has a major downside as well. Pinning means that the library is not automatically updated whenever a vulnerability is fixed in the open source project and the software is rebuilt.
As a second-level problem, staying on the same version of a library for a long time makes it harder and harder to eventually upgrade to the latest version due to the accumulation of technical debt and the widening gap between first-party and third-party code. And due to Murphy’s Law, you usually get forced to upgrade during an extreme emergency like the Log4Shell vulnerability disclosure.
Thus, although there is some functional benefit to pinning, it should not be the default. Require developers to keep their code evergreen and ensure you have good automated testing in place to detect and help remediate breaking changes from third-party libraries.
Finally, product management should be making these calls. If developers complain that releases will be delayed, etc. then it should be a business leader who weighs the competing priorities.
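A lightweight way to surface pins for product management review is to scan dependency manifests for exact version constraints. This illustrative sketch checks pip-style requirements lines; the function is mine, not a standard tool.

```python
import re

def pinned_dependencies(requirements_text: str) -> list[str]:
    """Flag pip-style requirements pinned to an exact version (==) so each
    pin can be reviewed for an explicit exception from product management."""
    pinned = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()        # drop comments and whitespace
        if re.match(r"^[A-Za-z0-9_.-]+==", line):
            pinned.append(line)
    return pinned
```

Wiring a check like this into CI turns every new pin into a visible, deliberate decision rather than a silent default.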
All demo videos must obscure URLs that are not intended to be publicly accessible from the internet.
This is probably something most people don’t think about, but when recording a demo or other video, developers often use a test environment that is not necessarily accessible to the public.
While such an environment often requires authentication and sits behind a firewall, exposing its full URL in a publicly posted video could provide an attacker clues about the architecture of your internal network, including your continuous integration/continuous delivery (CI/CD) pipeline.
Thus, I recommend always obscuring or blurring out URLs when recording. Either crop out the navigation bar at the top and small sliver at the bottom that shows your URL or use an application to blur it.
When support employees from our company are “impersonating” a customer user to troubleshoot their use of a Software-as-a-Service (SaaS) application, audit all actions taken by the employee in the customer environment under the true identity of the employee (i.e., not as the customer user being impersonated).
I have seen implementations which do the opposite, making attribution very difficult and undermining the purpose of doing this auditing in the first place.
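A minimal sketch of such an audit record follows; the field names are illustrative, not a standard schema.

```python
import json
import time

def audit_event(true_identity: str, impersonated_user: str, action: str) -> str:
    """Record an impersonated action attributed to the employee's real
    identity, while still capturing whose environment was touched."""
    record = {
        "timestamp": time.time(),
        "actor": true_identity,             # always the support employee
        "on_behalf_of": impersonated_user,  # the customer user being impersonated
        "action": action,
    }
    return json.dumps(record)
```

Keeping the employee in the `actor` field means the log answers “who actually did this?” without any cross-referencing.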
All data provided to artificial intelligence (AI) tools or endpoints shall comply with the AI policy, and all unnecessary sensitive data shall be removed prior to transmission.
With the explosion in popularity of generative AI tools like ChatGPT and the accompanying OpenAI API, it is important to keep track of which data is flowing where.
Static analysis tools are developing the ability to identify any communication with AI endpoints in code, which can help prevent unintended transmission of confidential data to an external party.
Even if done in accordance with your organization’s policy, make sure you have an automated method for stripping out any unnecessary sensitive information not vital to the task at hand, such as by using GPT-Guard.
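A simple pre-transmission scrubber might look like the following. The two patterns are deliberately minimal illustrations; real redaction needs a far richer ruleset (or a dedicated tool, as noted above).

```python
import re

# Illustrative patterns only; a production redactor would cover many
# more identifier types and edge cases.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Strip obvious sensitive tokens before sending text to an AI endpoint."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Running every outbound prompt through a function like this gives you a single choke point to extend as your AI policy evolves.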
There are many, many other requirements one could add, and I’ll continue to supplement this list as I capture more ideas.
Additionally, the Open Web Application Security Project (OWASP) has put together some documentation on security requirements as well, and I view it as complementary to what I have written above.