Security questionnaires: worth the trouble?
tl;dr - the security questionnaire juice isn't worth the squeeze.
Let’s look at a traditional but relatively ineffective measure that software security practitioners know well: the security questionnaire.
In many organizations just beginning to establish a >3rd party risk management program, the first step is often to demand that suppliers complete such a questionnaire. These documents typically take the form of multi-tabbed spreadsheets comprising hundreds or even thousands of questions.
To quote one observer, asking a company to complete one is “the equivalent of asking someone if they’re [sic] an axe murderer. As it turns out, most people say no to this question—including most axe murderers.” Anecdotal evidence also suggests that those requesting completed security questionnaires have difficulty interpreting them – or do not even read them at all. Thus, answering security questionnaires and evaluating the responses consumes tremendous time for both the requesting and receiving organizations while delivering little value.
Such inquiries are generally broadly worded requests for descriptions of the supplier's development, security, and human resources practices, as well as the capabilities of its product(s). Oftentimes the documents allow only binary "yes" or "no" answers to multi-part questions.
Certain inquiries are so outlandish that they are probably traps laid by customer information security teams to determine if the supplier is actually reading the document and responding truthfully. An example would be a requirement that a vendor confirm it has encountered zero vulnerabilities in its product within the past 12 months. Given the rate at which publicly identified vulnerabilities are reported - up to dozens per day - it would stretch belief for an organization to claim that it had detected none during an entire year. Any vendor who does so is likely not reading the question, lying, or doing something even worse (at least from a security perspective): not scanning for vulnerabilities at all.
Assuming you can get accurate answers to your questions, there are yet more problems to address. Answers to questionnaires usually reflect only a single point in time and generally cannot reveal future configuration drift. Questionnaires sometimes ask if the target organization has its own vendor or 3rd party risk management program, partly addressing 4th party risk, but only superficially. This question, too, can often be answered with a binary “yes” or “no,” providing little detail.
Even with more information, evaluating another organization’s risk management program is generally impracticable for most business consumers of technology products and services. Organizations have hugely varying standards for >3rd party risk management, and you might find yourself relying on a critical piece of software that itself has a catastrophic Achilles' heel. Many FireEye customers found themselves in exactly this position after the cybersecurity company revealed it was breached as part of the SolarWinds supply chain attack.
Additionally, modern software deployments often package or run on top of open-source applications, making such externally developed code a potential source of cyber risk. There is a near-zero chance, however (humorous examples to the contrary notwithstanding), that any open-source software organization or its contributors will complete an exhaustive security questionnaire, given the lack of financial incentive or other leverage. This naturally pushes risk management teams to create two different processes for evaluating commercial and open-source software (or to not even bother with the latter), treating the nature of the business relationship as if it were necessarily indicative of the cybersecurity risk posed. Similarly, due to the availability bias, such teams often heavily scrutinize responses from vendors while worrying less about unanswered (and usually unasked) questions about open-source software.
Based on the above and the comments of others, I view the main purpose of security questionnaires as holding a vendor liable after a breach if the information contained therein proves false. There is likely some value in holding such a Sword of Damocles over organizations upon which your data relies in order to ensure they take information security seriously, but the state of the art leaves much to be desired.
If you are going to employ security questionnaires - and their use unfortunately seems to be completely ingrained in both the public and private sectors at this point - then at least follow some best practices.
These include using a generally accepted standard such as the Standardized Information Gathering (SIG) questionnaire, employing an automated system for managing the completion and retention of questionnaires, and periodically re-validating previous vendor responses to your security-related questions.
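To make the re-validation piece concrete, here is a minimal sketch in Python of how a team might track when each vendor's questionnaire responses are due for another look. The vendor names, dates, and one-year interval are purely illustrative assumptions, not prescriptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical review interval; many programs re-validate annually.
REVALIDATION_INTERVAL = timedelta(days=365)

@dataclass
class QuestionnaireRecord:
    vendor: str
    completed_on: date      # when the vendor last completed the questionnaire
    standard: str = "SIG"   # e.g., the SIG questionnaire mentioned above

    def needs_revalidation(self, today: date) -> bool:
        """True if the responses are older than the review interval."""
        return today - self.completed_on > REVALIDATION_INTERVAL

# Illustrative records only; vendor names and dates are made up.
records = [
    QuestionnaireRecord("ExampleVendorA", date(2021, 3, 1)),
    QuestionnaireRecord("ExampleVendorB", date(2022, 6, 15)),
]

overdue = [r.vendor for r in records if r.needs_revalidation(date.today())]
print("Vendors due for re-validation:", overdue)
```

In practice a commercial GRC or vendor-management platform would handle this, but even a simple tracker like the above beats letting completed questionnaires rot in a shared drive.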
Furthermore, a quantitative system for evaluating the responses you receive will allow you to build a comprehensive risk picture by comparing scores between suppliers. Although it has some challenges of its own, the open-source Software Assurance Maturity Model (SAMM) could serve as a good framework for numerically grading vendor development practices, in addition to your own organization's.
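As one illustration of what such a quantitative system could look like, here is a rough sketch in Python. The practice areas, weights, and 0-3 maturity scale are placeholders loosely inspired by maturity models like SAMM; they are not drawn from SAMM itself.

```python
# Invented practice areas and weights, for illustration only.
PRACTICE_WEIGHTS = {
    "secure_build": 0.3,
    "vulnerability_management": 0.4,
    "security_testing": 0.3,
}

def vendor_score(maturity_levels: dict) -> float:
    """Weighted average of per-practice maturity levels (0-3), normalized to 0-100."""
    raw = sum(PRACTICE_WEIGHTS[p] * level for p, level in maturity_levels.items())
    return round(raw / 3 * 100, 1)  # 3 is the assumed maximum maturity level

# Hypothetical maturity levels assigned by a reviewer after reading responses.
vendors = {
    "ExampleVendorA": {"secure_build": 2, "vulnerability_management": 3, "security_testing": 1},
    "ExampleVendorB": {"secure_build": 1, "vulnerability_management": 1, "security_testing": 2},
}

for name, levels in sorted(vendors.items(), key=lambda kv: -vendor_score(kv[1])):
    print(f"{name}: {vendor_score(levels)}")
```

Sorting suppliers by the resulting score gives a simple, if crude, basis for comparison and for deciding where to spend further diligence effort.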
If you are starting from scratch in building your >3rd party risk management strategy, I would recommend skipping a security questionnaire program altogether. There are better means of measuring and managing risk in your software supply chain. Dropbox, for example, integrates all vendor security requirements directly into sales contracts.
It would behoove everyone in the industry to focus on these more effective techniques, which I will cover in future posts. In my next article, I will examine the next most useful tool, though one with its own shortcomings: external audits.