5 Comments

It is nice to see an RQ type of approach, and we need so much more of it. Thank you! I am similarly motivated, and in significant ways. I presented at SIRAcon in 2023 on quantification of adversarial machine learning and, very coincidentally, next week I will be donating the script to a popular open source project (I will post more here soon). The script works on tabular data, which is uncommon: most public training data is MNIST, and most published attacks target images! It also uses PyFAIR (https://pyfair.readthedocs.io/en/latest/#). I'm OpenFAIR certified and my co-founder is a specialist in decision sciences, so we are brethren. AI red teams and blue teams are also leveraging open source tools, but much more attention needs to be paid to Cyber Risk Quantification (CRQ) and FAIR. I would love to learn more about the tools mentioned in this post.


Currently documenting our AI security programme and the vulnerabilities we're looking for. Your article is the best starting point I'm aware of.


Thanks Walter, very interesting. I am assuming that AIRSS will be owned by the security team (who will be tracking and reporting AIRSS as part of the overall IT risk). If that's the security team, don't you think they would want to assess the security risk of the overall AI application (and not just the model)? If that's true, then how will AIRSS fit into the larger picture?


Good question, Ashish. My view is that the security team should be accountable for making risk assessments. And I agree they should look at the holistic risk of a given AI application.

They can do so by combining the AIRSS with existing methodologies like the Deploy Securely Risk Assessment Model (DSRAM, for evaluating known vulnerability risk) and the Factor Analysis of Information Risk (FAIR) methodology for evaluating human factor risk.

Since all of these approaches express risk in terms of dollars, they can simply be added together to determine the total risk of the AI application.
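As a minimal sketch of that aggregation, assuming each methodology has already produced an annualized loss expectancy in dollars (the labels and figures below are hypothetical, not values from the article):

```python
# Hypothetical, pre-computed annualized loss expectancies (ALE) in dollars.
# In practice each figure would come from its own assessment:
#   AIRSS -> AI model risk
#   DSRAM -> known vulnerability risk
#   FAIR  -> human factor risk
risk_estimates_usd = {
    "AIRSS (model risk)": 120_000,
    "DSRAM (known vulnerability risk)": 45_000,
    "FAIR (human factor risk)": 80_000,
}

# Because every methodology reports in the same unit (dollars),
# the total risk of the AI application is just the sum.
total_risk_usd = sum(risk_estimates_usd.values())

for source, ale in risk_estimates_usd.items():
    print(f"{source}: ${ale:,.0f}")
print(f"Total AI application risk: ${total_risk_usd:,.0f}")
```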


I see AI security carving out its own niche, similar to Application Security. This is going to be an exciting industry to work in as it matures.
