New York State Department of Financial Services guidance on AI security and governance
What a key regulator is focused on.
On October 16, 2024, the New York (NY) State Department of Financial Services (DFS) released a letter on AI and its cybersecurity implications.
In addition to focusing on AI-powered risks like deepfakes and malware creation, the NY DFS letter also addressed AI governance.
While the letter didn’t go especially deep in its recommendations, the key ones for regulated companies are that:
1. "Covered Entities should address AI-related risks in the following areas:"
The organization’s own use of AI
Controls include a(n):
AI technologies used by third parties
As vendors integrate AI, tracking the inherent risks becomes more important. Ways to do it include:
Standardized contract terms
Examining vendor processing and security practices
Monitoring a continuously-updated repository of AI risks
Vulnerabilities stemming from AI applications
Examples include:
Prompt injection
Data poisoning
2. "DFS strongly recommends Covered Entities consider"
Threats facing third parties from the use of AI
Different types of companies face different challenges related to AI. NY DFS-regulated companies will need to understand this throughout their supply chain.
How they could impact the Covered Entity
This includes not just cyber risks from malicious actors, but also compliance issues introduced by certain technologies.
How third parties protect themselves
When another organization accesses your data, it becomes part of your security perimeter. In addition to AI-specific controls, consider folding them into your vulnerability disclosure or bug bounty program.
3. "If deploying AI directly, or working with a [third party] that deploys AI, relevant personnel should be trained on how to"
Secure and defend AI systems
Having a policy is a start, but making sure your team implements it is what matters. That makes proper training and education a vital part of complying with NY DFS guidance.
Design and develop AI systems securely
Properly architect systems to avoid tainted trust boundaries and runaway chatbots.
Draft queries to avoid disclosing nonpublic info
Things like opting out of training and limiting data retention will help.
The NY DFS alludes to, but doesn’t directly tackle, sensitive data generation
The letter notes that:
Products that use AI typically require the collection and processing of substantial amounts of data, often including NPI [nonpublic information]. Maintaining NPI in large quantities poses additional risks for Covered Entities that develop or deploy AI because they need to protect substantially more data, and threat actors have a greater incentive to target these entities in an attempt to extract NPI for financial gain or other malicious purposes.
True, but this isn’t really specific to AI.
Completely deterministic products also collect a lot of non-public information (NPI). What they don’t do, however, is make inferences from that data. Something the NY DFS should consider is how the confidentiality of NPI will erode over time as AI systems become better and better at inference. This will lead to sensitive data generation and make it easier for those on the outside to intuit what is going on inside companies.
Trying to navigate NY DFS and other AI-related regulatory guidance?
A wave of laws, regulations, and statements by government agencies on AI is overwhelming security and compliance teams. Staying on top of every new requirement requires the right:
Tools
Expertise
Resources
StackAware gives AI-powered companies all of this and more. Our 30-day AI risk assessment will map your entire cyber, compliance, and privacy risk surface so you can:
Maintain customer trust
Avoid fines and penalties
Innovate responsibly with AI
Ready to learn more?