What should AI regulation look like?
A response to the Department of Commerce's request for comment.
More regulation is coming for AI.
The main questions, however, are “when?” and “how?”
Things in government move slowly. Despite acknowledging that it needs to move fast on this specific topic, it doesn’t seem that much will change in terms of federal speed of action.
With that said, there have already been some initial moves toward AI regulation at the federal, state, and local level. And agencies like the Federal Trade Commission (FTC) have asserted that AI is regulated at present.1
Most recently, the Department of Commerce recently signaled that it will likely be starting a new federal rule-making process. They recently released a lengthy document asking for public input on a series of questions related to AI regulation.
The first part of the document is mostly a combination of boilerplate, political posturing, and an overview of terms and AI regulation history. You should probably skip this.
The second part, the “instructions for commenters,” however, is quite thoughtful.
With over 30 detailed questions, it shows quite a depth of thinking on the topic of AI regulation. Since Deploy Securely is a cybersecurity-focused publication, I applied my efforts to answering the most relevant ones, and have included my responses below.
With that said, much of the document takes as a given - or at least implies - many things with which I don’t agree. For those out of the scope of Deploy Securely’s remit, though, I won’t address them. But know that I don’t necessarily accept the premise of many of the questions included.
Instructions for commenters
7. Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers?
Any measures causing AI development to move outside the United States would frustrate the development of trustworthy AI.
This is because such efforts would rapidly move to jurisdictions that have fewer restrictions. Since the People’s Republic of China is the next closest competitor to the United States when it comes to AI development, it is highly likely that at least some of those at the forefront of development would relocate there.
The following are just a few examples of why Chinese-developed and -regulated AI would be less trustworthy than that developed in the United States:
Huawei has reportedly tested AI software that could recognize Uighur minorities by recognizing their facial features and alert police.
Chinese government organizations are actively seeking AI tools to allow them to combine surveillance camera footage, familial relationships, and even online shopping history to track dissidents more effectively.
The Chinese military is aggressively pursuing autonomous combat systems that could assist it in the armed conquest of the democratic nation of Taiwan.
Potential measures that could lead to this undesirable outcome include:
A blanket “freeze” on AI development, such as that recently proposed by the Future of Life Institute.
While developments are happening quickly in the field of AI, government regulators should learn to move along with them.
Even it were possible to implement, freezing AI development would essentially crown existing leaders in the field as the winners.
Organizations attempting to leap ahead of these players during the “freeze” would likely relocate abroad until such a freeze is lifted.
“Pre-approval” requirements for launching AI products - similar to the FDA approval process for drugs and medical devices.
Especially since the federal government lacks sufficient expertise to conduct a thorough technical review of AI products - and will for the foreseeable future - attempting to create such a regime could essentially halt legal AI research in the United States.
Unlike physical products like drugs, there is no realistic or constitutional way to limit access to digital products like AI. Even if the United States government forbade the release of certain products, they are sure to be released in more permissive jurisdictions, from which U.S.-based people and organizations would inevitably access them.
Taxing AI algorithms in a unique manner.
While some technology leaders have proposed taxing automated processes in order to slow their development, this would almost certainly lead to an even more rapid loss of American jobs as companies move overseas or go out of business due to their relative lack of competitive advantage.
11. What lessons can be learned from accountability processes and policies in cybersecurity, privacy, finance, or other areas?
Important lessons regarding AI accountability processes and policies from both the cybersecurity and finance fields include the:
Importance of auditor incentives.
A key flaw with existing cybersecurity and financial audit practices is the fact that the audit target is often a customer of the auditor. Whether the customer makes it explicit or not, eventual passage of the audit is a key condition for continued use of the auditor’s services.
While the relevant professional organizations effectively prevent any auditing organization from “guaranteeing” a passing grade, there is nonetheless a strong incentive for auditors to create the easiest process possible for audit targets while maintaining a fig leaf of compliance.
Thus, any AI accountability framework should leverage those with the proper incentives to scrutinize the audit target. Customers of the AI service in question generally have the best incentives, as they will want to have the most safe, secure, and effective product available.
Externalities outside of this relationship, however, can arise. The government should consider creating a private right of action for impacted parties outside of the commercial relationship. This would allow them to litigate against the maker of the AI if they are damaged by it.
Vital nature of establishing correlation between audit passage and desired outcomes.
As recent financial crises - such as the collapse of Silicon Valley Bank (SVB) - and security incidents - such as the breach of the password manager company LastPass by still-unidentified attackers - show, passing an audit provides no guarantee against the relevant undesirable outcome.
In the case of SVB, a major auditing firm gave it clean audit report almost immediately before it failed during an unprecedented bank run.
Similarly, LastPass maintained a SOC 2 and 3 attestation prior to its breach by unidentified cyber actors.
Even if auditors are conducting their assessments in a completely impartial manner, it is quite possible that enforcement of the audit standards themselves are not the most effective preventative mechanism.
Thus, any AI accountability framework needs to ensure it causes the target of any audit to take the most effective series of steps possible to avoiding undesirable outcomes, rather than robotically adhering to the framework itself.
Focusing on simulated real-world situations through scenario-based exercises is likely to be far more effective than control-based analyses.
From the world of cybersecurity, things like penetration tests are generally more effective at identifying vulnerabilities than are the presence of administrative controls such as mandating secure development techniques.
AI audits should thus focus on the output of the system in question in realistic circumstances rather than the fact that it “checks-the-box” for a given number of controls.
Need to avoid perverse incentives in audit and compliance frameworks.
Especially when it comes to government cybersecurity standards, such frameworks often encourage organizations following them to take actions that are antithetical to effective cybersecurity.
The FedRAMP standard, for example, penalizes organizations for discovering new vulnerabilities in their networks. This is despite the fact that discovery of vulnerabilities is a necessary precursor to fixing them and thus reducing risk.
The National Institutes of Science and Technology (NIST) Special Publication (SP) 800-53 encourages the use of the outdated Common Vulnerability Scoring System (CVSS), which is far less effective than other methods of vulnerability assessment.
In the search for a broadly applicable standard, those developing AI accountability frameworks must ensure they do not incentivize the very outcomes which they ostensibly seek to prevent.
Toward this end, any such framework should not penalize the audit target for the mere discovery of a flaw in its system. Such behaviors will undoubtedly cause organizations to look less carefully for such problems.
30. What role should government policy have, if any, in the AI accountability ecosystem?
Government should generally take a “light touch” approach to AI accountability, primarily focusing on existential risk. On to of the aforementioned private right of action, a mandatory reporting framework for certain key events - with liability protection for organizations making them - is the maximum that the U.S. government should consider imposing at this time.
Such a framework should require organizations developing AI to report any actual or attempted instance of an AI doing the following without explicit instruction from a human:
Escalate its privileges.
Seek access to sensitive data (personally identifiable, protected health, or classified information) not required for the task given to it by a human.
Intentionally deceive another human.
Interact with the physical world.
Otherwise do anything illegal.
Need to make AI risk decisions now?
That’s where StackAware comes in. We help organizations innovating at the AI frontier manage:
Cybersecurity
Compliance
Privacy
risk related to their deployments.
In typical FTC fashion, the agency makes confusing and contradictory claims - along with menacing threats - in their blog post attempting to explain the topic. For example, the agency says that AI is “ambiguous term with many possible definitions” but also that if “you think you can get away with baseless claims that your product is AI-enabled, think again…FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims.” If AI is an ambiguous term, it would seem difficult to me to hold anyone accountable to their claims.
Might help if we started by defining exactly what we mean by artificial intelligence... Not an easy task. I suspect that there may be categories of AI that are subject to differing regulations. Also, the entire request focuses on "trustworthiness," which is also ambiguous, i.e., to whom, for what purpose(s), etc. I suspect trustworthiness is a metric, similar to "level of confidence." We also need a metric similar to "crossover error rate" from biometric authentication.