A weird new trend at AI-powered companies creates risk for both buyers and sellers:
detailed restrictions on customer use of outputs.
On top of standard language requiring compliance with applicable law (sure, fine), these terms demand that you not:
Create anything related to gambling, sex, or drug use (even where legal)
Generate content for political or lobbying campaigns
Input "personal, financial, or health" data (not just PHI)
Violate "technical documentation" or guidelines
Alter/remove watermarks or metadata
Claim output is human-generated
The most ridiculous one I’ve seen (incorporated into AI-specific and legally binding product terms)?
“Be a good human.”
These restrictions create AI-related, second-party contractual risk.
I understand AI companies want to prevent their tools from being used to build competitive products or in ways that reflect poorly on them.
But these vague, difficult-to-enforce (for both sides) guidelines just muddy the waters. I sense some companies are trying to pre-empt the EU AI Act’s restrictions on certain practices, but this is a very broad approach.
What I would do as a customer
I am not an attorney and this is not legal advice.
Push back on these terms. Try to narrow them as much as possible so they focus only on protecting the vendor’s vital interests.
If a company won’t budge, consider looking elsewhere.
In any case, update your compliance program to incorporate the usage restrictions you agree to (one way to track them is sketched below).
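To make that last point concrete, here is a minimal sketch of a machine-readable register of accepted vendor usage restrictions, so periodic compliance reviews don’t fall through the cracks. The vendor name, clause references, owner addresses, and the 90-day review window are all hypothetical assumptions, not anything from a real contract:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UsageRestriction:
    """One contractual usage restriction accepted from an AI vendor."""
    vendor: str          # hypothetical vendor name
    clause: str          # where the restriction lives in the contract
    restriction: str     # plain-language summary of what you agreed not to do
    owner: str           # who verifies compliance internally
    last_reviewed: date  # when the control was last checked

# Hypothetical entries illustrating the kinds of terms discussed above
register = [
    UsageRestriction(
        vendor="ExampleAI",
        clause="Product Terms 4.2",
        restriction="No content for political or lobbying campaigns",
        owner="marketing-compliance@yourco.example",
        last_reviewed=date(2024, 1, 15),
    ),
    UsageRestriction(
        vendor="ExampleAI",
        clause="Product Terms 4.3",
        restriction="Do not alter or remove output watermarks or metadata",
        owner="engineering@yourco.example",
        last_reviewed=date(2024, 1, 15),
    ),
]

def stale_reviews(entries, as_of, max_age_days=90):
    """Flag restrictions whose compliance check is overdue."""
    return [r for r in entries
            if (as_of - r.last_reviewed).days > max_age_days]

for r in stale_reviews(register, as_of=date.today()):
    print(f"Review overdue: {r.vendor} / {r.restriction} (owner: {r.owner})")
```

However you implement it, the point is the same: every restriction you accept should have a named owner and a review cadence, not just a signature on the contract.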
What I would do as an AI company
Focus usage restrictions on things that directly impact you (e.g. building competitive products). And as the EU AI Act’s enforcement regime becomes clearer, carefully consider any additional restrictions the law actually requires.
Beyond that, simply state the customer is responsible for the output and its use, and make them agree. OpenAI has a decent example in its Terms of Use (except the third paragraph, which is extremely vague and broad, so I struck it):
When you use our Services you understand and agree:
Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
If you throw a bunch of usage restrictions into your contractual language and people flagrantly violate them, you will lose credibility unless you take action. And suing your customers for minor infractions is not good business practice.
Reminder: I am not an attorney and this is not legal advice.
Need to manage risk when buying AI products?
StackAware has a massive and growing database of every issue we’ve identified during our AI risk assessment process. And we do a customized review for every new customer. So if you are a security or compliance leader in:
Financial services
Healthcare
B2B SaaS