These five AI contract questions are the ones I wish every buyer had asked before signing. When an AI project goes wrong, the vendor has usually already covered itself. The contract you signed contained language that seemed reasonable at the time but turns out to be deeply unfavorable when things break down. I have seen this enough times that I want to put the specific questions in writing, so buyers can ask them before signing rather than discover them in a dispute.
These are not abstract legal concerns. They are practical questions that determine who bears the cost when an AI system underperforms, breaks in production, leaks data, or fails to deliver what was promised. Ask all five before you sign anything.
The AI Contract Questions Most Buyers Never Think to Ask
According to a Stanford Law School analysis of AI vendor agreements, 88% of AI vendor contracts cap the vendor’s liability at the monthly subscription fee, and only 17% include any regulatory compliance warranties.
In practice, this means that if an AI system your vendor built causes a compliance failure, produces a discriminatory outcome, or leaks sensitive data, the vendor’s financial exposure is roughly one month of fees. Your organization’s exposure is unlimited.
This is not unique to small vendors. It is standard industry practice. The contracts are written this way because vendors can get away with it. Most buyers sign without reading the liability section carefully, or without understanding what the language actually means in a dispute.
The AI contract questions below will not turn a bad contract into a good one. But they will surface the terms that matter most and give you leverage to negotiate before you are locked in.
Question 1: What Happens When the System Does Not Perform as Promised?
Every AI vendor will tell you their system works. The question is what they are willing to put in writing.
Ask specifically: what are the defined performance benchmarks for this system, and what happens contractually if those benchmarks are not met? You are looking for service level agreements with real teeth, not marketing language about expected outcomes.
If the vendor cannot name a specific performance metric they will commit to, that tells you something important. It means the contract will hold you to paying regardless of whether the system delivers value, while giving you no contractual recourse if it does not.
Push for defined accuracy thresholds, uptime commitments, response-time SLAs, and a clear remediation process if performance falls below those levels. At minimum, secure a right to exit the contract without penalty if defined performance thresholds are not met within a reasonable cure period.
Question 2: Who Owns the Work Product, the Model, and the Data?
This question has three parts, and each one matters.
Who owns the system that gets built? If a vendor builds a custom AI system using your requirements, your data, and your operational context, you should own the output. Many AI contracts default to joint ownership or vendor ownership of the “model and underlying architecture.” Joint ownership sounds fair until you realize it means the vendor can use the system they built for you as the foundation for the next client’s competing system.
Who owns the fine-tuned model? If your data was used to train or fine-tune a model, the resulting model represents your organization’s institutional knowledge baked into a system. The contract should specify that you own that fine-tuned version, not just a license to use it.
What happens to your data? Find every place in the contract that references your data: how it is used during the engagement, what happens after the contract ends, whether it is used for model improvement, and whether it is aggregated with other clients’ data. This matters regardless of whether you are in a regulated industry.
Question 3: Who Is Responsible When the System Produces a Wrong or Harmful Output?
AI systems produce wrong outputs. That is not a flaw unique to bad systems. It is a characteristic of all current AI systems, including very good ones. The question is not whether your system will produce errors. It is who bears the cost when those errors have consequences.
In most AI vendor contracts, the answer is: you do. The vendor disclaims liability for the outputs the system produces, including outputs that are factually wrong, discriminatory, or that cause regulatory non-compliance. The reasoning vendors use is that the system is a tool, and the organization deploying it is responsible for how it is used.
This is worth understanding before you deploy, not after. Ask directly: if this system produces an output that results in a legal claim, a regulatory finding, or a customer harm, what is your liability exposure under this contract? Read the indemnification section. Understand whether you are required to indemnify the vendor against claims arising from the system’s behavior in your environment.
In regulated industries including finance, healthcare, government, and transportation, this question is not optional. The regulatory exposure from an AI output is real and can be significant.
Question 4: What Does Ongoing Support and Maintenance Look Like After Go-Live?
Most AI vendor contracts are structured around a build engagement with a defined end date. What happens after go-live is often underspecified or left to a separate agreement that does not yet exist.
AI systems require ongoing maintenance. Models drift as the world changes. Data pipelines need monitoring. Edge cases that were not in the training data will appear in production. New regulatory requirements will emerge. If the vendor’s engagement ends at deployment and there is no defined maintenance arrangement, you are on your own with a system that will gradually degrade.
Ask specifically: what is included in post-launch support, what is the response time for production issues, who monitors the system after deployment, and what is the process and cost for retraining or updating the model as performance drifts?
A vendor who cannot answer these questions in specific terms either has not thought through the post-launch requirements or is not planning to be accountable for them.
Question 5: What Are the Exit Terms If This Does Not Work Out?
Ask this one early, not after something has gone wrong.
If the project underperforms, the relationship deteriorates, or your organization’s needs change, what does it cost to exit the contract? What data do you get back, in what format, and on what timeline? Are there IP or non-compete provisions that restrict your ability to build something similar with a different vendor?
The exit terms in an AI contract are often the most consequential terms in the agreement, and they are almost always the least negotiated because nobody wants to start a vendor relationship by planning its end. But a vendor who is confident in their work should have no problem offering clean exit terms. A vendor who resists reasonable exit provisions is telling you something important about how they expect the engagement to go.
At minimum, you want: clear data portability rights, a defined format for data return, a reasonable termination-for-convenience clause, and clarity on what happens to any IP if the engagement ends early.
One More Thing: Read the Liability Cap
Before you sign, find the liability cap in the contract. It is usually buried in the limitation of liability section. In most AI vendor agreements, it reads something like: “total liability shall not exceed the fees paid in the prior 30 or 60 days.”
Read that number. Then think about the scale of business risk this AI system could create if it fails. If, say, the cap is one month of a $10,000 subscription and the system touches decisions carrying millions of dollars in regulatory or customer exposure, the mismatch is obvious. If those two numbers are not in reasonable proportion to each other, negotiate before you sign. It is significantly harder to negotiate after.
If you want a structured way to evaluate vendors beyond the contract terms, our guide on what to look for in an AI development partner covers the qualitative and operational factors that the contract does not capture.
Want to Know What a Fair AI Contract Looks Like?
I do a free 30-minute call where we review your situation, flag the contract terms that matter most for your use case, and give you an honest read on what you should push back on before signing.
About the Author
Jason Wells is the founder of AI Dev Lab and a fractional Chief AI Officer who helps organizations implement AI that actually works in production. He has developed more than 20 AI products, led technology initiatives across six continents, and spent two decades building technology for transit and regulated-industry clients. He holds a degree from Wharton and a degree in applied mathematics, and is a four-time Ironman finisher.


