Before we take on any new AI project at AI Dev Lab, we run every prospective client through the same set of questions. Not to qualify them out. To protect them from spending money on a build their organization is not yet positioned to succeed with.
This AI readiness assessment is that set of questions. All ten of them. Answer honestly and you will know exactly where your organization stands before you commit a dollar to a development engagement.
According to the F5 2025 State of Application Strategy Report, 96% of organizations are implementing AI, but only 2% rank as highly ready to tackle the evolving demands of their AI deployments. That gap between activity and readiness is exactly where projects go wrong.
What Is an AI Readiness Assessment?
An AI readiness assessment is a structured evaluation of whether your organization has the foundations in place to successfully build, deploy, and sustain an AI system. It covers data, infrastructure, people, process, compliance, and organizational alignment.
It is not a test you pass or fail. It is a diagnostic that tells you where your highest-risk gaps are before you start building, so you can address them deliberately rather than discover them expensively mid-project.
We use this assessment in the Assess phase of the FlexAI Framework before any architecture gets designed or any development begins. The organizations that do this work upfront move faster, spend less, and end up with systems that actually get used.
The 10 AI Readiness Assessment Questions
Work through each question and score yourself honestly. At the bottom of this post you will find a link to download the full AI Readiness Scorecard, which gives you a weighted score across all ten dimensions and a tier rating for your organization.
Question 1: Do You Have a Specific, Measurable Problem AI Is Meant to Solve?
Not “we want to use AI” or “we want to improve efficiency.” A specific problem. One you can describe in a sentence, with a measurable outcome you will use to evaluate whether the system worked.
Examples of specific: “Reduce time to process an intake form from 48 hours to under 4 hours.” “Handle the top 20 most common rider questions without a live agent.” “Flag at-risk accounts 30 days before they churn.”
Examples of not specific: “Use AI to improve the customer experience.” “Automate our operations.” “Get more value from our data.”
If you do not have a specific, measurable problem definition, you are not ready to start building. You are ready to start the Assess phase.
Score yourself: 0 = No clear problem defined. 1 = Problem identified but not measurable. 2 = Specific problem with defined success metric.
Question 2: Is Your Data Clean, Accessible, and Governed?
This is the question most organizations get wrong, and it is the one that causes the most expensive surprises.
AI systems are only as good as the data they are trained on and operate against. If your data is scattered across multiple systems, partially duplicated, inconsistently labeled, locked in PDFs or spreadsheets, or governed by nobody in particular — your project will hit a data preparation phase that nobody budgeted for.
Ask yourself: if I needed to pull all the data this AI system would use into a single, clean, structured dataset today, how long would that take? If the answer is months, or if you genuinely do not know, that is the most important readiness gap you have.
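One quick way to pressure-test your answer is a spot check on a single source table. Here is a minimal sketch in Python using pandas; the file name and the dataset are hypothetical placeholders, and a real audit would cover every source the system touches:

```python
import pandas as pd

# Quick, hypothetical data-quality spot check on one source table.
# "intake_forms.csv" is a placeholder, not a real dataset.
df = pd.read_csv("intake_forms.csv")

print("Rows:", len(df))
print("Exact duplicate rows:", df.duplicated().sum())

# Null rate per column, worst first. High rates here are the
# data preparation phase that nobody budgeted for.
print(df.isna().mean().sort_values(ascending=False).head(10))
```

If running a check like this on even one table takes days of access requests and exports, that delay is itself the answer to the question.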
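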
Score yourself: 0 = Data scattered, ungoverned, unclear quality. 1 = Data mostly accessible but needs significant cleaning. 2 = Data is clean, structured, and accessible with clear ownership.
Question 3: Do You Know Which Systems the AI Needs to Connect To?
Every integration is a project inside your project. Each one takes time, surfaces edge cases, and introduces a new failure mode.
You should be able to list, right now, every system the AI agent will need to read from or write to. CRM, ERP, ticketing system, database, API, internal knowledge base, external data feed. If you cannot list them, you do not yet have a complete picture of the build scope, which means any estimate you have received is incomplete.
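To make "fully mapped" concrete, here is a minimal sketch of an integration inventory. The system names, fields, and status values are hypothetical, not a prescribed schema; the point is that every row has a known API and a known access status:

```python
# Hypothetical integration inventory; systems and statuses are placeholders.
INTEGRATIONS = [
    {"system": "CRM",            "direction": "read/write", "api": "REST",        "access": "credentials issued"},
    {"system": "Ticketing",      "direction": "read/write", "api": "REST",        "access": "sandbox only"},
    {"system": "Knowledge base", "direction": "read",       "api": "bulk export", "access": "manual export today"},
    {"system": "Data warehouse", "direction": "read",       "api": "SQL",         "access": "unknown"},
]

# A score of 2 roughly means no entry has an unknown API or access status.
fully_mapped = all(row["api"] != "unknown" and row["access"] != "unknown"
                   for row in INTEGRATIONS)
print("Fully mapped:", fully_mapped)
```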
Score yourself: 0 = Integration requirements unknown. 1 = Some systems identified but not fully mapped. 2 = All required integrations identified with API/access status known.
Question 4: Have You Identified the Compliance Requirements That Apply?
In regulated industries including healthcare, finance, government, and public transportation, compliance requirements shape the architecture. They are not a post-build review. They are a pre-build constraint.
HIPAA, FERPA, FTA Title VI, ADA, GDPR, state-specific AI regulations, internal data governance policies — any of these that apply to your use case need to be mapped before you design a system, not after.
If you are unsure which regulations apply to your specific AI use case, that uncertainty itself is a readiness gap. It needs to be resolved in the assessment phase, not discovered during development.
Score yourself: 0 = Compliance requirements not yet identified. 1 = General awareness but not mapped to this specific use case. 2 = Compliance requirements fully mapped and architecture constraints understood.
Question 5: Do You Have Internal Ownership for This System?
Who owns this AI system after it is built? Who is responsible for its performance, its outputs, and its maintenance? Who has the authority to make decisions about it?
If the answer is unclear, or if ownership is assumed to be the vendor’s responsibility after deployment, that is a gap. Vendors build and hand off. Someone inside your organization needs to own what they hand off.
This is also the question that surfaces whether you have the internal capability to operate what you are about to build. A system with no internal owner will degrade without anyone noticing.
Score yourself: 0 = No designated owner identified. 1 = Tentative owner identified but not formally accountable. 2 = Clear owner with defined accountability and operational capacity.
Question 6: Have the People Who Will Use This System Been Involved in Defining It?
The people who will use the AI system every day know things about the workflow that no stakeholder interview, documentation review, or requirements document will capture. If they have not been involved in defining what gets built, something important will be missing from the build.
This is also a change management question. People who were involved in designing the system are more likely to use it. People who had a system deployed on them are more likely to resist it.
If the answer is that end users have not yet been consulted, that is not a disqualifying gap — it just means it needs to happen before design begins.
Score yourself: 0 = End users not yet involved. 1 = Some consultation but not structured. 2 = End users formally involved in requirements definition.
Question 7: Do You Have a Budget That Reflects the Full Scope of the Project?
Not just the build budget. The full scope: data preparation, integration work, change management, training, ongoing maintenance, and the internal time your team will spend on the engagement.
We covered the real cost breakdown of production AI agents in an earlier post on AI agent cost in 2026. The summary is that the most common budget surprises are data preparation costs, integration complexity, and the annual maintenance expense that nobody planned for.
If your budget was set before a scoping assessment was completed, it is likely missing at least one significant cost category.
Score yourself: 0 = Budget set without detailed scoping. 1 = Budget accounts for build but not full lifecycle. 2 = Budget reflects full scope including data, integration, change management, and maintenance.
Question 8: Does Your Leadership Team Understand What AI Can and Cannot Do?
This question is about expectation alignment, and it matters more than most technical factors.
Leadership teams that expect AI to be infallible, instant, or self-managing will become disillusioned when the system requires tuning, produces an occasional wrong answer, or needs quarterly reviews to stay accurate. Leadership teams that understand AI as a powerful but managed capability will support it through the normal challenges of a production deployment.
Misaligned executive expectations are one of the most common causes of AI project abandonment after launch. The system works. Leadership expected something different. The project gets defunded.
Score yourself: 0 = Leadership has unrealistic or uninformed expectations. 1 = General understanding but not calibrated to this specific use case. 2 = Leadership understands realistic performance, limitations, and maintenance requirements.
Question 9: Have You Defined What Success Looks Like at 30, 90, and 180 Days Post-Launch?
Not just the launch metric. The trajectory.
A system that performs well at launch but has no defined review cadence will drift and degrade. A system with defined 30-day, 90-day, and 180-day success criteria gives everyone on the team a shared definition of what it means for the project to be working.
This question also surfaces whether your organization is prepared for the Lead phase of an AI engagement — the ongoing optimization that turns a working system into a compounding organizational advantage.
Score yourself: 0 = No post-launch success criteria defined. 1 = Launch metric defined but no ongoing review cadence. 2 = 30-, 90-, and 180-day success criteria defined with review process in place.
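For illustration, the criteria at each checkpoint can live in a simple shared structure. The metric, targets, and review labels below are hypothetical placeholders, assuming a customer-service agent use case like the rider-questions example from Question 1:

```python
# Hypothetical post-launch success criteria; the metric, targets, and
# review cadence are placeholders, not recommended values.
SUCCESS_CRITERIA = {
    30:  {"metric": "rider questions handled without a live agent", "target": 0.50, "review": "weekly tuning session"},
    90:  {"metric": "rider questions handled without a live agent", "target": 0.70, "review": "monthly accuracy audit"},
    180: {"metric": "rider questions handled without a live agent", "target": 0.80, "review": "quarterly business review"},
}

for day, plan in sorted(SUCCESS_CRITERIA.items()):
    print(f"Day {day}: {plan['metric']} >= {plan['target']:.0%} ({plan['review']})")
```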
Question 10: Are You Prepared to Iterate, or Are You Expecting a Finished Product?
This is a mindset question, and it is one of the most predictive of project success.
AI systems improve through use. The first version of a production AI system should be better than nothing and worse than the third version. Organizations that understand this, that budget for iteration and build feedback loops from day one, get dramatically better outcomes than organizations that treat an AI deployment as a one-time project with a defined end date.
If your internal stakeholders are expecting a finished, perfected product at launch, that expectation will work against the project from day one.
Score yourself: 0 = Expecting a finished product at launch. 1 = Open to iteration but no formal feedback mechanism planned. 2 = Iteration and feedback loops planned as part of the engagement from day one.
How to Interpret Your Score
Add up your scores across all ten questions. The maximum possible score is 20.
| Score | Tier | What It Means |
|---|---|---|
| 0 to 6 | Not Ready | Foundational gaps that need to be addressed before any build begins. Start with an Assess engagement. |
| 7 to 11 | Building Foundation | Meaningful readiness in some areas, significant gaps in others. Map the gaps before scoping a build. |
| 12 to 16 | Nearly Ready | Strong foundation with specific gaps to address. A structured scoping process will surface and resolve them. |
| 17 to 20 | AI Ready | You have the foundations in place. A well-scoped build engagement is your logical next step. |
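If you want to tally your result programmatically, here is a small Python sketch that maps the unweighted total to a tier using the table above. The example scores are placeholders for your own answers, and note that the downloadable scorecard adds weighting for regulated industries, which this sketch does not attempt:

```python
# Replace with your own answers: one score of 0, 1, or 2 per question, in order.
scores = [2, 1, 0, 1, 2, 1, 0, 1, 2, 1]
assert len(scores) == 10 and all(s in (0, 1, 2) for s in scores)

total = sum(scores)  # maximum possible is 20

# Tier boundaries from the table above.
if total <= 6:
    tier = "Not Ready"
elif total <= 11:
    tier = "Building Foundation"
elif total <= 16:
    tier = "Nearly Ready"
else:
    tier = "AI Ready"

print(f"Total: {total}/20 -> {tier}")
```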
Download the AI Readiness Scorecard
The scorecard expands each question with additional sub-questions, weighting for regulated industries, and a complete score sheet you can use in internal planning conversations or share with a prospective AI development partner.
Get the AI Readiness Scorecard
What to Do With Your Score
If you scored in the Not Ready or Building Foundation tier, the most useful next step is not to find a developer. It is to do the foundational work that will make a development engagement successful when you are ready for it. We are happy to help with that work, and our post on how we scope and deploy AI projects covers what that process looks like in practice.
If you scored in the Nearly Ready or AI Ready tier, you have the foundations in place and a structured scoping conversation is the right next step. That conversation will surface the specific gaps your score identified and map them to a build plan that accounts for them. You can also get a jump start by downloading our AI Roadmap and learning how to spot your best opportunities right now.

Either way, knowing your score before you start talking to vendors is the most valuable thing you can do for your AI budget.
Not Sure Where You Stand?
Let’s Find Out Together
I offer a free 30-minute AI readiness call. We work through your score together, identify your highest-risk gaps, and give you an honest picture of what you need to address before a build makes sense.
About the Author
Jason Wells is the founder of AI Dev Lab and a fractional Chief AI Officer who helps organizations implement AI that actually works in production. He has developed more than 20 AI products, led technology initiatives across six continents, and spent two decades building technology for transit and regulated-industry clients. He holds a degree from Wharton and a degree in applied mathematics, and he is a four-time Ironman finisher.

