Most mid-size companies are now under pressure to do something with AI. The pressure comes from boards, customers, competitors, and internal teams who are using AI tools in their personal lives and asking why the company is not. The question is rarely whether to start. It is how to start without wasting a year on the wrong thing.
For a company at this stage, the practical answer is to run a structured readiness step before committing to a project. Here is what that actually involves and why it changes the odds of success.
What to know
• AI readiness is mostly about data and processes, not about technology or models, and an honest audit of both is the foundation for any successful project.
• Most companies overestimate the quality of their data and underestimate the work needed to make it usable for AI applications.
• A well-run readiness assessment usually produces a shortlist of two or three viable first projects rather than a single grand plan, and the prioritisation is the actual deliverable.
What readiness actually means in practice
Readiness for AI is not the same as readiness for a software project. A traditional software project needs a problem, a budget, and a team. An AI project needs all of those plus three other things. It needs data that is accessible, complete and consistent. It needs business processes that are stable enough that a model can be trained on them without becoming obsolete in six months. And it needs an organisation that will trust and adopt the output of a model rather than overriding it on intuition.
When mid-size companies skip the readiness step, the failure mode is predictable. The team starts building. The data turns out to be more fragmented than expected. The process being modelled turns out to be changing as the project progresses. The model gets built, technically works, and then sits unused because nobody trusts the output. None of these failures are technical. They are organisational, and they could have been spotted in a readiness audit.
The data audit step
The first piece of a readiness assessment is an honest audit of the data that the candidate AI project would depend on. Where does it live? Who owns it? What state is it in? Is it complete enough and consistent enough that a model could be trained on it? Is there enough history to be useful? Is the labelling reliable, or would the project need a labelling phase before any model work could start?
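Several of these questions can be answered quantitatively once the candidate data can be exported. A minimal sketch of what such checks might look like, assuming the data arrives as a list of records; the field names, the sample records, and the specific metrics here are illustrative, not taken from any particular system:

```python
from datetime import date

def audit_records(records, label_field, date_field):
    """Rough data-audit metrics: completeness, label coverage, history span.
    Field names and thresholds are illustrative assumptions."""
    total_cells = sum(len(r) for r in records)
    filled = sum(1 for r in records for v in r.values() if v not in (None, ""))
    labelled = sum(1 for r in records if r.get(label_field) not in (None, ""))
    dates = [r[date_field] for r in records if r.get(date_field)]
    return {
        "completeness": filled / total_cells,           # share of non-empty cells
        "label_coverage": labelled / len(records),      # share of rows with a label
        "history_days": (max(dates) - min(dates)).days, # span of available history
    }

# Two hypothetical transaction records, one with gaps.
records = [
    {"amount": 120, "label": "fraud", "date": date(2023, 1, 5)},
    {"amount": None, "label": "", "date": date(2024, 6, 1)},
]
print(audit_records(records, "label", "date"))
```

Even crude numbers like these turn "is the data good enough?" from a matter of opinion into something the readiness assessment can report and track.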
A proper AI readiness assessment usually surfaces the data work that is needed before the project can start, and that is one of the most useful things it produces. Companies that go in expecting a green light and instead get a clear list of data fixes to address first usually save themselves six to twelve months of pain. The list itself is more valuable than the green light would have been.
The audit also flags data that exists but is locked behind systems that do not currently support extraction. Legacy systems often store useful data in formats that cannot be queried at scale. Knowing that early changes the project plan in important ways.
The process audit step
The second piece is the process audit. Even with good data, an AI project needs a process that is stable enough to model and important enough that improving it is worth the effort. Processes that are still being designed are not good candidates for a first AI project. Processes that are deeply embedded but unimportant to the business are also not good candidates.
The right candidate is a process that is well established, generates meaningful business value, has measurable outputs, and where small improvements in accuracy or speed compound into significant value over time. Customer support triage, document processing, demand forecasting, fraud detection, and lead scoring tend to fit these criteria. New product design, novel customer interactions, and fast-changing operational workflows usually do not, at least for a first project.
According to research summarised by McKinsey on the state of AI in enterprise, the gap between organisations that have captured meaningful value from AI and those that have not is largely explained by whether the work was concentrated on a small number of high-value workflows rather than spread thinly across many opportunities.
The data governance question
Closely linked to the data audit is the question of data governance. For an AI project to be sustainable, the underlying data needs to be governed in a way that supports ongoing model use. That means clear ownership, defined quality standards, documented lineage, and a process for handling changes to source systems. Without these, the model will degrade as the data feeding it drifts. Deliberate data management is part of every successful AI programme, not an add-on after the model is live. Companies that treat data governance as a follow-up project often spend the second year of their AI work rebuilding what the first year should have established.
The shortlist of candidate projects
A useful readiness assessment ends with a shortlist of two or three candidate first projects, with each one analysed for data readiness, process fit, business value, and organisational fit. The shortlist is what the leadership team uses to decide where to start.
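One common way to present that analysis is a simple weighted score across the four dimensions. A hypothetical sketch, in which the candidate projects, the 1-to-5 ratings, and the weights are all illustrative rather than prescriptive:

```python
# Weighted scoring of candidate first projects across the four
# readiness dimensions. All names, ratings, and weights are illustrative.
WEIGHTS = {"data_readiness": 0.35, "process_fit": 0.25,
           "business_value": 0.25, "org_fit": 0.15}

candidates = {
    "support triage":     {"data_readiness": 4, "process_fit": 5, "business_value": 3, "org_fit": 4},
    "demand forecasting": {"data_readiness": 2, "process_fit": 4, "business_value": 5, "org_fit": 3},
    "lead scoring":       {"data_readiness": 3, "process_fit": 3, "business_value": 4, "org_fit": 5},
}

def score(ratings):
    # 1-5 ratings per dimension, combined as a weighted sum.
    return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

shortlist = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
for name in shortlist:
    print(f"{name}: {score(candidates[name]):.2f}")
```

Note that with these illustrative numbers, the highest-value project (demand forecasting) ranks last because its data is weakest, which is exactly the trade-off a readiness assessment is meant to surface.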
The best first project is rarely the most ambitious one on the list. It is usually the one with the cleanest data, the most stable process, a clearly defined success metric, and an internal sponsor who will actively use the output. The ambitious projects come second, after the first has demonstrated that the company can deliver an AI project end to end.
For most mid-size companies the second project is the one where the value really starts to compound. The first project pays for itself in modest ways. The second and third projects, built on the platform and the team developed during the first, are where the transformation begins to be visible.
What this means for the next quarter
For a mid-size company that has not yet started, the right move is rarely to commission a strategy. The right move is to scope a focused readiness assessment that produces a shortlist of viable first projects within four to eight weeks. The output is a clear go/no-go on the data, the processes, and the candidate projects.
From there, the company can move quickly into a defined first project with a realistic chance of reaching production within six to nine months. That sequence is what separates the companies that have something running by the end of the year from those that are still circling the topic.