
Most enterprise AI initiatives fail because the scope is wrong from the start. As McKinsey observes, some organisations begin too small, hoping an incremental approach will reduce risk. Others go too big too soon, attempting to transform the entire business at once.
Both approaches fail for different reasons. Start too small - with isolated use cases - and the work rarely translates into meaningful business impact. Start too big - and the effort becomes too costly, too disruptive and too complex to execute.
Over 30% of generative AI projects are abandoned after proof of concept (Gartner, 2024), often because ambition outpaces organisational readiness. The answer sits between these extremes: a starting point large enough to matter, yet contained enough to succeed.
What the right size looks like
McKinsey's analysis of large-scale AI programmes suggests that long-term value typically comes from scaling across two to five interconnected business domains. Critically, successful organisations do not start there. They begin with a single domain - tightly scoped but meaningful - and expand from that foundation.
In practice, this means selecting a domain that contains five to fifteen related use cases and targeting a measurable improvement - typically around 20% incremental value - within a 6-36 month horizon.
A domain, in this context, is a distinct operational area with clear ownership, defined processes and measurable outcomes. The right starting point has four defining characteristics:
Bounded scope: a single business domain - such as customer acquisition, claims processing, content production or platform operations - rather than a cross-cutting function spanning the entire organisation
End-to-end redesign potential: large enough to support multiple interrelated use cases that can be transformed together, not just isolated automations
Measurable commercial impact: tied to clear outcomes such as revenue, margin, conversion or cycle time, with results visible within 6-36 months
Manageable dependencies: contained enough to redesign without requiring simultaneous change across unrelated parts of the business
AI leaders generate 62% of their AI value from core business processes and scale more than twice as many AI products, despite pursuing fewer opportunities. The implication is clear: the right size enables genuine workflow redesign, where AI changes how work gets done, not just what tools people use.
Stay ahead of the competition with Adrenalin's AI workshops
Practical, hands-on sessions that take your team from insight to action - in half a day or less.
AI Strategy Accelerator: align leadership and build a 90-day AI roadmap
AI Innovation Framework: apply design thinking and AI to solve real business problems
AI & User Research: build richer personas and customer journeys faster
AI Creative & Campaign Accelerator: move from brief to campaign concept in a fraction of the time

Choosing where to start
Knowing the right size is one challenge. Choosing the right domain is another. Many organisations stall here. Gartner reports that a significant share of low-maturity organisations cite "finding the right use case" as a primary barrier to progress.
There are four practical approaches to domain selection. Used together, they significantly increase the probability of choosing a starting point that delivers.
Approach 1: Score on impact and feasibility
The most widely used approach evaluates domains across two dimensions: business impact and implementation feasibility.
Impact considers the size of the opportunity, its strategic importance and the extent to which AI can change outcomes - not just automate tasks. Feasibility considers data availability, technical complexity, speed to value and the level of organisational change required. Domains that score well on both tend to share common traits:
Clear, quantifiable outcome: revenue growth, cost reduction, conversion rate improvement or cycle time reduction - not softer productivity gains
Accessible, sufficient data: the domain already generates structured data or can be instrumented to do so quickly
Defined ownership: a business unit leader who will champion the initiative and be accountable for results
Workflow redesign potential: the domain involves repeatable, high-volume processes where AI can fundamentally change how work gets done, not just assist it
Focusing on impact alone is a common failure mode. High-value domains with poor data quality or immature capabilities frequently stall before delivering results. Poor data quality, in particular, is consistently cited as one of the leading causes of AI project failure.
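For readers who want to make this scoring concrete, the evaluation above can be sketched as a simple weighted exercise. Everything in the sketch - the domain names, the criteria scores and the ranking rule - is a hypothetical illustration, not data from McKinsey, Gartner or any other source cited in this article:

```python
# Illustrative impact/feasibility scoring sketch.
# All domain names and scores are hypothetical assumptions for demonstration.
# Each domain is scored 1-5 on the four impact traits and four feasibility
# traits described above; the averages place it on a 2x2 prioritisation matrix.
domains = {
    "Claims processing":    {"impact": [5, 4, 4, 5], "feasibility": [4, 3, 4, 3]},
    "Content production":   {"impact": [3, 3, 4, 3], "feasibility": [5, 4, 5, 4]},
    "Customer acquisition": {"impact": [5, 5, 3, 4], "feasibility": [2, 2, 3, 2]},
}

def score(values):
    """Average the 1-5 criteria scores for one dimension."""
    return sum(values) / len(values)

ranked = sorted(
    domains.items(),
    # Rank on the weaker of the two dimensions first, so a domain that is
    # strong on impact but weak on feasibility (the failure mode noted above)
    # cannot win on impact alone; break ties on the combined total.
    key=lambda kv: (
        min(score(kv[1]["impact"]), score(kv[1]["feasibility"])),
        score(kv[1]["impact"]) + score(kv[1]["feasibility"]),
    ),
    reverse=True,
)

for name, scores in ranked:
    print(f"{name}: impact {score(scores['impact']):.1f}, "
          f"feasibility {score(scores['feasibility']):.1f}")
```

The one design choice worth noting is the min-first ranking: it encodes the caveat above that high-impact domains with poor feasibility (for example, weak data) should not outrank balanced candidates.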
Approach 2: Prioritise strategic proximity
The strongest starting points are closely tied to how the business competes.
McKinsey's research shows that executive involvement - particularly at CEO level - is strongly correlated with measurable financial impact from AI. In practice, this means prioritising domains that sit at the centre of strategic focus. Key signals of strategic proximity include:
Revenue impact: the domain directly influences conversion, retention or average order value
Competitive exposure: rivals are already deploying AI in this area or moving quickly
Executive mandate: a senior leader has named this domain as a priority for the current planning cycle
Operational pain: the domain has known inefficiencies that constrain growth or margin
The domain that sits at the intersection of strategic priority and operational pain is usually the right one.

Approach 3: Build for reuse, not just results
A consideration that separates high-performing AI programmes from one-off projects is infrastructure reusability. Deloitte's first-use-case methodology treats the initial AI initiative as permanent organisational infrastructure - governance standards, data pipelines and technical architecture that become the foundation for every subsequent deployment (Deloitte, 2025). BCG's 10-20-70 principle reflects the same logic: 10% of AI investment should go to algorithms, 20% to technology and data, and 70% to people, processes and change management (BCG, 2024).
When evaluating a candidate domain, the questions to ask are:
Data reusability: will the data infrastructure built here serve future AI initiatives or is it a point-to-point solution?
Governance portability: can the oversight and compliance framework developed for this domain be applied across others?
Model transferability: are the AI patterns and workflows developed here reusable in adjacent domains?
Team capability: will the people involved in this initiative build skills that compound across the programme?
A domain that scores well on reusability is worth more than its standalone value suggests.
Approach 4: Validate before committing
Before committing to a domain, leading organisations run a structured discovery exercise - typically four to eight weeks - to validate assumptions about data quality, technical feasibility and business value. McKinsey found that 80% of successful course corrections in struggling AI programmes came from re-anchoring scope to a better-defined domain, rather than improving the technology (McKinsey, 2024). A focused validation sprint should confirm:
Data quality: is the data accessible, clean and sufficient to train and run models reliably?
Baseline metrics: are current performance levels documented clearly enough to measure AI-driven improvement?
Stakeholder alignment: do the domain owner, technical team and executive sponsor agree on the definition of success?
Risk and compliance: are the regulatory, ethical and change management implications understood and manageable?
Leadership teams that see a domain assessed with this rigour - realistic benefit projections, clear KPIs and identified risks - are more likely to back the initiative with genuine commitment rather than passive approval. That distinction matters: 91% of high-maturity AI organisations have appointed dedicated AI leaders with the authority and accountability to drive execution (Gartner, 2025), which rarely happens without a compelling, evidence-based starting point.
Start right, then scale
The companies pulling away from their competitors on AI are not the ones who started with the most ambition or the most use cases. They are the ones who chose their starting point deliberately - a domain large enough to matter, scoped tightly enough to succeed and structured to build the capability that compounds over time. The question facing most Australian enterprise leaders is not whether to invest in AI. It is whether the starting point has been chosen with the same rigour applied to any other major business investment.


