From AI To-Do Lists to Real Strategy: How to Build Systems That Drive Productivity

Every enterprise boardroom in the UK has, at some point in the last 18 months, signed off an “AI strategy.” The decks are polished. The roadmaps are colour-coded. The pilots are running. And yet, the productivity gains aren’t showing up.

This isn’t a technology problem. The models are good, and in many cases extraordinary. The issue is that most organisations have confused activity around AI with strategy for AI. They’ve built a to-do list dressed up as a transformation plan, and they’re now sitting on a growing pile of disconnected experiments that no one quite knows how to stitch together.

We’ve seen this pattern enough times to recognise it on sight.

The Pilot Graveyard Problem

Ask the average enterprise how many AI initiatives are currently running across the business. The honest answer is usually: more than we thought, fewer than we hoped, and far less coordinated than we’d like to admit.

BCG research has shown that properly designed agentic AI systems can accelerate business processes by 30-50% and reduce low-value work by up to 40%. But “properly designed” is doing a lot of heavy lifting in that sentence. Most AI deployments in large organisations are neither properly designed nor coordinated – they’re departmental experiments that got funding, ran for a quarter, produced a promising demo, and then quietly stalled when it came time to connect them to real data or real workflows.

The problem has a name in some circles: pilot purgatory. You’re not failing, exactly – you’re just never quite succeeding either.

Gartner recently estimated that out of the thousands of vendors claiming to offer agentic AI capabilities, fewer than 130 are genuinely doing so. That’s the supply side of the problem. For broader context on how the UK AI consulting landscape is navigating this, this overview of how AI companies are reshaping UK enterprises is worth reading. On the demand side, of the enterprises claiming to have an AI strategy, very few have built the underlying infrastructure – data architecture, governance frameworks, cross-system integration – that would allow those strategies to compound over time.

What a Real Strategy Looks Like

The distinction we keep coming back to is between AI as a feature and AI as an operating layer.

Feature-level AI adds value at the point of use: it summarises your meeting notes, flags anomalies in your finance data, suggests next-best-action in your CRM. This isn’t nothing. But it’s additive, not transformative. The work hasn’t changed – it’s just slightly faster.

Operating-layer AI is different. Here, intelligent agents don’t wait to be prompted. They coordinate across systems, surface insights without being asked, and handle multi-step workflows that previously required human handoffs. The work itself is redesigned around what machines can now do continuously and what humans should be doing instead.
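
To make the contrast concrete, here is a deliberately minimal sketch of what an operating-layer workflow might look like in code. Everything in it – the system names, the functions, the invoice scenario – is a hypothetical stand-in rather than a real integration; the point is the shape: the agent polls, matches, routes, and escalates without a person shepherding each step.

```python
# Hypothetical sketch: an operating-layer agent chaining steps that
# previously required human handoffs. All names are illustrative
# stand-ins, not a real API.

from dataclasses import dataclass


@dataclass
class Invoice:
    supplier: str
    amount: float
    po_number: str


def fetch_unmatched_invoices() -> list[Invoice]:
    """Stand-in for a connector into the finance system."""
    return [Invoice("Acme Ltd", 12_500.0, "PO-4821"),
            Invoice("Unknown Co", 800.0, "??")]


def matches_purchase_order(invoice: Invoice) -> bool:
    """Stand-in for a cross-system lookup against procurement data."""
    return invoice.po_number.startswith("PO-")


def route_for_approval(invoice: Invoice) -> None:
    """Stand-in for posting into an approvals queue."""
    print(f"Routed {invoice.po_number} ({invoice.amount:.2f}) for approval")


def run_invoice_agent() -> None:
    # The agent runs on a schedule, not on a prompt: it polls, matches,
    # routes, and escalates without a person driving each step.
    for invoice in fetch_unmatched_invoices():
        if matches_purchase_order(invoice):
            route_for_approval(invoice)
        else:
            print(f"Escalating {invoice.supplier} invoice to a human reviewer")


if __name__ == "__main__":
    run_invoice_agent()
```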

“Dropping a large language model into an existing system or adding a chat interface on top of legacy software doesn’t create transformation – it just creates another layer of complexity,” as the team at Elsewhen, the London-based AI consultancy and digital product studio, has put it. “True productivity doesn’t come from adding more tools; it comes from rethinking how work itself gets done in collaboration with a machine.”

That reframe is harder than it sounds. It requires organisations to be honest about which of their current AI initiatives are genuinely building toward an operating model – and which are just keeping stakeholders busy.

The Data Problem No One Wants to Talk About

Here’s the uncomfortable truth that sits underneath most stalled AI strategies: the data isn’t ready. Not “bad data” in the obvious sense – though that’s real too. The deeper issue is that enterprise data is fragmented, inconsistently governed, and in many cases structurally inaccessible to the AI systems that need it.

Models hallucinate not because they’re defective, but because they’re being asked to reason accurately about a business they can only see a fraction of.

The implication is that before most organisations can build AI systems that genuinely perform, they need to invest in the foundations: building data-cleaning pipelines, establishing governance, and connecting fragmented data sources into something coherent. This isn’t glamorous work. It doesn’t generate interesting demos. But it’s the difference between AI that reliably delivers and AI that reliably disappoints.
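
What does that foundation work actually look like? At its least glamorous, something like this: validation rules that live in the pipeline rather than in the prompt. The field names and rules below are illustrative assumptions, but the principle holds – quarantine bad records before any model ever reasons about them.

```python
# Hypothetical sketch: data-quality checks that live in the pipeline,
# not the prompt. Field names and rules are assumptions for illustration.

from datetime import date


def validate_record(record: dict) -> list[str]:
    """Return data-quality problems; an empty list means the record passes."""
    problems = []
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    if record.get("amount", 0) < 0:
        problems.append("negative amount")
    # ISO date strings compare correctly as plain strings
    if record.get("created", "") > date.today().isoformat():
        problems.append("created date is in the future")
    return problems


records = [
    {"customer_id": "C-001", "amount": 250.0, "created": "2024-03-01"},
    {"customer_id": "", "amount": -40.0, "created": "2099-01-01"},
]

clean = [r for r in records if not validate_record(r)]
quarantined = [(r, validate_record(r)) for r in records if validate_record(r)]

print(f"{len(clean)} clean, {len(quarantined)} quarantined")
for record, problems in quarantined:
    print(f"  quarantined: {problems}")
```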

As Dr. Aleksandra Przegalinska, AI researcher and author, has noted in her work on enterprise automation: “The organisations that will extract lasting value from AI are those that treat data infrastructure as a strategic asset, not an IT problem.”

That investment pays off – but only if the architecture is built to allow agents to plug into a cleaner, more connected ecosystem over time. Elsewhen’s AI Productivity Platform framework captures this well, describing a three-layer model where data foundations and agent deployment reinforce each other rather than being treated as sequential projects.

The Governance Gap Is Getting Wider

There’s a second quiet crisis running alongside the data problem: governance. As AI systems move from assistants to agents – from producing outputs to taking actions – the question of accountability becomes urgent. Who is responsible when an agent makes a decision? How do you audit a system that’s operating continuously and autonomously? What does “human in the loop” actually mean when the loop is running at machine speed?

Most UK enterprises don’t have good answers to these questions yet. And the regulatory environment, while still taking shape, is moving in a direction that will require them to have answers – and documented evidence of those answers – sooner rather than later.

The consultancies doing serious work in this space aren’t treating governance as a compliance checkbox. They’re building it into the design phase: defining autonomy thresholds upfront, assigning clear ownership to each agent or workflow, enforcing audit trails at the infrastructure level. For a sense of how the most capable UK firms are approaching this, this guide to leading UK AI consulting firms offers useful benchmarking across different delivery models.

The framing, as one senior practitioner put it, is to “treat agents the way you would treat a new hire – with credentials, a job description, and performance monitoring.” That mindset shift – from deploying a tool to onboarding a digital worker – turns out to be one of the most practically useful ways to get governance right.
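
As a hypothetical sketch of what that onboarding might mean in practice – with the structure and field names assumed for illustration, not drawn from any particular framework – an agent “manifest” could carry its owner, its job description, its autonomy threshold, and an audit trail of every action:

```python
# Hypothetical sketch of the "agent as new hire" framing: an owner,
# a job description, an autonomy threshold, and an audit trail.
# Structure and field names are assumptions, not any framework's API.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentManifest:
    name: str
    owner: str              # the accountable human, named upfront
    job_description: str    # what the agent is hired to do
    autonomy_limit: float   # actions above this need human sign-off
    audit_log: list[str] = field(default_factory=list)

    def act(self, action: str, value: float) -> bool:
        """Record every action; approve only within the autonomy threshold."""
        approved = value <= self.autonomy_limit
        status = "auto-approved" if approved else f"escalated to {self.owner}"
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} | {action} | {value} | {status}")
        return approved


agent = AgentManifest(
    name="invoice-router",
    owner="finance-ops@company.example",
    job_description="Match supplier invoices to POs and route for approval",
    autonomy_limit=10_000.0,
)

agent.act("route_invoice", 2_500.0)   # within threshold: proceeds
agent.act("route_invoice", 50_000.0)  # above threshold: escalated by design
print("\n".join(agent.audit_log))
```

The useful property is that escalation is structural rather than optional: anything above the threshold lands with a named owner, and the audit trail exists whether or not anyone remembers to ask for it.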

From To-Do List to Compounding System

So what separates the organisations that are genuinely moving forward from those stuck in the pilot loop? Three things, in our experience.

First, they’ve stopped treating AI initiatives as standalone projects and started building toward a unified architecture – one where data flows cleanly, agents can operate across systems, and new use cases can be added without rebuilding from scratch.

Second, they’ve built accountability into the design, not bolted it on afterward. Every agent has an owner. Every workflow has defined boundaries. Every output has a mechanism for oversight.

Third – and perhaps most importantly – they’ve accepted that the first wins will be modest and operational, not grand and transformational. Sales operations. Document processing. Customer query routing. These are the domains where agents build a track record, where the organisation learns how to manage autonomous systems, and where the trust gets established to expand into higher-stakes territory over time.
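
A sketch of that first, modest win can be as small as this – a query router with a confidence floor and a human fallback. The keyword classifier below is a toy stand-in for whatever model would actually sit there, and the routes, keywords, and thresholds are all assumptions:

```python
# Hypothetical sketch: customer query routing with a confidence floor and
# a human fallback. The keyword classifier is a toy stand-in for a model;
# routes, keywords, and thresholds are assumptions for illustration.

ROUTES = {
    "refund": "billing-team",
    "invoice": "billing-team",
    "password": "it-support",
    "delivery": "logistics",
}

CONFIDENCE_FLOOR = 0.7  # below this, a human takes over


def classify(query: str) -> tuple[str, float]:
    """Toy classifier: a keyword hit means high confidence, otherwise low."""
    for keyword, team in ROUTES.items():
        if keyword in query.lower():
            return team, 0.9
    return "unknown", 0.2


def route(query: str) -> str:
    team, confidence = classify(query)
    if confidence < CONFIDENCE_FLOOR:
        return "human-triage"  # the agent earns trust by knowing its limits
    return team


print(route("I need a refund for order 1182"))  # -> billing-team
print(route("My parcel arrived damaged"))       # -> human-triage
```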

The compounding effect is real, but it requires patience and architecture in equal measure. An AI strategy that delivers isn’t a list of initiatives – it’s a system designed to improve continuously, connecting intelligence to infrastructure in ways that get smarter the longer they run.

That’s the difference between a to-do list and a strategy.
