Techitup Middle East

Interview: The Path from AI Pilot to Production

Many enterprises are stuck in “pilot purgatory.” Why do AI projects succeed in POCs but fail to reach production?

I think it comes down to how these projects are built, because pilots and production systems are created for completely different objectives. When you build a pilot or POC, the goal is simply to show potential value as quickly as possible. You’re not necessarily building it in a way that’s ready for deployment. It’s literally a proof of concept, a quick example showing that the technology could work.

In practice, that means the data used is often just an extract from the real production system, with no robust connection to live data. The design is usually done quickly and not in close consultation with the business experts who will eventually use it. There’s also no governance or monitoring because it wasn’t intended for production.

That’s very different from building a system meant to be deployed with confidence, fully connected to production systems, and monitored after launch. So yes, the industry has focused heavily on pilots and POCs, and that’s why so few projects make it into production.

The challenge now is to move past those limitations and establish an operating model that lets organizations identify opportunities, build with confidence, deploy with confidence, and run these systems reliably. That requires different approaches and capabilities that most organizations haven’t applied during the pilot phase.

What are the most common mistakes organizations make when trying to operationalize AI?

One major mistake is building solutions in isolation. AI expertise is scarce, so many organizations have small, pressured teams working very quickly. This leads to solutions being built in a silo and handed over to the business only at the end.

But when the business receives a solution that’s meant to change how they work, it often creates distrust. If someone has been doing their job for 20 years and suddenly receives a black box claiming to transform their work, they’re naturally skeptical.

Organizations need to break down the wall between the AI lab or center of excellence and the rest of the business. AI experts must collaborate directly with the business experts who will use these solutions. That’s the only way to ensure relevance, adoption, and trust.

And it should be more than a tick-box exercise, right?

Exactly. For AI to reach its potential and not end up being a disappointment, it has to be far more than a small assistant sitting on the side. It must be integrated deeply into how the company actually operates. That means connecting with enterprise systems and data and tackling the most complex processes.

Previously, many of these processes could only be done by humans because traditional software couldn’t handle them. AI now gives us the ability to automate more complex, judgment-based tasks. There’s a major opportunity to extend automation into parts of the business that were previously out of reach.

Do you believe regulation will accelerate or slow down AI deployment in the enterprise?

AI regulation is both a reality and a necessity. Governments definitely need to guide how powerful technologies are used. Think of it this way: imagine driving without traffic laws, and how quickly things would become chaotic and inefficient.

What slows innovation isn’t the regulation itself, but an organization’s inability to adapt to it. Large enterprises operate across borders and will face different regulatory requirements in different markets. And because AI is evolving so quickly, regulations will evolve too.

The key is agility. Organizations need governance models that adapt to various regulations and use cases, while still supporting performance, cost-effectiveness, and business needs. If they can do that, regulation won’t be a barrier.

You talk about an “enterprise reasoning layer.” What is it, and why is it important for scaling AI with trust?

When you look at the scale of what’s possible and increasingly what’s competitively necessary, organizations have thousands, sometimes hundreds of thousands, of business processes. Any one of them could be augmented by one or more AI agents.

This highlights the gap between building a one-off pilot and building a capability to deliver AI agents at enterprise scale.

Organizations have scaled other technologies before. Cloud migrations let them build applications faster. Data platforms created dedicated data layers for agility.

Now, they need something similar for AI: an enterprise reasoning layer. It’s a dedicated layer that lets organizations combine different types of intelligence while connecting to the underlying compute and data layers. It enables them to design intelligent agents specific to each application repeatedly, at scale, and with trust.

It becomes the abstraction layer necessary to truly deliver on the promise of AI across the enterprise.
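To make the idea concrete, here is a minimal, purely illustrative sketch of what such an abstraction layer could look like in code. Every name here (`ReasoningLayer`, `Agent`, the registration and routing API) is a hypothetical assumption for illustration, not a description of any actual product.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    """Stands in for one intelligent agent: a model/tool pipeline behind a handler."""
    name: str
    handler: Callable[[dict], dict]

class ReasoningLayer:
    """Hypothetical sketch of a reasoning layer: registers one agent per
    business process and routes requests to it, abstracting away the
    underlying compute and data connections."""

    def __init__(self) -> None:
        self._agents: Dict[str, Agent] = {}

    def register(self, process: str, agent: Agent) -> None:
        # A real layer would also attach governance metadata here:
        # owner, audit policy, data contracts, monitoring hooks.
        self._agents[process] = agent

    def run(self, process: str, request: dict) -> dict:
        agent = self._agents[process]
        result = agent.handler(request)
        # A production layer would log inputs and outputs for traceability.
        return {"agent": agent.name, "result": result}

# Usage: design an agent for one specific process, then route work through the layer.
layer = ReasoningLayer()
layer.register(
    "invoice_triage",
    Agent("triage-v1", lambda req: {"route": "finance" if req["amount"] > 1000 else "auto"}),
)
print(layer.run("invoice_triage", {"amount": 2500}))
# → {'agent': 'triage-v1', 'result': {'route': 'finance'}}
```

The point of the sketch is the shape, not the details: applications talk to one layer, and the layer owns agent registration, routing, and (in a real system) the governance and monitoring that make scale trustworthy.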

So, it’s really about AI working with you, not for you.

Exactly. Humans are still a core part of these systems. AI may have some human-like abilities, elements of judgment or decision-making, but the future is a hybrid workforce. Humans and agents will collaborate, passing tasks back and forth depending on the situation.

You released the AI Confessions report yesterday. What were the findings, and how are UAE business leaders faring?

One fact really stood out for the UAE: quite a number of leaders said they couldn’t trace how their AI systems were making decisions. This issue appears across all regions, and no one has full confidence yet, but the UAE’s number was relatively high.

My interpretation is that there’s a strong desire to move fast. Organizations here often buy off-the-shelf solutions or work with service providers to deploy quickly. But that sometimes means skipping steps related to traceability and understanding how decisions are made.

Speed is good, but when running AI systems in production at scale, organizations need to be able to look inside and understand the reasoning behind decisions.

The next step for many organizations is moving beyond the “go fast, deploy pilots” mindset. They need to build a proper factory for developing and deploying AI agents at scale with the necessary controls, visibility, and understanding built in.

Two other findings also caught my attention — one technical, one cultural.

First, the technical surprise: UAE data leaders are among the most willing globally to accept higher error thresholds in AI systems. One in ten told us they’d allow more than 10% error before reverting to human oversight. That’s higher than global norms. At first glance, this might seem reckless. But I think it reflects something smarter: a recognition that perfection is the enemy of progress. They’re deploying AI in domains where an 85–90% accuracy rate still delivers significant value, rather than waiting for 99% certainty that may never arrive.
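One simple way an "error budget before human review" policy like this could be encoded is a rolling error-rate gate. This is an illustrative sketch under assumptions of my own (the `ErrorBudgetGate` name, the fixed rolling window, the boolean outcome signal); the survey describes a threshold, not this implementation.

```python
from collections import deque

class ErrorBudgetGate:
    """Hypothetical policy gate: tolerate an error rate up to `threshold`
    over a rolling window; above it, revert decisions to human oversight."""

    def __init__(self, threshold: float = 0.10, window: int = 100):
        self.threshold = threshold              # e.g. 10% tolerated error rate
        self.outcomes = deque(maxlen=window)    # rolling record: True = correct

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return 1 - sum(self.outcomes) / len(self.outcomes)

    def needs_human(self) -> bool:
        # Escalate once the rolling error rate exceeds the agreed budget.
        return self.error_rate() > self.threshold

# Usage: 3 mistakes in the last 20 decisions is a 15% error rate,
# which breaches a 10% budget and triggers human review.
gate = ErrorBudgetGate(threshold=0.10, window=20)
for ok in [True] * 17 + [False] * 3:
    gate.record(ok)
print(round(gate.error_rate(), 2), gate.needs_human())
# → 0.15 True
```

The design choice worth noting is the rolling window: it lets a system that was once accurate lose its autonomy as soon as recent performance degrades, rather than averaging the problem away over its whole history.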

The cultural surprise: When AI delivers results, UAE organizations credit the AI and data science teams (42%) — not the CIO, not business leaders, but the practitioners who built the systems. That’s the highest globally. And when things fail, responsibility gets shared almost evenly between technical teams and the CIO.

This matters because it signals that UAE organizations treat AI as a technical discipline that requires specialized expertise, not just an IT procurement decision. You can’t build sophisticated AI systems if the people with the skills aren’t valued and accountable. The UAE seems to understand that.
