Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, in a conversation with Techitup ME explains how ManageEngine is simplifying AI adoption and shaping the future of enterprise IT.
What AI trends do you believe will shape the industry in 2025 and beyond?
As always, the AI hype and the marketing have run ahead of reality. But since 2022, we’ve all been using AI in some form in our personal lives, and we’re familiar with what ChatGPT and other large language models (LLMs) can do. In the next few years, we’ll see deeper integration of AI into enterprise software. Enterprises always have a late-mover advantage, and a recent Economist survey shows that 82% of employees don’t want their employers to know they’re using LLMs. This must change, as LLMs need to be part of your enterprise stack. Agents are a step in the right direction, as they connect LLMs to structured enterprise data. LLMs excel at summarizing conversations and helping with document understanding.
AI will increasingly integrate into everyday IT, marketing, and sales workflows, with agents playing a key role in connecting AI to these processes. Over the next two to three years, we’ll see widespread adoption of enterprise AI, deeply intertwined in company workflows. Additionally, AI will become a standard feature in both consumer and enterprise software, no longer a competitive differentiator but a necessity. A parallel can be drawn to mobile apps: in 2011-2012, apps like Instagram were exclusive to iOS, but today, most apps are available across web, Android, and iOS platforms. We will see a similar shift with AI, where every software tool you use will include AI capabilities, such as summarization and predictive functions. This will be the norm within the next five years.
How can organizations adapt to AI’s integration into infrastructure and stay ahead of potential disruptions?
Technology evolves in waves of innovation, and leaders in one wave often don’t lead in the next. For example, the music industry transitioned from vinyl records to cassettes, CDs, downloaded MP3s, and now streaming. The top CD manufacturer has no presence in today’s streaming market. Waves of innovation can cause disruptions, but not all are relevant to every company. For example, the crypto wave didn’t impact most enterprise workflows. However, missing a wave can be costly—ask Tesla and Toyota about BYD becoming the world’s largest EV automaker, Sony about Spotify and Apple Music, or Nokia about iPhones and Androids. It’s crucial to identify which wave matters to you, and AI is a powerful driver of digital transformation. For the last 20 years, digital transformation has been a key focus, and AI is just one piece of this puzzle.
To stay competitive, companies must ride this wave, as digital transformation enhances productivity for those using the right tools. However, it’s important to recognize that AI isn’t always accurate. For non-tech companies—like manufacturers, hotels, or hospitals—looking to deploy AI, the first step is choosing the right partner. The problem with AI today is that many SaaS companies are resellers of compute power or OpenAI services. It’s crucial to find a partner with a long-term vision—someone who hasn’t just rebranded overnight to appear as an AI company.
A level-headed partner is the first step. Once you’ve selected the right partner, the next step is determining where and how to deploy AI. Find the low-hanging fruit—problems that can be addressed with around 80% accuracy. This aligns with common product management advice but is especially important with AI. If a task requires 100% accuracy—like automating credit decisions in a bank—AI might not be the ideal solution, as large language models typically achieve only 80-90% accuracy.
To effectively integrate AI, companies need a holistic understanding of their business and industry. While your partner may not grasp all the specifics, you do. It’s essential to distinguish between AI’s real capabilities and the surrounding hype. A neutral partner can help navigate this, offering practical insights into what’s possible and what’s not.
What’s ManageEngine’s AI roadmap? How are you enabling the modern IT enterprise?
We’ve been building AI for the past 12 years, and we strongly believe that not everything is a large language model (LLM) problem. Instead, we focus on bespoke foundational models for IT, as AI works best in digital domains like IT, where information is already computer-based and historical data is available. However, the impact of LLMs on IT has been minimal because they are optimized for natural language, not the specialized IT terminology that involves configurations, security parameters, and executables.
We believe in right-sizing AI models—using machine learning models that are best suited for solving specific problems, rather than trying to apply LLMs everywhere. This approach has been central to our work over the last 12 years.
Recently, we launched our agentic platform, where ManageEngine has already deployed pre-built agents across its products. However, we understand that each company’s use of agents may vary. So, we’ve developed an agent studio that allows companies to deploy agents tailored to their specific use cases using low-code or no-code platforms. We are currently working with a few customers in a closed preview and expect general availability in June.
You mentioned agentic AI, and of course for AI adoption as a whole, the costs can be overwhelming. Does ManageEngine have a solution to address this?
We don’t see AI as a separate entity. As of February 2025, our customers have paid nothing for the contextual intelligence services we offer. We treat it as part of the product, much as you get Apple Intelligence with an iPhone without paying extra for it. Currently, there are no charges for standalone intelligence services, and we don’t plan to price any of them, such as anomaly detection or forecasting.
Agents, however, have usage limits, and customers can expand their usage for an additional cost, as is typical in the agent market. The key to our approach is right-sizing the models. By not forcing everything into large language models, we avoid the need for expensive GPUs and large data sets that can compromise privacy. This strategy allows us to offer significant value to our customers while maintaining privacy boundaries.