Andreas Hassellöf, CEO of Ombori, highlights a broader shift in AI toward what many are now calling "Physical AI": intelligence that can observe, interpret, and respond in real-world environments.
Artificial intelligence is entering a new chapter, one that will redefine how machines interact with the world around us. While much of the past decade was focused on digital breakthroughs, we're now seeing AI evolve in a way that brings intelligence into the physical world. This shift is more than a technological milestone. It's a turning point in how we live, work, and interact with our environments.
The AI journey began quietly. It started with recommendation engines that predicted what we might want to watch, listen to, or buy. These systems operated in the background, learning from our behavior and nudging us toward the next click. Then came generative AI, and with it a wave of models that could talk, translate, summarize, and create content on demand. Large language models became household tools, reshaping communication, creativity, and knowledge work almost overnight.
But through it all, AI remained largely confined to digital spaces. It processed information, but it didn’t truly understand or act within the physical world. That’s what makes NVIDIA’s recent release of Cosmos-Reason1-7B such a significant development. This isn’t just another large language model. It’s part of a broader shift toward what many are now calling “Physical AI”—intelligence that can observe, interpret, and respond in real-world environments.
Cosmos-Reason1-7B is designed to reason about cause and effect. It can infer what might happen next, evaluate risks, and choose a safe course of action. Crucially, it can do so in natural language, explaining its reasoning in a way that humans can understand. This opens the door to AI systems that are not only autonomous but also explainable, an essential quality for trust and adoption in complex environments.
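To give a feel for what this looks like in practice, here is a minimal sketch of asking such a model for a safety judgment. It assumes Cosmos-Reason1-7B is being served locally behind an OpenAI-compatible endpoint (for example via an inference server such as vLLM); the endpoint URL, the served model id, and the prompt are illustrative assumptions, not an official NVIDIA example.

```python
# Minimal sketch: asking a physical-reasoning model to assess a hazard and
# explain its decision in plain language. Assumes the model is served behind
# an OpenAI-compatible endpoint (e.g. a local inference server); the URL and
# the served model name below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

prompt = (
    "A warehouse robot's forward camera shows a wet patch on the floor, "
    "two meters ahead and directly on its planned path. What is likely to "
    "happen if it continues at full speed, and what would be a safer "
    "course of action? Explain your reasoning briefly."
)

response = client.chat.completions.create(
    model="nvidia/Cosmos-Reason1-7B",  # assumed served-model id
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep safety-critical reasoning conservative
)

# The reply is natural-language reasoning a human operator can audit.
print(response.choices[0].message.content)
```

The point of the sketch is the last line: the model's output is an explanation a person can read and challenge, not an opaque control signal.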
NVIDIA’s CEO Jensen Huang has referred to this as the “ChatGPT moment for robotics,” and it’s easy to see why. Just as generative AI unlocked a new wave of digital tools and interfaces, physical AI has the potential to transform logistics, healthcare, infrastructure, and consumer experiences. The ability to deploy intelligent systems that can reason in real time, navigate uncertainty, and adapt to dynamic conditions changes the entire equation.
Imagine a warehouse robot that doesn’t just follow a predefined path but understands that a wet surface could cause it to slip, or that a particular object is both fragile and oddly shaped. Picture a hospital cart that slows down in crowded corridors or reroutes itself to avoid obstacles, reducing the physical strain on nurses. Think of traffic lights that adjust based on the behavior of pedestrians, delivery robots, or baby strollers, not just on timers. These are not far-future scenarios. They’re beginning to happen now.
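Architecturally, those scenarios all follow the same sense-reason-act pattern. The sketch below is a deliberately simplified, hypothetical illustration of that loop; `capture_scene`, `ask_reasoning_model`, and `execute` are placeholder stubs standing in for a real perception stack, model call, and motion controller.

```python
# Hypothetical sense-reason-act loop for a physical-AI robot. Every helper
# here is a placeholder stub; a real system would wire these to cameras,
# a reasoning model, and a motion controller.
def capture_scene() -> str:
    """Stub: return a text description of the current camera frame."""
    return "wet patch on floor, two meters ahead, on planned path"

def ask_reasoning_model(observation: str) -> str:
    """Stub: send the observation to a reasoning model and return its
    recommended action as natural language."""
    return "reduce speed and reroute around the wet patch"

def execute(action: str) -> None:
    """Stub: translate the recommendation into motor commands."""
    print(f"executing: {action}")

# The loop re-evaluates continuously, so the robot adapts as the scene changes.
for _ in range(3):  # three ticks for demonstration; a robot would loop indefinitely
    scene = capture_scene()              # sense: what is in front of the robot?
    action = ask_reasoning_model(scene)  # reason: what could go wrong, what is safe?
    execute(action)                      # act: carry out the safe choice
```

What distinguishes physical AI from a scripted automaton is the middle step: the action is chosen by reasoning about the scene, not looked up from a fixed rulebook.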
Of course, the real world is messy. Unlike digital platforms, physical environments are unpredictable. They involve a mix of legacy systems, regulations, safety considerations, and human behaviors that can't be easily standardized. This is why progress in this space has often lagged behind what is technically possible. It's not just about building smarter models. It's about integrating them seamlessly into real environments without weeks or months of custom development.
That’s why open-source models like Cosmos-Reason1-7B matter so much. They democratize access to this new capability, allowing developers and organizations to experiment, iterate, and deploy at scale. They lower the barrier to entry for innovation and speed up the timeline for real-world adoption.
This matters not just for the tech industry but for every sector that interacts with the physical world, which is to say, nearly all of them. In cities pushing toward smart infrastructure, AI that understands space and context will be central to improving everything from public transport to energy efficiency. In retail and hospitality, intelligent systems will support human workers by handling repetitive or physically demanding tasks while maintaining a high level of service and safety. And in manufacturing and logistics, physical AI could become the backbone of a more adaptive and resilient supply chain.
As always, there are questions to be answered. How do we ensure safety and ethical deployment? How do we regulate autonomous systems in public spaces? How do we build trust in machines that make decisions on the fly? These challenges are real, but they’re also surmountable, especially when transparency and collaboration are built into the process from the beginning.
We are no longer asking if AI can move beyond the screen. That question has been answered. At Phygrid, an Ombori company, we’re asking how quickly we can bring intelligence into the real world and how wisely we choose to use it.
This is where Phygrid plays a vital role—bridging the gap between digital intelligence and physical environments, helping organizations deploy real-world AI solutions with speed, scale, and safety.
This is the third wave of AI. It’s not about virtual assistants or digital personas. It’s about machines that see, think, and act in the same spaces we do. The future of AI isn’t coming. It just walked through the door.