Sebastian Luxem, Chief Technology Officer at Experience One, discusses what it takes to build a scalable and sustainable agent architecture, why decisions about AI autonomy are ultimately a leadership issue, and how companies can develop AI agents that operate reliably—even in the face of uncertainty.
Mr. Luxem, what strategic questions should companies ask themselves before investing in agentic AI?

We often hear people talk about AI agents without a clear understanding of what they are, or of how these systems could be applied effectively within their own organizations. Before investing in agentic AI, it’s essential to understand both the technology’s potential and its limitations. We’re not talking about just another smart chatbot. We’re talking about systems that pursue goals autonomously, make decisions, and interact with people, tools, and even other agents.

Would you assign tasks to a new employee without defining their responsibilities, providing the right tools, or offering any oversight? Probably not. The same principle applies to agentic AI. Clear responsibilities, defined requirements, and well-designed interfaces are essential from the start. AI agents need structured data, clean APIs, and clear expectations. Only those who lay this foundation can make the leap from concept to implementation. But even then, one critical question remains: how much autonomy should such a system have? That’s not a technical question; it’s a management decision. Just as with human employees, it must be clearly defined who decides what, within which boundaries, and with what level of accountability.
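Boundaries like these can be made explicit and auditable in configuration instead of living implicitly in prompts. The sketch below shows one hypothetical way to encode an agent’s mandate: which actions it may take on its own, which require human sign-off, and a hard spending boundary. Every name and field here is an illustrative assumption, not an established pattern.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMandate:
    """Illustrative, explicit definition of an agent's autonomy boundaries."""
    owner: str                                 # the accountable human, by role
    autonomous_actions: set = field(default_factory=set)  # allowed without review
    approval_required: set = field(default_factory=set)   # needs human sign-off
    max_spend_per_action_eur: float = 0.0      # a hard boundary, not a guideline

    def decide(self, action: str, cost_eur: float = 0.0) -> str:
        """Return 'allow', 'escalate', or 'deny' for a requested action."""
        if cost_eur > self.max_spend_per_action_eur:
            return "escalate"                  # over budget: a human decides
        if action in self.autonomous_actions:
            return "allow"
        if action in self.approval_required:
            return "escalate"
        return "deny"                          # anything undeclared is out of scope

# Hypothetical usage: the mandate answers "who decides what, within which boundaries."
mandate = AgentMandate(
    owner="head_of_support",
    autonomous_actions={"answer_faq", "create_ticket"},
    approval_required={"issue_refund"},
    max_spend_per_action_eur=50.0,
)
print(mandate.decide("issue_refund", cost_eur=20.0))  # -> 'escalate'
```

The point is not the specific fields but that autonomy becomes a reviewable artifact: anything the mandate does not declare is denied by default.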
In your opinion, what is the most important first step for companies looking to implement agentic AI in practice?

The most important step is to be guided by need, not by the technology: purpose first, not technology first. If you want to leverage agentic AI, you must first identify the right use case. Start where tasks are repetitive, well structured, and currently require more resources than they should. These use cases rarely emerge from strategy decks; they’re uncovered through real-world application.

Companies need to move beyond theory and start experimenting. There are still very few established standards for agentic AI, but what may not be deployable today could become a strategic differentiator within months, provided companies start early and learn intentionally. That’s why exploration is so critical. But don’t wander blindly: always have a compass. The goal isn’t to implement everything at once. Start small, build experience, and then scale. Those who understand this approach gain an edge, not only technologically but also organizationally.
What are currently the biggest technological challenges in implementing agents or multi-agent systems?

The hardest part isn’t building agents; it’s making them scalable, controllable, and testable. You can build a prototype agent quickly. But once you have multiple agents interacting in a production system, you need architectures that coordinate communication, state, and tasks intelligently.
Scalability becomes a matter of stability. How do we orchestrate complex processes without agents blocking one another, duplicating tasks, or entering infinite loops?
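To make that concrete, here is a minimal sketch of one way an orchestrator can guard against exactly these failure modes: it fingerprints tasks so duplicates are enqueued only once, caps retries per task, and enforces a global step budget as a hard stop. The class, the task format, and all names are illustrative assumptions, not part of any particular framework.

```python
import hashlib
from collections import deque

class Orchestrator:
    """Minimal coordinator for multiple agents (illustrative sketch)."""

    def __init__(self, agents, max_attempts=3, max_steps=100):
        self.agents = agents              # mapping: task kind -> agent callable
        self.max_attempts = max_attempts  # per-task retry cap
        self.max_steps = max_steps        # global budget; hard stop for runaways
        self.seen = set()                 # fingerprints of already-queued tasks
        self.queue = deque()

    def submit(self, kind, payload):
        # Deduplication: an identical task is enqueued at most once, which
        # also breaks cycles where agents keep re-proposing each other's work.
        fingerprint = hashlib.sha256(f"{kind}:{payload}".encode()).hexdigest()
        if fingerprint in self.seen:
            return False
        self.seen.add(fingerprint)
        self.queue.append({"kind": kind, "payload": payload, "attempts": 0})
        return True

    def run(self):
        steps = 0
        while self.queue and steps < self.max_steps:
            steps += 1
            task = self.queue.popleft()
            try:
                # Each agent is assumed to return a dict that may propose
                # follow-up tasks; the dedup filter above catches loops.
                result = self.agents[task["kind"]](task["payload"])
                for kind, payload in result.get("follow_ups", []):
                    self.submit(kind, payload)
            except Exception:
                task["attempts"] += 1
                if task["attempts"] < self.max_attempts:
                    self.queue.append(task)  # bounded retry, never forever
```

The design choice worth noting is that loop protection lives in the coordinator rather than in the individual agents: even if an agent keeps proposing the same follow-up, deduplication and the step budget stop the system from spinning.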
The next big challenge is measurability. There’s no universal metric or standard evaluation framework; everything depends on the specific use case. Still, we need ways to assess which agents produce reliable results and where their behavior deviates from what we expect. In non-deterministic systems, traditional testing strategies fall short. Sometimes we even use AI to evaluate other AI systems. And instead of focusing only on final outputs, we must assess modular components separately: language capabilities, RAG performance, factual accuracy, and intent recognition. This kind of decoupling is crucial. We must not build monolithic systems made of opaque prompt chains and tool stacks that are impossible to test or maintain. If too many small black boxes are bundled into one large one, we lose control.
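As a rough illustration of this decoupling, the sketch below scores two components, retrieval and intent recognition, independently of the final answer. The case structure, metric choices, and helper names are assumptions made for the example; they do not represent a standard evaluation framework.

```python
def retrieval_recall(retrieved_ids, relevant_ids):
    """RAG component check: what fraction of the relevant documents came back."""
    if not relevant_ids:
        return 1.0
    return len(set(retrieved_ids) & set(relevant_ids)) / len(relevant_ids)

def intent_accuracy(predictions, labels):
    """Intent-recognition check against a small labeled test set."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def evaluate_components(cases):
    """Score each pipeline component separately instead of only the end result.

    `cases` is a list of dicts with per-component inputs and expectations;
    the keys used here are illustrative.
    """
    report = {
        "rag_recall": sum(
            retrieval_recall(c["retrieved"], c["relevant"]) for c in cases
        ) / len(cases),
        "intent_accuracy": intent_accuracy(
            [c["predicted_intent"] for c in cases],
            [c["true_intent"] for c in cases],
        ),
    }
    # For non-deterministic components, run each case several times and flag
    # scores that vary beyond a tolerance, rather than asserting exact matches
    # the way a traditional unit test would.
    return report
```

A component-level report like this tells you whether a failure comes from retrieval, from intent recognition, or from the language model itself, which is exactly what a single end-to-end score cannot do.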
That’s the real challenge: building production-ready systems on top of a constantly evolving foundation. Even in the absence of mature standards for multi-agent systems, companies must architect cleanly—with strong interfaces, logical modularity, and maintainable structures. Without engineering discipline, we risk building fragile systems that quickly hit their limits.
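One way to read “strong interfaces” in code is to give every agent the same narrow, typed contract, so each one can be tested and swapped in isolation. The Protocol below is an illustrative assumption rather than an established standard for multi-agent systems.

```python
from typing import Protocol

class Agent(Protocol):
    """Illustrative contract: every agent exposes the same narrow surface."""

    name: str

    def can_handle(self, task_kind: str) -> bool:
        """Declare responsibility explicitly, so routing stays inspectable."""
        ...

    def run(self, payload: dict) -> dict:
        """A single input-to-output step; shared state lives in the
        orchestrator, which keeps each agent testable on its own."""
        ...
```

Anything that satisfies this contract, whether a prompt chain, a tool wrapper, or a rule-based component, can be exercised by the same test harness and replaced without touching the rest of the system.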
The full interview with Sebastian Luxem can be found in our white paper "Road to Agentic AI. The Fascination of Automation."