Anita Klingel of the IPAI Foundation explains why Agentic AI is not just about technological advances, but above all about clear responsibilities, conscious decisions, and structural sovereignty. She calls for greater transparency and a clearer distinction from traditional automation solutions.
Ms. Klingel, what concerns you most in connection with Agentic AI?

We talk a lot about Agentic AI, but what I currently find lacking is a common definition of what we actually mean by it. And, above all, of what it is not. For me, the key feature is the depth of decision-making by the machine: the number and type of intermediate steps it performs independently. However, not every use case requires Agentic AI. Companies can also use classic Robotic Process Automation (RPA) to automate processes such as email management. The important thing is to examine carefully what truly makes sense and, if necessary, to have the courage to resist the hype. A good strategy also means deliberately choosing not to pursue certain things.
How can we ensure that AI is used effectively in the right areas?

From my perspective, it is crucial to establish clear responsibilities: Who is authorized to decide which use case gets implemented, and when? It can be beneficial to involve employees in the ideation process to develop use cases with a user-centered approach. However, the actual decision must rest on a solid foundation that brings together multiple perspectives, for example technical feasibility, legal frameworks, domain relevance, and economic value. The decision on how to prioritize use cases should therefore never rest with a single individual, but with a committee or panel. A well-structured evaluation process helps to give adequate weight to strategic factors such as resource requirements and opportunity costs, and it ensures that the right projects are prioritized. The question of responsibility, however, does not only concern the prioritization of use cases.
What does this mean in practice?

A simple example: if the camera in an AI-powered image recognition system on the production line suddenly fails, who is responsible for the repair? Such questions must be clarified in advance. When companies introduce new AI processes, deploying the technology alone is not enough; they need a responsibility model that covers the entire workflow.

What do the responsibilities look like, also beyond the AI itself?

For me, the greatest risk lies not in the AI itself, but in poor human oversight. The more autonomously a system operates, the more important it is that someone reviews its decisions. Take, for example, an AI agent that distributes sales orders: Who checks whether the distribution is truly fair to all employees? Errors must be detected and corrected, which also means that resources must be allocated for identification and remediation. Perhaps we will soon apply performance evaluation criteria to AI agents similar to those we use for humans.
The full interview with Anita Klingel can be found in our white paper "Road to Agentic AI. The Fascination of Automation."