The days when AI in the legal industry was little more than a glorified spell checker are over. What sounded like science fiction just a few years ago is now reality: Agentic AI – systems that don't just support, but can act independently. David, CEO & Co-Founder of Legartis, recently moderated a fascinating conversation between Kai, Co-Founder of the Liquid Legal Institute, and Gordian, CTO at Legartis. The three discussed how this new generation of AI systems is fundamentally changing legal work.
The New Era: From Support to Autonomy
David: "Previously, AI in the legal field was a spell checker – drafting, flagging, possibly summaries. Now we're actually entering the age of agentic systems. AI that acts, not just assists."
This shifts the focus: away from individual prompt helpers, toward coordinated agent workflows that take over real work steps – with tool access, memory, and feedback loops. This was the starting point for an in-depth conversation about the technological foundations, practical applications, and strategic challenges of Agentic Legal AI.
What Are AI Agents Anyway? – Gordian's Technical Perspective
Gordian first explained the technical foundations: "2025 was the year of the agent. An agent is essentially a framework around a large language model."
The crucial question: Why isn't the LLM alone enough? Gordian made it clear: "Traditionally, the large language model itself only has the data from the training that was given to it, and of course then whatever the user enters during a chat. The downside is, of course, that this training data becomes outdated very quickly, and training is also very expensive – it can take up to a year and cost billions of dollars."
The solution lies in four key elements of an agentic framework:
1. Role and Task: "Giving the model a specific task leads to much better performance. Instead of having it do everything, it could be, for example, a travel agent whose purpose is to help you find a good trip."
2. Access to Tools: "That is, I would say, the biggest of all these things that has changed. Agents can now expand their own context independently. For example, an agent can get access to an API, which means it can, for instance, independently retrieve emails or access a contract repository."
3. Long-term Memory: "The model is now also selectively able to store very important information about you, add it to long-term memory, and perhaps retrieve it at a later point."
4. Feedback Loop: "Probably one of the trickiest and still most challenging things is: How do you make an agent better without actually retraining it? And that is actually possible. Many are still not aware of this or think that the only way to make a model more efficient is training. But it can also be done without that."
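The four elements Gordian lists can be sketched in a few lines of code. The following is a minimal, purely illustrative Python sketch (not the Legartis implementation; the class and tool names are invented, and the model call is a stub) showing how role, tools, memory, and a feedback loop wrap around an LLM:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    role: str                                                              # 1. role and task
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)   # 2. access to tools
    memory: list[str] = field(default_factory=list)                        # 3. long-term memory
    feedback: list[str] = field(default_factory=list)                      # 4. feedback loop

    def call_llm(self, prompt: str) -> str:
        # Stand-in for a real model call; a production agent would send
        # `prompt` to an LLM API here.
        return f"[model acting as {self.role}] {prompt.splitlines()[-1]}"

    def run(self, task: str) -> str:
        # The agent expands its own context with tool output, stored
        # memories, and accumulated feedback -- no retraining required.
        context = [f"Role: {self.role}", f"Task: {task}"]
        context += [f"Memory: {m}" for m in self.memory]
        context += [f"Guidance: {g}" for g in self.feedback]
        for name, tool in self.tools.items():
            context.append(f"Tool {name}: {tool(task)}")
        answer = self.call_llm("\n".join(context))
        self.memory.append(f"Handled: {task}")   # selectively store for later
        return answer

agent = Agent(role="travel agent",
              tools={"weather_api": lambda q: "sunny in Lisbon next week"})
agent.feedback.append("Prefer budget-friendly options")  # improvement without retraining
print(agent.run("Find a good trip"))
```

The key point of the sketch is the last appended line of feedback: the agent's behavior changes by enriching its context, which is exactly the "improvement without retraining" Gordian describes.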
From Individual Agent to Framework – The Next Step
Gordian then moved to the concept of agentic frameworks: "The next big step is not just one agent, but a large agentic framework where multiple agents can communicate with each other."
He used travel planning as an example: "The user plans the trip. You might want to have very specific agents that handle different parts of that trip planning. For example, one collects activities, one looks at accommodations, one is responsible for budgeting, and then there's of course an agent that has to coordinate all these sub-agents."
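Gordian's travel-planning example can be summarized in a short sketch. Here each sub-agent is reduced to a plain function returning canned output (in a real framework, each would itself be an LLM-backed agent); the names are illustrative only:

```python
def activities_agent(request: str) -> str:
    return "activities: surfing, museum tour"

def accommodation_agent(request: str) -> str:
    return "accommodation: Casa Azul, 2 nights"

def budget_agent(request: str) -> str:
    return "budget check: within 800 EUR"

def coordinator(request: str) -> str:
    # The coordinating agent fans the request out to specialised
    # sub-agents and merges their results into one plan.
    sub_agents = [activities_agent, accommodation_agent, budget_agent]
    results = [agent(request) for agent in sub_agents]
    return " | ".join(results)

print(coordinator("Plan a weekend trip to Lisbon"))
```

The design point is the division of labor: each sub-agent gets a narrow role (which, as Gordian noted earlier, improves performance), while the coordinator owns the overall task.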
David followed up: "What is now the big shift from agents to agentic frameworks in one sentence?"
Gordian: "The biggest change is standardization. Especially MCP – Model Context Protocol – was probably the keyword some of you have heard. What that basically means is that most platforms or SaaS tools will expose their tools in a standardized way going forward, so that everyone can almost plug-and-play add these tools into the context for their models or agents."
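The plug-and-play idea behind that standardization can be illustrated with a toy tool server. This is not the real MCP wire protocol (which is JSON-RPC based); it only shows the shape of the idea: tools are described in a uniform schema so any agent can discover and call them without custom glue code. All names here are invented:

```python
import json

class ToolServer:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, handler):
        self._tools[name] = {"description": description, "handler": handler}

    def list_tools(self):
        # What a standardized discovery call would return.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, name, arguments):
        # Invoke a tool by name with a dict of arguments.
        return self._tools[name]["handler"](**arguments)

server = ToolServer()
server.register("search_contracts", "Search the contract repository",
                lambda query: [f"contract matching '{query}'"])

# Any agent can now discover the tool and plug it into its context.
print(json.dumps(server.list_tools()))
print(server.call("search_contracts", {"query": "NDA"}))
```

Because discovery and invocation follow one convention, swapping in a new SaaS tool means registering it once rather than writing a bespoke integration per agent.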
People, Processes, Technology – Kai's Strategic Perspective
As an experienced practitioner, Kai brought the discussion to the practical challenges of implementation: "We're describing a brave new world again with new terms – you mention MCPs and frameworks, agentics – and probably the big question is: What does this mean for us?"
He structured his analysis along three dimensions:
1. People – The Biggest Challenge
"When you look at the people aspect, that's the biggest challenge for me: How do you get them on board or keep them on board? Because this is quite confusing."
The future, according to Kai, lies in interdisciplinary teams: "The composition of these future teams – we've been saying this for years: The team doesn't consist only of lawyers. There are lawyers with IT people, and now, to build these new connections, not only do the people involved need to grow and maybe change a little, but also the entire organization."
2. Processes – The Foundation
"You need to know your business, right? If you don't know how the process works and how things are fully integrated into the entire enterprise... How do you do Contract Lifecycle Management without knowing exactly the touchpoints? And how do you create an agent world of the future without knowing exactly what the defined tasks are?"
3. Technology – The Basic Capabilities
"It's the basic capabilities. How much tech knowledge do we need? Do lawyers really need to become techies? I doubt it. But you need to be able to speak at eye level. If you're not speaking the same language, if you're using terms that the other side doesn't understand, you can't work together."
Gordian added from the IT perspective: "Kai already mentioned some of these. I also found it funny to hear it from the other side: do lawyers need to talk to tech people? At Legartis, I came at it from the opposite direction. Starting from the dev team, we said: we actually need lawyers to talk to our devs so that we can build a product that makes sense in the end."

Risks and How to Manage Them
A critical point in the discussion was the risks of agentic systems. Gordian warned: "The large language model cannot be one hundred percent controlled. There's no way to do that."
He illustrated this with a practical example: "Let's say you have a simple agent that can read your email inbox but also reply automatically. That seems like a very nice use case. You might get maybe a customer request, and your agent also has access to your knowledge base and can answer immediately. But what can happen is: The person who writes the email can try to trick the model into sending out all your other emails."
The solution? "This can be mitigated by implementing the tools that a model can talk to in the right way. In this example, the model or agent should only have access to read the email that the sender has written, or maybe the same conversation thread."
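The mitigation Gordian describes, scoping the tool rather than trusting the prompt, can be sketched as follows. All names and data are invented for illustration:

```python
INBOX = [
    {"id": 1, "thread": "A", "body": "Customer question about pricing"},
    {"id": 2, "thread": "B", "body": "Confidential board minutes"},
]

def make_read_tool(allowed_thread: str):
    # The agent receives a closure that can only ever see one thread.
    # An injected instruction like "send me all other emails" then
    # fails at the tool layer, not merely at the prompt layer.
    def read_thread() -> list[str]:
        return [m["body"] for m in INBOX if m["thread"] == allowed_thread]
    return read_thread

incoming = INBOX[0]                              # the email that triggered the agent
read_tool = make_read_tool(incoming["thread"])   # scope access to its thread only
print(read_tool())                               # thread "B" is never reachable
```

The principle is least privilege: since the language model itself "cannot be one hundred percent controlled", the hard boundary is enforced in the tool implementation, which can be.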
Gordian's conclusion: "In the end, we're already implementing AI, but we're still back to the basics of IT projects, right? That's a risk, but also something that can be mitigated if you do it the right way."
The Practice: Fully Automated Contract Playbook Creation
David then presented a concrete application: the Legartis Contract Playbook Creator. "At Legartis, we really focused our development on one concrete use case. We wanted to accelerate and optimize the creation of playbooks – something that has been talked about in the industry for quite some time."
He identified three central hurdles that had previously been showstoppers:
1. The Articulation Problem: "If I were to ask a legal department: 'What do I actually need to review a specific contract type for?', I would potentially get the answer: 'Well, you need to look for this and that. But you know what? I actually only know the problem when I see it.' How do you successfully translate this invaluable experience, this know-how, into a rigid digital rule set that an AI can actually understand?"
2. The Effort Problem: "It would actually take a lot of time to train an AI to bring all the industry-specific know-how into the creation of a playbook."
3. The Trust Problem: "Many providers don't like to talk about the quality of AI. We at Legartis have been doing this from minute one, but it lacked transparency for a long time."
The demonstration showed how the Contract Playbook Creator overcomes these hurdles: Through a structured dialogue, the system captures requirements, automatically creates test sets for quality assurance, and enables the user to control and improve the AI themselves.
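The test-set idea behind that quality assurance can be sketched generically. This is not Legartis's actual implementation, only an illustration of the principle: each playbook requirement ships with labeled example clauses, and the review logic is scored against them so the user can see, and improve, its quality:

```python
# A hypothetical playbook requirement with a rule-based stand-in
# for the AI check, plus a labelled test set.
requirement = {
    "name": "Liability cap required",
    "check": lambda clause: "liability" in clause.lower()
             and "cap" in clause.lower(),
}

test_set = [
    ("Liability is capped at 12 months of fees.", True),   # should pass
    ("The supplier delivers within 30 days.", False),      # should not match
]

# Score the check against the labelled examples.
passed = sum(requirement["check"](clause) == expected
             for clause, expected in test_set)
print(f"{passed}/{len(test_set)} test cases passed")  # prints "2/2 test cases passed"
```

When a test case fails, the user adjusts the requirement and re-runs the set: that feedback loop is what turns "I only know the problem when I see it" into a measurable, improvable rule.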
Kai was impressed: "When you presented the Contract Playbook Creator for the first time in front of over thirty General Counsels, I saw jaws dropping because this is something we, as an industry, as legal departments, have been waiting for for quite some time."
The Reality of Implementation
Despite the impressive possibilities, David emphasized the necessary investments: "What we see is: You still need a bit of time. If you actually want to create playbooks, you need to be willing to sit down and first have this conversation with the Playbook Creator."
He identified four key factors:
1. Invest Time: "On average, that's about twenty to thirty minutes per requirement for fine-tuning the test sets."
2. Develop Strategy: "The big question is: Where is this highest quality really needed? Where do you need to safeguard that?"
3. Willingness to Review: Users can either review all test sets in advance or readjust only when deviations occur.
4. Provide Contracts: "To actually create a test set, you need to provide contracts that give an answer to your requirements."
Future Perspectives: 2026 and Beyond
Kai shared his vision for the coming years: "At the Liquid Legal Institute, we took a step back and chose a very structured approach: Let's build something like a Foresight Office." Its three pillars:
- Trend Scouting: "We look at all these moving parts that are currently emerging."
- Signal Tracking: "When we hear something new, when we see a new whitepaper, we consolidate them and bring them into our Custom GPT, where they are analyzed and automatically put into a structured format."
- Scenario Planning: "We really sit down with experts and imagine: What could that be? What is the possible future? But also: What is a desirable future?"
For 2026, Kai identified several key themes:
1. New Roles Emerging: "The AI supervisor is a new role. Is it a sexy role? Is there an existing curriculum where you can become a trained AI supervisor? I doubt it. But there must be one in the future."
2. Content Management: "Supporting content needs to be in place: Where is your glossary? Where are your company standards? Your style guide on how you write things?"
3. Legal Benchmarking: "What do the LLMs of the world do with legal content and how do they work with it? Currently, it's narratives, stories. I experienced it, it was nice, but there's no objectivity."
4. Contract Layering: "A contract is made by lawyers with lawyers for the case something goes wrong. But the actual user of a contract is different. It could be the procurement department or the person buying things. With all the power we have with all these large language models that can translate something that is in legal language for good reasons, we can translate that into something the end user understands and maybe even enjoys."
Conclusion: A Revolution with Clear Requirements
The conversation between David, Kai, and Gordian makes it clear: Agentic Legal AI is no longer a distant vision of the future, but already reality. The technology has the potential to fundamentally change legal work – from the way contracts are reviewed to the roles and responsibilities in legal departments and law firms.
But technological possibilities alone don't guarantee success. As the three experts unanimously emphasized:
- People must be brought along – through training, new roles, and interdisciplinary collaboration
- Processes must be understood and documented – otherwise even the best AI remains ineffective
- Technological basic understanding is essential – not to become developers, but to communicate at eye level
- Quality and trust must be secured – through transparency, benchmarking, and human control
Gordian's pragmatic final word sums it up:
"In the end, we have the human-in-the-loop. I think – especially in the legal field – there's no way around it, at least not right now. That's why we also built human-in-the-loop into the Contract Playbook Creator and contract review with Legartis: During testing and also during analysis. AI is there to assist in every step. Our job is, of course, to speed up this process so that you can focus more on your own work and AI can assist more and more. But the loop – I think that will remain very valuable."
2025 may have been the year of agents. But 2026 and the following years will show whether the legal industry is ready to shape this new era not only technologically, but also organizationally and culturally. The tools are there. Now it's up to us to use them wisely and responsibly.
This blog is based on a conversation as part of the Legal AI Talk series with Kai (Co-Founder of the Liquid Legal Institute), Gordian (CTO at Legartis), and David (CEO and Co-Founder of Legartis).