Legacy Modernization: three AI agents doing the work of thirty devs.
Archaeologist, Architect, Cleaner. Our multi-agent method to migrate 20 years of COBOL in six months.
You have 1.2 million lines of Java 6 running your core business. The tech lead who wrote the architecture left in 2017. The docs live in dead Confluence wikis. And your board just asked for a cloud-native migration in twelve months. Let's be clear: with a human team alone, it's not feasible.
At Abbeal, we built three specialized AI agents that collaborate to do the work 60% faster than a manual rewrite. Not to replace engineers, but to multiply what they can tackle.
Agent 1: The Archaeologist
Its mission: map buried business logic. It ingests legacy code, DB schemas, production logs, historical Jira tickets. It produces a directed graph of critical functions, implicit business rules, and real (not theoretical) execution paths.
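A minimal sketch of the kind of structure the Archaeologist emits. The node shapes, names, and graph contents below are illustrative, not our actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessRule:
    """One implicit rule recovered from code, logs, or tickets."""
    rule_id: str
    description: str
    evidence: list[str] = field(default_factory=list)  # file paths, ticket IDs

# Directed graph of *real* execution paths: caller -> callees,
# built from production traces rather than static imports.
call_graph: dict[str, set[str]] = {
    "OrderService.checkout": {"PricingEngine.quote", "StockDao.reserve"},
    "PricingEngine.quote": {"TaxTable.lookup"},
}

def reachable(graph: dict[str, set[str]], entry: str) -> set[str]:
    """Every function actually exercised from a given entry point."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(graph.get(fn, ()))
    return seen

print(sorted(reachable(call_graph, "OrderService.checkout")))
```

Walking the graph from real entry points is what separates code that matters from code that merely exists: anything unreachable from production traffic is a migration candidate for deletion, not rewrite.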
On a European retail client, the Archaeologist identified 73 undocumented business rules in four weeks, including 11 that contradicted each other across modules. A human team reading the code by hand would have needed eight months for the same result.
Agent 2: The Architect
It takes the business graph produced by the Archaeologist and proposes a cloud-native target architecture: service decomposition, stack choice (Kotlin + Postgres + Kafka, or Go + DynamoDB depending on context), integration patterns, incremental migration strategy.
The Architect doesn't decide alone. It generates three quantified scenarios (effort, risk, time-to-market), with detailed ADRs (Architecture Decision Records). The human tech lead arbitrates. The agent saves three to five weeks of design that would have been spent in whiteboard workshops.
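The scenario comparison can be sketched as a simple weighted ranking. Everything here is illustrative: the scenario names, the weights, and the scoring formula are assumptions for the example, and the point stands that the agent ranks while the tech lead decides:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One quantified migration scenario (hypothetical numbers)."""
    name: str
    effort_weeks: int
    risk: float                # 0 (safe) .. 1 (risky)
    time_to_market_weeks: int

def rank(scenarios: list[Scenario],
         w_effort: float = 1.0, w_risk: float = 40.0, w_ttm: float = 1.5):
    """Lower score is better; weights are a starting point, not a verdict."""
    def score(s: Scenario) -> float:
        return w_effort * s.effort_weeks + w_risk * s.risk + w_ttm * s.time_to_market_weeks
    return sorted(scenarios, key=score)

candidates = [
    Scenario("Strangler fig, Kotlin + Postgres + Kafka", 38, 0.2, 16),
    Scenario("Big-bang rewrite, Go + DynamoDB", 52, 0.7, 30),
    Scenario("Lift-and-shift, then refactor", 20, 0.4, 10),
]
for s in rank(candidates):
    print(s.name)
```

Each ranked scenario ships with its ADRs, so the tech lead arbitrates on recorded trade-offs rather than a single recommendation.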
Agent 3: The Cleaner
Safe automated refactor. The Cleaner takes the Architect's decisions and generates target code, with behavioral equivalence tests. It refactors in small increments, never more than 500 lines per PR, and runs a full regression suite at each step.
```python
# Cleaner workflow, simplified
class CleanerAgent:
    def refactor_module(self, legacy_path: str, target_arch: dict):
        old_behavior = self.capture_behavior(legacy_path)
        new_code = self.generate_target(legacy_path, target_arch)
        new_behavior = self.execute(new_code)
        if not self.behaviors_equivalent(old_behavior, new_behavior):
            return self.escalate_to_human(legacy_path, diff=...)
        return self.create_pr(new_code, tests=self.generate_tests(...))
```
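The equivalence check can be sketched as a comparison of captured input-to-output pairs. This is a deliberate simplification with hypothetical data; a real harness also replays side effects, ordering, and error paths:

```python
def behaviors_equivalent(old: dict, new: dict) -> bool:
    """Naive sketch: the same observed inputs must map to the same outputs.
    Keys are captured call signatures, values the observed results."""
    return old.keys() == new.keys() and all(old[k] == new[k] for k in old)

# Hypothetical captures from legacy and refactored pricing code.
old = {("quote", 100): 119.0, ("quote", 0): 0.0}
new = {("quote", 100): 119.0, ("quote", 0): 0.0}
print(behaviors_equivalent(old, new))
```

Any divergence, even on one pair, stops the PR and escalates to a human with the diff attached.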
Why multi-agent and not one big LLM?
A single generalist agent hallucinates, loses context after 30 files, and has no structural memory. Three specialized agents with their own roles, tools, and evals hold up for the nine-month duration of a migration project. Each agent has its own eval dataset, its own guardrails, its own human owner.
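The role separation can be sketched as a pipeline where each agent carries its own guardrail and escalation path. The class, field names, and toy agents below are ours for illustration, not a framework API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """Illustrative role wrapper: one role, one guardrail, one human owner."""
    name: str
    human_owner: str
    run: Callable[[dict], dict]        # the agent's own tools live here
    guardrail: Callable[[dict], bool]  # its own evals; False means escalate

def pipeline(agents: list[Agent], state: dict) -> dict:
    """Archaeologist -> Architect -> Cleaner, each behind its own gate."""
    for agent in agents:
        state = agent.run(state)
        if not agent.guardrail(state):
            raise RuntimeError(f"{agent.name}: escalating to {agent.human_owner}")
    return state

# Toy agents standing in for the real ones.
agents = [
    Agent("Archaeologist", "lead.archaeo",
          lambda s: {**s, "graph": "business-rules"}, lambda s: "graph" in s),
    Agent("Architect", "lead.archi",
          lambda s: {**s, "adr": "scenario-1"}, lambda s: "adr" in s),
]
print(pipeline(agents, {}))
```

Structural memory lives in the state handed between agents, not in any single context window, which is what keeps the system coherent over months.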
The numbers across 40 clients
- Migration time reduction: 58% on average, up to 73% on Java monoliths.
- Equivalence test coverage: 94% of legacy behavior automatically captured.
- Post-migration production bugs: -41% vs. equivalent manual rewrite.
- Time-to-first-PR: three weeks instead of three months.
"We migrated in seven months what our previous integrator estimated at two years. And our engineers learned the new architecture by doing, not by reading slides."
This approach is not a SaaS product you deploy on Monday. It's a methodology, a stack, and a team of senior engineers piloting the agents. If your legacy costs you more each quarter than you invest in innovation, it's probably time to talk.