Your Legacy Systems Aren't the Problem. Your Approach to Modernizing Them Is.
Legacy modernization no longer has to mean a multi-year, budget-busting program. If you prioritize ruthlessly, AI can carry the mechanical load.
Key Takeaways
- On average, enterprises waste $370M annually due to legacy systems and technical debt – not from the wrong strategy, but simply from standing still (Pega).
- 40% of IT budgets go toward maintaining legacy systems rather than building new capabilities (McKinsey).
- 83% of migration projects fail or significantly exceed time and budget (Gartner).
- Agentic engineering can absorb 50–75% of the mechanical translation effort, cutting timelines by up to 50% and cost by up to 30%.
- Ruthless prioritization using a Cost of Delay framework, not comprehensive portfolio rewrites, is what makes modernization succeed.
- Test coverage is not overhead. It's the risk management layer that makes any migration trustworthy.
What I've seen and what most teams get wrong
I started at Mimacom as a developer. I've written the code, dealt with the legacy systems, and felt the frustration of spending days patching something that should have been replaced years ago. That experience shapes how I think about modernization today.
And here's what I've learned: the problem is rarely the legacy system itself. It's the approach teams take to modernizing it.
$370 million. That's what the average enterprise wastes every year because of legacy systems and the technical debt surrounding them. Not on the wrong strategy – just on standing still. McKinsey research finds that 40% of IT budgets go toward maintaining legacy systems rather than building new capabilities. Companies with fragmented architectures are 30% more likely to experience delays in AI implementation.
I've seen this pattern up close: talented engineers quietly leaving because they're spending their days patching code that hasn't meaningfully changed since 2008. Technical debt isn't just a cost center. It's an innovation ceiling that gets lower every quarter.
We want to use technology to actually move the needle. To unlock more value for our customers, to work smarter, and achieve better results. And that's exactly how we approached AI from the very beginning.
The conventional response – a large-scale rewrite, a multi-year modernization program, a rip-and-replace – is so expensive and risky that it rarely survives budget review. And so the debt compounds. But there is a smarter path; it requires rethinking both how you prioritize and how you deliver.
Two failure modes – and why both miss the point
In my experience, most organizations cycle between two broken approaches.
The patch-and-extend strategy adds workarounds, middleware layers, and API wrappers around aging systems to keep them functional. This feels pragmatic, and in the short term, it is. But over time, each patch adds complexity, degrades performance, and makes the underlying system harder to reason about. What started as a manageable codebase becomes a system that only a handful of engineers understand – and none of them want to touch.
The big-bang rewrite replaces the legacy system entirely, typically scoped at 18 to 36 months with a large team. According to Gartner, 83% of migration projects fail or significantly exceed their time and budget – overrunning, descoping, or getting cancelled midway, leaving organizations with two half-systems and a demoralized engineering team.
Neither approach fails because of the tools involved. They fail at the same upstream point: prioritization. Neither answers the question that actually matters: which applications should we modernize first, and why?
Ask: What's the cost of not acting?
Before writing a single line of new code, you need a principled answer to that question. The answer is not "the oldest system" or "the one engineers complain about most." It's the application where the cost of delay is highest.
Cost of Delay, a framework developed by Don Reinertsen, asks a simple question: What is the business impact of not acting, for every month we wait? For each legacy application in your portfolio, score it across three dimensions:
- Business value: What revenue, efficiency, or competitive advantage is this application blocking right now?
- Time criticality: How quickly is the opportunity decaying? A compliance deadline has a hard cutoff. A competitive window erodes gradually, and then suddenly.
- Risk reduction: What security vulnerabilities, regulatory exposure, or talent retention risk does this legacy system carry?
This scoring creates a modernization backlog that speaks the language of business outcomes, not IT hygiene – and that's what gets budget approved. The goal is not to modernize everything; it is to identify the two or three applications where staying on the legacy stack is most expensive, and go there first.
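The scoring above can be sketched in a few lines of code. This is an illustrative model, not a prescribed tool: the application names, the 1–10 scales, and the simple additive score are all assumptions, and real organizations would tune the weights and inputs to their own portfolio.

```python
from dataclasses import dataclass

@dataclass
class LegacyApp:
    name: str
    business_value: int    # 1-10: revenue/efficiency blocked right now
    time_criticality: int  # 1-10: how quickly the opportunity decays
    risk_reduction: int    # 1-10: security/regulatory/talent risk carried

    @property
    def cost_of_delay(self) -> int:
        # Simple additive score; real teams may weight dimensions differently.
        return self.business_value + self.time_criticality + self.risk_reduction

# Hypothetical portfolio entries for illustration only.
portfolio = [
    LegacyApp("billing-engine", business_value=9, time_criticality=7, risk_reduction=8),
    LegacyApp("intranet-portal", business_value=3, time_criticality=2, risk_reduction=4),
    LegacyApp("claims-api", business_value=8, time_criticality=9, risk_reduction=6),
]

# Modernization backlog: highest cost of delay first; take only the top 2-3.
backlog = sorted(portfolio, key=lambda app: app.cost_of_delay, reverse=True)
for app in backlog[:3]:
    print(f"{app.name}: CoD score {app.cost_of_delay}")
```

The point of the exercise is not the arithmetic; it is that the output is a ranked, defensible shortlist rather than a vague intention to "modernize everything."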
The instinct to "modernize the whole portfolio" is exactly what leads to big-bang programs that collapse under their own weight. Ruthless prioritization is not a compromise. It's the strategy.
How agentic engineering changes the economics
Once the right targets are identified, the delivery approach determines whether the economics work. This is where I've seen AI fundamentally change what's possible.
Agentic engineering – using AI agents to analyze, plan, and execute the mechanical work of migration – can absorb 50 to 75% of the translation effort that would otherwise fall to engineers.
| What AI handles well | What still requires engineers |
| --- | --- |
| Analyzing legacy codebases and mapping cross-file dependencies | Validating business logic fidelity |
| Generating migrated code across frameworks (AngularJS → React, legacy Java → Spring) | Data migration consistency across complex schemas |
| Producing first-draft data transformation scripts | Architecture and target state design |
| Reconstructing documentation for undocumented business logic | Review, judgment, and edge case detection |
Fujitsu's proof-of-concept trials showed that agentic AI can cut modernization timelines by up to 50%. Thomson Reuters now migrates 1.5 million lines of code per month at 30% lower cost. What used to take months can take weeks; what required a team of ten can be driven by a team of four.
One thing I always insist on: build your test suite before you migrate, not after. Comprehensive end-to-end and integration tests are the contract between the legacy system and its successor. They document current behavior as the baseline and validate that the migrated system produces identical outputs. Test coverage is not overhead – it is the risk management layer that makes the entire migration trustworthy.
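The "contract" idea above can be made concrete with a minimal characterization test. Everything here is a hypothetical stand-in – the discount function, the inputs, the rules – but the shape is the point: capture the legacy system's actual behavior as the baseline, then assert the migrated implementation reproduces it exactly.

```python
def legacy_calculate_discount(order_total: float, is_member: bool) -> float:
    # Stand-in for a legacy code path, warts and all.
    discount = 0.10 if is_member else 0.0
    if order_total > 100:
        discount += 0.05
    return round(order_total * (1 - discount), 2)

def migrated_calculate_discount(order_total: float, is_member: bool) -> float:
    # The migrated implementation under test (here identical by construction).
    discount = 0.10 if is_member else 0.0
    if order_total > 100:
        discount += 0.05
    return round(order_total * (1 - discount), 2)

# Representative inputs, including edge cases at the boundary.
cases = [(50.0, False), (100.0, True), (100.01, True), (250.0, False)]

for total, member in cases:
    expected = legacy_calculate_discount(total, member)
    actual = migrated_calculate_discount(total, member)
    assert actual == expected, f"Mismatch for ({total}, {member})"
```

Note that the test asserts against the legacy output, not against a spec: the legacy system's observed behavior, quirks included, is the baseline until someone consciously decides otherwise.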
What we've seen in practice
We've run this approach on real projects, with real constraints, real codebases, and real business logic that no AI understood out of the box.
AngularJS to React: Faster than a manual rewrite
We modernized an internal legacy application built on AngularJS, fully porting the business logic and application logic to React using agentic engineering. The AI handled the framework translation. Our engineers focused on reviewing the output, validating business logic fidelity, and catching the edge cases the model missed. The result: a modern React application delivered at a fraction of the timeline a traditional rewrite would have required.
What I took from this: agentic engineering works for framework migrations. Nonetheless, the quality of the outcome depends entirely on engineers who can critically assess what the AI produces. This is not a process you hand off and walk away from. It's a collaboration, with AI doing the lifting and engineers providing the judgment.
A Liferay migration at half the cost
On a client project, we used Mistral as the AI engine – generating migration code directly from natural language descriptions of the target behavior – to drive a full Liferay platform migration. The result was approximately 50% lower project cost compared to an equivalent non-AI migration. Not because we cut scope or reduced quality, but because AI absorbed the repetitive, mechanical development work that would otherwise consume engineering hours.
With that burden removed, my team had the capacity to focus on what actually creates competitive value: custom functionality, UX improvements, and integration work that differentiates the platform.
What we learned at our internal AI hackathon
One of the moments that confirmed my conviction in this approach was our internal AI Hackathon, focused entirely on AI-accelerated engineering. I'd seen what AI could do on client projects, but I wanted to see what would happen when our own engineers had the space to push it further – no constraints, no predefined scope, just the question: how far can we go?
The results went well beyond what any of us had expected: not marginal gains in speed, but a fundamentally different pace of delivery, with output quality that held up under serious scrutiny.
It reinforced something I'd been saying for a while: we've explored many different AI applications, from development tooling to client-facing use cases, and there is still enormous value to unlock. We didn't approach AI to reduce headcount or cut costs. The goal was always better outcomes – for clients, for engineers, and for the quality of the work itself.
What AI cannot do on its own
I want to be direct about the limits, because setting accurate expectations is part of delivering this well.
Data migration is the hardest phase. AI-generated transformation scripts give you a starting point, but they do not give you consistency. Complex schemas, referential integrity constraints, data type mismatches, and edge-case nulls all require significant manual testing, iterative reconciliation, and domain expertise. Budget for this phase more generously than your first instinct suggests. Most teams that underestimate it regret it.
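One practical pattern for the reconciliation work described above is to compare row counts and per-row checksums between the legacy extract and the migrated target. The sketch below uses in-memory lists as stand-ins for query results – an assumption, since real reconciliation runs against actual databases – but it shows how even a subtle formatting drift surfaces immediately.

```python
import hashlib

def row_fingerprint(row: dict) -> str:
    # Canonicalize the row (sorted keys, stringified values) before hashing,
    # so the same logical row hashes identically on both sides.
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(legacy_rows: list, migrated_rows: list) -> dict:
    legacy_fps = {row_fingerprint(r) for r in legacy_rows}
    migrated_fps = {row_fingerprint(r) for r in migrated_rows}
    return {
        "count_match": len(legacy_rows) == len(migrated_rows),
        "missing_in_target": len(legacy_fps - migrated_fps),
        "unexpected_in_target": len(migrated_fps - legacy_fps),
    }

# Hypothetical rows: the target drifted from "20.50" to "20.5".
legacy = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "20.50"}]
migrated = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "20.5"}]

report = reconcile(legacy, migrated)
print(report)  # counts match, but one row differs on each side
```

Checks like this are cheap to run after every migration batch, which is exactly what the iterative reconciliation loop needs: fast, mechanical detection so engineers spend their time on the mismatches that require domain judgment.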
Business logic in legacy systems is often not legible to AI. Years of undocumented workarounds, tribal knowledge baked into conditionals, and implicit assumptions embedded in data flows mean AI agents will translate the structure but may miss the intent. Human review of any business-critical logic is non-negotiable.
The mindset shift matters as much as the tooling. Teams that treat AI output as a rigorous first draft – one to be reviewed, challenged, and refined – will succeed. Teams that treat it as a finished product will ship silent regressions.
Where we go from here
That's the approach I bring to every client conversation and every internal initiative at Mimacom. Not AI for its own sake. AI as a delivery tool – one that, when applied with the right prioritization and the right engineering discipline, makes modernization genuinely achievable.
If you want to know where to start, a Legacy Portfolio Assessment is the right first step. We'll map your application portfolio against a Cost of Delay framework, identify the highest-value modernization candidates, and give you a clear, costed roadmap, including an honest view of where AI-assisted migration delivers the most leverage, and where human judgment is irreplaceable.
Talk to our team, or explore our AI-Infused Engineering practice to see how we approach the full modernization lifecycle.
We are not just watching the AI wave happen. We are actually shaping it, and seeing what it means for us and for the future.
AI won't be replacing what we do. We now have new tools that help us better solve the challenges that our customers have.