From necessary evil to strategic edge: How Generative AI rewires the ROI of enterprise transformation
Agenda
- Foundational transformation programs are often triggered by risk and cost pressure, not by a compelling value narrative.
- Most digital and modernization transformations still underperform, with low sustained success rates across studies.
- Generative AI creates value by reducing execution friction through better discovery, documentation, and knowledge transfer.
- AI also increases the value of the target state by enabling modularity, automation, and future AI readiness.
- Multi‑agent orchestration scales this impact when agents and humans collaborate on shared, structured transformation artifacts.
When necessity kills momentum
There are projects every enterprise knows it needs to do, but that very few people are excited to sponsor.
Legacy modernization. System migration. Process standardization. Documentation cleanup. Application rationalization.
These programs are important. In many cases, they are unavoidable. But they are still hard to prioritize properly.
The reason is simple: the business case has usually been framed too narrowly.
Most of the time, these projects are explained in terms of:
- Lower risk
- Lower maintenance cost
- Less disruption
- Reduced technical debt
- End-of-support remediation
All of that is valid. But it is defensive. It does not create momentum.
And in practice, these programs rarely start from a position of strength. They usually start under pressure. In Ensono’s 2025 survey, 49% of respondents said modernization was being driven by rising risk due to vulnerabilities, loss of software support, or downtime; 37% said they were struggling to release new products and features at the speed the business needed; and 32% said maintenance costs were eating up the IT budget. At the same time, technical debt and redundant applications continue to drive avoidable spend across IT portfolios.
So the project gets approved. But usually too late, under pressure, and with the wrong value story.
The bigger problem: these programs are also hard to execute
This is the second part of the issue.
It is not only that foundational transformation programs are hard to sell. They are also hard to land.
The success rates are not encouraging. McKinsey found that only 16% of respondents said their digital transformations improved performance and sustained the gains over time, with another 7% saying performance improved but the gains did not last. BCG found that only 30% of digital transformations met or exceeded target value and created sustainable change. IDC found that only about half of cloud migration projects were rated “successful” or “very successful.” And in Wakefield Research data cited by vFunction, 79% of application modernization projects failed.
These are different studies with different definitions, so I would not treat them as a single benchmark. But the pattern is clear enough: transformation is still underperforming.
And when you look at the reasons, they are not surprising.
Application modernization programs fail because expectations are badly set, the organization is not structured to support the target state, teams do not have the right tools, skills are missing, and complexity is underestimated. The same Wakefield study found that 97% of respondents expected pushback on a proposed modernization project, and nearly 50% said securing budget and resources was the hardest step. Cloud programs fail for similar reasons: lack of cloud skills, lack of standards, integration issues, performance and reliability problems, weak leadership support, and poor control of cost.
So this is the real context.
These projects are triggered by pressure, justified defensively, and executed in conditions that make success harder than it should be.
Survey data shows modernization is most often triggered by risk, speed constraints, cost pressure, and tech debt.
This is where Generative AI matters
I think there is a tendency to frame Generative AI only around new products, new interfaces, or new customer experiences.
That is part of the story, but not the whole story.
Some of the most immediate enterprise value is elsewhere: in improving the economics and execution of the transformation work companies already need to do.
That matters because AI changes the equation in two ways.
1. It improves execution
A large share of the drag in transformation programs comes from manual effort, poor visibility, and weak knowledge transfer.
That shows up in very practical places:
- Understanding legacy code
- Discovering dependencies
- Documenting behavior
- Generating tests
- Planning transitions
- Extracting knowledge from fragmented sources
- Helping teams work through systems they did not originally build
This is where Generative AI is useful in a very concrete way.
It helps reduce ambiguity. It helps surface structure in places where people are used to working around uncertainty. And it makes some of the hidden effort in these programs easier to manage.
That does not remove the need for strong architecture, governance, or delivery discipline. But it does reduce the execution burden.
2. It increases the value of the destination
This is the more important shift.
If AI only made migration or modernization cheaper, it would still be useful. But that would be a limited story.
The bigger point is that the destination becomes more valuable once the transformation is done well.
A better target environment means:
- Cleaner interfaces
- More modularity
- Better reuse
- Better automation potential
- Stronger AI readiness
- More flexibility for future change
So the transformation is no longer only about reducing cost or retiring risk. It is also about building a more usable, adaptable, and scalable operating environment.
That is where the ROI changes.
One practical bottleneck: AI needs a usable project context
This is the part I think gets missed in a lot of enterprise discussions.
People say AI can help migration. That is true.
But in most migration programs, the context AI needs is not organized in a way that makes it reliably useful.
The source code exists. The business knowledge exists. The project information exists. The migration decisions exist.
But they are usually spread across too many places, at different levels of quality, with different owners, and no consistent structure.
That means teams are not only struggling with legacy systems. They are also struggling with fragmented project context.
And that matters because AI is only as useful as the context it can work with.
That is one of the reasons we have been building an internal platform to support complex migration work.
What we are building
We are developing an internal platform for migration teams that acts as a shared artifact layer between humans, coding agents, and the target system design process.
The idea is not just to store documents. It is to create a governed workspace where migration knowledge can be generated, refined, reused, and acted on.
What matters most is not the workflow mechanics. It is the potential this creates.
Coding agents can analyze legacy source code and generate structured artifacts into the platform through MCP (the Model Context Protocol). That means the raw analysis does not stay trapped inside a one-off prompt or in a notebook on one person’s machine. It becomes part of the project knowledge base.
Those artifacts can document things like:
- Business rules
- Architecture
- Functional requirements
- Non-functional requirements
- Migration risks
- Technical stack
- System boundaries
- Dependencies
- Assumptions that need validation
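To make "structured artifacts" concrete, here is a minimal sketch of how such records might be modeled. The type names, fields, and the example rule are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class ArtifactType(Enum):
    # Illustrative categories; the real platform's taxonomy may differ.
    BUSINESS_RULE = "business_rule"
    ARCHITECTURE = "architecture"
    FUNCTIONAL_REQUIREMENT = "functional_requirement"
    NON_FUNCTIONAL_REQUIREMENT = "non_functional_requirement"
    MIGRATION_RISK = "migration_risk"
    DEPENDENCY = "dependency"
    ASSUMPTION = "assumption"


@dataclass
class Artifact:
    """One unit of migration knowledge, generated by an agent or a human."""
    artifact_id: str
    artifact_type: ArtifactType
    title: str
    body: str                       # the structured content itself
    source_refs: list[str] = field(default_factory=list)  # e.g. legacy file paths
    generated_by: str = "human"     # "human" or an agent identifier
    validated: bool = False         # flipped only after human review


# Example: an agent records a business rule it extracted from legacy code.
rule = Artifact(
    artifact_id="BR-042",
    artifact_type=ArtifactType.BUSINESS_RULE,
    title="Discount cap for bulk orders",
    body="Orders above 500 units receive at most a 12% discount.",
    source_refs=["legacy/src/pricing/discount.cbl"],
    generated_by="code-analysis-agent",
)
```

The point of the `validated` flag and `source_refs` is traceability: an agent's output only becomes trusted project knowledge once a human has reviewed it, and every claim can be traced back to the legacy code it came from.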
That is a very different operating model from the way many migration projects still run today.
Instead of treating source-code analysis as an isolated activity, it becomes part of a shared and evolving body of project knowledge.
The key shift is a shared artifact layer where humans and agents build, validate, and reuse project knowledge across legacy and next-gen work.
The important part is what happens next
What makes this interesting is not only that coding agents can generate artifacts.
It is that those artifacts can then become usable context for the next phase of the transformation.
Once you have an artifact base that captures the current system in a more explicit way, coding agents can read from it when working on the next-generation system.
That changes the flow.
Instead of asking an agent to generate a target-state component from a thin prompt, you can give it a much stronger context:
- The extracted business rules
- The interpreted architecture
- The functional requirements
- The non-functional requirements
- The migration risks
- The current tech stack
- The known gaps and constraints
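As a sketch of what "stronger context" could mean in practice, the snippet below filters a toy in-memory artifact store down to human-validated entries before handing them to an agent. The store format and filtering rule are assumptions for illustration, not the platform's actual mechanics.

```python
# A toy artifact store: real artifacts would live in the platform,
# not in an inline list.
artifacts = [
    {"type": "business_rule", "validated": True,
     "text": "Orders above 500 units receive at most a 12% discount."},
    {"type": "migration_risk", "validated": True,
     "text": "Pricing logic is duplicated in two legacy modules."},
    {"type": "business_rule", "validated": False,
     "text": "Unconfirmed: weekend orders may bypass the discount cap."},
]


def build_agent_context(artifacts, wanted_types):
    """Collect only human-validated artifacts of the requested types,
    so the agent works from reviewed knowledge rather than a thin prompt."""
    lines = []
    for a in artifacts:
        if a["validated"] and a["type"] in wanted_types:
            lines.append(f"[{a['type']}] {a['text']}")
    return "\n".join(lines)


context = build_agent_context(artifacts, {"business_rule", "migration_risk"})
```

Note that the unvalidated rule is deliberately excluded: the agent's context is built only from artifacts a human has already reviewed.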
That makes the work more grounded.
The target-state design is no longer based only on a human summary of the old system. It can be informed by a growing body of structured artifacts generated and refined through the migration program itself.
This is where the value starts to compound.
The same project knowledge that helps explain the old system can help shape the new one.
Multi-agent orchestration is where this becomes scalable
The next step is not just single-agent assistance.
The bigger opportunity is multi-agent orchestration running on top of the artifact base.
Different agents can take on different types of work:
- One analyzing legacy code
- Another extracting business rules
- Another identifying technical dependencies
- Another drafting migration risks
- Another validating consistency across artifacts
- Another using the artifact set to help generate components or services for the next-generation architecture
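A toy sketch of that division of labor, with each "agent" reduced to a plain function reading from and writing to a shared artifact list. Real agents would be LLM-backed and coordinated by an orchestrator; everything here is an illustrative assumption.

```python
# Each agent reads the shared artifact base and appends new artifacts.

def legacy_analysis_agent(artifacts):
    # Analyzes legacy code and records what it finds.
    artifacts.append({"type": "code_insight",
                      "text": "Module PRICING has no unit tests."})


def risk_agent(artifacts):
    # Drafts a migration risk for every code insight already on the base.
    for a in list(artifacts):
        if a["type"] == "code_insight":
            artifacts.append({"type": "migration_risk",
                              "text": f"Risk derived from: {a['text']}"})


def consistency_agent(artifacts):
    # Validates that every drafted risk traces back to an insight.
    insights = [a for a in artifacts if a["type"] == "code_insight"]
    risks = [a for a in artifacts if a["type"] == "migration_risk"]
    return len(insights) > 0 and all("derived from" in r["text"].lower()
                                     for r in risks)


shared_base = []
for agent in (legacy_analysis_agent, risk_agent):
    agent(shared_base)

consistent = consistency_agent(shared_base)
```

The design choice the sketch illustrates is that agents coordinate through the artifact base itself, not through direct agent-to-agent messages: each one reads what earlier agents wrote and contributes its own layer.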
That starts to create a more scalable transformation model.
Not because agents replace engineering teams. But because they can take on more of the repetitive analysis, synthesis, and structuring work that usually slows teams down.
And importantly, humans still stay in the loop.
The artifact base is not there to bypass collaboration. It is there to improve it.
Human + agent collaboration is the real operating model
I do not think the right model here is “let the agents do the migration.”
The more credible model is:
- Agents generate and refine artifacts,
- humans review, challenge, and enrich them,
- agents use those approved artifacts to support downstream design and implementation,
- teams collaborate around a shared project context instead of fragmented knowledge.
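That review loop implies a lifecycle for each artifact. A minimal sketch, assuming a hypothetical three-state model (draft, in_review, approved) that is not the platform's actual workflow:

```python
# Allowed state transitions for an artifact under human review.
ALLOWED = {
    "draft": {"in_review"},               # agent generates, then submits
    "in_review": {"approved", "draft"},   # human approves or sends back
    "approved": set(),                    # downstream agents may consume
}


def transition(state, new_state):
    """Move an artifact to a new state, rejecting invalid transitions."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"Cannot move artifact from {state} to {new_state}")
    return new_state


state = "draft"
state = transition(state, "in_review")  # agent submits for review
state = transition(state, "approved")   # human approves
```

The guard matters: only human-approved artifacts should ever feed downstream design and implementation, and an approved artifact cannot silently revert.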
That is the practical shift.
Humans are not just prompting from the outside. They are working with agentic assistants inside the same knowledge space, further improving and collaborating on the artifacts as the migration progresses.
That matters because transformation programs are never just technical translation exercises. They involve interpretation, judgment, trade-offs, validation, and alignment.
So the operating model I find most useful is not “AI replacing delivery.” It is:
Humans and agents collaborating on structured transformation artifacts.
That is a much stronger foundation for complex migration work.
Why this matters for migration work
System migration is a good example because the problem is easy to see.
Traditional migration programs are usually slowed down by:
- Fragmented discovery
- Poor documentation
- Hidden dependencies
- Manual coding effort
- Manual testing
- Implicit knowledge held by a few people
That is why they feel slow, risky, and expensive.
AI can help with parts of that. But it helps much more when the project has a structured artifact base that agents can read from and contribute to.
That gives the migration team a way to:
- Make business logic more explicit,
- document architecture progressively,
- capture requirements in a usable form,
- surface risks earlier,
- create reusable context for target-state design,
- reduce the amount of knowledge that remains implicit.
That is what we are trying to solve.
Not by pretending a platform replaces migration expertise. It does not.
But by creating a better knowledge layer for migration teams and the coding agents supporting them.
The old ROI model is too narrow
For years, most foundational transformation programs have been judged on a simple logic:
- Lower risk
- Lower cost
That is the destination.
And then on the execution side:
- High complexity
- High uncertainty
That is why these projects often look unattractive even when they are necessary.
The value side is too narrow. The delivery side is too heavy.
That is the old equation.
In practical terms, the project feels like this: we are being forced to do something difficult, expensive, and disruptive, mainly so that things do not get worse.
That is not a strong value proposition.
Generative AI makes a better ROI model possible
The better way to think about transformation ROI is this:
On the destination side, the value is not just:
- Lower risk
- Lower cost
It is also:
- More speed
- More readiness
- More flexibility
On the execution side, complexity still exists, but its effective weight can be reduced when you improve:
- Clarity
- Agility
That is the shift I find most useful.
Not “AI solves transformation.”
It does not.
But AI can improve the conditions under which transformation is executed, and it can improve the business value of what the organization gets at the end.
And if you want that to work in real migration environments, you need more than a model. You need a structured way for humans and agents to build, refine, and reuse project knowledge together.
System migration is still the clearest example
If I had to pick one example where this becomes very tangible, it would be system migration.
Traditional migration programs usually start with familiar triggers:
- End of support
- Rising maintenance cost
- Brittle architecture
- Low agility
- Delivery bottlenecks
And the work itself is usually slowed down by familiar issues:
- Fragmented discovery
- Poor documentation
- Hidden dependencies
- Manual coding effort
- Manual testing
- Tribal knowledge
That is why migration often feels like a necessary evil.
But AI-assisted migration changes the shape of the work.
You can accelerate discovery. You can generate documentation. You can make dependencies more explicit. You can reduce some of the manual burden in planning and testing. You can turn source-code analysis into reusable artifacts. And you can let coding agents use those artifacts again when building the target state.
Then the value of the target state becomes broader:
- Cleaner interfaces
- Easier automation
- Better scalability
- Stronger AI adoption readiness
So the migration is no longer just a remediation exercise. It becomes part of capability building.
That is a much stronger position for the business case.
This applies beyond migration
Migration is just the clearest example.
The same pattern shows up in other parts of the enterprise:
- Process harmonization
- Legacy decommissioning
- Test modernization
- Compliance transformation
- Documentation renewal
- Knowledge transfer
These are not always attractive programs. But they are foundational. And in many organizations, they are exactly the kinds of programs that slow down future change if they are delayed too long.
Generative AI helps here for the same reason: it improves visibility, reduces manual effort, supports execution, and increases the value of the resulting foundation.
But again, it works best when the work is structured well enough to support it.
These programs rarely show up as a single initiative. They show up as a portfolio of “forced” work.
What leaders should ask differently
If the old model was too narrow, then leadership questions also need to change.
Instead of asking:
- Do we really have to do this?
- How cheaply can we get through it?
- Can we postpone it another year?
A better set of questions is:
- What is actually triggering this program?
- Where is execution friction highest today?
- Where can Generative AI reduce that friction in a meaningful way?
- Is there a structured artifact layer that both humans and agents can work against?
- What future capabilities does the target state unlock?
- Are we measuring only savings, or also speed, readiness, and flexibility?
That last point matters.
Because if you still evaluate foundational transformation only through risk and cost, you are probably using an old model for a different environment.
Final thought
I do not think Generative AI makes foundational transformation glamorous. That is not the point.
What it does is more useful.
It helps turn these programs into something with a better execution profile and a better value profile.
And if organizations want to use AI seriously in migration work, they need to go beyond one-off prompts and isolated copilots. They need a better way to:
- Capture migration knowledge,
- structure it as reusable artifacts,
- let coding agents work against it,
- let humans collaborate on it,
- and use it again when designing and building the next-generation system.
That is the thinking behind the platform we are building internally.
Not as a replacement for migration expertise. But as a better knowledge and execution layer for complex migration work.
Because many of the projects that shape the future of the enterprise are not the exciting ones at the edge. They are the ones in the foundation that everyone knows are necessary, but few know how to position or execute well.
If Generative AI helps reduce complexity, reduce uncertainty, and broaden the value of the destination, then these programs should stop being managed as necessary evils.
They should be treated for what they are:
strategic investments in enterprise capability.