Why Today's IT Roadmaps Reflect Yesterday's Software Economics
For years, enterprise technology planning followed a familiar logic.
Building software was expensive. Maintaining and evolving it was even more expensive.
That logic shaped an entire generation of decisions. Organizations bought SaaS instead of building internal tools. They migrated legacy systems rather than replacing them. They treated old codebases as something to work around, not something to rethink.
All of that made sense as long as those cost assumptions held.
But that world is changing.
The rise of generative AI in software development is not only affecting how fast code gets written. It is also changing how quickly teams can understand existing systems, refactor legacy code, generate tests, review pull requests, and move across unfamiliar codebases. McKinsey notes that generative AI can support software development not only through code drafting, but also through test generation, legacy integration and migration, and code review for defects and inefficiencies.
That matters because many IT roadmaps still reflect old software economics. And when the economics change, the roadmap eventually has to change with them.
The old economics of enterprise IT
Most long-range technology plans were built on two assumptions:
- Building software is expensive.
- Maintaining software is even more expensive.
Those assumptions pushed companies toward durability. If it was expensive to build and expensive to maintain, the rational response was to stretch the life of systems, outsource complexity where possible, and avoid opening up large legacy estates unless absolutely necessary.
That is one reason SaaS became so attractive. Buying software often looked safer than owning it. It moved cost and complexity onto the vendor. Likewise, large modernization programs were framed as multi-year transformation efforts because the alternative — rewriting, replatforming, or rebuilding — looked too risky and too expensive.
This logic is still embedded in many portfolio decisions today. But the cost curve is moving.
AI is changing the cost of building software
One of the clearest data points comes from a controlled experiment by Peng, Kalliamvakou, Cihon, and Demirer. In that study, developers using GitHub Copilot completed a coding task 55.8% faster than the control group.
That result should not be overinterpreted. It was a specific task in a controlled setting, not a blanket statement about every software team or every workflow. But it is still important evidence that AI assistance can materially compress software development time in the right conditions.
This is no longer niche behavior, either. In GitHub's 2024 survey of 2,000 enterprise respondents across the United States, Brazil, India, and Germany, more than 97% reported having used AI coding tools at work at some point. GitHub also reports that respondents say AI helps them work more productively and use saved time for more strategic work such as system design and collaboration.
For CIOs and CTOs, that combination matters: measurable task-level productivity gains plus broad real-world adoption.
The bigger shift: AI is changing the cost of maintaining software too
This is where many strategy discussions are still lagging.
The headline story around AI in software has focused on code generation. But maintenance is where the economics may become even more strategically important.
Maintenance is not just bug fixing. It includes understanding old systems, reviewing code, navigating unfamiliar repositories, generating and extending tests, updating interfaces, and safely changing existing workflows. McKinsey explicitly highlights code review, testing, and legacy integration and migration as areas where generative AI can create value. GitHub's 2024 survey likewise found that respondents see AI coding tools as helpful for understanding existing codebases and for adopting new programming languages faster.
A useful operating example comes from Duolingo. In GitHub's customer story, Duolingo reports a 67% decrease in median code review turnaround time and says GitHub Copilot increased developer speed by 25% for developers who were new to a repository or framework, and by 10% for developers already familiar with that codebase.
That is not a peer-reviewed benchmark, and it should be treated as a company-reported case rather than universal proof. But it is still directionally important because it shows the effect is not limited to writing greenfield code. It extends into review cycles, onboarding into unfamiliar code, and day-to-day engineering flow.
Once maintenance work gets cheaper as well as build work, the implications for enterprise planning become much larger.
Why this matters for the IT roadmap
If software becomes cheaper to build and cheaper to modify, a number of long-standing assumptions start to weaken.
A SaaS contract that looked obviously rational three years ago may now deserve a second look. A legacy modernization effort that once required a large external program may now be feasible with a smaller internal team using AI-assisted workflows. A build-vs-buy decision that was closed two budget cycles ago may no longer reflect current economics.
McKinsey's recent work on IT modernization makes this point more concrete. It argues that generative AI is "radically recalibrating the costs and benefits" of modernizing legacy technology. In one example, McKinsey says a transaction processing system modernization that would previously have cost well over $100 million could now cost less than half that with generative AI. The same article says McKinsey's experience indicates 40 to 50% acceleration in modernization timelines and around 40% reduction in costs derived from technology debt, while also improving output quality.
Those are not universal market averages; they are McKinsey experience-based figures from specific modernization contexts. But they reinforce the broader point: legacy systems that were once economically untouchable may not be untouchable anymore.
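As rough arithmetic, the cited figures can be made concrete with a hypothetical program. The baseline budget and timeline below are illustrative assumptions, not data from the McKinsey article; only the 40% cost reduction and 40 to 50% timeline acceleration come from the figures above:

```python
# Illustrative arithmetic only. The baseline cost and duration are
# hypothetical; the 40% cost reduction and 40-50% timeline acceleration
# are the McKinsey experience-based figures cited above.
baseline_cost_musd = 100.0   # hypothetical modernization budget, $M
baseline_months = 36         # hypothetical baseline timeline

cost_with_ai = baseline_cost_musd * (1 - 0.40)   # ~40% cost reduction
months_fast = baseline_months * (1 - 0.50)       # 50% acceleration
months_slow = baseline_months * (1 - 0.40)       # 40% acceleration

print(f"cost: ~${cost_with_ai:.0f}M, "
      f"timeline: {months_fast:.0f}-{months_slow:.1f} months")
```

Even under these toy numbers, the shift is large enough to move a program from "multi-year transformation" to something a roadmap cycle can actually absorb.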
The three places outdated assumptions show up first
In practice, the first cracks usually appear in three places.
1. SaaS renewals
Many enterprise subscriptions were justified in an era when internal development looked slow, expensive, and maintenance-heavy. As those costs change, some categories of software — especially workflow layers, internal dashboards, admin tools, and operational interfaces — start to look less like products you must rent and more like capabilities you could selectively build or rebuild.
This does not mean every SaaS product should be replaced. It means renewals should be tested against current economics, not historical habit.
2. Legacy modernization
For years, legacy systems survived partly because the cost of understanding and changing them was too high. That is where AI may be especially consequential. McKinsey describes generative AI as a tool for translating "often-impenetrable" legacy artifacts into understandable process descriptions and helping teams determine what should be updated, kept, or discarded.
That changes the shape of modernization. In some cases, the right answer may still be a major program. In others, work that was previously framed as transformation may start looking more like targeted refactoring and selective rebuild.
3. Build-vs-buy decisions
This is becoming less binary. McKinsey notes that organizations are moving from a simple "build versus buy" framing toward a mix of off-the-shelf tools, customization, and proprietary development.
That is exactly where leadership teams need a fresh decision framework. The question is no longer just whether to build or buy. It is which layers of the stack should be bought, which should be customized, and which are now strategically worth owning because the cost of ownership has fallen.
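One way to see why such decisions deserve reopening is a simple undiscounted cost comparison. Every number below is a hypothetical assumption for illustration; the 40% effort reduction mirrors the experience-based figure cited earlier, not a guarantee for any given team:

```python
# Hypothetical build-vs-buy break-even sketch. All figures are
# illustrative assumptions, not benchmarks.

def total_cost_of_ownership(build_cost, annual_maintenance, years):
    """Undiscounted cost of building and then owning a tool."""
    return build_cost + annual_maintenance * years

saas_annual = 200_000   # hypothetical subscription cost per year
years = 3

# Old economics: expensive to build, even more expensive to maintain.
old_build = total_cost_of_ownership(
    build_cost=400_000, annual_maintenance=100_000, years=years)

# New economics: assume AI assistance cuts build and maintenance
# effort by ~40% (the contested variable in any real decision).
ai_factor = 0.60
new_build = total_cost_of_ownership(
    build_cost=400_000 * ai_factor,
    annual_maintenance=100_000 * ai_factor,
    years=years)

rent = saas_annual * years
print(f"rent: ${rent:,}  old build: ${old_build:,}  "
      f"new build: ${new_build:,.0f}")
```

Under these assumptions, renting wins against the old build cost but loses to the new one. The point is not the specific numbers; it is that the answer flips without the SaaS product changing at all, which is why decisions closed under the old cost curve are worth rerunning.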
This is not just an AI adoption issue
It is tempting to frame all of this as an "AI initiative." That is too narrow.
This is a planning issue. It is an investment issue. In many organizations, it is also a capital-allocation issue.
McKinsey's 2024 State of AI survey found that 65% of respondents say their organizations are regularly using generative AI in at least one business function, and 67% expect their organizations to invest more in AI over the next three years. At the same time, 44% say their organizations have already experienced at least one negative consequence from generative AI use, with inaccuracy reported most often.
That combination matters. Leaders should not respond to changing software economics by launching uncontrolled experimentation. They should respond by revisiting the assumptions behind the roadmap with appropriate governance, risk management, and business discipline.
Questions leaders should ask now
If the economics of software are shifting, these are the questions worth putting on the table:
- Which SaaS subscriptions are we renewing out of inertia rather than current economics?
- Which legacy systems have become more changeable than our roadmap assumes?
- Which build-vs-buy decisions should be reopened because the cost model has moved?
- Are our procurement, amortization, and vendor-management practices still optimized for a world in which software ownership was dramatically more expensive?
- Do our roadmap horizons still reflect technology reality, or just budgeting tradition?
These are not edge questions. They are becoming core portfolio questions.
A better way to think about "disposable software"
The phrase can be misleading if it sounds like a defense of low-quality code. That is not the point.
"Disposable software" is better understood as software with a shorter economic half-life. Not every system should be rebuilt. Not every custom workflow should be owned. But more software components may become replaceable, rebuildable, or strategically temporary than enterprise planning models were built to assume.
That creates freedom of movement.
It can reduce lock-in. It can shorten the gap between decision and delivery. It can also change how organizations think about what is strategic to own versus what is merely convenient to rent.
The real implication for CIOs and CTOs
The organizations that adapt earliest will not necessarily be the ones running the most AI pilots. They will be the ones that notice sooner that their planning assumptions are aging.
That is the real leadership challenge.
When the economics of software shift, roadmaps that were once prudent can quietly become conservative. Vendor choices that once looked efficient can become expensive by default. Modernization plans that once looked unrealistic can become newly viable.
In other words: the roadmap does not fail all at once. It starts by drifting away from reality.
The job now is not to abandon long-term planning. It is to update the economic assumptions underneath it.
Source notes
Key figures referenced above:
- 55.8% faster coding task completion — Peng et al., The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.
- Duolingo case study — 25% increase in developer speed for developers new to a repository or framework, 10% for developers already familiar with a codebase, and 67% decrease in median code review turnaround time. Treated as company-reported evidence rather than independent research.
- GitHub 2024 survey — more than 97% of enterprise respondents reported having used AI coding tools at work.
- McKinsey 2024 State of AI — 65% of respondents say their organizations are regularly using gen AI in at least one business function; 67% expect more AI investment over the next three years; 44% report at least one negative consequence from gen AI use.
- McKinsey on IT modernization — experience-based claims on modernization acceleration (40–50%) and cost reduction (~40%) with gen AI. Presented as McKinsey experience, not market-wide averages.