Why IT Projects Go Over Budget—And How CFOs Can Fix It with Data-Driven Forecasting

Introduction: The IT Cost Overrun Crisis

IT projects are notorious for going over budget. Research by Bent Flyvbjerg, the leading expert on megaproject risk, shows that IT projects consistently rank among the worst offenders for cost overruns: not necessarily because the average overrun is extreme, but because they are unusually susceptible to occasional spectacular overruns, the low-probability, high-impact events in the tail of the distribution. Major failures, like healthcare.gov’s disastrous rollout (source and more details here) or countless over-budget ERP implementations, underscore the systemic nature of the problem.

The consequences are severe. IT projects that spiral out of control drain resources, delay business transformations, and can even lead to executive-level resignations. Yet, despite decades of evidence, companies still rely on faulty cost estimation methods that fail to predict these overruns accurately.

Why Traditional IT Cost Estimates Fail

The core issue isn’t just bad luck—it’s systemic underestimation of risks. Flyvbjerg’s research highlights four major causes:

  • Optimism Bias & Planning Fallacy – Project teams often assume their initiative will be different from past failures. They believe they have superior processes, better teams, or unique conditions that will prevent overruns. However, research shows that most IT projects fall into the same patterns of cost escalation, regardless of initial optimism. Without referencing historical data, teams tend to underestimate the likelihood of complications, leading to budgets that are too low from the outset.
  • Strategic Misrepresentation – IT project proposals are often intentionally optimistic to secure funding. Decision-makers, whether due to internal pressure or competitive selection processes, understate costs and overstate benefits to get approval. This creates a perverse incentive: the projects that appear cheapest and fastest to implement are often the ones with hidden risks and inevitable overruns. Once a project is greenlit, these risks emerge, leading to escalated costs that were foreseeable but ignored.
  • Scope Creep & Complexity – IT projects rarely remain static. Requirements evolve, new features get added, and integration challenges arise—especially in enterprise environments where multiple systems interact. Each change, no matter how small, introduces cost escalations. Additionally, IT projects involve high degrees of technical uncertainty, meaning that even well-defined plans encounter unforeseen obstacles. Without rigorous contingency planning, projects spiral beyond their original scope and cost expectations.
  • Flawed Selection Process – Organizations often approve projects based on optimistic, unrealistic estimates rather than rigorous cost-risk analysis. The most aggressively low-cost proposals tend to win funding, even if they are less feasible than more conservative alternatives. This leads to a cycle where underestimation is rewarded, while realistic estimates are seen as too expensive. As a result, executives are repeatedly surprised when costs balloon beyond initial projections.

A Data-Driven Solution: Reference Class and Probability-Based Forecasting

Instead of relying on internal intuition and overly optimistic estimates, organizations can leverage data-driven forecasting methods that incorporate historical insights and statistical modeling:

Rather than estimating costs from isolated project assumptions, Reference Class Forecasting compares a new project to a database of similar completed projects. This method, pioneered by Bent Flyvbjerg, shifts the focus from internal expert judgment to empirical data. By analyzing the actual costs of comparable projects across project types, industries, and vendors, organizations can generate fact-based estimates rather than relying on internal predictions. The key is that with this approach you don't need to guess which unforeseen events will cause delays and cost increases – the outcomes recorded in the reference class already reflect the aggregate impact of such events.
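To make the mechanics concrete, the sketch below shows one way a reference-class uplift could be applied to an internal estimate. The overrun figures, the 2.0m base estimate, and the 20% acceptable overrun risk are all hypothetical; the point is simply that the required uplift is read from the empirical distribution of past overruns rather than from internal judgment.

```python
import numpy as np

# Hypothetical reference class: final cost / approved budget for
# comparable, completed IT projects (1.0 = on budget, 1.4 = 40% over).
reference_overruns = np.array([1.02, 1.08, 1.15, 1.22, 1.30, 1.38,
                               1.52, 1.75, 2.10, 3.40])

def rcf_budget(base_estimate, overruns, acceptable_risk=0.20):
    """Reference class forecasting sketch: choose the uplift so that the
    chance of exceeding the budget is at most `acceptable_risk`, based on
    the empirical distribution of past overruns."""
    uplift = np.quantile(overruns, 1.0 - acceptable_risk)
    return base_estimate * uplift, uplift

# Hypothetical internal bottom-up estimate of 2,000,000 for a new project.
budget, uplift = rcf_budget(2_000_000, reference_overruns)
print(f"Risk-adjusted (P80) budget: {budget:,.0f} (uplift factor {uplift:.2f})")
```

In practice the reference class would contain many more observations, and the acceptable risk level is a governance decision rather than a statistical one.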

This approach helps decision-makers anchor their forecasts in reality and adjust budgets based on actual risk patterns. The key advantage of reference class forecasting is that it counteracts optimism bias and strategic misrepresentation by providing a probabilistic distribution of expected outcomes rather than a single best-case scenario. It also elevates the discussion in the decision-making forum by shifting the focus to questions like the following (a short example after the list shows how they can be answered from the reference class):

  • Median and percentiles of cost overruns – What is the typical overrun, and what does the 90th percentile look like?
  • Standard deviation and variance – How much do overruns fluctuate between projects of the same type?
  • Industry-specific comparisons – Are certain technologies or vendors associated with higher risks?
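The snippet below is a minimal illustration of how these questions could be answered once a reference class has been assembled; the project types and overrun figures are invented for the example.

```python
import pandas as pd

# Hypothetical reference class with one overrun ratio per completed project
# (final cost divided by approved budget).
df = pd.DataFrame({
    "project_type": ["ERP", "ERP", "ERP", "CRM", "CRM", "CRM",
                     "Data platform", "Data platform"],
    "overrun": [1.45, 2.10, 3.20, 1.05, 1.10, 1.25, 1.35, 1.60],
})

# Typical overrun vs. tail risk across the whole reference class.
print("Median overrun:    ", df["overrun"].median())
print("90th percentile:   ", round(df["overrun"].quantile(0.9), 2))
print("Standard deviation:", round(df["overrun"].std(), 2))

# Are certain project types (or vendors, technologies) riskier than others?
print(df.groupby("project_type")["overrun"].agg(["median", "std", "max"]))
```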

Where do you want to go from here?

If you are a senior decision-maker or responsible for governance, dive into how to counter the cognitive biases and manage the behavioral aspects of decision-making in our next article here.

OR

If you are interested in the technical aspects of Monte Carlo simulations and probability-based forecasting, continue reading here.

Further Reading

How Big Things Get Done – The Surprising Factors Behind Every Successful Project, from Home Renovations to Space Exploration, by Bent Flyvbjerg and Dan Gardner, New York, Penguin Random House, 2023, 304 pp., £18.99 (hardback), ISBN: 9781035018932