Regional council last week awarded tenders for some $12 million in work, part of the latest expansion at the Region of Waterloo International Airport in Breslau.
The spending is predicated on a massive near-term expansion in the number of passengers using the airport, though pre-pandemic traffic was trending downwards – some 80,000 passengers in 2018, the most recent figures available, a long way from the milestones of 500,000 and one million being touted.
Missing in this and previous spending decisions at the airport – and all other projects, for that matter – was a discussion of contingencies should the numbers not materialize, as has been the case in past forecasts. Who is held accountable? What are the ramifications? What of the wasted money? (Also missing was talk of ending the operating subsidy or recouping the tax dollars spent on capital projects for what is a non-essential service, but that’s another story.)
Failure is not uncommon with large-scale projects, with most of the failures involving errors of omission rather than commission. Quite simply, those pushing for projects overhype the benefits and pass on spelling out all of the negatives, downplaying those that are presented. Reports presented to government bodies that vote on such projects paint too rosy a picture, and the politicians often fail to question the rationale, let alone the details.
Such is the “optimism bias” identified by Danish economist Bent Flyvbjerg in studying the gap between the hype and subsequent deflated reality of big infrastructure projects such as public transit.
In a much-cited 2009 paper with Australian colleagues Massimo Garbuio and Dan Lovallo entitled “Delusion and Deception in Large Infrastructure Projects,” he posits three categories of explanation for the regular occurrence of bad forecasting when it comes to both budgets (overruns) and results (underwhelming).
“The underlying reasons for all forecasting errors can usefully be grouped into three categories: 1) delusions or honest mistakes; 2) deceptions or strategic manipulation of information or processes; or 3) bad luck. Bad luck or the unfortunate resolution of one of the major project uncertainties is the attribution typically given by management for a poor outcome.”
No surprise that those involved turn to bad luck rather than poor decision-making to explain the often poor results.
In fact, public policy-making is rife with bias and faulty logic in governments around the globe, notes Andrew Graham, a professor in the School of Policy Studies at Queen’s University.
“Policy designers and those who must implement government projects or infrastructure are often guilty of what’s known as optimism bias (‘What could possibly go wrong?’) when, in fact, they should be looking at the end goal. They should be working backwards to identify not only what could go wrong, but how the whole process will roll out,” he writes in a 2019 paper.
“Instead, they focus on the beginning — the announcement, the first stages.”
There is often no follow-up to ensure the project is done correctly, nor any review of what went wrong. The politicians move on to the next shiny thing, and the bureaucrats and staff involved shuffle off somewhere else.
Such tendencies were identified in a 2018 paper by UK researchers Bob Hudson, David Hunter and Stephen Peckham.
“Politicians tend not to be held accountable for the outcomes of their policy initiatives – in the event of failure the likelihood is that they will have moved on or moved out. One consequence of this is that they are too easily attracted to the prospect of short-term results. This can lead to the pushing through of policies as quickly as possible, rather than getting involved in the messy, protracted and frustrating details of how things might work out in practice. In general, there is evidence to suggest that the political will necessary to drive long-term policy-making tends to dissipate over time. The concern here is that policy-makers are more likely to get credit for legislation that is passed than for implementation problems that have been avoided. Indeed the latter will probably tend to be seen as ‘someone else’s problem,’” they write.
The researchers point to a 2013 review of failures in major government projects in the UK by the National Audit Office, which found issues in keeping with other such reviews of public policy failures.
The National Audit Office report was succinct in its synopsis:
“[It is] a long-standing problem widely recognised that too frequently results in the underestimation of the time, costs and risks to delivery and the overestimation of the benefits. It undermines value for money at best and, in the worst cases, leads to unviable projects.”
The study identified five interacting factors contributing to such overoptimism: complexity (underestimation of the delivery challenges); evidence base (insufficient objective, accurate and timely information on costs, timescales, benefits and risks); misunderstanding of stakeholders (optimism about the ability to align different views); behaviour and incentives (interested parties boosting their own prospects); and challenge and accountability (decision-makers seeking short-term recognition).
No surprise that all of that sounds familiar on this side of the pond.
“An interesting element in all of this research is the confirmation that cognitive biases play a significant role in assessing risks in policy implementation in a number of ways, often in the face of a mountain of contrary evidence,” writes Queen’s University’s Graham.
“Cognitive biases tend to confirm beliefs we already have. Biases block new information. While we need biases to short-hand our interpretation of events, they often filter and discount new information. Our experiences are our greatest asset and greatest liability in this process.”
Recognizing such biases is in line with questions surrounding rational choice theory, in which bureaucrats – and many others outside government, to be sure – make decisions, including major policy, based on assumptions about human behaviour that may be far from what actually happens in the end.
“Public policy fails for many reasons. Even a relatively simple objective, such as a vaccination campaign, requires myriad pieces of information and expertise, and involves the mobilization, cooperation and coordination of a great number of people and organizations that must act in certain ways at precise times. Many policies fail because the tasks are hard to do. Add to that the propensity for corruption, incompetence and political motivations, to which many public policies are prone, and it seems quite natural that things often do not turn out as expected,” writes Bernardo Mueller of the Department of Economics at the University of Brasilia in a 2019 paper, “Why public policies fail: Policymaking under complexity.”
“But although these evident frailties of the policymaking process are serious predicaments, they are problems that can in principle be dealt with. More effort, more information, better governance, smarter experts, more transparency and good will, all can do much to mitigate those problems and improve the delivery of public policy. Whole disciplines of economics, project management, and public administration provide theories, ideas and techniques for how to achieve better public policy results. Much improvement can certainly be achieved through such means. Better checks and balances on political organizations and improved accountability, for example, can surely do wonders to make public policy better serve the public interest.”
Accountability. Just the thing.