A century of Australian cost-benefit analysis
Mark Twain (1897) gives the journey time from Sydney to Melbourne as 17 hours, but it is not clear how long it took to change trains (and go through Customs) at Albury. The number of passengers passing through Albury is also unknown, so no estimate has been made of the ‘time saving to passengers’.
Sources: Australasian Railways 1904; Commonwealth of Australia 1920, 1921.
A sanity check on the result
Despite popular wisdom over the last century about the benefits and desirability of standardising the Australian railway gauges, considered economic analysis shows that any benefits at the time of Federation would have been outweighed significantly by the costs. This finding should not come as too much of a surprise if we take into account some key features of Australian railway systems at the time of Federation:
Some lessons from history
The problem of incompatible gauges, and hence the implied wisdom of unifying the various railway lines, is embedded in the Australian psyche. The debates, conferences, studies and calls for unification since the 1890s reflect this belief, as do various publications and popular depictions of the issue: for example, Coleman & Tanner (1967, p. 91, reproducing a 1920 break-of-gauge cartoon, ‘Exorcising the Australian Devil’) and Blainey (1966, p. 245).
However, considered analysis shows that, given the initial blunder of different gauges, unification, at least at the time of Federation, would not have been sensible. Whether the balance of costs and benefits would have swung the other way in later years is a moot point: the costs of standardisation also rose dramatically in the early 20th century because of increasing wage levels, a Commonwealth tariff on steel (and hence rails) and more extensive railway networks.
The clear lesson is that there is no substitute for rigorous economic analysis as an input into informed decision-making. This is most particularly true for large, ‘nation-building’ projects where it may seem perfectly obvious to the man in the street that they should proceed.
A second lesson is that Australia, as well as the United States, was capable at the time of Federation of producing analyses that come very close to today’s use of formal cost-benefit analysis. The unfortunate aspect of the lesson, however, is that the figures produced by the Railway Accountants in 1904 on the benefits of unification were totally ignored. Subsequent government-sponsored studies into unification of rail gauge focused solely on the cost of standardisation and on which gauge was to be preferred, although several state governments did do their own internal sums on the financial implications for them.
Unlike the USA in 1936, Australia never institutionalised the use of economic analysis as an input to the political decision-making process. It is not entirely clear how much the USA has benefited by requiring its regulatory agencies to undertake cost-benefit analysis before proposing regulations, for example. Fuchs and Anderson (1987) conclude that, despite a decade and a half of presidential decrees, cost-benefit analysis had not become institutionalised in the regulatory process at the agency level, with only 1 per cent of rules being subjected to analysis by the agencies. Hahn and Tetlock (2007) claim that agencies often fail to comply with Office of Management and Budget guidelines for analysis, and raise the possibility of legislative mandating of cost-benefit analysis for important regulations.
However, it would be reasonable to speculate on whether the current burden of red tape in Australia could have been significantly lower had we taken a similar path to that of the USA. It is instructive to read Roger Clarke’s (1994) passionate examination of the abortive introduction of the Australia Card scheme by the Minister for Social Security. An excerpt is worth quoting at length:
The over-enthusiasm of the Department for the program is of historical interest. Of ongoing concern, however, was the Department’s failure to apply conventional cost/benefit analysis principles to the exercise. Indeed, there was evidence of failure to even understand the concepts involved. In the 1992 [Annual] Report, for example, net present value techniques were not applied, hardware and maintenance costs were overlooked, no costs were imputed for the efforts of other agencies and clients (which in the case of a program of such wide scope is essential), and the bases on which savings were projected into the future were not stated. The most glaring error was the complete omission of the staff costs involved in 137,000 manual examinations of files, 18,000 actual reviews, 10,000 actions against clients, 1,300 queries by clients, 150 formal appeals, 1,500 debt recovery actions (of which 700 involved negotiations with the debtor), and 100 briefings of the Director of Public Prosecutions. This omission was despite statements that ‘the real cost has been in the time and effort of staff administering the program’ and ‘the reporting requirements are stringent and a lot of time and effort is needed to comply with them’...
The Privacy Commissioner expressed similar concerns, albeit more gently ... An external audit [by the ANAO] of the Parallel Data Matching Program also criticised the quality of cost/benefit analysis undertaken, and pointed out that the Act ‘requires the tabling of a comprehensive report in both House of Parliament ... Sufficiently comprehensive cost/benefit information had not been included in either Report ...’
Clarke’s example highlights the importance of requiring comprehensive analysis of the costs and benefits of regulatory programs. More importantly, it demonstrates that mandating the use of rigorous cost-benefit (or other) analysis will not be effective unless the bureaucracy understands the underlying principles, and applies them of its own volition. Cultural factors are more important, therefore, than formal guidelines and rules.
Current use of Cost-Benefit Analysis in Australia
Ascertaining the extent to which cost-benefit analysis is used by Australian governments is difficult, not least because such analyses are rarely published even if – or particularly if – they are used in the political decision-making process. One example of a well-written, published report is that of the Auditor General of the Australian Capital Territory (2002) on the V8 Super Car races, which examined the claimed economic benefits of the event.
Despite media and Parliamentary interest over a period of some years about shortcomings in the Regional Partnerships program in the then Department of Transport and Regional Services (DOTARS), little serious economic analysis appears to have been undertaken. There is no evidence of the use of cost-benefit analysis, despite its feasibility being demonstrated by Dobes (2007). Further, the Australian National Audit Office (2007, vol. 2, pp. 413-415) found that departmental officials rarely made use even of basic discounted cash flow analysis to evaluate projects or to advise ministers; partly because of lack of any guidance in the internal procedures manual. Since no cost-benefit analysis was conducted by the Department itself prior to the establishment of the program, its national benefit has never been established.
A comprehensive overview is not possible, but discussions were held with three Australian Government departments – the then DOTARS, Defence, and Health and Ageing – in September and October 2007 in order to obtain a cross-sectional perspective on current practice. AusLink (DOTARS) deals with large infrastructure projects, Defence has a range of large projects that involve significant uncertainty of outcome as well as difficulty in measurement of benefits, and the focus of Health and Ageing is on regulatory activity.
AusLink (Department of Infrastructure, Transport, Regional Development and Local Government)
Australia’s federal system has meant the involvement of three tiers of government in the planning, funding, construction and maintenance of roads. Modern roads are not subject to gauge constraints like railways, but even road construction requires a degree of coordinated planning and implementation. Proposals to increase weight limits for trucks in the 1990s, for example, led to the sudden realisation that bridges on local roads might not be strong enough to take weights regularly used on national highways.
Following publication of Green and White Papers in 2002 and 2004 respectively, the Australian Government and the states have adopted a joint approach to planning and developing the land transport component of AusLink within 24 designated ‘strategic’ transport corridors. Within the framework of these designated corridors, the states are to identify projects which will be subjected to a broadly based merits test by the Australian Government. The merits test will include some basic cost-benefit analysis, with further, more detailed analysis undertaken during the scoping and development stages.
The AusLink arrangements raise a number of issues in terms of cost-benefit analysis. For example, distributional weights to incorporate equity considerations or to favour freight traffic over passengers (eg by assigning zero value to passenger travel times, but not to trucks) may be used in presenting a business case for a project. An example given in Australian Transport Council (2006, volume 3, p. 82) is that of weighting of benefits by geographic area: benefits accruing to regional residents are multiplied by a weight of 1.5 compared to urban areas whose weight is only 0.5. It is professionally accepted practice, where cost-benefit analysis is adjusted in an arbitrary way such as this, to present both adjusted and unadjusted results to ensure transparency; but this course of action does not appear to be recommended by the Australian Transport Council.
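The transparency point above can be made concrete with a toy calculation. The dollar figures below are hypothetical; only the 1.5 and 0.5 geographic weights are taken from the Australian Transport Council example, and the function name is illustrative.

```python
def weighted_benefits(benefits_by_area, weights):
    """Return (unadjusted, adjusted) benefit totals in $m.

    Presenting both totals, as accepted practice recommends, makes
    the effect of the distributional weights transparent.
    """
    unadjusted = sum(benefits_by_area.values())
    adjusted = sum(b * weights[area] for area, b in benefits_by_area.items())
    return unadjusted, adjusted

# hypothetical project benefits, split by geographic area ($m)
benefits = {"regional": 40.0, "urban": 60.0}
weights = {"regional": 1.5, "urban": 0.5}   # ATC-style geographic weights

totals = weighted_benefits(benefits, weights)
print(totals)  # (100.0, 90.0): weighting changes both the total and the ranking basis
```

Reporting only the adjusted figure of $90m would conceal that the unweighted benefits are $100m, which is precisely why both results should be tabled.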
Another major issue is the fact that Commonwealth funds are allocated to road and rail projects within designated corridors. It is possible that other road and rail projects – for example, within congested cities – would generate greater national benefits than those within the corridors, but these are not considered because they fall within the purview of the states. Allocating funding on the basis of jurisdictional boundaries risks achieving only second-best outcomes nationally, even if AusLink funding decisions are in future based on genuine cost-benefit analysis.
In some ways, the proposed implementation of cost-benefit methods under the second stage of AusLink from 2009-2010 is simply a catch-up to the 1936 United States Flood Control Act. Nevertheless, the AusLink approach to infrastructure investment is a major step forward. It can only be hoped that similar approaches can be adopted in other areas of Commonwealth-State funding and regulatory policy.
The health sector
Discussion with senior officials of the Department of Health and Ageing indicated that the dominant analytical paradigm in the health sector is cost-effectiveness analysis, rather than cost-benefit analysis. Indeed, the Department has only one or two officers with a background in cost-benefit analysis.
Recommendations on listing drugs under the Pharmaceutical Benefits Scheme, for example, are made by committees on the basis of qualitative judgement, or using measures such as Quality Adjusted Life Years (QALY). QALYs are measures of health that combine the time that a person can be expected to survive and the state of health that the person can expect to enjoy till their death. Drugs and other treatments are compared in terms of cost per QALY. Boardman (2006, ch. 17) and Drummond (1997, chs. 5-6) provide more detailed expositions.
In general, cost-effectiveness analysis (CEA) compares the cost of a particular action or treatment with a single output such as a QALY. For specific treatments, outputs can also be more specific measures such as reduced blood pressure, or the improvement in cholesterol levels. CEA is more limited in scope than cost-benefit analysis because outputs are not monetised. In particular, CEA provides ratio measures only (cost/output), so that comparisons between treatments may be bedevilled by scale effects (see Boardman 2006, ch. 17). More seriously, CEA does not permit comparisons between projects with different outputs: for example, a hospital versus a school.
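A minimal sketch of a cost-per-QALY comparison illustrates the scale problem noted above; the treatment costs, life-years and quality weight are hypothetical, and the function is purely illustrative.

```python
def cost_per_qaly(incremental_cost, life_years_gained, quality_weight):
    """Cost-effectiveness ratio: cost divided by QALYs gained,
    where QALYs = life-years gained x quality-of-life weight (0 to 1)."""
    return incremental_cost / (life_years_gained * quality_weight)

# two hypothetical treatments with a 100-fold difference in scale
small_programme = cost_per_qaly(30_000, 2.0, 0.75)
large_programme = cost_per_qaly(3_000_000, 200.0, 0.75)

print(small_programme, large_programme)
# both ratios are identical, yet the programmes differ enormously in
# total cost and total health gain -- the ratio alone cannot rank them
```

This is the sense in which CEA provides ratio measures only: without monetised benefits, two programmes of very different size can look equally ‘cost-effective’.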
It is possible that decision-making in the health sector is subject to a form of path dependence. The phrase ‘path dependence’ – sometimes referred to as ‘lock-in’ – has come to have multiple meanings; from the broad concept of ‘history matters’ to the more narrow one of ‘institutions are often self-reinforcing’. A popular example among economists (eg Krugman 1994, ch. 9) is the QWERTY keyboard: it has been claimed that the keyboard was designed to slow down typists, to avoid jamming in early mechanical typewriters. But the keyboard has persisted into the electronic age because typing schools have continued to teach on that basis, and despite the well publicised existence of a more efficient Dvorak keyboard in the 1930s. Much of this view is debunked by Liebowitz and Margolis (1990), but the debate about similar cases continues.
Because past analyses and decisions in the health sector have been based predominantly on cost-effectiveness analysis, academics and government advisory committees are more familiar with it, and may consider it to be the ‘correct’ approach from the perspective of the health sector. Another factor that probably reinforces path dependence is that the data and information collected by health bureaucracies continue to be tailored to the needs of cost-effectiveness analysis (because that is what is used), and do not include the sort of data (eg willingness to pay) that is required for cost-benefit analysis. Switching to cost-benefit analysis would therefore require considerably more effort than continued use of cost-effectiveness analysis.
The health system therefore continues to perpetuate the use of cost-effectiveness analysis. Yet it is clear from the academic literature that cost-benefit studies are in fact feasible, despite the relative neglect of the method. Studies such as that by Abelson (2003) are still fairly rare.
An aversion to placing monetary values on (statistical) life is sometimes claimed to be an impediment to the use of cost-benefit analysis in the health sector. While some reluctance on the part of medical practitioners is understandable, health administrators routinely make decisions that determine the provision of treatments and hence the probability of saving lives, or at least affecting the quality of life. In practice, then, implicit values are already placed on human lives by medical administrators and government medical committees.
An example of implicit valuation of human life used by medical practitioners is the triage system. In resource-constrained emergency situations triage may require a trade-off that allows some lives to be lost in order to preserve others. Similarly, a hospital may purchase a sophisticated machine to treat a relatively small number of patients who have a rare life-threatening disease, possibly at the cost of increased risk of loss of life in other sections of the hospital due to a shortage of drugs or other less iconic equipment.
Further, the National Health and Medical Research Council (2001) reports monetary thresholds per life-year gained that are used as guidelines by the Pharmaceutical Benefits Advisory Committee to recommend pharmaceuticals that qualify for government subsidies. In other words, the guidelines specify maximum costs per life saved as a means of deciding whether a specific drug is sufficiently cost-effective to approve. A similar approach is used in the UK where the National Institute for Health and Clinical Excellence ‘rarely accepts that drugs are cost-effective if they cost more than £25,000-35,000 per QALY’ (The Economist, 25 February 2006, p. 53).
Greater use of cost-benefit analysis in the health sector would simply make explicit currently implicit values placed on (statistical) life. However, even if this were a step too far, it is not clear why the health sector uses relatively unsophisticated forms of cost-effectiveness analysis. In particular, greater use could be made of Data Envelopment Analysis (see below) which permits comparisons between treatments or projects with multiple inputs and outputs.
The allocation of Defence resources between the many competing (and sometimes complementary) demands for weapons platforms, training, ammunition, infrastructure, repair and maintenance facilities, etc, is best viewed from the perspective of three broad levels:
Greater attention has been given in recent years to increasing the efficiency of acquiring equipment: the lowest, ‘procurement’ level in the decision-making hierarchy. In particular, the Defence Materiel Organisation has pursued a more commercial approach to managing the process of procurement. A few analytical studies have also appeared, including those by Ergas and Menezes (2004), Throsby and Withers (1999), and Thomson (2003) on Defence spending.
Force Structure and Capability Analysis, however, appears to be largely based on qualitative analysis by various committees. Although the process is not transparent to external observers, anecdotal material gathered from various sources suggests that relatively rigorous cost-effectiveness analyses – let alone cost-benefit analysis – are seriously lacking. Given the development and use of some of these techniques under United States Defense Secretary Robert McNamara in the 1960s (Hitch, 1967), and the contributions of McKean (1963), this is surprising.
As the need for better integration of weapons systems in the three Services grows, there is an increasing probability of a form of path dependence unduly influencing the purchase of equipment, or the disposition of forces. Replacement of radio equipment by the Army, for example, may well be constrained by the type and capabilities of existing older equipment used by the Navy or Air Force unless all are replaced together on the basis of common, complementary objectives. This implicit constraint on obtaining compatible equipment could in effect condemn the Defence Forces to ‘fighting the last war’.
While a purely budgetary perspective might suggest replacing only the Army radios, a full cost-benefit analysis that took into account the benefits of full communications inter-operability, as well as newer technology for all three Services, would be more likely to provide a better guide to overall Defence priorities. In particular, unless a technique such as cost-benefit analysis is used, there is a risk of large ‘nation-building’, iconic projects crowding out the ‘boots and bullets’ that are essential for on-the-ground support of personnel in conflict situations. A similar problem appears to have beset American forces in Iraq: although well equipped to fight conventional high-intensity wars, they, initially at least, lacked the intelligence resources and counter-terrorism capability necessary for low-level urban conflict.
Like the health sector, Defence relies on committees to make recommendations, presumably based mainly on qualitative or purely budgetary analysis. Despite the complexities, however, it is at least arguable that considerable scope exists to undertake cost-effectiveness analysis.
About 30 outputs are identified in the Defence budget. The cost-effectiveness of increasing specific outputs could be assessed using some index of their contribution to achieving overall strategic objectives, or their destructive capacity (sometimes expressed as ‘kill capability’). Focus groups of experts could be used to generate such indexes; that is, using a form of the Delphi method (http://en.wikipedia.org/wiki/Delphi_method).
Data Envelopment Analysis (DEA) could also be used where there are changes in more than one output or outcome. Using linear programming techniques, DEA measures the productive efficiency of different decision making units (which could be a weapons platform, or set of outputs) relative to a common production frontier. See, for example, Coelli et al (2005). In effect, DEA is a form of cost-effectiveness analysis that permits comparisons of more than one output at a time.
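The envelopment model behind DEA can be sketched in a few lines of linear programming. The sketch below implements the standard input-oriented, constant-returns (CCR) formulation described in Coelli et al (2005) using SciPy; the two ‘decision-making units’ and their input/output figures are entirely hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o, relative to all units.

    X: (n_units, n_inputs) input matrix; Y: (n_units, n_outputs) output matrix.
    Solves: min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                             sum_j lam_j * y_j >= y_o,   lam_j >= 0.
    An efficient unit scores 1.0; theta < 1 means its inputs could be
    scaled down to theta while still producing its outputs.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                            # input constraints
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                            # output constraints (sign flipped for <=)
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# hypothetical units: same output, but the second uses twice the input
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
eff = [dea_efficiency(X, Y, o) for o in range(len(X))]
print(eff)  # the first unit lies on the frontier; the second does not
```

The same function handles multiple inputs and outputs per unit, which is precisely what simple cost-effectiveness ratios cannot do.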
Cost-benefit analysis may also be feasible for some Defence projects or issues. So-called contingent valuation methods (essentially questionnaires designed to estimate the ‘willingness to pay’ of respondents) can be used to estimate non-market items such as the preservation of Kakadu National Park or the reduction of risk to life. Boardman et al (2006, ch. 14) provide a good overview of the approach. The Defence Science and Technology Organisation (1999) appears to have at least proposed work to assess the use of insensitive munitions using contingent valuation methods, but little other material of a similar nature appears to be available for defence-related studies.
Time for a renaissance of Cost-Benefit Analysis?
Apart from the perennial debate over national Defence, Australia faces major issues such as climate change and an ageing population in the near future. Soon after its election in November 2007, the Australian Government signalled its intention to address education, provision of health services, increased infrastructure investment, provision of faster broadband services, water supplies, mitigation of greenhouse emissions, etc. These issues encompass a very wide range of competing and major calls on national resources.
Unless some form of comprehensive, analytically consistent and comparable decision-making process is adopted, resources will not be used in the most efficient way possible, reducing Australia’s standard of living unnecessarily. Given the number and magnitude of issues facing us, this is not an inconsequential matter.
Cost-benefit analysis is the only method that allows comparisons between sectors such as roads, hospitals, Defence, etc, as well as being capable of comprehensive analysis that takes into account factors such as environmental effects and other social costs and benefits. It is therefore best placed for ‘whole of government’ determination of spending priorities.
Fostering increased use of Cost-Benefit Analysis
Even if the desirability of applying cost-benefit analysis to government decision-making about projects and regulatory proposals is generally accepted, implementation of such a policy is likely to be far from easy.
One means of introducing the use of cost-benefit analysis into the provision of public service advice to ministers would be to mandate its use for all proposed regulations or projects.
But compulsion is always problematic. Monitoring and enforcement of rules or regulations is generally resource-intensive. And, more often than not, bureaucracies treat rules as mere impediments, or, at best, a necessary evil that needs to be recognised formally but circumvented in practice: ‘public checking in, private checking out’ in the management argot.
Boyfield (2007) summarises the reasons for the failure of Regulatory Impact Assessments (RIAs) in Britain to live up to expectations:
An analogous situation in Australia has been the creative approach of the bureaucracy to evading the more prescriptive procurement guidelines imposed on it in January 2005. There is similarly little hope of positive outcomes from imposing the use of cost-benefit analysis on a potentially uncooperative Australian Public Service.
If the use of cost-benefit analysis is to become a genuinely organic part of government decision-making processes, it needs to become a naturally desirable and accepted part of the bureaucratic culture. However, adoption of a relatively persuasive approach, rather than compulsion, is likely to require a sustained effort over a lengthy period. And even if the bureaucracy ultimately comes to appreciate the utility of cost-benefit analysis, any scepticism or resistance at the political level must also be overcome. It is often at the political level that the findings of an objective cost-benefit analysis are least welcome, particularly when a government has a specific, predetermined agenda.
Conceptually, there are two major approaches to fostering increased usage of cost-benefit analysis in Australia. The costs of producing and using cost-benefit studies can be reduced (a downward shift of the supply curve for cost-benefit studies), or demand for such studies can be increased (an outward shift of the demand curve). Reduced costs and increased demand can each be promoted in a number of ways.
Reducing the cost of producing cost-benefit analyses
Encouraging greater use of cost-benefit analysis
As well as lowering the cost of cost-benefit studies, demand for them can be encouraged in a number of ways:
Fostering a cost-benefit analysis culture in government would be more likely to succeed if supported by appropriate institutional arrangements. Some possibilities include:
Just as individuals maximise their income from investments in shares by choosing those with the highest expected net yield, governments can maintain social returns on expenditure (and hence the standard of living of the community) at the highest level possible by choosing projects and policies with the highest levels of net present value.
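The share-portfolio analogy can be made concrete with a small net-present-value calculation. The two projects, their cash flows and the 7 per cent discount rate below are purely hypothetical.

```python
def npv(cash_flows, rate):
    """Net present value: cash_flows[t] is received at the end of year t
    (t = 0 is the present), discounted at the given annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# two hypothetical projects, each costing $100m up front ($m figures)
road = [-100, 30, 30, 30, 30]        # level benefits
hospital = [-100, 10, 25, 40, 55]    # benefits that build over time

rate = 0.07
print(npv(road, rate), npv(hospital, rate))
# both NPVs are positive, but one project adds more net value than the other:
# ranking by NPV, not by total undiscounted benefits, guides the choice
```

The point of the exercise is the ranking rule itself: with a common discount rate, NPV comparisons can be made across sectors as different as roads and hospitals, which is exactly what sector-specific cost-effectiveness ratios cannot provide.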
Because governments depend on considered advice from their officials in making decisions about expenditure, they will be best served by a Public Service that is comfortable with commissioning, or itself undertaking, economic analysis that complements purely budgetary perspectives.
However, considerable cultural change is a prerequisite for the successful implementation of improved analyses of alternative uses of government funds. Training, data collection, and a degree of standardisation in approach are essential. But sustaining such a change in the long term also requires a more open approach, including publication of results of cost-benefit studies in order to make them familiar to the public and to help governments to justify their expenditure decisions.
Abelson, P. 2003, Cost-Benefit Analysis of Proposed New Health Warnings on Tobacco Products, Applied Economics, Report Prepared for Commonwealth Department of Health and Ageing, Canberra.
Anderson, J. (Minister for Transport and Regional Services) 2002, AusLink: Towards the National Land Transport Plan, Green Paper, Department of Transport and Regional Services, Canberra.
Anderson, J. and Campbell, I. 2004, AusLink White Paper, Commonwealth of Australia, Canberra.
Arnold, J. L. 1988, The Evolution of the 1936 Flood Control Act, Office of History, United States Army Corps of Engineers, Fort Belvoir, Virginia.
Australian Capital Territory Auditor General 2002, V8 Car Races in Canberra – Costs and Benefits, Performance Audit Report, Report no. 5, July, Publishing Services, Department of Urban Services, publication no. 02/1374.
Australian National Audit Office 2007, Performance Audit of the Regional Partnerships Programme, Audit Report no. 14 2007-08, Commonwealth of Australia, Canberra.
Australian Transport Council 2006, National Guidelines for Transport System Management in Australia: volume 3 Appraisal of Initiatives, Commonwealth of Australia, Canberra.
Australasian Railways 1904, Conference of the Commissioners and General Managers of the State Railways, Sydney, May, 1904, William Applegate Gullick, Government Printer, Sydney.
Blainey, G. 1966, The Tyranny of Distance: how Distance Shaped Australia’s History, Sun Books, Melbourne.
Boardman, A. E., Greenberg, D. H., Vining, A. R. & Weimer, D. L. 2006, Cost-Benefit Analysis: Concepts and Practice, 3rd edition, Pearson Prentice Hall, New Jersey.
Boyfield, K. 2007, RIAs: why don’t they work?: a submission to the Business Council for Britain, Centre for Policy Studies, London. [RIA = Regulation Impact Assessment]
Clarke, R. 1994, ‘Computer matching by government agencies: the failure of cost/benefit analysis as a control mechanism’, November 1994, viewed at http://www.anu.edu.au/people/roger.clarke/DV/MatchCBA.html; published in Information Infrastructure & Policy, March 1995, pp. 29-65.
Coelli, T. J., Rao, P., O’Donnell, C. & Battese, E. 2005, An Introduction to Efficiency and Productivity Analysis, 2nd edition, Springer, USA.
Coleman, P. & Tanner, L. 1967, Cartoons of Australian History, Thomas Nelson, Australia.
Commonwealth of Australia 1920, Report of the Resolutions, Proceedings, and Debates of the Premiers’ Conference held at Melbourne, May, 1920; Together with Appendices, Albert J Mullett, Government Printer, Melbourne.
—— 1921, Uniform Railway Gauge. Resolution of Conference of Commonwealth and State Ministers. Melbourne, November, 1921, Albert J Mullett, Government Printer, Melbourne.
Deane, H. 1902, The Central Railway Station, Sydney: a lecture delivered before the Sydney University Engineering Society, on the 19th December, 1902, publisher unknown, 1902(?), Printer: F. Clarke, Sydney.
Department of Finance and Administration 2006, Handbook of Cost-Benefit Analysis, Financial Management Reference Material no. 6, January, Commonwealth of Australia, Canberra.
Dobes, L. 2007, ‘Turning isolation to advantage in regional cost-benefit analysis’, Economic Papers, vol. 26, no. 1, pp. 17-28.
Drummond, M. F., O’Brien, B., Stoddart, G. L. & Torrance, G. W. 1997, Methods for the Economic Evaluation of Health Care Programmes, 2nd edn, Oxford University Press, UK.
Dupuit, J. 1844, ‘On the measurement of the utility of public works’, Annales des Ponts et Chaussees, translation reprinted in Munby, D 1968, Transport: Selected Readings, Penguin Modern Economics Readings, Penguin Books, UK.
Ergas, H. & Menezes, F. 2004, Some Economic Aspects of the Weapons Systems Acquisition Process, Working Paper no. 1, 30 March, Australian Centre of Regulatory Economics, Australian National University, Canberra.
Fuchs, E. P. & Anderson, J. E. 1987, ‘The institutionalization of Cost-Benefit Analysis’, Public Productivity Review, Summer, no. 42, pp. 25-33.
Gramlich, E. M. 1981, Benefit-Cost Analysis of Government Programs, Prentice-Hall, New Jersey, USA.
Hahn, R. W. & Tetlock, P. C. 2007, ‘Has economic analysis improved regulatory decisions?’, Working Paper 07-08, AEI-Brookings Joint Center for Regulatory Studies.
Hitch, C. J. 1967, Decision-Making for Defense, University of California Press, Berkeley and Los Angeles.
Kernot, W. C. 1906, ‘Railway gauge’, Proceedings of the Victorian Institute of Engineers, vol. VII, January to December, 1906, Victorian Institute of Engineers, Melbourne.
Krugman, P. 1994, Peddling Prosperity: Economic Sense and Nonsense in the Age of Diminished Expectations, W. W. Norton, USA.
Liebowitz, S. J. & Margolis, S. E. 1990, ‘The fable of the keys’, Journal of Law and Economics, vol. 33, pp. 1-27.
McKean, R. N. 1963, ‘Cost-Benefit Analysis and British Defence Expenditure’, Scottish Journal of Political Economy, pp. 17-35.
Mudie, I. 1965, Riverboats, Sun Books, Melbourne.
Puffert, D. J. 2000, ‘The standardization of track gauge on North American railways, 1830-1890’, Journal of Economic History, vol. 60, no. 4, pp. 933-960.
Reuss, M. 1992, ‘Coping with uncertainty: Social Scientists, Engineers, and Federal Water Resources Planning’, Natural Resources Journal, volume 32, Winter, pp. 102-135.
Shabman, L. 1997, ‘Making benefit estimation useful: lessons from flood control experience’, Water Resources Update [Universities Council on Water Resources], vol. 109, Autumn, pp. 19-24.
Thomson, M. 2003, The Cost of Defence: ASPI Defence Budget Brief 2003-04, The Australian Strategic Policy Institute Limited, Canberra.
Throsby, D. & Withers, G. A. 1999, ‘Individual preferences and the demand for military expenditure’, Defence and Peace Economics, vol. II, pp. 1-16.
Twain, M. (S. L. Clemens) 1897, Following the Equator: a Journey Around the World, The American Publishing Company, Hartford, Connecticut.
White, A. & Parker, R. P. 1999, Cost-Benefit Analysis Concepts for Insensitive Munitions Policy Implementation, AR-011-169, Commonwealth of Australia, Aeronautical and Maritime Research Laboratory, Defence Science and Technology Organisation, Melbourne.
* Paper prepared for the conference ‘Delivering better quality regulatory proposals through better cost-benefit analysis’ hosted by the Office of Best Practice Regulation on 21 November 2007. With thanks to Rod Bogaards, Scott Austin and Dr Mark Harrison for their valuable comments on a previous version of this paper. The views expressed in this paper are solely those of the author and should not be attributed to the Office of Best Practice Regulation.