Comparing impact, mitigation, and adaptation costs that occur at different points in time requires that they be discounted. There is longstanding debate about the appropriate discount rate to use (e.g., Arrow et al., 1996; Portney and Weyant, 1999). Uncertainty regarding the discount rate relates not to the calculation of its effects, which is mathematically precise, but to a value judgment about how the present generation should value services that accrue to future generations (see Section 2.3.1 for elaboration).
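The mechanics themselves are standard. As a point of reference (the 5% rate used below is purely illustrative), a value V accruing t years in the future has present value under conventional exponential discounting at a constant annual rate r:

$$ PV = \frac{V_t}{(1+r)^t} $$

At r = 0.05, for example, (1+r)^100 ≈ 131, so a damage occurring a century hence carries less than 1% of its nominal weight; the value judgment enters entirely through the choice of r.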
Two different approaches to discounting are presented in the SAR (Arrow et al., 1996). The descriptive approach focuses on intertemporal cost-efficiency, and the discount rate is based on observed market interest rates. The prescriptive approach emphasizes that normative issues are involved in valuing the future. One important problem for both approaches is that neither future market interest rates nor future income levels can be observed, at least over the time horizons involved in the climate change debate. Most analysts have resolved this dilemma by using constant discount rates over the entire horizon, even though rates are likely to change. Others have suggested or used non-fixed discount rates that apply strong short-term discounting but entail little further discounting for the very long-term future (e.g., Azar and Sterner, 1996; Heal, 1997). Such rates would cause events a decade or two hence to be significantly discounted but would not cause events a century hence to be reduced in value by powers of 10, as is the case with conventional exponential (compound interest) discounting. Because the largest costs from climate change usually are believed to occur many decades in the future, conventional discounting renders the present value of such future damages very small, whereas non-fixed discount rates (e.g., “hyperbolic discounting”) would cause present generations to take serious notice of very large potential damages, even a century hence. Because both the value of the discount rate and the choice of discounting approach involve value judgments about the ethics of intergenerational transfers, it is important for all assessments to be clear about which discounting formulations have been used and how sensitive their conclusions are to alternative formulations.
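A minimal numerical sketch of this contrast follows; the 5% exponential rate and the simple hyperbolic form 1/(1 + kt) are illustrative assumptions, not parameterizations taken from the studies cited above.

```python
# Illustrative comparison of exponential vs. "hyperbolic" discount factors.
# The 5% rate and the hyperbolic form 1/(1 + k*t) are assumptions for
# exposition only, not parameter choices from the cited literature.

def exponential_factor(t, r=0.05):
    """Conventional compound-interest discount factor after t years."""
    return 1.0 / (1.0 + r) ** t

def hyperbolic_factor(t, k=0.05):
    """A non-fixed-rate form: strong near-term, weak long-term discounting."""
    return 1.0 / (1.0 + k * t)

for t in (10, 100):
    print(f"t = {t:>3} yr: exponential = {exponential_factor(t):.4f}, "
          f"hyperbolic = {hyperbolic_factor(t):.4f}")

# t =  10 yr: exponential = 0.6139, hyperbolic = 0.6667
# t = 100 yr: exponential = 0.0076, hyperbolic = 0.1667
```

Under these assumed parameters, a century-hence damage retains roughly 17% of its weight under the hyperbolic form versus less than 1% under compound-interest discounting, which is the mechanism behind the qualitative difference described above.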
Several issues raised in this section are discussed primarily in the report of Working Group III. However, because this chapter is intended to provide a context for impact, adaptation, and vulnerability issues, this section briefly reviews several emissions abatement complexities that bear on adaptation/mitigation tradeoff issues (see Section 1.4.4.2 and Chapter 2). Because estimates of the monetary costs of impacts span a wide range of values given the many uncertainties and often are value laden, some analysts have argued that climate change targets should be based on physical or social, rather than economic, indicators, such as past fluctuations in temperature, expected climate-related deaths, or some general reference to sustainability or the precautionary principle (see Section 1.5.4). This precautionary approach is used in European negotiations on emissions of acidifying substances and is acknowledged in Article 3, paragraph 3, of the UNFCCC, which states as a principle that “The Parties should take precautionary measures to anticipate, prevent or minimize the causes of climate change and mitigate its adverse effects. Where there are threats of serious or irreversible damage, lack of full scientific certainty should not be used as a reason for postponing such measures....” Such threshold levels (see Section 1.4.3.5) also have been used as upper ceilings on the amount of warming considered “tolerable” in the academic sphere (see Alcamo and Kreileman, 1996; Azar and Rodhe, 1997) and the political sphere (for instance, the European Union has adopted a maximum of 2°C temperature change above pre-industrial levels or a maximum CO2 concentration target of 550 ppm). Implicit in this approach is the assumption that damage functions may be highly nonlinear. One drawback of this approach is that the necessary tradeoffs between climate damage avoidance and the opportunity costs of resources used to mitigate that climate change often are not made explicit.
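One way to make this implicit assumption concrete is a stylized damage function with a discontinuity at a threshold; the form below is purely illustrative and is not drawn from the studies cited above:

$$
D(T) =
\begin{cases}
\alpha T^{\beta}, & T < T^{*} \\
\alpha T^{\beta} + \Delta, & T \geq T^{*}
\end{cases}
$$

Here T is warming above pre-industrial levels, T* is a threshold (e.g., 2°C), and Δ is a large additional damage triggered at the threshold. If Δ is believed to be large, capping warming at T* can be defended without monetizing the damages that would occur beyond it.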
Even if the precautionary approach were taken, cost-efficiency analysis would still be needed to identify the lowest cost of meeting the predefined target. Several studies have argued that “where” and “when” flexibility in emissions reductions can greatly reduce the costs of those reductions (Wigley et al., 1996). Ha-Duong et al. (1997) and Goulder and Schneider (1999) show that preexisting market failures in the energy sector could substantially reduce the costs of immediate climate policies, and that neglecting the technological change induced by the incentives of immediate climate policies could reverse the conclusion that delayed abatement is more cost-effective. Unfortunately, there is very little literature on how climate policies might induce technological change (see WGIII TAR). Another reason for the controversy in the literature about abatement timing is a misreading of Wigley et al. (1996) as implying that they do not endorse efforts over the next 30 years to make abatement cheaper in the future; in fact, they do. Azar (1998) argues, however, that if stabilization targets were at or below 450 ppm CO2, early abatement (not just efforts to make future abatement cheaper) would be cost-efficient, even in the Wigley et al. (1996) model.
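A stylized “when”-flexibility calculation illustrates what is at stake; all numbers below (unit costs, a 5% discount rate, the assumed learning effect) are invented for exposition and are not taken from Wigley et al. (1996) or the other studies cited above.

```python
# Stylized comparison of the present value of two abatement cost streams
# meeting the same cumulative reduction. All figures are invented for
# illustration; they are not drawn from the cited literature.

def present_value(costs, r=0.05):
    """Discount a list of (year, cost) pairs back to year 0."""
    return sum(c / (1.0 + r) ** t for t, c in costs)

# Early abatement: moderate costs starting now; assume learning-by-doing
# induced by early action halves unit costs after year 20.
early = [(t, 10.0 if t < 20 else 5.0) for t in range(0, 50)]

# Delayed abatement: nothing for 20 years, then steep costs with no
# accumulated learning.
delayed = [(t, 0.0) for t in range(0, 20)] + [(t, 25.0) for t in range(20, 50)]

print(f"PV early   : {present_value(early):.1f}")
print(f"PV delayed : {present_value(delayed):.1f}")

# PV early   : 161.3
# PV delayed : 152.1
```

With these assumed numbers, delay looks marginally cheaper despite its much higher undiscounted costs; strengthening the assumed learning effect (e.g., a unit cost of 2 after year 20) reverses the ranking, which is the crux of the induced-technological-change argument.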
Furthermore, the problem of valuing impacts in monetary terms cannot be avoided entirely even under the cost-efficiency approach. Different trajectories toward the stabilization target have different impacts and costs associated with them. How does delaying mitigation affect the impacts, including distributive consequences? The answer to this question is unclear, partly because of large remaining uncertainties about the extent to which rapid forcing of the climate system could trigger threshold events (e.g., Tol, 1995). Moreover, the difference in impacts between early and delayed mitigation responses appears to be sensitive to assumptions about sulfate aerosol cooling and whether small transient temperature differences can have significant effects.
Validation of models and assessments that deal with projections over many decades is a serious issue. In the context of sustainable development, it often is not helpful to suggest postponing policy responses until model predictions can be compared directly against reality, because that would require experiencing the consequences without amelioration. Instead, models and assessments are subjected to varying levels of quality control, intercomparison with standard assumptions, comparisons with experiments, and extensive peer review. Some authors have argued (e.g., Oreskes et al., 1994) that it is impossible in principle to “validate” models for future events when the processes that determine the model projections contain structural uncertainties (see Boxes 1-1 and 2-1). Although the impossibility of direct before-the-fact validation is strictly true, this does not mean that models cannot be rigorously tested. Several stages are involved. First, how well known are the data used to construct model parameters? Second, have the individual processes been tested against laboratory experiments, field data, or other more comprehensive models? Third, has the overall simulation skill of the model been tested against known events? Fourth, has the model been tested for sensitivity to known shocks (e.g., an oil price hike in an economic model or an abrupt paleoclimatic change in a climate model)? For example, crop yield models are tested against actual yield variation data (Chapter 5), and sea-level rise models are tested for their ability to reproduce observed changes in the 20th century. The ability of a model to reproduce past conditions is a necessary, but not necessarily sufficient, condition for a highly confident forecast of future conditions; sufficiency also requires that the underlying processes that gave rise to the phenomena observed in the past remain fully operative in the future and that the model captures their influence. Finally, has the comparison between model and data been done at commensurate scales, so that small-scale data are first aggregated to the scale of the lowest resolved element of the model before evaluation is attempted (e.g., Root and Schneider, 1995)? When such validation protocols are carried out and a model performs “well,” the subjective confidence that assessment teams can assign to projections based on such models increases considerably (see Section 2.6), even if “definitive proof” of a specific forecast before the fact is impossible in principle.
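As a minimal sketch of the commensurate-scales step in this protocol, fine-scale observations can be averaged up to the model's resolved scale before the hindcast is scored; the NumPy aggregation and the RMSE skill metric below are illustrative choices, not a prescribed standard.

```python
# Sketch of scale-commensurate model evaluation (cf. Root and Schneider,
# 1995): aggregate fine-scale observations to the model grid, then score.
# The synthetic data and the RMSE metric are illustrative assumptions.
import numpy as np

def aggregate_to_model_scale(obs, block):
    """Average fine-scale observations over blocks of `block` cells."""
    n = (len(obs) // block) * block           # drop any ragged remainder
    return obs[:n].reshape(-1, block).mean(axis=1)

def rmse(model, obs):
    """Root-mean-square error between hindcast and observations."""
    return float(np.sqrt(np.mean((model - obs) ** 2)))

rng = np.random.default_rng(0)
fine_obs = rng.normal(15.0, 2.0, size=400)    # e.g., 400 station records
model_hindcast = np.full(40, 15.2)            # 40 coarse grid cells

coarse_obs = aggregate_to_model_scale(fine_obs, block=10)
print(f"RMSE at commensurate scale: {rmse(model_hindcast, coarse_obs):.2f}")
```

Scoring the 40-cell hindcast directly against the 400 raw station records would conflate model error with small-scale variability the model never resolves, which is why the aggregation is done first.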
All of these considerations demonstrate how the complexities of analysis have led Working Group II TAR authors to emphasize risk management approaches to climate change and policy assessment, rather than just an optimizing framework (e.g., see Section 2.7). These complexities are problematic not only for the assessment of impacts, vulnerabilities, and adaptability; they also carry forward to questions of tradeoffs between investments in adaptation and mitigation strategies, connecting the purviews of Working Groups II and III.