The response to anthropogenic changes in climate forcing occurs against a backdrop of natural internal and externally forced climate variability that can occur on similar temporal and spatial scales. Internal climate variability, by which we mean climate variability not forced by external agents, occurs on all time-scales from weeks to centuries and millennia. Slow climate components, such as the ocean, have particularly important roles on decadal and century time-scales because they integrate high-frequency weather variability (Hasselmann, 1976) and interact with faster components. Thus the climate is capable of producing long time-scale internal variations of considerable magnitude without any external influences. Externally forced climate variations may be due to changes in natural forcing factors, such as solar radiation or volcanic aerosols, or to changes in anthropogenic forcing factors, such as increasing concentrations of greenhouse gases or sulphate aerosols.
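To make the Hasselmann (1976) mechanism concrete, the following sketch integrates white-noise "weather" forcing with a single damped slow component; all parameter values (damping rate, noise amplitude, time-step) are illustrative assumptions, not taken from any particular model or study.

```python
import numpy as np

# Sketch of the Hasselmann (1976) mechanism: a slow climate component
# (here a single damped temperature anomaly T) integrates white-noise
# "weather" forcing, dT/dt = -lambda * T + f(t).  All parameter values
# are illustrative.
rng = np.random.default_rng(0)
n_days = 365 * 100            # a century of daily steps
dt = 1.0                      # time-step (days)
lam = 1.0 / 300.0             # damping rate (1/days): a slow, ocean-like response
forcing = rng.normal(0.0, 0.1, n_days)   # high-frequency weather noise

T = np.zeros(n_days)
for t in range(1, n_days):
    T[t] = T[t - 1] + dt * (-lam * T[t - 1] + forcing[t])

# The integrated response concentrates variance at low frequencies:
# annual means of T fluctuate far more than the forcing amplitude,
# i.e. substantial long time-scale variability arises with no change
# in external forcing at all.
annual_means = T.reshape(100, 365).mean(axis=1)
print(f"forcing std: {forcing.std():.3f}")
print(f"annual-mean T std: {annual_means.std():.3f}")
```

The qualitative point, not the particular numbers, is what carries over to the climate system: slow components redden the spectrum of the fast forcing, so low-frequency internal variability of considerable magnitude is expected even in an unforced climate.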
Definitions
The presence of this natural climate variability means that the detection and
attribution of anthropogenic climate change is a statistical “signal-in-noise”
problem. Detection is the process of demonstrating that an observed change is
significantly different (in a statistical sense) from what can be explained by natural
internal variability. However, the detection of a change in climate does not
necessarily imply that its causes are understood. As noted in the SAR, the unequivocal
attribution of climate change to anthropogenic causes (i.e., the isolation of
cause and effect) would require controlled experimentation with the climate
system in which the hypothesised agents of change are systematically varied
in order to determine the climate’s sensitivity to these agents. Such an
approach to attribution is clearly not possible. Thus, from a practical perspective,
attribution of observed climate change to a given combination of human activity
and natural influences requires another approach. This involves statistical
analysis and the careful assessment of multiple lines of evidence to demonstrate,
within a pre-specified margin of error, that the observed changes are:
- unlikely to be due entirely to internal variability;
- consistent with the estimated responses to the given combination of anthropogenic and natural forcing; and
- not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of that combination of forcings.
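As a schematic of this "signal-in-noise" test, the sketch below compares a hypothetical observed trend against the distribution of trends arising from internal variability alone, with red noise standing in for segments of a long unforced control simulation; every number in it is illustrative.

```python
import numpy as np

# Schematic detection test: is a hypothetical observed trend larger
# than can be explained by internal variability alone?
rng = np.random.default_rng(1)

def linear_trend(series):
    """Least-squares trend per time-step."""
    t = np.arange(series.size)
    return np.polyfit(t, series, 1)[0]

def ar1_segment(n, phi=0.6, sigma=0.1):
    """Red-noise (AR(1)) stand-in for a control-run segment."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

n_years = 50
null_trends = np.array([linear_trend(ar1_segment(n_years))
                        for _ in range(1000)])

observed_trend = 0.015   # hypothetical observed trend (deg C per year)
# Two-sided test: how often does internal variability alone produce a
# trend at least this large in magnitude?
p_value = np.mean(np.abs(null_trends) >= abs(observed_trend))
print(f"p-value against internal variability alone: {p_value:.3f}")
```

A small p-value supports detection only; as the text emphasises, attribution additionally requires showing consistency with the estimated response to the hypothesised forcings and inconsistency with plausible alternatives.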
Limitations
It is impossible, even in principle, to distinguish formally between all conceivable
explanations with a finite amount of data. Nevertheless, studies have now been
performed that include all the main natural and anthropogenic forcing agents
that are generally accepted (on physical grounds) to have had a substantial
impact on near-surface temperature changes over the 20th century. Any statement
that a model simulation is consistent with observed changes can only apply to
a subset of model-simulated variables, such as large-scale near-surface temperature
trends: no numerical model will ever be perfect in every respect. To attribute
all or part of recent climate change to human activity, therefore, we need to
demonstrate that alternative explanations, such as pure internal variability
or purely naturally forced climate change, are unlikely to account for a set
of observed changes that can be accounted for by human influence. Detection
(ruling out that observed changes are only an instance of internal variability)
is thus one component of the more complex and demanding process of attribution.
In addition to this general usage of the term detection (that some climate change
has taken place), we shall also discuss the detection of the influence of individual
forcings (see Section 12.4).
Detection and estimation
The basic elements of this approach to detection and attribution were recognised
in the SAR. However, detection and attribution studies have advanced beyond
addressing the simple question “have we detected a human influence on climate?”
to such questions as “how large is the anthropogenic change?” and
“is the magnitude of the response to greenhouse gas forcing as estimated
in the observed record consistent with the response simulated by climate models?”
The task of detection and attribution can thus be rephrased as an estimation
problem, with the quantities to be estimated being the factor(s) by which we
have to scale the model-simulated response(s) to external forcing to be consistent
with the observed change. The estimation approach uses essentially the same
tools as earlier studies that considered the problem as one of hypothesis testing,
but is potentially more informative in that it allows us to quantify, with associated
estimates of uncertainty, how much different factors have contributed to recent
observed climate changes. This interpretation only makes sense, however, if
it can be assumed that important sources of model error, such as missing or
incorrectly represented atmospheric feedbacks, affect primarily the amplitude
and not the structure of the response to external forcing. The majority of relevant
studies suggest that this is the case for the relatively small-amplitude changes
observed to date, but the possibility of model errors changing both the amplitude
and structure of the response remains an important caveat. Sampling error in
model-derived signals that originates from the model’s own internal variability
also becomes an issue if detection and attribution is considered as an estimation
problem – some investigations have begun to allow for this, and one study
has estimated the contribution to uncertainty from observational sampling and
instrumental error. The robustness of detection and attribution findings obtained
with different climate models has been assessed.
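In its common linear form, this estimation problem can be written as a regression of the observations onto the model-simulated response patterns. The formulation below is a standard generalised least-squares sketch given for concreteness; it is not presented as the unique method used in the studies assessed.

```latex
% Observations y written as a scaled sum of model-simulated responses
% x_i to the individual external forcings, plus internal variability u:
\begin{equation}
  \mathbf{y} = \sum_{i} \beta_i \mathbf{x}_i + \mathbf{u}
\end{equation}
% Generalised least-squares estimate of the scaling factors, with C the
% covariance of internal variability (usually estimated from control
% simulations) and X the matrix whose columns are the x_i:
\begin{equation}
  \hat{\boldsymbol{\beta}}
    = \left( \mathbf{X}^{\mathsf{T}} \mathbf{C}^{-1} \mathbf{X} \right)^{-1}
      \mathbf{X}^{\mathsf{T}} \mathbf{C}^{-1} \mathbf{y}
\end{equation}
```

In this framing, detection of forcing i corresponds to a scaling factor significantly greater than zero, while consistency between the observed and model-simulated amplitudes corresponds to a scaling factor compatible with unity; the sampling noise in the model-derived signals mentioned above motivates errors-in-variables refinements of this simple least-squares form.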
Extensions
It is important to stress that the attribution process is inherently open-ended,
since we have no way of predicting what alternative explanations for observed
climate change may be proposed, and be accepted as plausible, in the future.
This problem is not unique to the climate change issue, but applies to any problem
of establishing cause and effect given a limited sample of observations. The
possibility of a confounding explanation can never be ruled out completely,
but as successive alternatives are tested and found to be inadequate, it can
be seen to become progressively more unlikely. There is growing interest in
the use of Bayesian methods (Dempster, 1998; Hasselmann, 1998; Leroy, 1998;
Tol and de Vos, 1998; Barnett et al., 1999; Levine and Berliner, 1999; Berliner
et al., 2000). These provide a means of formalising the process of incorporating
additional information and evaluating a range of alternative explanations in
detection and attribution studies. Existing studies can be rephrased in a Bayesian
formalism without any change in their conclusions, as demonstrated by Leroy
(1998). However, a number of statisticians (e.g., Berliner et al., 2000) argue
that a more explicitly Bayesian approach would allow greater flexibility and
rigour in the treatment of different sources of uncertainty.
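Schematically, the Bayesian formalism referred to here updates prior beliefs about the scaling factors in the light of the observations and compares alternative explanations through their posterior odds. The notation below is a generic sketch, not a reproduction of any of the cited studies.

```latex
% Generic Bayesian update: prior beliefs about the scaling factors are
% combined with the likelihood of the observations,
\begin{equation}
  p(\boldsymbol{\beta} \mid \mathbf{y}) \propto
  p(\mathbf{y} \mid \boldsymbol{\beta})\, p(\boldsymbol{\beta})
\end{equation}
% and alternative explanations (hypotheses H_1, H_2, e.g. natural
% forcing only versus natural plus anthropogenic forcing) are compared
% through their posterior odds:
\begin{equation}
  \frac{p(H_1 \mid \mathbf{y})}{p(H_2 \mid \mathbf{y})}
    = \frac{p(\mathbf{y} \mid H_1)}{p(\mathbf{y} \mid H_2)}
      \cdot \frac{p(H_1)}{p(H_2)}
\end{equation}
```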