Has modelling replaced science?

Mick Keogh: Australian Farm Institute

The role of science in policy-making is becoming more important as governments tackle complex environmental issues such as climate change, biodiversity conservation and sustainable land and water management. However, close scrutiny of some of the ‘science’ utilised to inform policy decisions reveals an increasing reliance on modelling rather than actual science. This raises difficult questions for governments: the results of modelling are often harder to scrutinise, and small variations in assumptions can produce large changes in outcomes. An examination of some recent case studies suggests that the reliance on modelling – in some instances because governments have not adequately resourced science agencies – can result in poor policy decisions and significant community cost. Governments need to adopt a more sceptical attitude to modelled ‘science’ when formulating future environmental policies.

Modelling the extent of dryland salinity

During the 1990s, concern about the potential risk that dryland salinity posed for Australian agriculture increased, and in 2000 Australian governments adopted the National Action Plan for Salinity and Water Quality (NAP), committing $1.4 billion in funding over seven years to tackle the problem. As part of this program, state governments were required to map priority areas of salinity risk or hazard within their jurisdictions. The various state government salinity audits were incorporated into a national salinity mapping project, and in January 2001 the Australian Dryland Salinity Assessment 2000 report (NLWRA 2001) was released.

It estimated that approximately 5.7 million hectares (ha) of land lay within regions at risk of, or affected by, dryland salinity. The report also concluded that this could increase to some 17 million ha within 50 years. While this was significantly higher than earlier estimates, it was used to allocate funding under the NAP.

A review of state salinity reports leads to the conclusion that there were major shortcomings in relation to the available data, the assumptions used in developing projections from that data, and the overall conclusions of the audit. The so-called audit relied very heavily on modelling, because the necessary data to accurately assess the extent of dryland salinity was not available.

Dryland salinity is believed to be caused by changes to land use (such as tree clearing and cropping) that result in rising groundwater tables, which bring mineral salts to the surface, where they are deposited when the groundwater evaporates. Salinity greatly reduces agricultural productivity, and causes damage to infrastructure and buildings. A key factor in understanding dryland salinity risk is depth to groundwater, which can be measured using data from bores.

The limited availability of this data was a major deficiency of the salinity audit. Apart from Western Australia, the coverage and quality of the available bore data were quite poor. In many cases, depth to groundwater information was only available from a very limited number of sites, and these were often located in lower areas of the landscape, resulting in distorted data, especially in hilly regions.

A second major deficiency was the method used to estimate future trends in groundwater levels. In many cases, groundwater trend estimates were extrapolated using only two or three bore readings taken relatively short periods of time apart – often just one or two years. Extrapolating forward over a 50-year period based on such limited data is inherently risky, as other researchers have highlighted:

Areas of shallow water levels extrapolated from two point bore water level rises suggest that approximately two thirds of the catchment could suffer from shallow water levels by the year 2100. Bore hydrographs and groundwater modelling, however, indicate that water levels have remained high and stable for up to 25 years, and that a dynamic equilibrium may exist. (Creswell et al. 2003)

Such extrapolation was even riskier given that, with the exception of Victoria and South Australia, no allowance was made for variations in groundwater levels due to seasonal rainfall conditions. The effect of seasonal rainfall was recognised in preparing the Victorian assessment, and two projections were developed: a worst-case (wet climate) and a best-case (dry climate) estimate of likely salinity extent in 50 years. The best-case estimate of 1.6 million ha was approximately half the worst-case estimate, yet in compiling the national assessment, the worst-case figure was utilised.

A third major deficiency in all except the South Australian assessment was the assumption that future groundwater trends would be a simple linear extension of observed trends. This was equivalent to assuming that the rate of increase of water levels in a filling bathtub will continue unchanged, even after the bathtub overflows! Virtually all assessment reports noted that this linear projection was an unrealistic assumption.
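To see how quickly this combination of sparse readings and unbounded linear trends can inflate a 50-year projection, consider the hypothetical sketch below. None of the readings or depths are drawn from the audit data; the example simply extrapolates a water-table trend from two bore readings, first as a straight line and then with the obvious physical ceiling at the ground surface.

```python
# Hypothetical illustration only: none of the readings or depths below are
# drawn from the salinity audit data.

def linear_projection(depth_now_m, depth_prev_m, years_between, horizon_years):
    """Extrapolate depth to groundwater from just two bore readings."""
    rise_per_year = (depth_prev_m - depth_now_m) / years_between  # metres the water table rises each year
    return depth_now_m - rise_per_year * horizon_years

def bounded_projection(depth_now_m, depth_prev_m, years_between, horizon_years):
    """Same trend, but the water table cannot rise above the ground surface (depth 0 m)."""
    return max(0.0, linear_projection(depth_now_m, depth_prev_m, years_between, horizon_years))

# Two readings taken two years apart: depth to groundwater fell from 10 m to 9.5 m.
print(linear_projection(9.5, 10.0, 2, 50))   # -3.0, i.e. 3 m above the surface: the overflowing bathtub
print(bounded_projection(9.5, 10.0, 2, 50))  # 0.0, the physical limit the linear trend ignores
```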

The Queensland audit result was perhaps the most questionable. It was clearly explained that there was little or no groundwater data available on which to base the Queensland assessment, which was a ‘hazard’ rather than a ‘risk’ assessment. The report states that the approach used will ‘result in a significant overestimation of areas at risk’ and ‘should not be directly compared with 2050 predictions using the groundwater trend approach.’ Despite this qualification, the national audit used the Queensland data to calculate salinity projections for 2050.

These and other issues were all considered in a subsequent technical review of the audit (Webb 2000). The key conclusion of that review was:

Existing monitoring and assessment systems for dryland salinity are inadequate for determining with confidence, the current and future extent of dryland salinity across the continent, or for assessing the effects of any remedial or preventative management responses.

This conclusion hardly generates great confidence in the usefulness of the audit findings, nor does it provide reassurance that the modelled outcomes should be given the high degree of credibility and recognition that they have subsequently been accorded.

Modelling environmental conditions in the Murray-Darling River system

One of the most significant environmental reforms currently being undertaken in Australia is the development of a plan of management for the waters of the Murray-Darling Basin. The process of finalising a future plan has been underway for several years, and has been the subject of considerable controversy.

At the core of disputes about the plan is the science that is being utilised for decision-making. Those arguing that the draft plan makes insufficient water available for the environment frequently refer to the ‘science’ supporting the need for more environmental water.

One key piece of ‘science’ that has been important in establishing a baseline condition assessment for the Basin Plan is the Sustainable Rivers Audit (SRA) which was released in June 2008. As explained by the Murray-Darling Basin Authority (MDBA), ‘The data collected by the SRA is a key input to the Basin Plan and other programs of the Murray-Darling Basin Authority.’ (MDBA 2012)

According to the authors of the Sustainable Rivers Audit report (Davies et al. 2008), the SRA involved two main processes. The first was a large-scale survey to collect relevant physical data from numerous sampling sites within the Murray-Darling Basin, with the data initially focused on three themes – hydrology, fish and macroinvertebrates.

The physical data used to inform the current condition assessment for each of these themes was collected between 2004 and 2007, during a period of severe drought. The authors noted, ‘A severe drought has prevailed over the Basin during the audit period. It is too soon to say how much this has affected fish and macroinvertebrate communities.’ It would seem only logical that the prevailing drought conditions would have had a negative impact on the fish and macroinvertebrate populations.

There were also limitations to the data collected to assess current environmental conditions. For example, in relation to macroinvertebrates the report notes:

The AUSRIVAS sampling method does not accurately represent the abundances of macroinvertebrates, and numeric data from samples therefore are not used here in the assessment of Condition. Nor does the method adequately sample several groups of molluscs and crustaceans, especially larger species like freshwater mussels and crayfish. These limitations will be addressed in future refinements of the sampling protocol.

In relation to hydrology, the SRA report notes:

Extensive delays in determining the location of representative sites and the delivery of modelled data by some States caused substantial problems, and there were inconsistencies between models, estimations of Reference Condition and the durations of site records. The hydrological assessments in this report therefore do not meet rigorous SRA design principles.

The second step in the process involved computer modelling to develop estimates of the ‘reference condition’ for each of these indicators – the condition assumed to have existed had there been no significant human intervention in the landscape. The SRA report explains that, ‘Historical data, expert knowledge and modelling are used where possible, but sometimes these may not be sufficient for reliable estimates of some variables.’

For the hydrology theme, the report explains:

Reference Condition for Hydrology is estimated using models run under assumptions of no direct human influence on water management (that is, with storages, diversions and inter-valley transfers set to zero). The effects of farm dams, reafforestation, land clearing, groundwater extraction and other land management activities, will be incorporated when they can be quantified.

For the fish theme, the SRA report noted:

As it is not possible to measure Reference Condition directly, it is determined by combining expert knowledge, previous research, museum collections and historical data, and is used in the calculation of several indicators.

The final step in the SRA was to compare the data detailing the current state of each environmental theme with its modelled reference condition, creating a condition assessment of good, moderate, poor, very poor or extremely poor. Assessments for each of the three themes were also combined to produce an overall indicator of ecosystem health for each river valley.

The assessment was based on how different each theme indicator was from its modelled ‘reference’ condition. Of the 23 river valleys included in the analysis, 13 were classified as having ‘very poor’ ecosystem health, seven were assessed as ‘poor’, two were assessed as ‘moderate’ and one was assessed as ‘good’.
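The extracts quoted here do not set out the SRA's actual scoring arithmetic, but the general logic – comparing observed indicators against a modelled reference and binning the deviation into condition classes – can be sketched roughly as follows. The indicator names, values and class thresholds are invented purely for illustration.

```python
# Rough sketch of the compare-to-reference logic described above. All
# indicator names, values and class thresholds are invented; they are not
# the SRA's actual metrics.

def condition_score(observed, reference):
    """Average ratio of observed indicator values to their modelled reference values (capped at 1)."""
    ratios = [min(observed[k] / reference[k], 1.0) for k in reference]
    return sum(ratios) / len(ratios)

def condition_class(score):
    if score >= 0.8:
        return "good"
    if score >= 0.6:
        return "moderate"
    if score >= 0.4:
        return "poor"
    if score >= 0.2:
        return "very poor"
    return "extremely poor"

# Hypothetical valley: observed fish and macroinvertebrate indicators versus a
# modelled 'no significant human intervention' reference condition.
reference = {"native_fish_species": 20, "macroinvertebrate_families": 35}
observed = {"native_fish_species": 7, "macroinvertebrate_families": 18}

score = condition_score(observed, reference)
print(round(score, 2), condition_class(score))  # 0.43 poor
```

Note that in a sketch like this the modelled reference sits in the denominator of every ratio, so any overstatement of the reference condition flows directly through to a poorer assessed condition.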

The reliance of this process on theoretical modelled assumptions of the state of the environment prior to human intervention, the use of physical data collected during an extreme drought, and the limitations of the physical data collection process all raise serious questions about the robustness of the modelling and science used to assess the Basin's environmental health. This, in turn, puts in serious doubt the usefulness of that modelling in calculating the amount of water that needs to be allocated to the environment to restore environmental health.

The modelled ‘pre-human’ state of the environment cannot be verified against any physical data, and is essentially a subjective assessment by ecologists and scientists of the presumed benchmark condition of the river ecosystem in good health. There is little if any opportunity for other scientists to scrutinise or contest this work, because of the limitations in available resources and data, and the cost involved.

These limitations were recognised by the authors of the SRA report; nevertheless, decisions about how much water should be allocated to the environment are being made largely on the basis of these unverifiable modelling results, rather than hard and tested science.

Climate change modelling

A third environmental issue on which there has been a very strong reliance on modelling is human-induced climate change. That all projections of future climatic changes rely on modelled outcomes should not be surprising, given the scale and complexity of the earth’s climate system, and the sheer impossibility of conducting experiments to test outcomes at a global scale. Limitations in the availability of recorded climatic data are also a constraint. Reliable climate records are rarely available for more than 100 years at any location, and the factor causing concern – rising atmospheric greenhouse gas concentrations – has only been recorded over the past 50 years.

As a result it is difficult to adequately test climate models to determine how robustly they are able to predict future climate changes. For example, it is particularly important to be able to calibrate climate models using one set of data and then validate them on independent data, to ensure the models are predictive rather than just a curve-fitting exercise. This is problematic given the slow response rates of the climate, the complexity of climate systems, and the limited climatic data sets available. These factors, plus the practice of ‘tuning’ climate models to reflect current conditions, also make it difficult for researchers not involved in the development of a specific model to check the assumptions used.
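The calibration-versus-validation point can be illustrated with a deliberately simple, non-climate example: a flexible model tuned to one period of data may match that period closely yet perform far worse on a held-out period. The synthetic series and polynomial fit below are invented solely to show that hold-out principle.

```python
# Toy illustration of calibration versus validation; the data are synthetic
# and have nothing to do with real climate records.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2012)
x = (years - 1960) / 10.0                        # rescaled time, for numerical stability
signal = 0.2 * x + rng.normal(0.0, 0.1, x.size)  # invented 'trend plus noise' series

calib = years < 1990    # period used to fit ('tune') the model
valid = ~calib          # independent hold-out period

# A flexible curve tuned to the calibration period alone.
coeffs = np.polyfit(x[calib], signal[calib], deg=7)
fitted = np.polyval(coeffs, x)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# A close fit to the data the model has seen says little about its skill on data it has not.
print("calibration RMSE:", round(rmse(fitted[calib], signal[calib]), 3))
print("validation RMSE: ", round(rmse(fitted[valid], signal[valid]), 3))
```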

While policy-makers proclaim that ‘the science is settled’, climate researchers are more circumspect, and have expended much effort to develop robust climate datasets, and to objectively test the performance of models.

A major example is the World Climate Research Programme’s Coupled Model Intercomparison Project phase 3 (CMIP3). This has involved comparisons of the performance of over 20 major climate models utilising a wide range of parameters. There have been many reports from this project, and there have also been a number of reviews conducted at a national level, such as the 2008 report by the US Climate Change Science Program (CCSP 2008). Groups such as the UK Royal Society (Knutti 2008) and the American Meteorological Society (Reichler & Kim 2008) have also published reviews on this topic.

Broadly speaking, these reviews find that current climate models are reasonably ‘skilful’ at reproducing current climatic conditions at large scales. They are also getting better at incorporating the effects of factors such as El Niño and La Niña events. However, different models exhibit different strengths and weaknesses, and no single model performs well in all respects. The limitations of the models are highlighted by the fact that the range of uncertainty in projected future temperature change (between 1.5 and 4 degrees C) associated with a doubling of carbon dioxide concentrations in the atmosphere has not been reduced significantly.
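For readers unfamiliar with the ‘doubling’ convention used in that range: climate sensitivity is quoted as the eventual warming associated with a doubling of atmospheric carbon dioxide, and under the commonly used assumption that equilibrium warming scales with the logarithm of concentration it can be translated to other concentration changes. The short sketch below simply applies that arithmetic to the 1.5–4 degree range cited above; the concentration figures are illustrative, not projections.

```python
# Simple arithmetic on the sensitivity range quoted above, assuming (as is
# commonly done) that equilibrium warming scales with the logarithm of CO2
# concentration. Concentration values are illustrative, not projections.
import math

def equilibrium_warming(sensitivity_per_doubling_c, c_start_ppm, c_end_ppm):
    doublings = math.log(c_end_ppm / c_start_ppm, 2)
    return sensitivity_per_doubling_c * doublings

for sensitivity in (1.5, 3.0, 4.0):                                # deg C per doubling of CO2
    full_doubling = equilibrium_warming(sensitivity, 280, 560)     # pre-industrial level doubled
    partial_rise = equilibrium_warming(sensitivity, 280, 450)      # a smaller, illustrative increase
    print(sensitivity, round(full_doubling, 2), round(partial_rise, 2))
```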

Global climate models were also found to have limited usefulness when downscaled to regions, to carry higher levels of uncertainty in predictions of future changes in rainfall, and to be limited in the extent to which they incorporate all known processes and feedbacks from factors such as aerosols, clouds and the terrestrial carbon cycle.

Researchers working in the area recognise the limitations of climate models, and also the dilemma inherent in communicating the uncertainty of the projections arising from these models:

There is a delicate balance between giving the most detailed information possible to guide policy versus communicating only what is known with high confidence. In the former case, all results are used, but there is a risk of the science losing its credibility if the forecasts made a few years later are entirely different or if a forecast made a few years earlier is not verified. The other option is to communicate only what we are confident about. But being conservative (i.e. not being wrong by not saying anything) may be dangerous in this context; once we are sure about certain threats, it may be too late to act. (Knutti 2008)

Is modelling replacing science?

These three examples highlight the increasing reliance of policy-makers on modelling, rather than actual science, in making decisions in response to complex environmental challenges.

No doubt modellers would protest that their models are based on science and simply provide a means of better understanding complex scientific problems when it is not possible to empirically test all aspects of a problem. While this is true, it also needs to be recognised that models often have major limitations and are subject to modification or assumptions that may bias results in a particular direction.

The modelling utilised to estimate the likely future extent of dryland salinity in Australia is a case in point. That modelling relied on limited and questionable physical data, and the promise of substantial government funding to ‘fix’ the problem created a quite strong incentive to maximise the projected salinity risk.

The modelling to establish ‘reference conditions’ for environmental factors in the Murray-Darling Basin also apparently relied on limited physical data, and it is virtually impossible for any independent person or group to determine how realistic the resulting ‘reference conditions’ are. Despite this, the modelled reference conditions were used as the basis for comparison with present conditions (during the middle of a devastating drought). Not surprisingly, this resulted in the conclusion that environmental health was poor or very poor in almost all the valleys of the Basin. This finding was, in turn, used to determine the amount of additional water needed by the environment. It is difficult to conclude that these factors, in combination, have not biased modelled outcomes in a way that paints an overly negative picture of the environmental health of the Murray-Darling Basin.

The modelling being used to better understand possible future changes in climate has, by comparison, been subject to a great deal more scrutiny. The cost and complexity of developing global climate models limit the contestability and transparency of the science underpinning them, but concerted efforts have been made to validate the performance of the models. Limited timeframes and other factors have constrained the extent to which these models can be tested, but the current slowing in the rate of warming will provide a good test of their robustness, and enable them to be further refined.

Neither of the other two Australian modelling cases discussed above was subject to the same degree of scrutiny, and in the case of the dryland salinity example, this has resulted in considerable public expenditure achieving very questionable outcomes (Pannell & Roberts 2010).

Science and modelling are not the same, and while models normally incorporate some of the science base associated with an issue, they are still just models. They are often not able to be scrutinised to the same extent as ‘normal’ science, and their cost and complexity can reduce the ability of others to contest their results. Modellers may not necessarily be purely objective, and ‘rent-seeking’ can be just as prevalent in the science community as it is in the wider economy.

The lesson for governments is that much greater caution is required when considering policy responses for issues where the main science available is based on modelled outcomes.

Governments should consider the establishment of truly independent review processes in such instances, and adopt iterative policy responses which can be adjusted as the science and associated models are improved. Ill-considered or rushed responses can result in major cost and little reward, as previous examples demonstrate.

References

CCSP (2008), Climate models: An assessment of strengths and limitations, A report by the US Climate Change Science Program, Department of Energy, Washington DC.

Davies, P, Harris, J, Hillman, T, Walker, K (2008), SRA Report 1: A report on the ecological health of rivers in the Murray-Darling Basin, prepared for the Murray-Darling Basin Ministerial Council.

Knutti, R (2008), Should we believe model predictions of future climate change?, Philosophical Transactions of the Royal Society, vol. 366, no. 1885, pp. 4647–64.

MDBA (2012), Murray-Darling Basin Authority, accessible at: www.mdba.gov.au/programs/sustainable-rivers-audit

NLWRA (2001), Australian dryland salinity assessment 2000, National Land and Water Resources Audit, Canberra.

Pannell, D, Roberts, A (2010), Australia’s national action plan for salinity and water quality: a retrospective assessment, Australian Journal of Agricultural and Resource Economics, vol. 54, issue 4, pp. 437–56.

Reichler, T, Kim, J (2008), How well do coupled models simulate today’s climate?, Bulletin of the American Meteorological Society (BAMS), March 2008.

Webb, A (2000), Australian dryland salinity assessment 2000 technical report, National Land and Water Resources Audit Dryland Salinity Project Report, Canberra.
