4.11 Data analysis and interpretation

Although consistently meeting regulatory requirements in relation to monitoring results is an important component of leading practice, it does not, by itself, represent leading practice. Leading practice requires that analysis and interpretation of monitoring data commences early and remains an ongoing process, so that companies can identify and address problems as soon as possible, preferably before they become significant issues. For example, staff should be encouraged to note any unexpected readings as soon as possible while conducting field monitoring or when reports are received from an analysis laboratory—not days or even months later, when results are analysed in more detail for the preparation of a compliance report. Results should be assessed against ‘zones of comfort’ and ranges where risks of significant impacts could occur; this will help trigger action response plans and enable early preventive or remedial action.

As well as following routine monitoring procedures, staff should observe and report aspects that could help with subsequent data analysis and interpretation, such as the presence of:

  • sick or dead fish, when monitoring water for heavy metals, dissolved oxygen and so on
  • algal blooms, when collecting water samples for nutrient analysis
  • tree-yellowing or other possible signs of nutrient deficiency or dieback, when monitoring rehabilitation plantings or unmined native woodlands.

Unusual or extreme events, such as floods, could be filmed or photographed to record visible quality indicators, such as turbidity. Anomalies in monitoring data compared with previously measured values may indicate problems with the maintenance or calibration of monitoring equipment, which need to be identified and corrected as soon as possible.

Leading practice monitoring and data analysis require a conscious effort to go beyond routine regulatory requirements in:

  • collecting data, for example by including observational data and taking extra samples if required
  • ensuring that samples are representative of what is really happening, by adapting the monitoring schedule to the nature of the event (which rarely occurs when routine prescriptive procedures are being followed for compliance monitoring).

Early analysis of monitoring data can also be very helpful in refining monitoring procedures. A leading practice approach is to run a pilot study and analyse the data so that any problems with sampling and analysis can be identified and rectified before the monitoring program is implemented at full scale. This can include ensuring that the sampling design is consistent with the implicit assumptions of the preferred statistical analysis, understanding the sources of variation, and using power analysis to optimise the amount of sample replication and other aspects of data analysis.

So, when sampling, how do we know when we have enough samples? Determining the optimal number of samples or replicates for a study, accounting for the physical and natural variation within a site, ensures adequate statistical power to detect an effect should one exist. If a study is underpowered, the results will be inconclusive, increasing the risk of failing to detect a change that has occurred. On the other hand, collecting too many samples is a waste of resources. Leading practice monitoring programs use power and precision analyses to ensure that they provide adequate statistical power to detect meaningful effects in the most cost-effective way.

Calculating required sample sizes using a power analysis typically requires particular information and parameters: the statistical test to be used, the significance level (alpha), the desired power, the effect size, and estimates of the mean and variance (or, when the calculation is run the other way, the sample size already used). These values relate to the hypothesis being tested, which is typically a statement of whether an effect exists or not. Power analysis is used to estimate the minimum sample size needed to detect a particular effect (see the case study on determining sample size), or the realised power of a statistical test that has already been conducted but where no effect was detected (that is, whether the non-detection of the effect was reliable).
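As an illustration of these two uses, the sketch below uses Python's statsmodels package; the effect size, significance level and sample size shown are illustrative assumptions, not values drawn from this handbook:

```python
# Illustrative only: solving for sample size at the planning stage, and for
# realised power after a test has been run. All parameter values are assumptions.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()  # two-sample comparison, e.g. impact site versus reference site

# Planning: minimum replicates per group to detect a standardised effect of 0.5
# with 80% power at alpha = 0.05
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Minimum replicates per group: {math.ceil(n_required)}")

# Retrospective: realised power of a completed test that used 12 replicates per
# group, indicating how much weight a 'no effect detected' result can carry
realised_power = analysis.solve_power(effect_size=0.5, nobs1=12, alpha=0.05)
print(f"Realised power with 12 replicates per group: {realised_power:.2f}")
```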

Case study: Sample size estimated for monitoring impacts of undermining on a plant growing on damp rock faces

Epacris muelleri (family Ericaceae) is a weak, straggly shrub growing on damp, sheltered, sandstone faces in the western Blue Mountains in New South Wales where there is underground coalmining (Figure 5a). It occupies a habitat that may be sensitive to impacts related to subsidence, as well as sites where conventional sampling techniques might not be possible.

Research staff from the University of Queensland’s Centre for Mined Land Rehabilitation ran a short pilot study comprising forty 1 m2 plots in a variety of locations to understand the variation in the population. This revealed an average density of 4.6 plants per square metre, with a standard deviation of 4.3. From this, it was possible to frame hypotheses around a theoretical 30% decrease in population density as an indicator of impact on E. muelleri:

H0 (null): mean density remains at current levels (4.6 plants per m2)

H1 (alternate): mean density decreases by 30% (3.2 plants per m2)

A power analysis revealed that a minimum of 45 sample plots would be needed to detect a decrease of 30% at 80% power (a conventional rule of thumb)—see Figure 5b.
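A minimal sketch of this calculation in Python (statsmodels), assuming a one-sample t-test at the one-sided alpha of 0.10 quoted in the sampling objective below (the exact test used in the study is not stated), is:

```python
# Sketch only: reproduces the order of magnitude of the case-study result.
# The choice of a one-sample, one-sided t-test is an assumption.
import math
from statsmodels.stats.power import TTestPower

mean_density = 4.6                  # plants per m2, from the pilot study
sd_density = 4.3                    # standard deviation, from the pilot study
decline = 0.30 * mean_density       # hypothesised 30% decrease (about 1.4 plants per m2)

effect_size = decline / sd_density  # standardised effect size (Cohen's d)

n_plots = TTestPower().solve_power(
    effect_size=effect_size,
    alpha=0.10,                     # Type I error from the sampling objective
    power=0.80,                     # 80% chance of detecting the decline if it occurs
    alternative="larger",           # one-sided test for a change of this magnitude
)
print(f"d = {effect_size:.2f}; minimum plots = {math.ceil(n_plots)}")
# Yields approximately 45 plots, consistent with Figure 5b.
```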

Figure 5: (a) Sampling Epacris muelleri plants growing on damp rock faces using 1 m2 plots

Figure 5: (b) Power analysis with calculated sample size at 80% power

With a small investment in field data collection (one day), it was possible to calculate the minimum number of samples or replicates needed to detect the specified impact with the desired level of confidence. Provided the monitoring design is representative of the area to be affected, this approach can be used to evaluate and direct management practices as a project progresses. A practical example of how a power analysis can be incorporated into a monitoring design with quantified trigger points for management might read like this:

  • Management objective: Allow a decrease of no more than 30% of the 2014 cover of E. muelleri in population sampling area A between 2014 and 2017, in comparison with a control/reference site.
  • Sampling objective: Be 80% certain (power) of detecting a 30% change (effect size) in cover with a Type I error (alpha) of 0.10.
  • Management response: A decline of 30% will trigger a study to determine the cause of the change. If mining activity is determined to be the cause of the decline, it may be necessary to modify the mining techniques used (such as the void width, the location of mining areas or the orientation of the mining layout) so as to avoid or minimise further impact. Remediation may also be required.

The parameters in a power analysis can be adjusted further, depending on the conditions, limitations and trade-offs involved in designing a successful monitoring program; Elzinga et al. (1998) describe methods for adjusting these parameters.

Alternatively, precision analysis can be used to determine the minimum effect size (difference from the control mean) that can be detected with adequate power for a given sample size. This can be particularly useful where the number of samples that can be taken is constrained by a limited budget or the availability of the monitoring target (such as rare organisms or rare habitat types). The methods used for calculating sample size or precision can be quite complicated, but fortunately there are a number of guides and free software packages available online. Free online monitoring manuals with chapters on power analysis include Barker (2001), Elzinga et al. (1998), Harding & Williams (2010), Herrick et al. (2005) and Wirth & Pyke (2007). A very good overview of the importance of power analysis is provided by Fairweather (1995). Also useful are the online statistical reference McDonald (2009) and the free software packages G*Power and PowerPlant. Thomas & Krebs (1997) list 29 software programs capable of undertaking power analysis.
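Where the achievable sample size is fixed in advance, the same machinery can be run in reverse. The sketch below (with illustrative values) solves for the smallest standardised effect detectable with 80% power from 20 plots:

```python
# Precision analysis sketch: sample size is fixed by budget or habitat availability,
# so solve for the minimum detectable effect size instead. Values are assumptions.
from statsmodels.stats.power import TTestPower

min_detectable_d = TTestPower().solve_power(
    nobs=20,               # plots that can realistically be surveyed
    alpha=0.10,
    power=0.80,
    alternative="larger",  # one-sided test for a decline
)
print(f"Smallest detectable standardised effect with 20 plots: d = {min_detectable_d:.2f}")
# Multiplying d by the pilot-study standard deviation expresses the detectable
# change in the original measurement units.
```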

Data should be analysed as soon as possible to ensure that rapid feedback is available to operators and stakeholders and that any identified problems can be addressed promptly. Standard practice requires that the data be analysed and compared against agreed objectives and targets or standards. Leading practice goes beyond this and seeks to provide early warning of possible problems by analysing trends (either visually or using statistical analyses). Companies may choose to set internal trigger levels that are more stringent than those required for compliance, to initiate further investigation earlier.
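As a simple illustration, a minimal sketch (with hypothetical results, limits and trigger values) of screening each new result against an internal trigger set below the compliance limit, together with a basic trend check, might look like this:

```python
# Hypothetical values throughout; the limits, trigger and results are illustrative.
from scipy.stats import linregress

compliance_limit = 100.0   # regulatory limit for the parameter (e.g. mg/L)
internal_trigger = 80.0    # more stringent in-house trigger for early action

results = [62.0, 66.0, 71.0, 78.0, 83.0, 88.0]   # successive monitoring rounds

for i, value in enumerate(results, start=1):
    if value > compliance_limit:
        print(f"Round {i}: {value} exceeds the compliance limit - report and act")
    elif value > internal_trigger:
        print(f"Round {i}: {value} exceeds the internal trigger - investigate early")

# A rising slope across rounds can flag a developing problem before any exceedance occurs.
slope = linregress(range(len(results)), results).slope
print(f"Trend: {slope:+.1f} units per monitoring round")
```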

The use of agreed statistical procedures will often be needed to analyse and correctly interpret the data obtained using carefully designed monitoring programs. This can result in a more robust determination of whether objectives and targets have been met and help resolve situations where legal issues may be involved. However, even when statistical procedures have been agreed on, it is essential that exploratory data visualisation (such as graphs, tables and GIS plots) is done to examine patterns and trends and, if appropriate, that investigative statistical analyses are conducted to ensure that changes are detected early. This can also help confirm the applicability of the agreed statistical analyses.

In some situations, small sample sizes or other limitations may preclude the use of some conventional statistical analyses (such as analysis of variance). This applies especially to cases in which there is a consistent trend through time. In such instances, analyses of trends and other procedures may be needed to detect changes. The use of Bayesian statistics has recently transformed analyses of small sample sizes, and a number of robust classical statistical tools may also be suitable.
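For example, a simple nonparametric trend test (a Mann-Kendall-style check, computed here as Kendall's tau against time) remains informative with only a handful of observations; the data values in this sketch are purely illustrative:

```python
# Illustrative data only: a short annual series of a monitored indicator.
from scipy.stats import kendalltau

years = [2014, 2015, 2016, 2017, 2018]
cover = [42.0, 40.5, 38.0, 36.5, 33.0]   # e.g. percentage vegetation cover

tau, p_value = kendalltau(years, cover)
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.3f}")
# A strongly negative tau with a small p-value indicates a consistent downward
# trend, even where the sample is too small for conventional analysis of variance.
```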

Whatever the case, statistical methods are simply hypothesis-testing or hypothesis-generating tools and are no substitute for the examination of quality data from an informed environmental science viewpoint. Routine, mechanical, statistical testing of compliance may be standard practice, but leading practice requires data interpretation that takes into account an understanding of the processes in the receiving environment and the mechanisms of action of the stressors of concern.

Therefore, most leading practice monitoring programs and practices will include sound experimental design and statistical analyses, but can also include simple field trials and detailed observations that may help greatly in understanding the causes of impacts and the processes of recovery.

As well as being used in compliance checking, the analysis of results from monitoring programs should be used to investigate any trends that may be developing in the frequency of occurrence of, for example, noncompliance with a water quality parameter. An increasing frequency of failure can point to a developing adverse condition. Incidents can range from near misses to spills with significant environmental or safety impacts. Recording the details, impacts and frequency of events and analysing this information in relation to operating procedures can be useful for both reporting and improving performance. It is standard practice to record these details at sites with AS/NZS ISO 14001:2004-compliant EMSs. Leading practice takes this a step further by analysing the data and acting on the results of the analyses.
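A minimal sketch of such a frequency check, using a made-up log of exceedance events, might simply tally events by year and look for an increase:

```python
# Hypothetical incident log: each entry is the year and quarter of an exceedance
# or near miss recorded in the site's EMS.
from collections import Counter

events = ["2022-Q1", "2022-Q3", "2023-Q1", "2023-Q2", "2023-Q3",
          "2023-Q4", "2024-Q1", "2024-Q1", "2024-Q2", "2024-Q2"]

per_year = Counter(event.split("-")[0] for event in events)
for year in sorted(per_year):
    print(f"{year}: {per_year[year]} events")
# A count that rises year on year points to a developing adverse condition and
# should prompt a review of the relevant operating procedures.
```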

As well as the obvious aspects of the interpretation of analyses, such as determining whether objectives, targets and standards have been met, leading practice includes a strong focus on continuous improvement. Leading practice companies clearly understand that monitoring provides the information needed to identify problems and to assess the effectiveness of mitigation measures. Procedures are set in place to ensure that the findings of monitoring programs are reviewed by company environmental and operations staff. Results are inspected in conjunction with records of events (such as a change in operating procedure) and actions taken (for example, to explain an unexpected rehabilitation outcome) in order to determine causes and explain results. In some cases, further investigation, monitoring or research, including root cause analysis, may be required. Modifications to the monitoring program may also be needed and should be considered on a regular basis.

Frequent objective analysis and interpretation of data, with a strong focus on continuous improvement, will result in better environmental, economic and social outcomes.
