Appendix 1: Ten principles of sampling

The 10 principles of sampling were originally set out by RH Green in 1979 (Sampling Design and Statistical Methods for Environmental Biologists, John Wiley & Sons, New York). Although this is an old reference, the principles of sound experimental design have not changed, and they are worth restating because companies and their advisers still occasionally design and run monitoring programs that are not suited to rigorous analysis and unequivocal interpretation of the findings. Leading practice requires that the 10 principles be taken into account when designing quantitative monitoring programs (additional notes are provided in parentheses below). More detail on experimental design is provided in Section 3.2.

  1. Be able to state concisely to someone else what question you are asking.
  2. Take replicate samples within each combination of time, location and any other controlled variables. Differences among sites can only be demonstrated by comparison with differences within sites. (Take care to avoid pseudoreplication.)
  3. Take an equal number of randomly allocated replicate samples for each combination of controlled variables. Sampling in ‘representative’ or ‘typical’ places is not random sampling.
  4. To test whether a condition has an effect, collect samples both where the condition is present and where the condition is absent but all else is the same. An effect can only be demonstrated by comparison with a control. (Note: The definition of control and reference sites varies but in this instance the use of ‘control’ refers to comparing potentially affected sites with unaffected sites using conventional statistical procedures.)
  5. Carry out some preliminary sampling to provide a basis for evaluation of sampling design and statistical analysis options.
  6. Verify that your sampling device is sampling the population that you think you are sampling, and with equal and adequate efficiency over the entire range of sampling conditions to be encountered (e.g. aquatic invertebrates).
  7. If the area to be sampled has a large-scale pattern, break the area into relatively homogeneous subareas and allocate samples to each in proportion to the size of the subarea (‘stratification’).
  8. Verify that your sample unit size is appropriate to the size, densities and spatial distributions of the organisms you are sampling. Then estimate the number of replicate samples required to obtain the precision you want.
  9. Test your data to determine whether the error variation is homogeneous, normally distributed and independent of the mean. If it is not, as will be the case for most field data, then (a) appropriately transform the data, (b) use a distribution-free (nonparametric) procedure, (c) use an appropriate sequential sampling design, or (d) test against simulated null hypothesis (H0) data.
  10. Having chosen the best statistical method to test your hypothesis, stick with the result. An unexpected or undesired result is not a valid reason for rejecting the method and hunting for a ‘better’ one.
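The proportional allocation described in principle 7 can be sketched in a few lines of Python. The stratum names and areas below are hypothetical illustrative values, not taken from any particular monitoring program; a largest-remainder step keeps the total number of samples fixed after rounding.

```python
# Sketch of proportional allocation of samples across strata (principle 7).
# Stratum names and areas are hypothetical.

def proportional_allocation(areas, total_samples):
    """Allocate samples to strata in proportion to stratum area."""
    total_area = sum(areas.values())
    raw = {k: total_samples * a / total_area for k, a in areas.items()}
    alloc = {k: int(v) for k, v in raw.items()}
    # Largest-remainder rounding so the allocations sum to total_samples.
    leftover = total_samples - sum(alloc.values())
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True):
        if leftover == 0:
            break
        alloc[k] += 1
        leftover -= 1
    return alloc

areas = {"riffle": 6.0, "pool": 3.0, "backwater": 1.0}  # hectares (hypothetical)
print(proportional_allocation(areas, 20))
# -> {'riffle': 12, 'pool': 6, 'backwater': 2}
```

Within each stratum the allocated samples should still be placed at random, per principle 3.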
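For principle 8, a common first approximation of the replicate number needed for a target precision uses the pilot standard deviation s and the desired confidence-interval half-width d, with n roughly (z s / d) squared. This is a simplified sketch (it uses the normal z rather than iterating with the t distribution), and all input values are hypothetical.

```python
# Sketch of estimating replicate numbers for a target precision (principle 8):
# n is approximately (z * s / d)^2, where s is the pilot standard deviation and
# d the desired half-width of a confidence interval for the mean.
import math

def replicates_needed(pilot_sd, half_width, z=1.96):
    """Approximate n for a 95% CI of +/- half_width around the mean."""
    return math.ceil((z * pilot_sd / half_width) ** 2)

# Hypothetical pilot data: sd = 8 organisms per sample, target precision +/- 4.
print(replicates_needed(pilot_sd=8.0, half_width=4.0))  # -> 16
```

Halving the desired half-width roughly quadruples the required number of replicates, which is why the preliminary sampling of principle 5 matters before budgets are fixed.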
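The diagnostic step in principle 9 can be illustrated with a quick check of whether variance tracks the mean. The replicate counts below are hypothetical; for count-like field data the raw variance typically grows with the mean, and a log(x + 1) transform (one of the options under 9a) often stabilises it.

```python
# Illustrative check that error variance tracks the mean for raw counts
# (principle 9), and that a log(x + 1) transform helps stabilise it.
# The replicate counts per site are hypothetical.
import math
import statistics

sites = {
    "site_A": [3, 5, 4, 6],
    "site_B": [20, 35, 28, 25],
    "site_C": [110, 150, 90, 130],
}

for name, counts in sites.items():
    raw_var = statistics.variance(counts)
    log_var = statistics.variance([math.log(c + 1) for c in counts])
    print(f"{name}: mean={statistics.mean(counts):.1f} "
          f"raw var={raw_var:.1f} log var={log_var:.3f}")
```

If no transform achieves homogeneous, normally distributed errors, the nonparametric, sequential-design or simulation options in 9(b)-(d) remain.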