The legal landscape for AI in Australia

This standard and the 10 guardrails are voluntary. The standard does not seek to create new legal obligations for Australian organisations. It is designed to help organisations deploy and use AI systems within the bounds of existing Australian laws, emerging regulatory guidance and community expectations.

The table below sets out some of the existing laws of general application that affect how Australian organisations develop and deploy AI. Organisations deploying, using or relying on AI systems should be aware of these laws, and of how they may constrain or inform the use of AI.

There are also laws that may apply depending on the particular AI use case or application. These include product safety laws, motor vehicle laws, surveillance laws, and laws that apply to particular sectors or organisations, such as financial services or the medical sector. Organisations may also need to comply with the laws of non-Australian jurisdictions (for example, where the laws of another jurisdiction have extraterritorial application).

As part of their duties, directors of organisations must have a sufficient understanding of both the risks of, and the laws that apply to, their organisation’s use of AI.

Read some examples of use cases and their potential risks and harms.

AI risks or harms, and the general laws that may apply:
AI system not sufficiently secure
  • Directors’ duties (e.g. to exercise powers and discharge duties with due care and diligence) to assess and govern risks to the organisation (including non-financial risks, e.g. from AI and data).
  • Privacy laws require steps that are reasonable in the circumstances to protect personal information, and impose data minimisation obligations to destroy or de-identify information that is no longer needed.
  • The Security of Critical Infrastructure Act 2018 (Cth) and sector-specific laws (e.g. for financial services) impose risk management and cybersecurity obligations.
  • Negligence, if a failure in risk management practices amounts to a failure to take reasonable steps to avoid foreseeable harm to people owed a duty of care, and that failure causes the harm.
  • Online safety laws, if certain online service providers fail to take pre-emptive and preventative actions to minimise harms from online services. 
Misleading outputs / statements
  • The Australian Consumer Law prohibitions against unfair practices (e.g. misleading and deceptive conduct and false and misleading representations) may apply:
    • if the outputs are misleading (e.g. deceptive use of deepfakes)
    • to misleading representations or silence as to when AI is being used
    • to misleading statements as to the performance and outputs of the AI systems
Harmful outputs
  • Product liability laws (where the organisation is a manufacturer), if outputs result in harm caused by a safety defect (e.g. a defect in the design, model, manufacturing or testing of the system, including a failure to address bias or cybersecurity risks), and other product safety laws (including recall and reporting obligations).
  • Negligence, if an organisation fails to exercise the standard of care of a reasonable person to avoid foreseeable harm to persons to whom it owes a duty of care, and that failure causes the harm.
  • Criminal laws, if the output resulted in, or aided or abetted, the commission of a crime.
  • Online safety laws, if the outputs are restricted or harmful online content (such as cyberbullying or cyber-abuse material, or non-consensual sharing of intimate images or child sexual abuse material).
  • Defamation laws, if the outputs are defamatory and the organisation participated in the process of making the defamatory material available (for example, by making the tool available or by training it), rather than merely disseminating the content.
Misuse of data or infringement of model or system 
  • Privacy laws, intellectual property laws (including copyright), duties of confidence and contracts protect data (including training data, input data and outputs) and the model or system against use, reproduction and/or disclosure without the requisite consents or rights.
  • Privacy laws restrict the collection of personal information for an improper purpose, and impose transparency and data minimisation requirements on the handling of personal information.
  • The Australian Consumer Law prohibitions against misleading and deceptive conduct, unconscionable conduct, and false and misleading representations may apply to unfair data collection and use practices.
Bias, incorrect or poor-quality output
  • Privacy laws impose quality and accuracy obligations that may apply to training and input data (where it is personal information) and to outputs (where new personal information is generated).
  • Statutory guarantees under the Australian Consumer Law (e.g. that consumer goods be of acceptable quality and fit for purpose, and that consumer services be rendered with due care and skill) may be breached by systems that produce inaccurate or erroneous outputs, such as ‘AI hallucinations’.
  • Anti-discrimination laws, if outputs exclude or disproportionately affect an individual or group on the basis of a protected attribute.
AI system not accessible to individual or group 
  • Anti-discrimination laws, if the exclusion is based on a protected attribute.
  • Prohibitions on unconscionable conduct under the Australian Consumer Law, if the exclusion of a consumer is so harsh that it goes against good conscience.
  • Essential services obligations, e.g. if AI is used in essential services such as energy and telecommunications.