AI system not sufficiently secure
- Directors’ duties (e.g. to exercise powers and discharge duties with due care and diligence) require directors to assess and govern risks to the organisation, including non-financial risks such as those arising from AI and data.
- Privacy laws require steps that are reasonable in the circumstances to protect personal information, and impose data minimisation obligations to destroy or de-identify information that is no longer needed.
- The Security of Critical Infrastructure Act and sector-specific laws (e.g. in financial services) impose risk management and cybersecurity obligations.
- Negligence, if a failure in risk management practices amounts to a failure to take reasonable steps to avoid foreseeable harm to people owed a duty of care, and that failure causes the harm.
- Online safety laws, if certain online service providers fail to take pre-emptive and preventative actions to minimise harms from online services.

Misleading outputs / statements
The Australian Consumer Law prohibitions against unfair practices (e.g. misleading and deceptive conduct and false and misleading representations) may apply:
- if the outputs are misleading (e.g. deceptive use of deepfakes)
- to misleading representations or silence as to when AI is being used
- to misleading statements as to the performance and outputs of AI systems

Harmful outputs
- Product liability (where the organisation is a manufacturer), if outputs result in harm caused by a safety defect (e.g. a defect in the design, model, manufacturing or testing of the system, including a failure to address bias or cybersecurity risk), as well as other product safety laws (including recall and reporting obligations).
- Negligence, if an organisation fails to exercise the standard of care of a reasonable person to avoid foreseeable harm to persons to whom it owes a duty of care, and that failure causes the harm.
- Work health and safety laws, where outputs introduce physical or psychosocial risks or harms to workers.
- Criminal laws, if the output resulted in, or aided or abetted, the commission of a crime.
- Online safety laws, if the outputs are restricted or harmful online content (such as cyberbullying or cyber-abuse material, non-consensually shared intimate images, or child sexual abuse material).
- Defamation laws, if the outputs are defamatory and the organisation participated in the process of making the defamatory material available (such as by making the tool available or training it) rather than merely disseminating the content.

Misuse of data or infringement of model or system
- Intellectual property laws (including copyright), privacy laws, duties of confidence and contractual obligations protect against the use, reproduction and/or disclosure of data (including training data, input data and outputs) and of the model or system without the requisite consents or rights.
- Privacy laws regulate the collection, use and disclosure of personal information; impose transparency obligations (with specific provisions for some automated decision making to apply from 10 December 2026) and data minimisation requirements on the handling of personal information; and provide for a statutory tort for serious invasions of privacy, which commenced on 10 June 2025.
- The Australian Consumer Law prohibitions against misleading and deceptive conduct, unconscionable conduct and false and misleading representations may apply to unfair data collection and use practices.

Bias, incorrect or poor-quality output
- Privacy laws impose quality and accuracy obligations that may apply to training and input data (that is personal information) and to outputs (where new personal information is generated).
- Systems that produce inaccurate or erroneous outputs, such as ‘AI hallucinations’, may breach statutory guarantees under the Australian Consumer Law (e.g. that consumer goods be of acceptable quality and fit for purpose, or that consumer services be rendered with due care and skill).
- Anti-discrimination laws (including the Fair Work Act), if outputs exclude, or disproportionately and adversely affect, an individual or group on the basis of a protected attribute. Organisations should also ensure they meet obligations in enterprise agreements where applicable.

AI system not accessible to individual or group
- Anti-discrimination laws, if the exclusion is based on a protected attribute.
- Prohibitions on unconscionable conduct under the Australian Consumer Law, if the exclusion of a consumer is so harsh that it goes against good conscience.
- Essential services obligations, e.g. where the AI system is used in essential services such as energy and telecommunications.

Engagement with others in the AI supply chain
- Privacy laws require organisations to be open and transparent in managing personal information, including through privacy policies setting out where personal information is collected from, or disclosed to, third parties.
- The Australian Consumer Law prohibitions on unfair practices (e.g. misleading and deceptive conduct) and on unfair contract terms apply to how an organisation engages with consumers and other businesses.
- The Australian Consumer Law statutory guarantees (e.g. that consumer goods be of acceptable quality and fit for purpose, or that consumer services be rendered with due care and skill) apply to business-to-business relationships where a party meets the test of a consumer.
- Prohibitions on anti-competitive and restrictive trade practices under competition laws apply to how organisations engage in trade or commerce, including the use of AI systems to engage in anti-competitive conduct.
- Product liability laws may require manufacturers to indemnify suppliers under the statutory guarantees, and proportionate liability laws can limit the liability of concurrent wrongdoers to their proportionate contribution.