The government’s regulatory approach to AI will continue to build on Australia’s robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks. These frameworks are actively enforced and continuously adapted to emerging risks. Agencies and regulators will retain responsibility for identifying, assessing, and addressing potential AI-related harms within their respective policy and regulatory domains.
To support this approach, the government is establishing an AI Safety Institute (AISI). The AISI will monitor, test and share information on emerging AI capabilities, risks and harms. Its insights will support ministers, portfolio agencies and regulators to maintain safety measures, laws and regulatory frameworks that keep pace with rapid technological change. The Institute will provide independent advice to existing regulators to help ensure AI companies comply with Australian law and uphold legal standards around fairness and transparency.
The government is committed to upholding international obligations, promoting inclusive governance and maintaining a resilient regulatory environment that provides certainty to business and responds quickly to new challenges.
Managing AI risks requires a whole-of-government approach. Every organisation developing and using AI is responsible for identifying and responding to AI harms and upholding best practice. A proactive approach to emerging harms ensures that government continues to update and introduce targeted laws where needed. This approach allows us to respond quickly and effectively to emerging risks and keep Australians safe.
Our approach focuses on harnessing the opportunities of AI while putting in place practical, risk-based protections that are proportionate, targeted and responsive to emerging AI risks. By applying fit-for-purpose legislation, strengthening oversight and addressing national security, privacy and copyright concerns, we will work to keep the operation of AI systems responsible, accountable and fair. This gives businesses confidence to adopt AI responsibly while safeguarding people’s rights and protecting them from harm.
Action 7: Mitigate harms
Mitigating the potential harms of AI is essential to maintaining trust and confidence in AI applications and upholding Australians’ rights. We cannot seize the innovation and economic opportunities of AI if people do not trust it.
Australia has strong existing, largely technology-neutral legal frameworks, including sector-specific guidance and standards, that can apply to AI and other emerging technologies. The government is monitoring the development and deployment of AI and will respond to challenges as they arise, and as our understanding of the strengths and limitations of AI evolves.
The approach promotes flexibility, uses regulators’ existing expertise, and is practical and risk-based. It supports government in targeting emerging threats such as AI-enabled crime and AI-facilitated abuse, which disproportionately impact women and girls. AI has caused harm to First Nations people, including by perpetuating harmful stereotypes and through the use, misattribution and falsification of First Nations cultural and intellectual property. Genuine engagement with affected First Nations communities, including alignment with Closing the Gap reforms and Indigenous data sovereignty principles, is vital to understanding and managing these risks.
Australia has strong protections in place to address many risks, but the technology is fast-moving and regulation must keep pace. That’s why the government continues to assess the suitability of existing laws in the context of AI. We are taking targeted action against specific harms, as outlined below.
Action on AI risks and harms
The government is taking action to identify and understand AI risks and deal with AI harms, including:
- Advancing the science of AI safety: AI safety research underpins the reliability and trustworthiness of AI systems. The government is engaging domestically and internationally to build expertise and understanding of the capabilities and risks of advanced AI systems, to inform when and how to respond.
- Consumer protections for AI-enabled goods and services: The Department of the Treasury’s Review of AI and the Australian Consumer Law found that Australians enjoy the same strong consumer protections for AI products and services as they do for traditional goods and services, including safety protections. The government will consult with states and territories on the minor opportunities the review identified to clarify existing rules, and will progress those changes when appropriate.
- Reducing online harms through reforms, codes and standards: The government addresses AI-related risks through enforceable industry codes under the Online Safety Act 2021 and by criminalising non-consensual deepfake material. Further restrictions on ‘nudify’ apps and reforms to tackle algorithmic bias are also being considered.
- Reviewing application of copyright law in AI contexts: The Attorney-General’s Department is engaging with stakeholders through the Copyright and AI Reference Group to consult on possible updates to Australia’s copyright laws as they relate to AI. The government has provided certainty to Australian creators and media workers by ruling out a text and data mining exception in Australian copyright law.
- Reviewing AI regulation in healthcare: The Safe and Responsible AI in Healthcare Legislation and Regulation Review (Department of Health, Disability and Ageing 2024) is assessing the impact of AI on healthcare regulation.
- Reviewing AI regulation in medical device software: The Therapeutic Goods Administration (TGA) oversees AI used in medical device software and led the review on Clarifying and Strengthening the Regulation of Medical Device Software including Artificial Intelligence (TGA 2025).
- AI security: The Department of Home Affairs, the National Intelligence Community and law enforcement agencies will continue efforts to proactively mitigate the most serious risks posed by AI. As the national security policy lead on AI, Home Affairs has contributed to the uplift of critical infrastructure, international collaboration on AI security, and coordinating a multiagency group on synthetic biology and AI. Home Affairs also oversees the Protective Security Policy Framework (Department of Home Affairs 2025), which details policy requirements for authorising AI technology systems for non-corporate Commonwealth entities.
- Updating Australia’s privacy laws: The Attorney-General is leading work to develop a modernised and clear Privacy Act 1988 (Cth) that strikes the right balance between protecting people’s personal information and allowing it to be used and shared in ways that benefit individuals, society and the economy. This will help to underpin trust in digital services.
Responding to AI harms
The Australian Government continues to support regulators and law enforcement in countering AI-enabled non-compliance and crime. The government is considering preventative measures for harms such as child abuse material and infringements of Indigenous data sovereignty. It is also developing AI-driven fraud detection and prevention capabilities to strengthen policies and outpace malicious actors.
Keeping Australians safe also means recognising that AI is likely to exacerbate existing national security risks and create new and unforeseen threats. The government is taking proactive steps to prepare for any potential AI-related incident. The Australian Government Crisis Management Framework (AGCMF) provides the overarching policy for managing potential crises, and responses to major AI incidents will continue to be guided by existing processes and frameworks, including the AGCMF. The government will consider how AI-related harms are managed under the AGCMF to ensure ongoing clarity about roles and responsibilities across government and to support coordinated and effective action.