Keep Australians safe

The government’s regulatory approach to AI will continue to build on Australia’s robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks. These frameworks are actively enforced and continuously adapted to emerging risks. Agencies and regulators will retain responsibility for identifying, assessing, and addressing potential AI-related harms within their respective policy and regulatory domains. 

To support this approach, the government is establishing an AI Safety Institute (AISI). The AISI will monitor, test and share information on emerging AI capabilities, risks and harms. Its insights will support ministers, portfolio agencies and regulators to maintain safety measures, laws and regulatory frameworks that keep pace with rapid technological change. The Institute will support existing regulators with independent advice to ensure AI companies comply with Australian law and uphold legal standards of fairness and transparency.

The government is committed to upholding international obligations, promoting inclusive governance and maintaining a resilient regulatory environment that provides certainty to business and responds quickly to new challenges.

Managing AI risks requires a whole-of-government approach. Every organisation developing and using AI is responsible for identifying and responding to AI harms and upholding best practice. Responding proactively to harms as they emerge ensures that government continues to update and introduce targeted laws where needed. This approach allows us to respond quickly and effectively to emerging risks and keep Australians safe.

Our approach focuses on harnessing the opportunities of AI while putting in place practical, risk-based protections that are proportionate, targeted and responsive to emerging AI risks. By applying fit-for-purpose legislation, strengthening oversight and addressing national security, privacy and copyright concerns, we will work to keep the operation of AI systems responsible, accountable and fair. This gives businesses confidence to adopt AI responsibly while safeguarding people’s rights and protecting them from harm.


Action 7: Mitigate harms

Mitigating the potential harms of AI is essential to maintaining trust and confidence in AI applications and upholding Australians’ rights. We cannot seize the innovation and economic opportunities of AI if people do not trust it. 

Australia has strong existing, largely technology-neutral legal frameworks, including sector-specific guidance and standards, that can apply to AI and other emerging technologies. The government is monitoring the development and deployment of AI and will respond to challenges as they arise, and as our understanding of the strengths and limitations of AI evolves.

The approach promotes flexibility, uses regulators’ existing expertise, and is practical and risk-based. It supports government in targeting emerging threats such as AI-enabled crime and AI-facilitated abuse, which disproportionately impact women and girls. AI has already caused harm to First Nations people, including by perpetuating harmful stereotypes and through the use, misattribution and falsification of First Nations cultural and intellectual property. Genuine engagement with impacted First Nations communities, including alignment with Closing the Gap reforms and Indigenous data sovereignty principles, is vital to understanding and managing these risks.

Australia has strong protections in place to address many risks, but the technology is fast-moving and regulation must keep pace. That’s why the government continues to assess the suitability of existing laws in the context of AI. We are taking targeted action against specific harms, as outlined below.

Action on AI risks and harms

The government is taking action to identify and understand AI risks and deal with AI harms, including:

  • Advancing the science of AI safety: AI safety research underpins the reliability and trustworthiness of AI systems. The government is engaging domestically and internationally to build expertise and understanding of the capabilities and risks of advanced AI systems, to inform when and how to respond.
  • Consumer protections for AI-enabled goods and services: The Department of the Treasury’s Review of AI and the Australian Consumer Law found that Australians enjoy the same strong consumer protections for AI products and services as they do for traditional goods and services, including safety protections. The government will consult with states and territories on the minor opportunities the review identified to clarify existing rules, and will progress those changes when appropriate.
  • Reducing online harms through reforms, codes and standards: The government addresses AI-related risks through enforceable industry codes under the Online Safety Act 2021 and by criminalising non-consensual deepfake material. Further restrictions on ‘nudify’ apps and reforms to tackle algorithmic bias are also being considered.
  • Reviewing application of copyright law in AI contexts: The Attorney-General’s Department is engaging with stakeholders through the Copyright and AI Reference Group to consult on possible updates to Australia’s copyright laws as they relate to AI. The government has provided certainty to Australian creators and media workers by ruling out a text and data mining exception in Australian copyright law.
  • Reviewing AI regulation in healthcare: The Safe and Responsible AI in Healthcare Legislation and Regulation Review (Department of Health, Disability and Ageing 2024) is assessing the impact of AI on healthcare regulation.
  • Reviewing AI regulation in medical device software: The Therapeutic Goods Administration (TGA) oversees AI used in medical device software and led the review on Clarifying and Strengthening the Regulation of Medical Device Software including Artificial Intelligence (TGA 2025).
  • AI security: The Department of Home Affairs, the National Intelligence Community and law enforcement agencies will continue efforts to proactively mitigate the most serious risks posed by AI. As the national security policy lead on AI, Home Affairs has contributed to the uplift of critical infrastructure, international collaboration on AI security, and coordinating a multiagency group on synthetic biology and AI. Home Affairs also oversees the Protective Security Policy Framework (Department of Home Affairs 2025), which details policy requirements for authorising AI technology systems for non-corporate Commonwealth entities.
  • Updating Australia’s privacy laws: The Attorney-General is leading work to develop a modernised and clear Privacy Act 1988 (Cth) that strikes the right balance between protecting people’s personal information and allowing it to be used and shared in ways that benefit individuals, society and the economy. This will help to underpin trust in digital services.

Responding to AI harms

The Australian Government continues to support regulators and law enforcement in countering AI-enabled non-compliance and crime. The government is considering preventative measures for harms such as child abuse material and infringements on Indigenous data sovereignty. The government is also developing AI-driven fraud detection and prevention capabilities to strengthen policies and outpace malicious actors.

Keeping Australians safe also means recognising that AI is likely to exacerbate existing national security risks and create new and unknown threats. The government is therefore taking proactive steps to prepare for potential AI-related incidents. The Australian Government Crisis Management Framework (AGCMF) provides the overarching policy for managing potential crises, and responses to major AI incidents will continue to be guided by existing processes and frameworks, including the AGCMF. The government will consider how AI-related harms are managed under the AGCMF to ensure ongoing clarity about roles and responsibilities across government, supporting coordinated and effective action.

Keeping Australians safe: The mission of the AI Safety Institute

The government is establishing the AISI to strengthen its ability to respond to AI-related risks and harms, and to help keep Australians safe.

The AISI will focus on both upstream AI risks and downstream AI harms. Upstream AI risks are the model capabilities and ways AI models and systems are built and trained that can create or amplify harm. Downstream AI harms are the real-world effects people may experience when an AI system is used.

The AISI will generate and share technical insights on emerging AI capabilities and upstream risks, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia.

The AISI will engage with unions, business and the research sector to elicit expert views, inform broader engagement and ensure its functions meet the needs of the community. 

The AISI will also support a coordinated response to downstream AI harms by engaging with portfolio agencies and regulators. It will monitor, analyse and share information across government so that ministers and regulators can take informed, timely and cohesive regulatory action, including by supporting existing regulators to ensure AI companies comply with Australian law and uphold legal standards of fairness and transparency. Portfolio agencies and regulators remain best placed to assess AI uses and harms in their specific sectors and to adjust regulatory approaches and the law if necessary.

The AISI will operate with transparency, responsiveness and technical rigour, reinforcing public confidence in both AI technology and the institutions responsible for its governance. It will collaborate with domestic and international partners, including the National AI Centre and the International Network of AI Safety Institutes, to support the global conversation on understanding and addressing AI risks.


Action 8: Promote responsible practices

Businesses need to do their part in adopting AI responsibly. Promoting responsible AI practices is central to building public confidence and supporting safe, ethical innovation. To support this, the Australian Government is encouraging the development and use of systems that are transparent, fair and accountable, with consistent governance and compliance with relevant laws. This also includes promoting responsible practices throughout system development, including high-quality data, robust stewardship and clear documentation of how a system has been built.

The government will work with industry, unions, civil society and standards bodies to explore practical ways to support responsible deployment, including through voluntary measures and shared guidance. Businesses often express uncertainty about liability when adopting AI, which can undermine confidence and slow responsible innovation (Fifth Quadrant 2025). The government is responding by clarifying how existing laws, including workplace, consumer protection, product liability and competition laws, apply to AI, and by supporting compliance.

Support for responsible AI adoption

By fostering responsible practices, Australia aims to deploy AI in ways that are safe, inclusive and aligned with the public interest, supporting economic growth and national resilience. Examples of actions underway include:

  • Encouraging responsible AI adoption by organisations: The Guidance for AI Adoption (NAIC 2025) provides 6 essential practices to embed safety, transparency and ethical conduct into AI development and deployment.
  • Promoting transparency measures for AI-generated content: The Being clear about AI-generated content guide (NAIC 2025) advises businesses on how they can improve trust by clearly signalling when AI has been used to create or modify content. The recommended transparency measures include labelling, watermarking, and metadata recording.
  • Clear governance for government AI use: The Policy for the Responsible Use of AI in Government (Digital Transformation Agency 2025) promotes transparency, accountability and oversight, positioning government as a leader in ethical AI adoption.
  • Guidance for AI in schools: The Australian Framework for Generative AI in Schools (Department of Education 2023) provides nationally consistent guidance to students, teachers, staff, parents and carers on the opportunities and challenges presented by AI.
  • Aligning with international AI standards: Australia is actively participating in global standards development to reflect national values and industry interests, and to promote shared understanding of responsible AI practices.
  • Supporting responsible AI use by regulators: Regulators such as the Australian Prudential Regulation Authority and the Australian Securities and Investments Commission provide guidance for AI use in banking, insurance, and financial services, including operational risk and governance standards.

Being clear about AI-generated content: Guidance from the National AI Centre

As everyday AI use accelerates, Australians need to feel confident that they can recognise when digital content has been created or changed using AI.

Developed by the National AI Centre, Being clear about AI-generated content provides best-practice approaches to help Australian businesses clearly show when they use AI to create or modify digital content. Transparency around AI use can help businesses reduce regulatory and reputational risks, build confidence in their digital content and gain a competitive advantage in the digital economy.

The guidance outlines practical steps to make AI-generated content easy to identify, including how businesses can choose the right level of transparency for their context:

  • Labelling: Adding a visible notice that content is AI-generated, along with its source.
  • Watermarking: Embedding information within digital content to verify authenticity and trace its origin.
  • Metadata recording: Including descriptive information within the content file.

This voluntary guidance is based on industry best practice and developing global standards. It will be updated as technology and international standards change.  
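As a concrete illustration of the metadata recording measure, the short Python sketch below embeds simple provenance fields in a PNG file using the Pillow library. This is an illustrative sketch only, not a format drawn from the NAIC guidance: the field names (ai_generated, generation_tool) are hypothetical, and production systems would more likely adopt emerging provenance standards such as C2PA Content Credentials.

```python
# Minimal sketch: recording AI-provenance metadata in a PNG file.
# Illustrative only -- the field names below are hypothetical, not a
# standard; real systems would follow emerging standards such as C2PA.
from PIL import Image, PngImagePlugin


def save_with_ai_metadata(image: Image.Image, path: str, tool: str) -> None:
    """Embed simple provenance fields as PNG text chunks."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")   # hypothetical field name
    info.add_text("generation_tool", tool)  # e.g. the model or app used
    image.save(path, pnginfo=info)


def read_ai_metadata(path: str) -> dict:
    """Return any text-chunk metadata recorded in the file."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})


if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")  # stand-in for generated content
    save_with_ai_metadata(img, "output.png", tool="example-model")
    print(read_ai_metadata("output.png"))      # {'ai_generated': 'true', ...}
```

A limitation worth noting: metadata of this kind is easily stripped when content is re-encoded or screenshotted, which is why the guidance treats metadata recording as one of three complementary measures alongside visible labelling and watermarking.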

Simplifying responsible innovation

The NAIC will launch a dedicated online platform that consolidates guidance, training and use-case examples for SMEs and end-users, with regular updates to keep pace with industry change and to complement existing cybersecurity resources. The 6 essential practices in the Guidance for AI Adoption will underpin new tools and resources, offering a coherent framework adaptable to different audiences and aligned with international standards.

Australia aims to promote both innovation and responsibility, supporting local adoption while shaping global standards for safe, fair and transparent AI. The government will actively participate in major international forums and trade partnerships to promote interoperability and best practice. We will periodically review and update guidance and standards to reflect evolving global norms and certification schemes.


Action 9: Partner on global norms

Shaping global governance of AI is vital for Australia’s economic prosperity and national security. Australia can use its role as a responsible middle power to embed our values of safety, transparency and inclusion in international AI norms and standards.

Australia as an international AI leader and partner 

Through our deep and longstanding engagement in international AI governance, Australia has already cemented itself as a reliable, responsible and trusted leader in our region. Australia can build on this leadership to ensure that we are the partner of choice for the adoption of safe, secure and responsible AI and digital infrastructure in the Indo-Pacific. We will expand our capacity-building efforts and work with partners to ensure the benefits of AI reach across the region and to share trusted and secure digital infrastructure. We are supporting this work with efforts to understand and address the risks and harms related to AI, informed by our engagement in the International Network of AI Safety Institutes and with our Five Eyes partners.

Our goal of keeping Australians safe will continue to drive our international advocacy and collaboration on AI safety. The AISI will continue working with international partners to advance global understanding of AI risks and safety, while national security agencies collaborate with partners to address emerging threats, such as the prospect of AI systems achieving artificial general intelligence (AGI). We will keep examining new technologies and proactively evolve our approach to keep Australians safe as new capabilities emerge.

Our ambition is to align international frameworks with domestic approaches, reduce regulatory friction and support innovation. This will position Australia as a trusted partner in global supply chains and a leader in secure, responsible adoption of trusted AI technologies across the region.

Through foundational multilateral commitments and engagements, Australia has signalled its dedication to advancing AI safety, ethical standards and trustworthy development on the world stage.

Australia has strong bilateral relationships that are essential for supporting Australian industry and ensuring national resilience.

  • The MoU on Cooperation on AI with Singapore demonstrates Australia’s commitment to joint initiatives that promote ethical AI development and knowledge sharing.
  • Strategic partnerships with the United Kingdom and Republic of Korea in cyber and critical technologies advance Australia’s capacity to innovate securely and collaboratively.
  • Australia’s Framework Arrangement with India supports joint research, standards development, and improved market access for AI technologies. This strengthens Australia’s role as a trusted partner in the region and supports the growth of a robust, globally connected AI ecosystem.

We have also agreed to develop and launch a bilateral Technology Prosperity Deal with the United States to establish joint initiatives on cooperation and investment in AI, quantum, and other critical technologies.

Australia’s role in promoting AI safety

Australia is also playing a pivotal role in advancing global AI safety science. By participating in the International Network of AI Safety Institutes, Australia shares expertise and collaborates on the safety testing of advanced AI systems, helping to develop international best practice. Australia’s involvement in the International AI Safety Report (UK Government 2025) enables us to contribute evidence and insights that inform global efforts to understand and prevent AI-related harms. Through these contributions, Australia is helping to shape a safer, more transparent and more accountable AI landscape, both domestically and internationally.

Through multilateral and bilateral engagement, we will deliver on our existing international commitments. We will collaborate with like-minded countries and regional partners to strengthen digital and data governance and promote the adoption of trusted technologies, with a focus on the Indo-Pacific. Strategic relationships create a strong foundation for future cooperation on AI. These include the Comprehensive Strategic Partnership (CSP) with Singapore, under which both nations have agreed to establish a Cyber and Digital Senior Officials Dialogue, and initiatives such as the Australia–UK Cyber and Critical Technology Partnership. Our agreement to develop and launch a bilateral Technology Prosperity Deal with the United States will see us deepen cooperation on building a trusted and secure global AI ecosystem.

Australia’s leadership on AI in the region

The Department of Foreign Affairs and Trade, with the Department of Industry, Science and Resources, will lead the development of an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence. The strategy will align Australia’s foreign and domestic policy settings on AI. It will also set out our approach to emerging opportunities, the priorities for our bilateral partnerships and our engagement in international fora.

Building on Actions 7–9: What’s next

As AI advances at pace, Australia faces a rapidly shifting landscape of opportunities and risks. The government is actively monitoring emerging risks. Where necessary, we will take decisive action to ensure safety and accountability as new technologies and frontier AI systems emerge. Existing regulators will continue to identify and manage harms and report any gaps in laws to the AISI. We will respond to emerging risks including bias, privacy breaches, disinformation and cyber threats. If more regulation is needed to address bad actors or broader harms, the government will not hesitate to intervene.

The rights and data sovereignty of First Nations peoples and other vulnerable groups are increasingly at risk, as AI systems process and generate data in ways that do not always respect cultural protocols or individual privacy. Possible divergence of the global regulatory environment could lead to different countries and industries adopting varying standards, regulatory regimes and expectations. To keep Australians safe, we will continue to foster collaboration across government, industry, and communities, and to remain agile in the face of evolving global and technical challenges.