Why we wrote this

The Australian Government is taking an integrated approach to mitigating risks in the development and deployment of AI while supporting innovation in the sector. In January 2024, in its interim response to the Safe and Responsible AI consultation, the government committed to:

  • consulting on the establishment of mandatory guardrails for high‑risk AI
  • working with industry to develop a Voluntary AI Safety Standard
  • working with industry to develop options for voluntary labelling and watermarking of AI‑generated content
  • establishing an expert advisory group to support the development of options for mandatory guardrails. 

This document represents a snapshot in time of best‑practice guidance on transparency mechanisms for AI‑generated content. This area sits at the cutting edge of research, so what constitutes ‘best practice’ is evolving. We intend to revise this guidance to reflect any major changes in the state of the art.

This guidance does not cover transparency mechanisms for:

  • non‑AI‑generated content
  • whole‑of‑economy regulatory frameworks for digital content transparency
  • AI‑generated content detection mechanisms.

Figure: Spectrum of digital content transparency measures

A flow diagram showing the transparency measures that apply to AI‑generated content: metadata recording, digital watermarking (which can be visible or invisible) and labelling. Measures that are out of scope for this guidance are those that relate to non‑AI‑generated content and content detection mechanisms.

Watermarking and labelling – or transparency mechanisms, as we refer to them in this guidance – are an emerging area of AI governance. Relevant global industry‑led approaches continue to emerge, such as the open content provenance standard developed by the Coalition for Content Provenance and Authenticity (C2PA) and information security controls (ISO 27002). In November 2024, the United States National Institute of Standards and Technology released its first synthetic content guidance report, which looked at existing standards, tools, methods and practices, and at the potential to develop further ways to deliver digital content transparency. In April 2024, the EU introduced mandatory regulatory requirements for providers of general‑purpose AI systems. These require providers to ensure their output is ‘marked in a machine‑readable format and detectable as artificially generated or manipulated’.
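
To make the idea of machine‑readable marking concrete, the minimal sketch below records a simple AI‑generation disclosure in an image’s metadata using Python and the Pillow library. The field names (`ai_generated`, `generator`) are illustrative assumptions only; they are not C2PA fields or part of any regulatory requirement, and metadata of this kind is easily stripped when content is re‑encoded or shared.

```python
# Minimal sketch: record an AI-generation disclosure in PNG text metadata.
# The "ai_generated" and "generator" keys are illustrative assumptions,
# not fields defined by C2PA or any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_disclosure(image: Image.Image, path: str, generator: str) -> None:
    """Save an image together with a machine-readable note that it was AI-generated."""
    info = PngInfo()
    info.add_text("ai_generated", "true")
    info.add_text("generator", generator)  # e.g. the model or tool that produced the image
    image.save(path, pnginfo=info)


def read_disclosure(path: str) -> dict:
    """Return any disclosure metadata stored in the PNG's text chunks."""
    with Image.open(path) as img:
        img.load()  # ensure text chunks have been read
        return {k: v for k, v in img.text.items() if k in ("ai_generated", "generator")}


# Example usage with a stand-in for a generated image
img = Image.new("RGB", (256, 256), color="white")
save_with_disclosure(img, "output.png", generator="example-model")
print(read_disclosure("output.png"))  # {'ai_generated': 'true', 'generator': 'example-model'}
```

Production approaches such as C2PA content credentials go considerably further, cryptographically signing provenance information so that tampering can be detected rather than relying on an unsigned metadata field.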

Widespread adoption of digital content transparency measures will be important for achieving broader economic and societal benefits, as well as for improving digital literacy. A coordinated approach can incentivise larger model developers to incorporate effective tools. It can also encourage sharing and collaboration on best practices, processes and technologies.

The International Network of AI Safety Institutes has identified managing the risks from synthetic content as a critical research priority that needs urgent international cooperation. Australia is co‑leading the development of a research agenda focused on understanding and mitigating the risks from synthetic content, including watermarking and labelling efforts. The network aims to incentivise research and funding from its members and the wider research community, and encourage technical alignment on AI safety science.

Fostering greater transparency of AI‑generated content requires global collaboration across sectors and jurisdictions to create integrated and trusted systems that promote digital content transparency.

Limitations of transparency mechanisms

Approaches to digital content transparency continue to develop in industry and academia. While the benefits of being transparent about AI‑generated content are clear, some limitations remain: 

  • A lack of standardisation in watermarking technologies means that a watermark embedded by one system generally cannot be detected or verified by another.
  • A lack of standardisation in approach across the economy can be a barrier to behaviour change. For example, a range of different content notifications may confuse the public. Or, in the case of widespread voluntary content labelling, users may assume that the absence of a label indicates that content is human‑generated. Measures to determine authenticity of human‑generated content are out of scope for this guidance, but are closely related.
  • Watermarking and labelling techniques can be used maliciously, manipulated or removed, undermining trust in the system (the sketch after this list illustrates how easily a naive invisible watermark can be destroyed). As a result, provenance tools should also be pursued to provide more persistent trust signals for AI‑generated content.
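
As a toy illustration of this fragility, the sketch below (in Python, using NumPy and Pillow) hides a naive watermark in an image’s least‑significant bits and then shows that an ordinary JPEG re‑encode destroys it. This is an assumption‑laden teaching example, not a description of any production watermarking scheme; deployed schemes use far more robust techniques, yet remain vulnerable to determined manipulation.

```python
# Toy illustration only: a naive least-significant-bit (LSB) watermark
# survives a lossless copy but is destroyed by ordinary JPEG re-encoding.
from io import BytesIO

import numpy as np
from PIL import Image


def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit sequence in the least significant bits of the first len(bits) samples."""
    flat = pixels.flatten()  # flatten() returns a copy, so the original array is untouched
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(pixels.shape)


def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least significant bits."""
    return pixels.flatten()[:n_bits] & 1


rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=128, dtype=np.uint8)             # 128-bit watermark

marked = embed_lsb(image, mark)
print("lossless copy keeps the mark:", np.array_equal(extract_lsb(marked, 128), mark))

buffer = BytesIO()
Image.fromarray(marked).save(buffer, format="JPEG", quality=85)  # routine re-encode
buffer.seek(0)
reencoded = np.asarray(Image.open(buffer))
print("JPEG re-encode keeps the mark:", np.array_equal(extract_lsb(reencoded, 128), mark))
```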

It is also critical that systems that display or sell AI‑generated content (for example, social media platforms or retailers) make transparency mechanisms and information visible.
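
As a hypothetical example of surfacing this information, the short sketch below decides what notice a platform could display based on disclosure metadata of the kind sketched earlier. The metadata keys and label wording are assumptions for illustration; consistent with the limitations above, the absence of a disclosure is not treated as evidence that content is human‑made.

```python
# Hypothetical sketch: choose a user-facing notice from disclosure metadata.
# The metadata keys and label wording are assumptions, not an established convention.
from typing import Optional


def content_label(metadata: dict) -> Optional[str]:
    """Return a display label if the content carries an AI-generation disclosure."""
    if metadata.get("ai_generated") == "true":
        generator = metadata.get("generator", "an AI system")
        return f"AI-generated content (created with {generator})"
    return None  # no disclosure present: make no claim either way


print(content_label({"ai_generated": "true", "generator": "example-model"}))
print(content_label({}))  # None: absence of a label is not evidence of human authorship
```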

Further work across government

This voluntary guidance gives Australian businesses access to up‑to‑date best‑practice approaches to AI‑generated content transparency. These are based on the latest research and international governance trends – both of which are rapidly evolving. This guidance complements and recognises other Australian Government initiatives that affect AI‑generated content transparency, including:

Under Australia’s Online Safety Act 2021, the eSafety Commissioner regulates mandatory industry codes and standards. These set out online safety compliance measures to address certain systemic online harms. A range of enforcement mechanisms is available for services that do not comply, including civil penalties. Watermarking, labelling or equivalent measures form part of the Online Safety (Designated Internet Services – Class 1A and Class 1B Material) Industry Standard 2024 (DIS Standard). Among other obligations, this requires certain generative AI service providers – those at risk of being used to produce high‑impact material such as child sexual abuse material – to put in place systems, processes and technologies that differentiate AI outputs generated by the model. The Internet Search Engine Services Online Safety Code (Class 1A and Class 1B Material) requires search engine providers to make it clear when users are interacting with AI‑generated materials, among other obligations.

Taking action to address harmful deepfakes

As well as initiatives to improve the transparency of AI-generated content, the Australian Government continues to act against technology-facilitated harms associated with deepfakes.

Deepfakes are digital images, videos or sound files of a real person that have been edited to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say.

The non-consensual sharing of sexual or intimate material online, including artificially generated deepfake material, is a serious form of technology-facilitated abuse. It often occurs in the context of gender-based and family, domestic and sexual violence. 

The eSafety Commissioner’s complaints schemes all apply to deepfake material. This includes its image‑based abuse scheme, which applies when a person shares, or threatens to share, an intimate image or video of someone without their consent.

In 2024, the Australian Parliament passed the Criminal Code Amendment (Deepfake Sexual Material) Act 2024. The amendment strengthens existing Commonwealth criminal offences and creates new offences targeting the creation and non-consensual sharing of sexually explicit material online. This includes material that has been created or altered using technology, such as deepfakes.

These civil and criminal schemes work in a complementary manner to provide choice and redress for victim-survivors. 

The current phase of online safety code development focuses on ensuring safety measures are in place to prevent children in Australia from accessing or being exposed to Class 1C and Class 2 material (such as online pornography). This includes through risk‑appropriate safety measures for AI‑generated material. The ‘Phase 2’ codes also aim to ensure online services have safety measures and tools in place to allow all end‑users to manage their online experiences with Class 1C and Class 2 material. eSafety published a Position Paper in July 2024 to guide the development of safety measures. This seeks to ensure the Phase 2 safety measures support a meaningful uplift in online safety practices and responsibilities in respect of AI‑generated content, particularly to protect children in Australia. In early 2025, industry associations representing the online industry submitted 9 codes to the eSafety Commissioner for consideration of whether they create appropriate community safeguards.   

More information about code development is available in the industry codes section of the eSafety website.

Additionally, the Online Safety (Basic Online Safety Expectations) Determination 2022 sets out the Australian Government’s expectations that digital providers keep Australians safe online. These expectations apply to social media services, messaging, gaming and file-sharing services, apps, websites and other services. There are specific expectations around generative AI:

  • that providers will take reasonable steps to consider end‑user safety and incorporate safety measures in the design, implementation and maintenance of generative AI capabilities on their services
  • that providers will take reasonable steps to proactively minimise the extent to which generative AI capabilities may produce material or facilitate activity that is unlawful or harmful.

While the expectations themselves are not backed by civil penalties, the eSafety Commissioner can require providers to report on how they are meeting the expectations (with civil penalties available for non‑compliance with reporting requirements). eSafety regularly publishes transparency reports summarising providers’ responses to improve transparency and promote greater accountability for user safety.

The Australian Communications and Media Authority (ACMA) oversees the operation of the voluntary Australian Code of Practice on Disinformation and Misinformation. The code includes measures that support transparency around the steps that signatories take to empower consumers to make better informed choices about digital content. This may include information about digital literacy interventions and the use of new technologies to signal the credibility of information online. The ACMA’s oversight role includes reporting to government on the effectiveness of the code.