Being clear about AI-generated content

A guide for business

Date published:
28 November 2025

Introduction

This guidance explains why and how to tell people that you’ve used content that has been generated or modified by artificial intelligence (AI).

When you do business, it is good practice to tell people if you have used artificial intelligence to generate or modify your content. In some contexts, you should also be able to show where your AI‑generated content came from, whether it has been modified, and other details.

This guidance provides Australian businesses with up‑to‑date best practice approaches to AI‑generated content transparency based on the latest research and international governance trends. 

Following this guidance will help you to:

  • ensure that your AI‑generated content is clearly identifiable
  • follow industry best practice
  • contribute to emerging standards of transparency for AI‑generated content.

This guidance is voluntary and supports the Guidance for AI Adoption: Implementation practices. It builds on practices outlined in Practice 4: Share essential information.

This guidance on how to inform people about AI‑generated content does not cover how to:

  • tell users when they are engaging with AI systems
  • tell users when they may be impacted by AI‑enabled decisions
  • tell users when content is human-generated
  • manage copyright or other intellectual property implications of AI‑generated content
  • use AI‑generated content detection mechanisms. 

Who is this for?

This guidance is for:

  • all businesses that use AI to generate or modify content
  • all businesses that build, design, train, adapt or combine AI models or systems that can generate or modify content. 

Everyone involved in the AI lifecycle is responsible for being transparent about their AI‑generated or modified content. 

What are transparency mechanisms?

We use the term ‘transparency mechanisms’ to mean the ways you can show your users when and how you’ve used AI to create or modify content. Transparency mechanisms can also show where AI‑generated content, including images, text, audio and video, comes from.

In this guidance, we talk about 3 transparency mechanisms:

  • labelling
  • watermarking
  • metadata recording. 

You can apply these mechanisms individually or combine them. When combined, they provide a greater level of transparency. 

Labelling means using visible text to tell users if something is AI‑generated and where it came from.

Labelling is the easiest transparency mechanism to use, but it may need support from watermarking and accurate metadata to be credible.

Labelling can range from very simple to complex. 

An AI-generated image of a butterfly on a shoe, with the label 'AI generated'.

Simple AI generated label

An AI-generated image of a butterfly on a shoe, with the label 'Generated 20 August 2025 using Adobe Firefly. Prompt used: "butterfly on a shoe"'.

Complex AI generated label
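
To show how such a label might be put together in practice, here is a minimal Python sketch that assembles the complex label above from a few details. The function name, parameters and wording are illustrative only, not a required format.

  from datetime import date

  def ai_content_label(tool: str, generated_on: date, prompt: str | None = None) -> str:
      """Build a plain-text disclosure label for AI-generated or AI-modified content."""
      label = f"Generated {generated_on:%d %B %Y} using {tool}."
      if prompt:
          label += f' Prompt used: "{prompt}".'
      return label

  # Reproduces the complex label shown in the example above.
  print(ai_content_label("Adobe Firefly", date(2025, 8, 20), "butterfly on a shoe"))
  # Generated 20 August 2025 using Adobe Firefly. Prompt used: "butterfly on a shoe".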

Watermarking is a way to embed information into digital content so you can trace its origin or verify its authenticity. This information can be visible or invisible to people. Watermarking differs from labelling in that it is harder to remove.

Visible watermarking can appear in several ways. In images and videos, a watermark may appear as a semi‑transparent text overlay. In audio, it could take the form of an audible disclosure stating, ‘This audio was generated by AI’. 
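
As a rough illustration, a semi-transparent text overlay can be added to an image with a few lines of Python using the Pillow imaging library. The file names below are placeholders, and the overlay style is a matter of choice.

  from PIL import Image, ImageDraw

  # Open the source image and prepare a fully transparent overlay of the same size.
  base = Image.open("butterfly_shoe.png").convert("RGBA")
  overlay = Image.new("RGBA", base.size, (255, 255, 255, 0))

  # Draw semi-transparent white text (alpha 96 out of 255) onto the overlay.
  draw = ImageDraw.Draw(overlay)
  draw.text((20, base.height - 40), "AI generated", fill=(255, 255, 255, 96))

  # Merge the overlay into the image and save the visibly watermarked copy.
  Image.alpha_composite(base, overlay).convert("RGB").save("butterfly_shoe_watermarked.jpg")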

Invisible watermarking involves embedding hidden data into the content. To verify the watermark, a user would need to extract the watermark data by using a special watermark detection tool to examine the content. 
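
As a toy example of this embed-and-detect pattern, the sketch below (again using Pillow) hides a short message in the least significant bits of an image’s red channel and reads it back. Real invisible watermarks built into AI models use far more robust schemes; this sketch only illustrates the general idea.

  from PIL import Image

  def embed_message(in_path: str, out_path: str, message: str) -> None:
      """Hide a UTF-8 message in the least significant bit of each pixel's red channel."""
      img = Image.open(in_path).convert("RGB")
      pixels = list(img.getdata())
      bits = "".join(f"{b:08b}" for b in message.encode("utf-8")) + "00000000"  # NUL terminator
      if len(bits) > len(pixels):
          raise ValueError("Image is too small to hold the message")
      stamped = [
          ((r & ~1) | int(bits[i]), g, b) if i < len(bits) else (r, g, b)
          for i, (r, g, b) in enumerate(pixels)
      ]
      out = Image.new("RGB", img.size)
      out.putdata(stamped)
      out.save(out_path)  # use a lossless format such as PNG, or the hidden bits are lost

  def extract_message(path: str) -> str:
      """Recover the hidden message by reading red-channel bits until the NUL terminator."""
      bits = "".join(str(r & 1) for r, g, b in Image.open(path).convert("RGB").getdata())
      data = bytearray()
      for i in range(0, len(bits), 8):
          byte = int(bits[i:i + 8], 2)
          if byte == 0:
              break
          data.append(byte)
      return data.decode("utf-8", errors="replace")

  embed_message("butterfly_shoe.png", "butterfly_shoe_marked.png", "AI generated: 20 August 2025")
  print(extract_message("butterfly_shoe_marked.png"))  # AI generated: 20 August 2025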

Typically, AI model developers are responsible for building watermarks into their models. System developers and deployers are responsible for implementing, checking and using these watermarks, depending on their needs and use case. Find out more in the National Institute of Standards and Technology’s overview of technical approaches to digital content transparency [PDF].

An AI-generated image of a butterfly on a shoe, with a visible watermark.

Visible watermarking

An AI-generated image of a butterfly on a shoe, with an overlay representing invisible watermarking.

Invisible watermarking

Metadata is descriptive information about a piece of content that’s included with the content file. It is often recorded automatically. An example of this is digital photography: depending on the device you use, the metadata of a digital photo may record where and when it was taken.

Metadata recording is versatile. It can include many details about a piece of content, like who created it and whether it’s been edited. Accurate metadata can support the credibility of both watermarking and labelling. 

Metadata capabilities are usually the responsibility of AI system developers or model developers. If you’re an AI system deployer, you should check if the system you’re using has metadata recording capabilities.
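
As a simple illustration of metadata recording, the Python sketch below writes a few illustrative provenance fields into a PNG file’s text chunks using Pillow, then reads them back. The field names are placeholders; industry schemes such as C2PA define richer, verifiable provenance metadata.

  from PIL import Image
  from PIL.PngImagePlugin import PngInfo

  # Attach illustrative provenance fields as PNG text chunks.
  meta = PngInfo()
  meta.add_text("ai_generated", "true")
  meta.add_text("generator", "Adobe Firefly")
  meta.add_text("prompt", "butterfly on a shoe")
  meta.add_text("generated_on", "2025-08-20")

  Image.open("butterfly_shoe.png").save("butterfly_shoe_with_metadata.png", pnginfo=meta)

  # Read the metadata back from the saved file (prints the text chunks as a dict).
  print(Image.open("butterfly_shoe_with_metadata.png").text)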

An AI-generated image of a butterfly on a shoe, with a block of metadata text.

Metadata recording can contain more than just the AI generation information

What you can do

Why be transparent? 

AI is a rapidly developing technology, and it’s changing the way people do business. AI‑generated content is already common in business and marketing contexts, and its realism and reach have increased as the technology has advanced.

Because of this, it can now be difficult to tell if content has been modified or generated by AI. This can make it more difficult for people to trust the content they encounter. It can also make it easier for people to commit fraud and other malicious acts. 

Being transparent about AI‑generated content can help you to:

  • meet your regulatory obligations
  • support public education about AI‑generated content
  • build trust with your consumers.

Being transparent about AI‑generated content can contribute to greater accountability, reliability and trust in the digital content we engage with.

The regulatory landscape around AI is evolving, both in Australia and internationally. Being transparent about AI‑generated content may be necessary for your business to meet its regulatory requirements and adapt to new ones. It can also help to reduce the risk of harm from misleading or deceptive AI‑generated content.

Clearly identifying AI‑generated content is an important way to help people recognise and understand the information they encounter. This can support public education efforts and build skills in critically evaluating the authenticity and reliability of information.

People are more likely to engage with AI‑generated content when they understand where it comes from and how reliable it is. Being transparent about your use of AI‑generated content may help to create a point of difference with your competitors. It can also support your business to build a foundation of trust with your consumers. 

Your context and regulatory obligations (such as responsibilities under the Online Safety Act) will inform whether you need to use one or more transparency mechanisms.

Content transparency mechanisms can help to build awareness and trust and improve accountability and safety. They can help users to distinguish AI‑generated content from human‑authored material. This can help users think critically about the accuracy of content they consume, which can reduce the risks of seriously harmful disinformation and misinformation.

Digital content transparency mechanisms are evolving. There isn’t yet a standardised approach to transparency for AI‑generated content, but AI Safety Institutes are progressing research on technical approaches. Industry‑led initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) are being adopted. 

Transparency mechanisms are not failsafe. They can be misused or tampered with and remain vulnerable to attack. As a business, you need to judge how to implement the transparency mechanisms that best suit your context.

Read about the limitations of transparency mechanisms

Learn more

This guidance was produced with AI assistance. Full review and editorial control remains with the team at the National AI Centre.