When to use transparency mechanisms

You may not need to use transparency mechanisms every time you use AI-generated content. Which mechanisms you use will depend on your context and how much risk your content poses.

You need to decide the best ways to be transparent about how your business uses AI to generate content. 

The transparency mechanisms you use should be proportionate to your content’s: 

  • potential negative impact, or how the AI‑generated content might adversely affect people
  • AI involvement level, or how involved AI has been in creating the content.

This will help you choose how and when to use transparency mechanisms such as watermarking, labelling and metadata. You can apply mechanisms individually or combine them. When combined, they provide a greater level of transparency.

You should consider how using AI-generated content and being transparent about it will affect employees, customers and the broader public.

Transparency becomes more important as the potential for negative impacts from your AI‑generated content increases and as human oversight or involvement decreases.

A risk matrix with 'AI involvement level' on the Y axis and 'Potential impact' on the X axis. Levels of transparency mechanisms increase as the X and Y values increase.

Framework for assessing risks of AI‑generated content

This section will be most useful for businesses that understand the context in which their AI-generated content will appear. This usually means AI system deployers and AI system developers.

However, this section can also be useful to AI model developers, as they should be aware of the potential impacts of their models. Deployers and developers should communicate about these impacts and what transparency mechanisms may help to mitigate them. 

This section also shows examples of when you might use transparency mechanisms and which types you might use.

How to assess your AI-generated content

Step 1: assess potential negative impact 

The potential impact of AI‑generated content depends on:

  • its context
  • how you use it
  • the broader nature of your product or service.

If the potential negative impact of your AI‑generated content is high, you need to use robust transparency mechanisms to make sure you can accurately communicate how you’ve used AI. 

It is important to consider the potential for the content to be seen as authoritative.

For example, using AI‑generated content in a clinical setting could lead to misdiagnosis. In recruitment processes it could lead to a breach of employment law. In these circumstances, you’d need to use strong transparency mechanisms for AI‑generated content.

You should consider how AI might negatively impact different aspects of people’s lives including (though this is not exhaustive):

  • people’s rights
  • people’s safety
  • collective cultural and societal interests, particularly First Nations peoples
  • Australia’s economy
  • the environment
  • the rule of law
  • the context in which it is shared
  • impacts to the business (such as commercial or reputational impacts).

Your business may already have risk assessment tools or frameworks that you can use to assess the likely impact of your AI‑generated content. You can also refer to the principles outlined in the government’s proposals paper for introducing mandatory guardrails for AI in high‑risk settings, which may help you assess your risk levels. You may also have regulatory obligations, for example under the Online Safety Act 2021, Australian Consumer Law and the Privacy Act 1988.

Step 2: assess AI involvement level

How much AI is involved in making content affects the level of risk. When thinking about AI involvement in creating content, ask:

a) How automated is the AI system that’s generating or modifying content?

  • Systems with lower levels of responsible human oversight need stronger transparency mechanisms.
  • Systems where AI serves as an assistive tool with substantial human oversight may not need such strict approaches.
  • Fully automated systems, where human input is minimal or absent, may require more extensive transparency mechanisms. Those with human review and oversight may require less.

b) Has AI substantially modified or generated the content?

  • If an AI system has substantially modified content, the content needs more transparency mechanisms.
  • What counts as a substantial modification depends on context and the needs and expectations of your users. 

c) Is there potential for AI to substantially change the meaning of the content? 

  • Even minor changes to content can significantly change its meaning, potentially leading to users misinterpreting the content. For example, an AI editor omitting the word ‘not’ could substantially change the meaning of the content it’s editing.
  • Low AI involvement can still lead to big changes in a piece of content’s meaning.
  • Assess if AI has altered the meaning of your content in ways which may affect your stakeholders’ interpretation.

Step 3: choose the right transparency mechanisms

When AI‑generated content has high potential for adverse impacts and high levels of AI involvement in the generation process, it needs more extensive transparency mechanisms. These could include a combination of labelling, watermarking and metadata recording. Types of content that may need this level of transparency include:

  • AI‑generated medical reports used in treatment decisions
  • when AI changes the spoken language of a high‑profile speaker in an online video, often known as a ‘deepfake’.

When AI‑generated content has low potential impact and low AI involvement it may need only minimal transparency mechanisms, or none at all. These scenarios could include:

  • grammar corrections in casual emails
  • automated brightness adjustments to personal photos.

When AI‑generated content has high potential impact and low involvement, or low potential impact and high involvement, you’ll need to tailor the transparency mechanisms depending on context.

Read the examples to learn more. These are illustrative only.

A risk matrix with 'AI involvement level' on the Y axis and 'Potential impact' on the X axis. Levels of transparency mechanisms increase as the X and Y values increase.
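
If it helps to make the framework concrete, the matrix can be read as a simple two‑factor lookup. The sketch below (Python, illustrative only) shows one way a deployer might encode it internally; the level names, scoring and mechanism combinations are assumptions made for this example, not combinations prescribed by this guidance.

```python
# Illustrative sketch only: encodes the two-factor assessment above as a simple
# lookup. Level names, thresholds and mechanism lists are hypothetical examples.

LEVELS = {"low": 0, "moderate": 1, "high": 2}

def suggest_mechanisms(potential_impact: str, ai_involvement: str) -> list[str]:
    """Return an indicative list of transparency mechanisms to consider."""
    score = LEVELS[potential_impact] + LEVELS[ai_involvement]
    if score >= 3:   # e.g. high impact combined with moderate or high AI involvement
        return ["labelling", "metadata", "watermarking"]
    if score == 2:   # mixed cases: tailor mechanisms to the context
        return ["labelling", "metadata"]
    if score == 1:
        return ["labelling (optional)"]
    return []        # low impact and low involvement: may need none

print(suggest_mechanisms("high", "high"))  # ['labelling', 'metadata', 'watermarking']
print(suggest_mechanisms("low", "low"))    # []
```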

Examples of transparency mechanisms for AI-generated image content

Transparency mechanism options for AI-generated image content
Scenario | Labelling | Metadata | Watermarking
Basic AI assistance for photo retouching on internal intranet | Not needed | Not needed | Not needed
Advanced AI photo editing for non-commercial images | May need | May need | Not needed
AI-enhanced image composition for digital marketing | May need | May need | Not needed
Visual content and layout for news platforms curated using AI | Likely need | Likely need | May have
Fully AI-generated artwork | Likely need | Likely need | Likely need
AI-enhanced images for medical diagnosis | Likely need | Likely need | Likely need
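
Where the table above indicates metadata, one practical option is to record provenance information in the image file itself. The following minimal sketch uses Python and the Pillow library to embed a plain‑text provenance note in a PNG; the file names and field names are hypothetical, and a production system would more likely use an established content‑provenance standard with tamper‑evident signing rather than free‑text fields.

```python
# Minimal illustration: embed a plain-text provenance note in a PNG's metadata
# using Pillow. File names and key names are hypothetical examples.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("retouched_product_photo.png")

provenance = PngInfo()
provenance.add_text("ai_involvement", "AI-enhanced image composition")
provenance.add_text("generator", "example-image-model")        # hypothetical tool name
provenance.add_text("generated_at", "2025-01-01T09:30:00Z")

image.save("retouched_product_photo_disclosed.png", pnginfo=provenance)
```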

Examples

Example A: AI-enhanced editorial assistance

You are a journalist using an AI‑enabled word processor to write a news article. Your word processor does more than spelling and grammar checks: it suggests improvements such as adjusting tone for consistency, performs basic fact checking and recommends examples, and you include sections of AI‑generated text in the article. You review and have editorial control over the final content.

Overall content risk level: moderate

Potential negative impact: moderate

News articles have the potential to reach broad audiences and influence public opinion. Readers may expect to be informed about AI’s role in the writing process. Publications may also need to take reasonable steps to adhere to relevant industry Standards of Practice.

Extent of AI involvement: moderate

  • Extent of automation: Low to moderate – humans still have oversight and full editorial control of the content.
  • Content modification: Moderate – the AI system is generating significant amounts of new content.
  • Meaning alteration: Moderate – content modification has the potential to significantly change the meaning.

Recommended actions

Depending on the nature of the article and your audience, you may add a label ‘Article enhanced by AI’. This ensures readers are aware of AI’s supportive role. You may not need to use other transparency mechanisms, as the human author has primary responsibility.
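
As a rough illustration of what such a label might look like in a publishing workflow, the sketch below (Python, with hypothetical function and field names) attaches a visible disclosure line and a small machine‑readable record to an article before publication.

```python
# Illustrative sketch: attach a visible AI-use label and a simple
# machine-readable disclosure record to an article. Names are hypothetical.
def attach_ai_disclosure(article_text: str, label: str = "Article enhanced by AI"):
    labelled_text = f"{label}\n\n{article_text}"
    disclosure_record = {
        "label": label,
        "ai_role": "assistive editing and some generated passages",
        "human_editorial_control": True,
    }
    return labelled_text, disclosure_record

text, record = attach_ai_disclosure("Today the council announced ...")
```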

Example B: AI-enhanced images for clinical diagnosis

You are developing an AI system to enhance medical images for diagnostic insights. This could include synthesising specialised imaging views or highlighting potential abnormalities in X‑rays, MRIs or other scans. These images directly inform clinical decisions and patient care pathways, so accuracy and reliability are paramount. Without thorough human oversight, incorrect or misleading images could lead to misdiagnoses or improper treatments, or have life‑threatening consequences. 

Overall content risk level: high

Potential negative impact: high 

This is a clinical setting with scope to adversely impact the health and wellbeing of cancer patients.

Extent of AI involvement: moderate to high

  • Extent of automation: Moderate – a mixed approach, with some human oversight alongside fully automated AI image modification.
  • Content modification: Moderate to high – AI‑created or enhanced images are significantly changing the original content.
  • Meaning alteration: High – small changes in an AI‑enhanced image could mean the difference between a patient receiving a cancer diagnosis or a negative result. 

Recommended actions

Co‑design labelling user interfaces with system users and clinicians. Use system‑level labelling of content as ‘Image enhanced by AI’ to disclose AI‑modified content to clinicians. Develop and maintain secure, accessible and complete metadata logs (for example, version of AI model used, date/time of generation, confidence scores). All watermarking‑related actions mentioned in the AI system developers section should be in place here. These steps should be on top of standard transparency measures and other requirements appropriate within medical settings.
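
The metadata log described above could be as simple as a structured record written each time the system enhances an image. The sketch below shows one hypothetical shape for such a record in Python; the field names are examples only, and a real clinical system would also need access controls, integrity protection and appropriate retention.

```python
# Hypothetical sketch of a metadata log entry for an AI-enhanced medical image.
# Field names are illustrative; real systems need integrity and access controls.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class EnhancementLogEntry:
    image_id: str
    model_version: str
    generated_at: str
    confidence_score: float
    reviewed_by_clinician: bool

entry = EnhancementLogEntry(
    image_id="scan-00123",
    model_version="example-enhancer-2.1",
    generated_at=datetime.now(timezone.utc).isoformat(),
    confidence_score=0.87,
    reviewed_by_clinician=True,
)

with open("enhancement_log.jsonl", "a") as log_file:
    log_file.write(json.dumps(asdict(entry)) + "\n")
```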

Example C: AI-generated draft of legal contract

You are a lawyer using an AI system to produce fully AI‑generated drafts of legal contracts.

Overall content risk level: moderate to high

Potential negative impact: high

There is the potential for material legal, financial and reputational harm to individuals or organisations relying on AI‑generated contracts. There may also be risks to privacy if personal information is an input to the AI system.

Extent of AI involvement: moderate to high

  • Extent of automation: Moderate – a mixed approach, with some human oversight by legal professionals while parts of the content generation process are automated.
  • Content modification: Moderate to high – substantial content modification as AI is generating the contract.
  • Meaning alteration: High – small changes to contract text can have a significant impact on interpretation and meaning.

Recommended actions

Label contracts with ‘Initial draft generated by AI’ within the law firm. Make sure the lawyer retains human oversight and responsibility for accuracy. All metadata‑related actions for system developers listed above should apply here, including maintaining metadata logs (for example, date/time of generation).

Example D: AI-generated marketing content for household products

You run an advertising agency that is putting together marketing campaign assets for a client advertising a new line of household products. Your creative team decides to use AI‑enabled applications to modify and curate images, short videos and messaging text. When content is marketed to consumers, Australian Consumer Law obligations apply, including that representations must be truthful and accurate.

Overall content risk level: moderate

Potential negative impact: moderate

In marketing contexts, businesses need to ensure they remain compliant with Australian Consumer Law in all forms of advertisements, promotions, websites and statements.

Extent of AI involvement: moderate to high

  • Extent of automation: Moderate – creative teams are using the AI‑assisted applications to modify and create content for the campaign. The teams maintain human oversight.
  • Content modification: Moderate – AI is significantly modifying content from the original.
  • Meaning alteration: Moderate – the meaning of content is likely to change from the original where AI introduces new visual elements or modifies contexts.

Recommended actions

The advertising agency should label content produced for the client as ‘Enhanced by AI’ and ensure human oversight from design teams and copywriters. The client may also choose to label advertisements ‘Enhanced by AI’. The client is responsible for ensuring that the overall general impression created by representations of the product or its characteristics is accurate, to comply with Australian Consumer Law.