Your role in building trust

In this guidance, we talk about businesses as AI model developers, AI system developers and AI system deployers. Businesses may fall into more than one category. All 3 have a role to play in improving the transparency of AI‑generated content. 

Roles and responsibilities across the AI lifecycle
AI system deployers

Apply user-friendly labels:

  • fit-for-purpose information
  • consider accessibility.

Educate and verify:

  • verify mechanisms
  • explain their use
  • offer support contacts.

AI system developers

Develop and apply system‑level tools:

  • adapt to context and users
  • offer configurable options
  • verify persistence of watermarks.

AI model developers

Develop model-level tools:

  • use common standards
  • provide robust tools to deployers
  • ensure resilience against attacks.

Establish feedback with deployers:

  • understand risks
  • foster continuous improvement.

AI system deployers

You are an AI system deployer if you are a business or person that uses an AI system to operate or to provide a product or service. This might look like:

  • using chat assistants such as ChatGPT, Microsoft Copilot, Claude or Gemini to create content for use internally or externally
  • using AI‑enabled image editors such as Adobe, Canva or Picsart
  • giving your customers access to an AI chat assistant to answer frequently asked questions from your website
  • using AI to monitor system performance
  • using AI to handle user feedback
  • using AI to maintain systems. 

Examples of AI system deployers include Australian businesses that use AI to improve their operations, such as call centres that deploy AI to improve the customer support experience and minimise follow‑up calls.

Read more examples of AI system deployment in When to use transparency mechanisms.

You should:

  • Apply visible, user‑friendly labelling that suits the context and people consuming your AI‑generated content
  • Create simple labels such as ‘generated by AI’ or ‘created with AI assistance’ that are easy to identify and understand
  • Use icons, colour codes or badges to identify AI‑generated content without compromising content quality
  • Consider the accessibility of transparency mechanisms for diverse audiences, including people living with disability.
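
As a minimal sketch of what this could look like in practice, the Python snippet below attaches a plain-language disclosure label, with a separate accessible string for screen readers. The label wording and the LabelledContent structure are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class LabelledContent:
    body: str        # the AI-generated content itself
    label: str       # plain-language disclosure label shown to users
    aria_label: str  # accessible description for screen readers

def label_ai_content(body: str, assisted: bool = False) -> LabelledContent:
    """Attach a simple, user-friendly AI disclosure label to content."""
    text = "Created with AI assistance" if assisted else "Generated by AI"
    return LabelledContent(
        body=body,
        label=text,
        # a separate accessible string supports screen-reader users
        aria_label=f"Disclosure: this content was {text.lower()}.",
    )

post = label_ai_content("Our store opens at 9am on weekdays.", assisted=True)
print(f"[{post.label}] {post.body}")
```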

  • Tell people who consume your content about AI‑generated content risks and the importance of transparency mechanisms
  • Help people understand how to interpret and use transparency mechanisms – consider FAQs, videos and explainers.

  • Develop and use tools to safely record and access metadata, ensuring privacy, safety, security and compliance with the model developer’s standards
  • Keep metadata in secure, access‑controlled storage so you can perform self‑audits or comply with auditing requirements.
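
As a hedged illustration, the sketch below records a simple provenance entry alongside a hash of the content. The field names are assumptions, and a real system would write to secure, access‑controlled storage rather than a local file.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(content: bytes, model_name: str, path: str) -> dict:
    """Append a provenance record that can later support self-audits."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "model": model_name,                            # which model produced it
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # In production this would go to secure, access-controlled storage.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

record_provenance(b"AI-generated product description", "example-model-v1", "audit_log.jsonl")
```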

  • Use tools provided by developers to validate watermarks and metadata and include feedback loops if applicable
  • Tell people when you can’t verify transparency mechanisms
  • Give people clear next steps when they encounter verification issues including support contacts.
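
The sketch below illustrates this fallback behaviour: attempt verification, and when it fails or is unavailable, say so plainly and point people to support. Here verify_watermark is a stand‑in for whatever verification tool your developer actually provides, and the messages are examples only.

```python
def verify_watermark(content: bytes) -> bool:
    """Placeholder for the verification tool your developer supplies."""
    raise NotImplementedError("Use the tool provided by your model or system developer.")

def check_and_report(content: bytes, support_contact: str) -> str:
    """Verify content where possible and give people clear next steps."""
    try:
        if verify_watermark(content):
            return "Verified: this content carries an intact AI watermark."
        return ("We could not verify this content's watermark. "
                f"For next steps, contact {support_contact}.")
    except NotImplementedError:
        # Be upfront when verification simply is not available.
        return ("Verification is not available for this content. "
                f"For next steps, contact {support_contact}.")

print(check_and_report(b"example content", "support@example.com"))
```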

  • Facilitate human oversight of AI‑generated content
  • Participate in feedback loops with AI system and model developers, as well as with experts in relevant forms of harm (e.g. privacy).

  • Assess the watermarking, labelling and metadata recording capabilities of AI system suppliers as part of the procurement process.

AI system developers

You are an AI system developer if you are a business or person who designs, builds and tests a system that uses AI. This might look like: 

  • integrating AI models into applications
  • creating a platform that uses existing AI in new ways, like a chatbot or AI‑augmented visual design app
  • creating user interfaces for AI models
  • customising AI models for specific uses.

Examples of AI system developers include software companies and app developers incorporating AI models in their product. These products might include AI image and video editors, or tools that create music samples using AI. 

If a company takes an off-the-shelf AI system and tailors it for a specific use or context, it is considered a system developer. For example, if IT development company Fortunesoft tailors ChatGPT for the financial services sector, we consider it to be an AI system developer.

Other examples of AI system developers include OpenAI (ChatGPT), Amazon (Amazon Rekognition) and Microsoft (Copilot). Businesses buy these AI systems off‑the‑shelf and often deploy them directly.

You should:

  • Design intuitive labels that suit context and people consuming the AI‑generated content
  • Prioritise user experience for users with varying levels of technical knowledge
  • Offer configurable options for transparency mechanisms to be applied by deployers.
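
One illustrative way to expose configurable transparency options to deployers is a simple settings object. The fields below are assumptions made for the sketch, not a standard.

```python
from dataclasses import dataclass

@dataclass
class TransparencyOptions:
    """Transparency settings a deployer can configure per context."""
    label_text: str = "Generated by AI"
    show_badge: bool = True        # visual icon or badge beside the label
    embed_metadata: bool = True    # record provenance metadata on output
    apply_watermark: bool = True   # apply a post-generation watermark
    high_contrast: bool = False    # accessibility variant of the label

# A deployer might tune the defaults for an internal-only assistant:
internal = TransparencyOptions(label_text="Created with AI assistance", show_badge=False)
```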

  • Develop and use tools to safely record and access metadata, ensuring privacy, safety, security and compliance with the model developer’s standards
  • Keep metadata in secure, access‑controlled storage so you can perform self‑audits or comply with auditing requirements.

  • Develop or adopt post‑generation (after content is generated) watermarking techniques at the system or application level
  • When available use tools from your model developer to embed model‑level watermarks into AI‑generated content
  • Create ways to verify watermark persistence during your system’s operations.
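
As a deliberately simple sketch of post‑generation watermarking, the snippet below hides a marker in zero‑width characters and checks whether it persists. A marker like this is trivially stripped, which is exactly why persistence checks and more robust techniques matter.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width characters encode the bits 0 and 1

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Append an invisible marker after generation (post-generation watermarking)."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("ascii"))
    return text + "".join(ZW1 if bit == "1" else ZW0 for bit in bits)

def watermark_persists(text: str, tag: str = "AI") -> bool:
    """Check whether the marker survived downstream processing."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    recovered = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return recovered.decode("ascii", errors="replace") == tag

marked = embed_watermark("An AI-written summary.")
assert watermark_persists(marked)
stripped = marked.replace(ZW0, "").replace(ZW1, "")  # e.g. pasted through a strict filter
assert not watermark_persists(stripped)
```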

  • Facilitate human oversight of AI‑generated content
  • Participate in feedback loops with AI system deployers and AI model developers, as well as with experts in relevant forms of harm (e.g. privacy).

AI model developers

You are an AI model developer if you are a business or person who creates, tests, trains and validates AI models. This might look like:

  • designing and building AI models
  • training AI models on specific datasets
  • testing and checking model outputs
  • researching new ways to improve model abilities such as by changing model parameters and fine‑tuning.

Examples of AI model developers include OpenAI, Anthropic and Google DeepMind.

You should:

  • Develop model‑level watermarking techniques by embedding traceable identifiers during model training or integrating them into the content generation process
  • Ensure watermark resilience against common transformations and attacks while maintaining utility as technology matures
  • Contribute to or use existing standards where applicable to define secure metadata formats, including model version, parameter counts, and training data sources
  • Provide tools, software development kits (SDKs), application programming interfaces (APIs) or frameworks to system developers and other stakeholders for implementing and verifying watermarks and metadata.
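
As a rough sketch of this kind of tooling, the snippet below signs a metadata manifest so downstream parties can check it has not changed. The field names and key handling are illustrative only; production systems would follow an established standard such as C2PA and use proper key management.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-managed-key"  # illustrative only; use real key management

def sign_manifest(manifest: dict) -> dict:
    """Attach an HMAC signature so recipients can detect tampering."""
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {**manifest, "signature": signature}

signed = sign_manifest({
    "model_version": "example-model-1.2",               # illustrative field values
    "parameter_count": "7B",
    "training_data_sources": ["licensed", "public web"],
})
```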

  • Ensure watermarking techniques and metadata standards are effective and fit‑for‑purpose
  • Test and validate that transparency features survive content workflows and user interactions
  • Design mechanisms that flag when content has been modified or tampered with
  • Create solutions for tracking content throughout distribution channels
  • Participate in feedback loops with AI system deployers and AI system developers, using deployment insights to continuously improve transparency mechanisms.
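
Continuing the signed‑manifest sketch above, one simple way to flag modification is to recompute the signature and compare: any edit to the manifest is detected. Again, the key handling and fields are illustrative assumptions.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-managed-key"  # must match the signing key

def is_tampered(signed: dict) -> bool:
    """Flag a manifest whose contents no longer match its signature."""
    unsigned = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return not hmac.compare_digest(signed.get("signature", ""), expected)

manifest = {"model_version": "example-model-1.2"}
payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

assert not is_tampered(manifest)                              # untouched manifest passes
assert is_tampered({**manifest, "model_version": "spoofed"})  # any edit is flagged
```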

Working together to improve transparency across the AI lifecycle

Wherever possible, developers and deployers should work together to build in transparency mechanisms from the start and to ensure these work as intended. Developers should commit to improving transparency mechanisms and to sharing best practices, processes and technologies, including by publishing this information.

Developers and deployers should also ensure that these mechanisms are accessible to people living with disability. This audience has diverse ways of communicating and may use a range of technologies to access information.