Artificial intelligence transparency statement

Date published:
28 February 2025
Date updated:
26 February 2026

Our department is supporting Australia to become a leader in developing and adopting trusted, secure and responsible artificial intelligence (AI). 

Our own adoption of AI adheres to the Voluntary AI Safety Standard, which consists of 10 voluntary guardrails. These guardrails include transparency and accountability requirements and explain what developers and deployers of AI systems must do to achieve them.

The Digital Transformation Agency’s Policy for the responsible use of AI in government (Version 2.0) sets the requirements for Australian Government agencies to engage with AI in a safe and responsible way.

The policy has mandatory requirements about accountable officials and transparency statements. This statement details our implementation of the policy requirements. 

Governance

In January 2024, we formed an AI Governance Committee (AIGC) to provide central oversight of AI use in the department. The committee’s members represent a range of perspectives from across the department and are involved in developing AI policy or projects. The AIGC ensures:

  • AI is used to improve efficiency, capability and innovation
  • AI use is appropriately governed and adheres to relevant legislation, policies and best practice
  • opportunities involving the use of AI are considered, safe and responsible
  • all potential AI-related risks are identified and addressed
  • appropriate training and usage policies are available.

AI accountable official 

Our Chief Information Officer is the accountable official responsible for carrying out the policy. The AIGC supports the accountable official. 

How we use AI

We use a number of AI tools to deliver efficiencies and augment existing processes. These tools help staff focus on more complex and meaningful work.

The AIGC maintains visibility of AI use and classifies it according to the following usage patterns and domains:

  • Usage patterns: supporting human decision-making and administrative action, providing insights through analytics, and improving workplace productivity.
  • Domains: service delivery, compliance and fraud detection, policy and legal, and corporate and enabling domains. 

We have a policy that guides all staff on: 

  • acceptable use of AI in our department
  • ethical considerations
  • freedom of information considerations
  • record keeping
  • privacy
  • roles and responsibilities when using AI.

We do not use AI in any instance where the public directly interacts with, or is significantly affected by, AI without a human agent involved.

Staff review all AI tool outputs and treat them as drafts or starting points for further research, not as a basis for decision-making.

Australian AI Safety Institute

We are responsible for the Australian AI Safety Institute, a key action to achieve the goals set out in the National AI Plan. The institute is being established to monitor and test frontier AI technologies and share insights on emerging capabilities and risks. It will support ministers, agencies and regulators to protect people and businesses in relation to AI safety issues by sharing information, connecting relevant bodies and facilitating understanding of emerging AI risks. 

Our commitment

We will continuously refine and enhance our AI capabilities. We do this by ensuring centralised oversight and evaluation of AI tools through the AIGC. 

This statement will evolve to align with changes in technology, legislation, policy and governance best practice. We will review it at least every 12 months and update it if our AI approach changes, or if anything materially affects its accuracy.