Why we wrote the standard

We designed Australia’s first voluntary AI safety standard to help organisations develop and deploy AI systems in Australia safely and reliably. Adopting AI and automation is projected to contribute between $170 billion and $600 billion to GDP. Australian organisations and the Australian economy stand to gain significant benefits if they can capture this value (Taylor et al.).

The standard offers a set of voluntary guardrails that establish consistent practices for organisations to adopt AI safely and responsibly, in line with current and evolving legal and regulatory obligations and with public expectations. While the standard applies to all organisations across the AI supply chain, this first version focuses more closely on organisations that deploy AI systems. The next version will expand on technical practices and guidance for AI developers.

While there are already examples of good AI practice in Australia, organisations need clearer guidance. Adopting this standard will help organisations use AI safely and responsibly.

The standard consists of 10 voluntary guardrails that apply to all organisations across the AI supply chain. These guardrails give all organisations certainty about what developers and deployers of AI systems need to do to comply with them.

In its January interim response to the Safe and Responsible AI discussion paper, the government identified actions to take, including working with industry to develop this Voluntary AI Safety Standard. This standard sits alongside a broader suite of government actions enabling safe and responsible AI under 5 pillars, outlined in Figure 2. These actions include the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings, the National Framework for the Assurance of Artificial Intelligence in Government and the Policy for Responsible Use of AI in Government. The standard will continue to evolve alongside the government’s broader activities to ensure alignment and consistency for safe and responsible AI.

Infographic: the 5 actions the government is taking to support safe and responsible AI in Australia — delivering regulatory clarity and certainty, supporting and promoting best practice, supporting AI capability, government as an exemplar, and international engagement. The 5 actions are underpinned by Australia’s Regulatory Strategy for AI and Australia’s AI Ethics Principles.

Figure 2: Actions the government is taking to support safe and responsible AI in Australia

Why implement a voluntary standard?

The standard establishes a consistent practice for organisations. It sets expectations for what future legislation may look like as the government considers its options on mandatory guardrails. It also equips organisations with best-practice AI governance and ethical practices, which can offer a competitive advantage.

The standard is designed to guide organisations to:

  • raise the level of safe and responsible AI capability across Australia
  • protect people and communities from harm
  • avoid reputational and financial risks
  • increase organisational and community trust and confidence in AI systems, services and products
  • align with legal obligations and the expectations of the Australian population
  • operate more seamlessly in an international economy.

This will lead to the longer-term benefits of improved safety, quality and reliability of AI in Australia. It will also support broader use of AI products and services, increased market competition and opportunities for technological innovation.