The 10 guardrails

The 10 guardrails at a glance

1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

Guardrail 1 creates the foundation for your organisation’s use of AI. Set up the required accountability processes to guide your organisation’s safe and responsible use of AI, including:

  • an overall owner for AI use
  • an AI strategy
  • any training your organisation will need.
2. Establish and implement a risk management process to identify and mitigate risks.

Set up a risk management process that assesses AI impact and risk based on how you use the AI system. Begin by considering the full range of potential harms, drawing on information from the stakeholder impact assessment (guardrail 10). You must complete risk assessments on an ongoing basis to ensure the risk mitigations remain effective.
3. Protect AI systems, and implement data governance measures to manage data quality and provenance.

You must have appropriate data governance, privacy and cybersecurity measures in place to protect AI systems. These will differ depending on use case and risk profile, but organisations must account for the unique characteristics of AI systems such as:

  • data quality
  • data provenance 
  • cyber vulnerabilities. 
4. Test AI models and systems to evaluate model performance and monitor the system once deployed.

Thoroughly test AI systems and AI models before deployment, and then monitor for potential behaviour changes or unintended consequences. You should perform these tests according to your clearly defined acceptance criteria that consider your risk and impact assessment.
5. Enable human control or intervention in an AI system to achieve meaningful human oversight across the lifecycle.

It is critical to enable human control or intervention mechanisms as needed across the AI system lifecycle. AI systems are generally made up of multiple components supplied by different parties in the supply chain. Meaningful human oversight will let you intervene if you need to and reduce the potential for unintended consequences and harms.
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

Create trust with users. Give people, society and other organisations confidence that you are using AI safely and responsibly. Disclose when you use AI, its role and when you are generating content using AI. Disclosure can occur in many ways. It is up to the organisation to identify the most appropriate mechanism based on the use case, stakeholders and technology used.
7. Establish processes for people impacted by AI systems to challenge use or outcomes.

Organisations must provide processes for users, organisations, people and society impacted by AI systems to challenge how AI is being used and to contest decisions, outcomes or interactions that involve AI.
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

Organisations must provide information to other organisations across the AI supply chain so they can:

  • understand the components used including data, models and systems
  • understand how the AI system was built
  • understand and manage the risk of the use of the AI system.
9. Keep and maintain records to allow third parties to assess compliance with guardrails.

Organisations must maintain records to show that they have adopted and are complying with the guardrails. This includes maintaining an AI inventory and consistent AI system documentation.
10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

It is critical for organisations to identify and engage with stakeholders over the life of the AI system. This helps organisations to identify potential harms and understand if there are any potential or real unintended consequences from the use of AI. Deployers must identify potential bias, minimise negative effects of unwanted bias, ensure accessibility and remove ethical prejudices from the AI solution or component.

Using the guardrails

Adopting these guardrails will create a foundation for safe and responsible AI use. It will make it easier for any organisation to comply with potential future regulatory requirements in Australia and to align with emerging international practices. It will also help to uplift any organisation’s AI maturity.

When using the guardrails, start with guardrail 1 to create your core foundations. To adopt the standard in full, your organisation will need to implement all 10 guardrails.

Since most deployers rely on AI systems developed or provided by third parties, these guardrails offer procurement guidance (in yellow boxes) on how to work with your supplier to ensure their practice is aligned with the guardrails. 

The guardrails are not intended to be one-off activities. Instead, they are ongoing activities for organisations. A guardrail may contain both organisational-level obligations to create the required processes and system-level obligations that apply to each use case or AI system.

The guardrails align with international standards including ISO/IEC 42001:2023 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0.

How the guardrails support human-centred AI deployment

Being voluntary, the standard does not create new legal duties about AI systems or their use. Rather, the guardrails ask organisations to commit to:

  • understanding the specific factors and attributes of their use of AI systems
  • meaningfully engaging with stakeholders
  • performing appropriately detailed risk and impact assessments
  • undertaking testing 
  • adopting appropriate controls and actions so their AI deployment is safe and responsible. 

These activities will help organisations understand regulatory obligations and community expectations around AI use. For example, if an organisation deploys an AI system that uses data from or about First Nations communities, the organisation should respect Indigenous Data Sovereignty Principles. These principles draw on Article 32(2) of the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP). They affirm the inherent rights of First Nations peoples to govern the collection, ownership and use of their data. The principles require organisations to use this data in a way that respects the values and laws of First Nations communities. They also require that organisations secure free, prior and informed consent from relevant First Nations communities before starting AI projects that will engage First Nations data or impact First Nations communities. First Nations communities must have the capacity to withdraw consent should AI system data usage deviate from the initially agreed purposes.

Definitions

Safe and responsible AI: AI should be designed, developed, deployed and used in a way that is safe. Its use should be human-centred, trustworthy and responsible. AI systems should be developed and used in a way that provides benefits while minimising the risk of negative impact to people, groups, and wider society.

AI deployer: An individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be internal to an organisation, or external and impacting others, such as customers or other people who are not deployers of the system.

AI developer: An organisation or entity that designs, develops, tests and provides AI technologies such as AI models and components.

AI user: An entity that uses or relies on an AI system. This entity can be an organisation (such as a business, government or not-for-profit), an individual or another system.

Affected stakeholder: An entity impacted by the decisions or behaviours of an AI system, such as an organisation, individual, community or other system.

Read the complete list of terms and definitions.

Guide to icons

These icons show how actions under each guardrail map onto Australia's AI Ethics Principles.

Look for callout text like this for guidance on working with AI suppliers.

Guardrail 1: Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

Guardrail 1 creates the foundation for your organisation’s use of AI. Set up the required accountability processes to guide your organisation’s safe and responsible use of AI, including:

  • an overall owner for AI use 
  • an AI strategy
  • any training you will need to ensure broad understanding of these principles across the organisation. 

Incorporate AI governance into your existing processes or create new processes as you need them. 

AI Ethics Principle 8: Accountability

1.1. Organisational leadership and accountability 

Commit to appointing people in the leadership team who are accountable for the governance and outcomes of AI systems, as well as the safe and responsible use of AI within the organisation.

Key concept: Leaders cannot delegate or outsource accountability for the safe and responsible deployment and use of AI systems.

1.1.1. Assign and communicate accountability and authority to relevant roles. These roles will ensure AI systems and the overall AI management system perform in the ways required. Having these roles also ensures AI systems meet external obligations and internal policies, including monitoring and reporting responsibilities.

1.1.2. Staff these roles with appropriately empowered and skilled people. These people will need to meet specific obligations, such as handling personally identifiable information and meeting legal and regulatory obligations.

1.1.3. Clearly communicate the leadership commitment to, and accountability for, safe and responsible development and use of AI across the organisation. This includes the staff (including contractors and third-party providers) who you have made accountable for AI systems.

1.1.4. Create and document overarching organisational responsibilities and accountabilities for AI deployment and use.

1.1.5. Provide sufficient resources to deploy and use AI responsibly and safely throughout the organisation and throughout the lifecycle of AI systems in use.

1.1.6. Maintain operational accountability, capability and meaningful human oversight throughout the lifecycle of AI systems in use.

1.2. AI strategy and governance

Commit to creating and documenting overarching objectives and policies for the deployment and use of AI. These should be in line with your organisation’s strategic goals and values.

Key concept: You should adopt new AI strategies and policies only to address gaps in existing related policies, such as information security, data management and data privacy, or to enhance existing policies to address the specific characteristics of AI systems.

1.2.1. Document and communicate the requirement that AI use in the organisation be assigned to an accountable owner with appropriate capability for this role. 

1.2.2. Create and document the organisation’s overarching strategic intent to deploy and use AI systems in line with the organisation’s strategy and values.

1.2.3. Create, document and communicate the organisation’s strategy to comply with identified regulation related to the organisation’s deployment and use of AI systems. 

1.2.4. Create, document and communicate appropriately detailed AI policies, processes and goals for safe and responsible AI. Ensure these are compatible with the overall strategy. Create a process to set targets for AI systems to meet obligations for the safe and responsible use of AI.

1.2.5. Review and revise cross-organisation AI strategies, policies and processes at appropriate intervals so they remain fit for purpose and meet the legal and regulatory obligations of the organisation. Make sure to appropriately plan any changes to the overarching AI management system.

1.2.6. Create and document a process to proactively identify deficiencies in the overarching AI management system. This includes instances of non-compliance in any AI systems or in use of AI systems in the organisation. Include documentation of any root causes, corrective action taken and revisions to the AI management system required.

1.2.7. Create and document a process for deploying AI systems that supports mapping from business targets to system performance, with suggested metrics for internal and third-party developed systems.

1.2.8. Identify and document any factors that may affect the organisation’s ability to meet its responsibilities through the overarching AI management system.

1.2.9. Where you anticipate developing AI systems internally, create and document the end-to-end process for AI system design and development.

1.2.10. Document and perform a training needs analysis for broad AI understanding across the organisation. Source or deliver training to bridge any identified gaps. Regularly check AI skills are up to date as AI use and understanding evolves.

1.3 Strategic AI training

Commit to embedding responsible AI training and workplace practices. This provides people accountable or responsible for AI system performance with sufficient competence to perform their role.

Key concept: Training requirements will depend on the nature of the role in relation to AI. At a leadership and governance level, staff need the skills to understand potential risks and benefits of AI in the context of the organisation. Product owners may need more in-depth technical skills relevant to specific characteristics of the AI system for which they are responsible.

1.3.1. Provide appropriate and up-to-date training so accountable people can perform their duties and responsibilities. Document the competencies of the accountable people.

1.3.2. Adopt appropriate communication, training and leadership behaviour strategies to create a culture of broad accountability and address any gaps in understanding across the organisation. Offer a mechanism for staff to raise concerns or provide feedback about the use of AI systems.

1.3.3. Monitor compliance and behaviours across the organisation to identify and address any gaps between leadership expectations and staff understanding of obligations about safe and responsible deployment and use of AI. 

1.3.4. Document and communicate the consequences for people who act outside of the organisation’s defined risk appetite and associated policies.

1.3.5. Where applicable, evaluate the training needs for staff who deal with third-party AI systems that are being developed, procured or used. Provide the appropriate training to address skill gaps.

Guardrail 2: Establish and implement a risk management process to identify and mitigate risks

AI impact and risk management processes need to consider how the AI system is used. Begin assessments by considering the full range of potential harms, drawing on information from the stakeholder impact assessment (guardrail 10). The impact and risk assessments must align with organisational risk appetite and tolerance levels. You must complete the risk assessments throughout the lifecycle of the AI system and on an ongoing basis to ensure the risk mitigations remain effective. These assessments may be required as inputs into any future conformity assessments mandated for use in high-risk settings.

2.1. AI risk and impact management processes 

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 2: Human-centred values
AI Ethics Principle 3: Fairness

Commit to creating, documenting and applying an organisational-level risk management approach that considers the specific characteristics of AI systems. 

Key concept: Do not let the potential benefits to an organisation of deploying and using AI systems lead you to overlook the risks and potential harms that could arise. Evaluate potential harms in relation to people, organisations and the environment.

2.1.1. Create an organisational-level risk tolerance for the use of AI systems.

2.1.2. Create and document criteria to identify acceptable and unacceptable risks in relation to AI. Base this on the organisation’s risk tolerance and the likely risk of harm to users, in line with the AI policy.

2.1.3. Create and document a suitable impact assessment, risk assessment and treatment approach to AI system deployment and use. This should cover both internal and third-party developed AI systems, with awareness of the specific characteristics and amplified risks of AI systems. Include criteria for reassessment over the lifecycle of an AI system.

2.1.4. Identify and document potential risks to the organisation and potential harms to people and groups that arise from the deployment and use of AI systems. Communicate these to relevant teams and third parties.

2.1.5. Identify and document any specific use cases or qualities of AI systems that represent an unacceptable risk to stakeholders or the organisation, in line with the organisation’s risk tolerance.

2.1.6. Where indicated by risk, decide whether to require AI system developers to implement technology solutions for specific risk mitigation, such as industry-standard labelling and watermarking approaches.

2.1.7. Evaluate and document the high-level risks and liabilities related to the organisation’s existing or planned use of third party-provided systems and components (including open-source software).

2.2. System risk and impact assessment 

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 6: Transparency and explainability

Commit to rigorous risk and impact management processes for assessing AI systems against the organisational risk tolerance.

Key concept: The level of risk of an AI system depends on the specific use case for that system. You should perform assessments for the system under the expected usage, and perform them again should that use evolve. This requires ongoing monitoring of the AI system. It may place more responsibility on deployers and end users than more traditional technology systems do.

Key concept: A key risk is the over-reliance end users place on outputs or other responses from AI systems. Risk mitigation and treatment approaches should be put in place to address this risk, where appropriate, on an ongoing basis. The risk may evolve over the lifecycle of the system, particularly as users become more familiar with it.

2.2.1. Perform and document a risk assessment for each AI system, including systems developed by or procured from third parties. Assess and document risks with reference to specific, documented use cases, potential unintended use for that system and the unique requirements and characteristics of that system.
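
For organisations that keep these assessments in machine-readable form, the sketch below shows one possible shape for a per-system risk assessment record. It is illustrative only: the field names, categories and example system are assumptions, not requirements of this guardrail.

  from dataclasses import dataclass, field
  from datetime import date

  @dataclass
  class RiskEntry:
      description: str        # the risk in plain language
      potential_harm: str     # who or what could be harmed, and how
      likelihood: str         # e.g. "rare", "possible", "likely"
      consequence: str        # e.g. "minor", "moderate", "severe"
      treatment: str          # the mitigation or control applied

  @dataclass
  class SystemRiskAssessment:
      system_name: str
      documented_use_cases: list[str]
      potential_unintended_uses: list[str]
      risks: list[RiskEntry] = field(default_factory=list)
      assessed_by: str = ""
      assessment_date: date | None = None
      next_review: date | None = None

  # Hypothetical record for an illustrative customer-enquiry assistant.
  assessment = SystemRiskAssessment(
      system_name="enquiry-assistant",
      documented_use_cases=["summarise customer enquiries for staff"],
      potential_unintended_uses=["staff acting on summaries without review"],
      risks=[RiskEntry(
          description="summary omits key details of a complaint",
          potential_harm="a customer's issue is not addressed",
          likelihood="possible",
          consequence="moderate",
          treatment="human review of summaries before any action is taken",
      )],
      assessed_by="AI system owner",
      assessment_date=date(2024, 9, 1),
      next_review=date(2025, 3, 1),
  )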

2.2.2. As part of the risk assessment of systems where users, employees or other stakeholders may be exposed to potential harms, carry out and document an impact assessment process.

2.2.3. Document and implement a system of controls to safeguard against risks and potential harms from AI systems and products as soon as is practical after your organisation has identified a risk.  Reassess the risk after you’ve implemented the controls to verify their effectiveness.

2.2.4. Perform risk assessments and treatment plans on a periodic basis or when a significant change to either the use case or system occurs, or you identify new risks. This includes responding to impact assessments or insufficient risk treatment plans.

2.2.5. Implement, document and communicate a robust impact assessment approach relating to the deployment and use of AI systems.

Procurement guidance for guardrail 2: Understand your suppliers’ risk management processes. Make sure you have sufficient information about the system, such as identified risks and potential harms for the intended use of the system, to conduct your own risk and impact management process.  Reflect agreed processes in your contracts.

Guardrail 3: Protect AI systems, and implement data governance measures to manage data quality and provenance.

You must have appropriate data governance, privacy and cybersecurity measures in place to protect and manage AI systems. These will differ depending on use case and risk profile. Organisations must account for the unique characteristics of AI systems such as data quality, data provenance and cyber vulnerabilities.

3.1. Data governance, privacy and cybersecurity

AI Ethics Principle 4: Privacy protection and security
AI Ethics Principle 5: Reliability and safety
AI Ethics Principle 8: Accountability

Commit to fit-for-purpose approaches to data governance, privacy and cybersecurity management of AI systems. This will help realise the value of AI systems while mitigating emerging and amplified risks.

3.1.1. Evaluate and adapt existing data governance processes to check they address the use of data with AI systems. Assess the risks arising from AI system use of and interaction with data. Focus on the potential for AI systems to create amplified and emerging risks.

3.1.2. Review privacy policies to include the collection, use and disclosure of personal or sensitive information by AI systems, including for system training purposes.

3.1.3. Review existing cybersecurity practices to verify they sufficiently address the risks arising from AI system use.

3.1.4. Create and document an organisation-wide process to support teams to apply the Australian Privacy Principles to all AI systems.

3.1.5. Create and document an organisation-wide process to support teams in the management of data usage rights for AI, including intellectual property, Indigenous Data Sovereignty, privacy, confidentiality and contractual rights.

3.1.6. Create and document an organisation-wide process to support teams to apply the Essential Eight Maturity Model for cybersecurity risks to AI systems.

3.1.7. Document how the Essential Eight Maturity Model for cybersecurity risks has been applied to each AI system in use, including those developed or provided by third parties.

3.2. Data governance measures to manage data quality and provenance 

AI Ethics Principle 4: Privacy protection and security

Commit to evaluating the requirements of each AI system in relation to data quality, data provenance, information security and information management, including where systems are provided by third parties. Documentation of this activity may be required as input into any future conformity assessments mandated for use in high-risk settings.

Key concept: You should understand and document your data sources, put in place processes to manage your data and document the data used to train and test your AI model or system.

3.2.1. Define and document the requirements for each AI system relating to data quality, data/model provenance and data preparation.

3.2.2. Evaluate the existing information/system security and management processes in the organisation. Make sure they are fit for purpose for AI system deployment and use.

3.2.3. Understand and document the sources, collection processes and types of data that the system was trained and tested on, as well as the data it relies on to function, including personal and sensitive data.

3.2.4. Where appropriate, report to stakeholders on data, model sources and provenance for each AI system or product.

3.2.5. Document how you have applied the Australian Privacy Principles to each AI system in use, including those developed or provided by third parties.

3.2.6. Document the data usage rights for each AI system, including intellectual property, Indigenous Data Sovereignty, privacy, confidentiality and contractual rights.

3.2.7. Consider and document data breach reporting requirements and liabilities from related standards for each AI system. For example, under the Notifiable Data Breach scheme of the Office of the Australian Information Commissioner.

Procurement guidance for guardrail 3: Your suppliers must have appropriate data management (including data quality and data provenance), privacy, security and cybersecurity practices for the AI system or component.  Reflect this in your contracts.

Guardrail 4: Test AI models and systems to evaluate model performance and monitor the system once deployed.

Thoroughly test AI systems and AI models before you deploy them, and then monitor for potential behaviour changes or unintended consequences. Perform these tests according to clearly defined acceptance criteria that consider the prior risk and impact assessment.

4.1. Organisational-level reporting, evaluation and continual improvement

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to a robust process for timely and regular monitoring, evaluation and reporting of AI system performance. 

4.1.1. Create and document the organisation-wide processes and capability required for the testing, monitoring, continuous evaluation, improvement and reporting of AI systems.

4.1.2. Create a formal process to review and approve evidence that systems are complying with their test requirements.

4.1.3. Apply appropriate document versioning, management and security practices.

4.1.4. Create a process for determining whether an AI system requires regular auditing, appropriate to the level of risk identified by its risk assessment.

4.2. AI system acceptance criteria 

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to specifying, justifying and documenting acceptance criteria your organisation will need to meet to consider potential harms to be adequately controlled.

4.2.1. Create clear and measurable acceptance criteria for the AI system that, if met, should adequately control each of the identified harms. When appropriate, use industry and community general benchmarks. These criteria should be specific, objective and verifiable. Each acceptance criterion should link directly to one or more of the potential harms. For example, if the risk assessment raises fairness concerns, this implies fairness measures should be present in the acceptance criteria. Specify the thresholds or conditions under which you consider the potential harm to be adequately controlled. Record the acceptance criteria, with explicit justifications for why you chose the criteria and why you judged them to be adequate, in an acceptance criteria registry.
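
As a hedged illustration only, an acceptance criteria registry might be kept in a machine-readable form such as the sketch below, with each criterion linked to the harm it controls and the justification for its threshold. The metric names, threshold values and registry structure are assumptions, not part of the standard.

  from dataclasses import dataclass

  @dataclass
  class AcceptanceCriterion:
      criterion_id: str
      linked_harms: list[str]   # harms from the risk assessment that this criterion controls
      metric: str
      threshold: float
      direction: str            # "at_least" or "at_most"
      justification: str

      def is_met(self, measured: float) -> bool:
          """Return True if the measured value satisfies the threshold."""
          if self.direction == "at_least":
              return measured >= self.threshold
          return measured <= self.threshold

  # Illustrative registry entries; real criteria and thresholds would come from
  # your own risk and impact assessment.
  acceptance_criteria_registry = [
      AcceptanceCriterion(
          criterion_id="AC-01",
          linked_harms=["unfair outcomes for a protected group"],
          metric="selection rate difference between groups",
          threshold=0.05,
          direction="at_most",
          justification="fairness concern raised in the impact assessment",
      ),
      AcceptanceCriterion(
          criterion_id="AC-02",
          linked_harms=["incorrect outputs causing customer detriment"],
          metric="accuracy on representative holdout data",
          threshold=0.90,
          direction="at_least",
          justification="error rates above 10% judged to cause unacceptable harm",
      ),
  ]

  print(acceptance_criteria_registry[0].is_met(0.03))  # True: measured disparity is within the threshold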

4.2.2. Communicate the acceptance criteria and their justifications with all team members involved in the development, testing and deployment of the AI system.

4.2.3. Regularly review and update the acceptance criteria to reflect any changes in the system, the identified harms or the broader context in which the system operates. Record any findings or changes in the acceptance criteria registry.

4.3. Testing of AI systems or models to determine performance and mitigate any risks

AI Ethics Principle 5: Reliability and safety
AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to rigorously testing the system against the acceptance criteria before deployment, documenting the results and deciding whether to deploy.

Key concept: AI model testing verifies and validates an AI system’s underlying AI model(s). AI system testing verifies and validates the entire AI system, supporting expected behaviours in real‑world scenarios.

4.3.1. Develop and carry out a test plan that covers all acceptance criteria. The plan should specify the testing methods, tools and metrics your organisation will use, as well as the roles and responsibilities of the testing team.

  • The plan should include both model and system testing.
  • When evaluating and testing your models, use data that is representative of the use of the system, but that has not been used in the training of the system. Where they exist, use industry and community benchmarks or datasets.
  • Design evaluation and testing processes that account for the possibility that there are multiple acceptable and unacceptable outputs.
  • For general-purpose AI systems, such as those based on large language models, include adversarial testing procedures such as red teaming.

4.3.2. Compile a complete test report, including:

  • a summary of the testing goals
  • methods and metrics used
  • detailed results for each test case
  • an analysis of the root causes of any identified issues or failures
  • recommendations for remediation or improvement
  • whether the improvements should be done before deployment or as a future release.

4.3.3. Apply the organisational process for reviewing and approving the testing results to ensure the system meets all acceptance criteria before you deploy it.  The system deployment authorisation must come from the person or people accountable for the AI system.

4.4. Ongoing system evaluation and monitoring

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 2: Human-centred values
AI Ethics Principle 5: Reliability and safety

Commit to implementing robust AI system performance monitoring and evaluation, and to ensuring each system remains fit for purpose. 

4.4.1. Create continuous monitoring and evaluation mechanisms to gather evidence that the AI system continues to meet its acceptance criteria throughout its lifecycle. Directly monitor any measurable acceptance criteria, alongside other relevant metrics such as performance metrics or anomaly detection. Frequently evaluate the monitoring mechanisms to check they remain effective and aligned with evolving conditions.
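
A minimal sketch of one such monitoring check appears below. It assumes the acceptance criteria are available as numeric thresholds and that the deployed system reports metrics on a schedule; the metric names and threshold values are illustrative assumptions.

  import logging
  from datetime import datetime, timezone

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("ai_system_monitor")

  # Illustrative thresholds drawn from the acceptance criteria registry (assumed values).
  ACCEPTANCE_THRESHOLDS = {
      "accuracy": {"minimum": 0.90},
      "complaints_per_1000_interactions": {"maximum": 2.0},
  }

  def check_metrics(current_metrics: dict) -> list[str]:
      """Compare live metrics against acceptance thresholds and return any breaches."""
      breaches = []
      for name, limits in ACCEPTANCE_THRESHOLDS.items():
          value = current_metrics.get(name)
          if value is None:
              breaches.append(f"{name}: metric not reported")
              continue
          if "minimum" in limits and value < limits["minimum"]:
              breaches.append(f"{name}: {value} is below the minimum of {limits['minimum']}")
          if "maximum" in limits and value > limits["maximum"]:
              breaches.append(f"{name}: {value} is above the maximum of {limits['maximum']}")
      return breaches

  # Example run with hypothetical values; in practice the metrics would come from
  # the deployed system's monitoring pipeline.
  for breach in check_metrics({"accuracy": 0.87, "complaints_per_1000_interactions": 1.1}):
      log.warning("%s | acceptance criterion breached: %s",
                  datetime.now(timezone.utc).isoformat(), breach)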

4.4.2. Create clear and accessible feedback channels for impacted people or groups to report problems or harms they may experience. You should actively solicit, systematically collect and carefully analyse this feedback.

4.4.3. Follow organisational review processes to ensure accountable people review and interpret the monitoring data, reports and alerts. Keep auditable monitoring logs to document the activities, feedback you receive and actions you take.

4.4.4. Ensure that people who review individual-level feedback can trigger recourse and redress processes where there is an obligation to do so. High-impact decisions may warrant direct human oversight.

4.5. Regular system audit or assessments

AI Ethics Principle 7: Contestability
AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to regular system audits for ongoing compliance with the acceptance criteria (or justify why you don’t need to carry out audits).

4.5.1. Apply the organisation’s process to determine whether the level of risk warrants a comprehensive system audit plan. Document this decision as a system audit requirement statement. 

If an audit is necessary:

  • Create a regular system auditing schedule based on factors such as the system’s complexity, criticality and rate of change.
  • Ensure system audit teams have the necessary independence, expertise and authority to conduct a thorough, impartial evaluation against the organisation’s audit criteria. Record their findings in a system audit report. The system’s development team should not lead the audits.
  • Create review processes and response processes to address the findings of each system audit report. The reports should be reviewed by those accountable for the system, consulting with key stakeholders, and by management. Response processes should clearly lay out how to respond to the discovery of problems with the in-production system.

Procurement guidance for guardrail 4: Clarify who is responsible and accountable for this monitoring and evaluation (between the supplier and the deployer). Regularly review with the accountable person and make sure each system remains fit for purpose. If the supplier is responsible for monitoring the AI system or its components, put an agreement in place.

Guardrail 5: Enable human control or intervention in an AI system to achieve meaningful human oversight.

It is critical to ensure human control or intervention mechanisms are in place as needed across the AI system lifecycle. AI systems are generally made up of multiple components supplied by different parties in the supply chain. Meaningful human oversight enables appropriate intervention and reduces the potential for unintended consequences and harms.

5.1. Accountability and human control to achieve meaningful human oversight.

AI Ethics Principle 8: Accountability

Commit to assigning accountability to a suitably competent and empowered person in the organisation for each AI system and product.

5.1.1. Assign accountability for each AI system to someone who shows suitable competence and has the necessary tools and resources.

5.1.2. Assign the accountable role sufficient authority to oversee, intervene and be effective in ensuring responsible AI use throughout the system lifecycle.

5.1.3. Create and document competency, oversight and intervention requirements and support needs for each AI system before implementation. Evaluate as part of the continuous improvement cycle.

5.1.4. Create and document monitoring requirements for each AI system prior to implementation. Evaluate as part of the continuous improvement cycle. 

5.1.5. Assign responsibility for developing, acquiring, deploying, operating, managing and maintaining each AI system to the teams and people best suited to supporting its safe and responsible use across the lifecycle. 

5.1.6. Assign accountability for oversight of third-party development and use of AI systems and components to appropriately skilled and empowered people in the organisation. 

5.1.7. Evaluate the training needs for end users for each AI system you deploy. Provide the required training to address any identified needs. 

5.1.8. Evaluate the training needs for those responsible for the ongoing operation and monitoring for each AI system you deploy. Provide the required training to address any identified needs.

Procurement guidance for guardrail 5: Develop a plan with your supplier for governance and oversight over the AI system or component, with clear responsibilities between parties. Reflect this in your contracts.

Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

Create trust with users. Provide people, society and other organisations with confidence that you are using AI safely and responsibly. Disclose when you use AI, its role and when content is AI-generated. Disclosure can occur in many ways. It is up to the organisation to identify the most appropriate mechanism based on the use case, stakeholders and technology used. 

6.1. Transparency and contestability

AI Ethics Principle 7: Contestability
AI Ethics Principle 6: Transparency and explainability

Commit to creating processes for stakeholders impacted by the decisions or behaviours of AI systems, so they understand when AI systems that could affect them are in use. Give stakeholders the opportunity to contest the decisions and outputs of those systems.

Key concept: Technologies such as watermarking and labelling can help create transparency for stakeholders by making AI-generated content clearly identifiable to end users. For relevant AI systems, consider implementing or obtaining systems that comply with the Coalition for Content Provenance and Authenticity (C2PA) Technical Specification.
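
The C2PA specification defines its own manifest and signing model, which the following sketch does not implement. As a purely illustrative, much simpler example of disclosure, it attaches a plain AI-generation label to a piece of content's metadata; the function and field names are assumptions.

  import json
  from datetime import datetime, timezone

  def label_ai_generated(content: str, model_name: str, purpose: str) -> dict:
      """Attach a plain-language disclosure that the content is AI-generated.

      Illustrative only: a production deployment would follow an industry scheme
      such as the C2PA Technical Specification rather than this ad hoc structure.
      """
      return {
          "content": content,
          "disclosure": {
              "ai_generated": True,
              "generated_by": model_name,   # hypothetical model identifier
              "purpose": purpose,
              "generated_at": datetime.now(timezone.utc).isoformat(),
          },
      }

  labelled = label_ai_generated(
      "Summary of your enquiry ...",
      model_name="example-summarisation-model",
      purpose="customer enquiry summary",
  )
  print(json.dumps(labelled, indent=2))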

6.1.1. Create and communicate an organisational process through which people can understand the use of AI systems. This process should include when and how frequently to communicate, the level of detail to provide, and the level of AI knowledge of stakeholders. Evaluate communication obligations for both internal and external stakeholders and interested parties, including accessibility needs.

6.1.2. Create and communicate an organisational requirement to disclose the use of AI to impacted parties in a direct interaction or in a decision-making process. 

6.1.3. Create and document the level of transparency and evidence required for you to conduct an audit over the AI system lifecycle. 

6.1.4. Create and document a process to apply the organisation’s responsibilities under this Standard to AI systems developed or provided by third parties. This should include appropriate transparency and detail of information for the organisation to make a sufficiently informed evaluation. 

6.1.5. Create and document a process to evaluate any specific reporting and disclosure obligations under the Online Safety Act relevant to AI systems usage.

6.2. Transparency for AI systems

AI Ethics Principle 6: Transparency and explainability

Commit to communicating with sufficient transparency to demonstrate safe and responsible use of AI systems.

Key concept: Certain internal and external stakeholders may require different levels of transparency given existing social inequalities. For example, you may need to make extra considerations when using data owned by or about Aboriginal and Torres Strait Islander people and organisations to mitigate the perpetuation of existing social inequalities.

6.2.1. Evaluate the level of transparency that each AI system needs – including third-party provided systems – depending on the use case and external stakeholder expectations. Consider potential conflicts, such as privacy, intellectual property, AI systems presenting as a person, hallucinations or potential for misinformation.

6.2.2. Where applicable, document how the AI system indicates to impacted users that an AI system is being used in an interaction or in a decision-making process. 

6.2.3. Evaluate and document how the required level of transparency with the key stakeholders varies by stakeholder group. When possible, choose more interpretable and explainable AI systems to ensure understandable transparency.

6.2.4. Implement the agreed transparency measures for each AI system. 

6.2.5. Where expected by stakeholders, implement approaches to communicate relevant information about AI-generated content to end users. Require associated third-party developers to do the same, with options such as labelling and watermarking. Evolve these approaches as new solutions become available. 

6.2.6. Where required under the Online Safety Act, report on measures you have taken to ensure safety, such as notices or mandatory reporting. 

6.2.7. Determine and document the expected level of technical detail required by different stakeholder groups to effectively explain the use of AI to the intended audience.

Procurement guidance for guardrail 6: Agree with your supplier the transparency mechanisms required for the AI system or component. Reflect this in contracts and project documentation.

Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes.

Organisations must provide processes for users, organisations, people and society impacted by AI systems to challenge how AI is being used and to contest decisions, outcomes or interactions that involve AI.

7.1. Contestability and related risk controls

AI Ethics Principle 7: Contestability
AI Ethics Principle 6: Transparency and explainability

Commit to creating processes for stakeholders of AI systems to understand and challenge the use of those systems.

7.1.1. Create and communicate the process for potentially impacted stakeholders to understand how and for what purpose you are using AI, as well as to raise concerns, challenges or requests for remediation.

7.1.2. Embed stakeholder contestability of AI system use with the risk and control process of the organisation. 

7.1.3. Create and communicate an organisational process through which people can raise concerns, challenges or requests for remediation and receive responses (for example, a human rights grievance and remediation mechanism). This process should include when and how frequently to communicate, the level of detail you need to provide, and the level of AI knowledge of stakeholders. Evaluate contestability requirements for both internal and external stakeholders and interested parties, including accessibility needs. 

7.1.4. Assign an accountable person to oversee concerns, challenges and requests for remediation. 

7.1.5. Create and document a review process to evaluate stakeholder contests of AI system use across the organisation, including any concerns raised by stakeholder groups and requests for information.

Procurement guidance for guardrail 7: Agree with your supplier a process to raise issues and contested outcomes. Reflect this in contracts and project documentation.

Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

Organisations must provide information to organisations downstream in the AI supply chain so they can understand the components of the AI system and how it was built, and can understand and manage the risks of using the AI system.

8.1. Transparency between developers and deployers

AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to sharing information and establishing processes to provide sufficient transparency between developers and deployers of AI systems.

Key concept: When using open-source AI models, deployers need to consider which safe and responsible AI measures the developer has implemented and their effectiveness. Developers using open-source AI models should be transparent about the safe and responsible AI practices they have implemented and what further practices they recommend for deployers of their AI system, AI model or component.

8.1.1. Organisations developing AI systems, AI models or components (systems) should supply deployers of their systems with as much of the following information as possible while protecting commercially sensitive information (a minimal illustrative record follows this list):

  • capabilities and limitations of the system
  • technical details of the system including architecture, description of components and characteristics
  • test use cases and results of the system relevant to the deployer’s use of the system
  • known risks and mitigations put in place related to the deployer’s use of the system
  • data management processes for training and testing data including data quality, known bias and provenance
  • privacy, security and cybersecurity practices including compliance with standards and best practice relevant to the deployer’s use of the system
  • transparency mechanisms implemented for AI-generated content, interactions and decisions
  • any known potential bias, and actions taken to minimise the negative effects of unwanted bias and remove ethical prejudices from the AI solution or component.
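
The record below is a minimal, hypothetical example of how a developer might package this information for a deployer. The keys and example values mirror the list above and are assumptions for illustration; they are not a prescribed format.

  import json

  supplier_disclosure = {
      "system": "example-document-classifier",
      "capabilities": ["classify incoming correspondence into 12 categories"],
      "limitations": ["not evaluated on languages other than English"],
      "technical_summary": "fine-tuned transformer classifier served behind a REST API",
      "relevant_test_results": {"accuracy_holdout": 0.93, "macro_f1": 0.91},
      "known_risks_and_mitigations": [
          {"risk": "misclassification of urgent complaints",
           "mitigation": "low-confidence items are routed to staff for review"},
      ],
      "training_data_management": {
          "sources": ["licensed corpus", "deployer-supplied historical records"],
          "known_bias": "under-representation of regional correspondence",
          "provenance_documented": True,
      },
      "security_practices": ["Essential Eight controls applied", "annual penetration test"],
      "transparency_mechanisms": ["decisions logged and explainable to reviewing staff"],
  }

  print(json.dumps(supplier_disclosure, indent=2))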

8.1.2. Organisations deploying AI systems or components must share the following information with their suppliers of AI models, systems or components:

  • expected use of the AI system, component or model
  • any unexpected and unwanted bias resulting from use of the system. Where data privacy is a consideration, deployers should share as much as possible, such as data profiles or sample synthetic data, to highlight the issue and allow the outcome to be replicated without compromising data privacy or security.
  • issues, faults and incidents that occur with the system.

8.1.3. Agree with your suppliers of systems:

  • responsibility and accountability for monitoring and evaluation of system performance
  • responsibility and accountability for issue identification, resolution and system updates
  • responsibility and accountability for human oversight and intervention and when to take action
  • process for raising issues, faults and incidents including contested outcomes. Ensure your process protects user and stakeholder privacy. 

8.1.4. Ensure you’ve included the required information in contracts with suppliers of systems, including when information must be updated.

8.1.5. Schedule regular reviews throughout the lifecycle of the system based on timed intervals and as a result of milestones or events.

Procurement guidance for guardrail 8: Agree with your supplier on roles, responsibilities and information flows across the lifecycle of the AI system, from initial implementation through to end of life. Reflect this in contracts and project documentation.

Guardrail 9: Keep and maintain records to allow third parties to assess compliance with guardrails.

Organisations must maintain records to demonstrate that they have implemented and are complying with the guardrails. This includes maintaining an AI inventory and consistent AI system documentation. These records may be required as inputs into any future conformity assessments mandated for use in high-risk settings.

9.1. AI inventory and consistent documentation

AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to adopting an inventory of the AI systems you use and deploy. Define and apply documentation standards for these systems.

9.1.1. Create and maintain an up-to-date, organisation-wide inventory of AI systems. For each AI system, record (a trimmed illustrative sketch follows this list):

  • people accountable
  • purpose and business goals
  • capabilities and limitations of the AI system
  • technical requirements and components
  • datasets used for training and testing, and their provenance
  • technical specifications
  • acceptance criteria and test results
  • identified risks, potential impacts and relevant controls
  • any impact assessments and outcomes
  • any system audit requirements and outcomes
  • dates of review.
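
A trimmed, hypothetical sketch of two inventory entries is shown below, along with a simple check of review dates. Real entries would carry every field listed above (accountable people, acceptance criteria, audit outcomes and so on); the names and values here are illustrative assumptions.

  from datetime import date

  ai_inventory = [
      {
          "system": "enquiry-assistant",
          "accountable_owner": "Director, Customer Services",
          "purpose": "summarise customer enquiries for staff",
          "identified_risks": ["summary omits key complaint details"],
          "next_review": date(2025, 3, 1),
      },
      {
          "system": "document-classifier",
          "accountable_owner": "Chief Data Officer",
          "purpose": "route incoming correspondence",
          "identified_risks": ["misclassification of urgent complaints"],
          "next_review": date(2024, 11, 15),
      },
  ]

  def overdue_reviews(inventory: list[dict], today: date) -> list[str]:
      """Return the names of systems whose scheduled review date has passed."""
      return [entry["system"] for entry in inventory if entry["next_review"] < today]

  print(overdue_reviews(ai_inventory, date(2025, 1, 1)))  # ['document-classifier']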

9.2. Critical system documentation 

AI Ethics Principle 6: Transparency and explainability
AI Ethics Principle 8: Accountability

Commit to understanding and documenting critical information about each AI system you deploy and use. Include the purpose, context, expected benefit and sufficient technical detail for the system to be understood. Be aware that the documentation you record will be the foundation for demonstrating compliance with future regulation in the form of conformity assessments.

9.2.1. Create and document the business goals, desired outcomes and obligations for each AI system the organisation deploys and uses. Periodically review this with reference to the organisation’s strategy, values and risk tolerance. 

9.2.2. Document the scope for each AI system, including intended use cases, capabilities, limitations, expected contexts, and what responsible use looks like for an end user or affected stakeholder. Note that the unique characteristics of AI systems mean their use can drift beyond the intended use and context without explicit changes to the system or any notice.

9.2.3. Document the risk management process including identified risks and mitigation implemented for the AI system or AI model. 

9.2.4. Document or request from your system provider the relevant technical details of the system or model that you may need for others to understand the system. For example, expected use, overview of system architecture and design, information about the model and training data, overview of data flows, and reliance on or links to other digital systems. 

9.2.5. Document the testing methodology applied and results of testing for the AI system or AI model. Request from your supplier the testing methodology and results during the development of the AI system and model. 

9.2.6. Document the accountable people and the mechanisms for human control and oversight for the deployed AI systems.

9.2.7. Ensure documentation related to each AI system is recorded in the inventory at a sufficient and consistent level of detail to inform the accountable and responsible parties and any third-party stakeholders.  This will enable completion of future conformity assessments to demonstrate compliance with mandated guardrails.

Procurement guidance for guardrail 9: Work with your supplier to understand and document the expected use, capabilities and limitations of the AI system or component. This should include technical details of the system and the data used in relation to the AI system (including the use of third-party data). Integrate expectations into contracts, including ongoing scheduled reviews.

Guardrail 10: Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

It is critical for AI deployers to identify and engage with stakeholders over the life of the AI system. This helps to identify potential harms and to understand whether there are any potential or real unintended consequences from the use of AI. Deployers must identify potential bias, minimise negative effects of unwanted bias, ensure accessibility and remove ethical prejudices from the AI solution or component.

10.1. Organisational-level stakeholder engagement

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 2: Human-centred values
AI Ethics Principle 3: Fairness

Commit to engaging with stakeholders – people and groups – potentially impacted by AI systems. 

10.1.1. Identify and document which key stakeholder groups may be impacted by the organisation’s use of AI in line with your AI strategy. 

10.1.2. Identify and document the needs of these key stakeholder groups in relation to your AI strategy. 

10.1.3. Identify and document which of the stakeholder needs your organisational-level AI policies and procedures will address.

10.1.4. Create processes to support ongoing engagement with stakeholders about their experience of AI systems. Make sure you identify any marginalised groups and support them appropriately. Equip stakeholders with the skills and tools necessary to give meaningful feedback.

10.2. Organisational-level diversity, inclusion and fairness

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 2: Human-centred values
AI Ethics Principle 3: Fairness

Commit to creating and documenting a process so any use of AI contributes to safe, fair and sustainable outcomes.

10.2.1. Define and document the organisation’s responsibility for ensuring that AI systems do not undermine diversity, inclusion and fairness.

10.2.2. Define and document organisational-level goals relating to diversity, inclusion and fairness in the deployment and use of AI systems.

10.2.3. Evaluate whether and how the current or planned use of AI may impact the organisation’s pre-existing responsibilities and programs related to creating a positive impact. For example, human rights, diversity and inclusion, accessibility and environmental responsibilities.

10.2.4. Document and operationalise a responsibility to prevent unwanted bias, discrimination and other risk factors that could impact diversity, inclusion and fairness in leadership responsibilities and the organisation’s AI strategy.

10.3. System-level stakeholders, points of human interaction and impact of potential harm

AI Ethics Principle 4: Privacy protection and security
AI Ethics Principle 6: Transparency and explainability

Commit to system-level stakeholder engagement and evaluation of potential harm.

Key concept: Stakeholder engagement is effective in responsible AI system deployment, particularly when carried out at the earliest possible stages in the AI lifecycle and embedded throughout the end‑to‑end lifecycle.

10.3.1. Identify and document where expected users interact with each AI system, including:

  • user interactions with the system or AI system-generated content
  • when the system processes an individual’s personal data
  • when the system makes or influences a decision about a person or group of people.

10.3.2. Identify and document the stakeholder groups for each system. 

10.3.3. For each identified interaction with a human, evaluate and document if the interaction has the potential to cause harm to an individual, group or society at large.

10.3.4. When this evaluation indicates that an AI system could harm people or groups, or pose a material risk to the organisation, perform and document an appropriate impact assessment.

10.4. System-level diversity, inclusion and fairness

AI Ethics Principle 1: Human, social and environmental wellbeing.
AI Ethics Principle 2: Human-centred values
AI Ethics Principle 3: Fairness

Commit to processes that support fair and sustainable outcomes for AI systems and their uses.

Key concept: Organisations need to evaluate the potential impact of unwanted bias on the AI systems they deploy and use, including developing strategies to identify potential biases. Existing standards, guidance and technical reports, such as ISO/IEC TR 24027:2021, Information technology – Artificial intelligence (AI) – Bias in AI systems and AI aided decision making, may help. As understanding and expectations evolve, stay informed of new developments in this area, where relevant.

10.4.1. Evaluate and document the potential impact of each AI system in relation to diversity, inclusion and fairness. Identify and mitigate risks of unwanted bias or discriminatory outputs, including for marginalised groups.

10.4.2. Evaluate how each AI system may support or undermine any existing legal obligation or program with a positive social impact. These include human rights, diversity and inclusion, accessibility and environmental responsibilities.

10.4.3. Define and document how you have embedded accessibility obligations (such as inclusive design) in the deployment and use of each AI system.

10.4.4. For each AI system, define and document the stages in the AI lifecycle where you will need meaningful human oversight to meet organisational, legal and ethical goals.

Procurement guidance for guardrail 10: Work with your supplier to undertake AI impact assessments and understand the needs of system stakeholders. Understand the actions your suppliers have taken to identify potential bias, minimise the negative effects of unwanted bias, implement accessibility and remove ethical prejudices from the AI solution or component. Ensure you haven’t reintroduced any unwanted bias during deployment.