The 10 guardrails at a glance
1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
Guardrail one creates the foundation for your organisation’s use of AI. Set up the required accountability processes to guide your organisation’s safe and responsible use of AI.
2. Establish and implement a risk management process to identify and mitigate risks.
Set up a risk management process that assesses an AI system’s impact and risk based on how you use it. Begin by considering the full range of potential harms, drawing on information from a stakeholder impact assessment (guardrail 10). You must complete risk assessments on an ongoing basis to ensure the risk mitigations remain effective.
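The ongoing reassessment that guardrail 2 calls for can be supported by a simple risk register. The sketch below is illustrative only: the field names, the 1–5 likelihood and impact scales, the likelihood-times-impact score and the `needs_review` helper are assumptions, not part of the standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified AI risk, e.g. surfaced by a stakeholder impact assessment."""
    description: str
    likelihood: int                      # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int                          # assumed scale: 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = date.today()

    def severity(self) -> int:
        # A common (assumed) scoring convention: likelihood x impact.
        return self.likelihood * self.impact

def needs_review(entry: RiskEntry, today: date, max_age_days: int = 90) -> bool:
    """Flag entries whose last review is stale, to drive ongoing reassessment."""
    return (today - entry.last_reviewed).days > max_age_days
```

Sorting entries by `severity()` and filtering with `needs_review` gives a basic, repeatable review cycle; real organisations would align the scales and review cadence with their own risk framework.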
3. Protect AI systems, and implement data governance measures to manage data quality and provenance.
You must have appropriate data governance, privacy and cybersecurity measures in place to protect AI systems. These measures will differ depending on the use case and risk profile, but organisations must account for the unique characteristics of AI systems.
4. Test AI models and systems to evaluate model performance and monitor the system once deployed.
Thoroughly test AI systems and AI models before deployment, then monitor them for potential behaviour changes or unintended consequences. Perform these tests against clearly defined acceptance criteria that reflect your risk and impact assessment.
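Pre-deployment testing against clearly defined acceptance criteria might look like the following sketch. The metric names and thresholds here are illustrative assumptions; in practice the criteria should come from your own risk and impact assessment.

```python
# Illustrative acceptance criteria (assumed values, not prescribed by the standard).
ACCEPTANCE_CRITERIA = {
    "accuracy": 0.90,             # minimum acceptable accuracy
    "false_positive_rate": 0.05,  # maximum acceptable false-positive rate
}

def evaluate(metrics: dict[str, float]) -> list[str]:
    """Return the list of criteria the candidate model fails."""
    failures = []
    if metrics["accuracy"] < ACCEPTANCE_CRITERIA["accuracy"]:
        failures.append("accuracy below threshold")
    if metrics["false_positive_rate"] > ACCEPTANCE_CRITERIA["false_positive_rate"]:
        failures.append("false positive rate above threshold")
    return failures

def approve_for_deployment(metrics: dict[str, float]) -> bool:
    """Block deployment unless every acceptance criterion passes."""
    return not evaluate(metrics)
```

Running the same checks on production metrics after deployment is one way to implement the guardrail's monitoring requirement: a model that passed at release but later fails a criterion signals behaviour change.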
5. Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle.
It is critical to enable human control or intervention mechanisms as needed across the AI system life cycle. AI systems are generally made up of multiple components supplied by different parties in the supply chain. Meaningful human oversight lets you intervene when you need to and reduces the potential for unintended consequences and harms.
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
Build trust with users by giving people, society and other organisations confidence that you are using AI safely and responsibly. Disclose when you use AI, what its role is and when content is AI-generated. Disclosure can take many forms; it is up to the organisation to identify the most appropriate mechanism based on the use case, stakeholders and technology used.
7. Establish processes for people impacted by AI systems to challenge use or outcomes.
Organisations must provide processes for users, organisations, people and society impacted by AI systems to challenge that use of AI and to contest decisions, outcomes or interactions that involve AI.
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
Organisations must provide information to other organisations across the AI supply chain so those organisations can effectively understand and address risks.
9. Keep and maintain records to allow third parties to assess compliance with guardrails.
Organisations must maintain records to show that they have adopted and are complying with the guardrails. This includes maintaining an AI inventory and consistent AI system documentation.
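An AI inventory can be as simple as one structured record per system. This sketch is a minimal illustration: the fields shown (owner, risk rating, guardrails adopted) are assumptions about what such a record might hold, not requirements of the standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIInventoryRecord:
    """One entry in an organisation's AI inventory (illustrative fields)."""
    system_name: str
    purpose: str
    owner: str                   # accountable person or team (ties back to guardrail 1)
    risk_rating: str             # e.g. "low" / "medium" / "high" (assumed scale)
    guardrails_adopted: list[str]

def export_inventory(records: list[AIInventoryRecord]) -> str:
    """Serialise the inventory to JSON so third parties can assess compliance."""
    return json.dumps([asdict(r) for r in records], indent=2)
```

Keeping the export format stable and machine-readable makes it easier for auditors and supply-chain partners to review which guardrails each system has adopted.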
10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
It is critical for organisations to identify and engage with stakeholders over the life of the AI system. This helps organisations identify potential harms and understand whether there are any potential or actual unintended consequences from the use of AI. Deployers must identify potential bias, minimise the negative effects of unwanted bias, ensure accessibility and remove ethical prejudices from the AI solution or component.