3. Measure and manage risks: implement AI-specific risk management
AI risks change fundamentally depending on the type and complexity of your AI systems. Risks often emerge from how an AI system behaves in different situations and use cases, rather than only from software updates. AI systems can rapidly amplify small issues into significant problems.
For example, an AI chatbot that answers simple questions during business hours, when it can be monitored by a staff member, is a low-risk use of AI. The risks expand, however, if that chatbot operates 24/7, without human oversight, and answers more complex questions.
To use AI responsibly, organisations need to be able to identify and manage its risks.
3.1 Establish a fit-for-purpose risk management framework
An effective risk management framework supports organisations to identify and manage the risks of using AI, set clear rules about what risks are acceptable, and regularly check how AI systems are working over the lifecycle.
3.1.1 Create and document:
- a risk management framework that addresses the specific characteristics and risks of AI systems
- organisational-level risk tolerance and criteria to determine acceptable and unacceptable risks for the development and deployment of AI systems. This should include the significance and likelihood of potential harms to affected stakeholders, in line with the AI policy and objectives.
- AI impact assessment, risk assessment and risk treatment processes, including criteria for reassessment over the lifecycle of an AI system. Identify and document any specific use cases or qualities of AI systems that represent an unacceptable risk to stakeholders or the organisation, in line with the organisation’s risk tolerance.
3.1.2 Ensure that risk management processes include steps to identify, assess and treat risks arising from other parties in the AI supply chain, such as third-party developers and third-party deployers. Specific risks relating to open-source AI models, systems and components should be considered by both providers and consumers of these technologies.
3.1.3 Adopt or develop clear and consistent reporting formats, such as data sheets, model cards, or system cards, to communicate appropriate risk management outcomes, including residual risks, to relevant stakeholders (DEV).
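The exact fields in these reporting formats will vary by organisation and by AI system. The sketch below illustrates one possible minimal model-card structure for communicating risk management outcomes, including residual risks; all field names and values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import List

# Illustrative model-card sketch: field names are assumptions, not a prescribed schema.
@dataclass
class ModelCard:
    system_name: str
    version: str
    intended_use: str               # documented use cases the system was assessed for
    out_of_scope_uses: List[str]    # uses assessed as posing an unacceptable risk
    known_limitations: List[str]    # e.g. data gaps, performance caveats
    residual_risks: List[str]       # risks remaining after treatment (see 3.3)
    risk_owner: str                 # accountable person or role
    last_assessed: str              # date of the most recent risk assessment

card = ModelCard(
    system_name="Customer support chatbot",
    version="1.2.0",
    intended_use="Answer routine product questions during business hours",
    out_of_scope_uses=["Financial or legal advice"],
    known_limitations=["English-language queries only"],
    residual_risks=["Occasional out-of-date product information"],
    risk_owner="AI governance lead",
    last_assessed="2025-01-15",
)
print(card)
```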
3.2 Assess AI system risks
Using proportionate, robust methods to assess AI system risks is a key part of the operation of the risk management framework.
3.2.1 Establish a triage system to determine which AI systems may pose an enhanced or unacceptable risk, aligned to the organisation’s context and risk tolerance (see Foundations Triage template).
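One way to operationalise such a triage step is a simple tiering rule over a small set of screening questions. The questions, tiers and thresholds below are illustrative assumptions aligned to an organisation's own risk tolerance, not the Foundations Triage template itself.

```python
# Illustrative triage sketch: questions and tiers are assumptions, not the
# Foundations Triage template.
def triage_ai_system(affects_individuals_rights: bool,
                     operates_without_human_oversight: bool,
                     uses_sensitive_personal_data: bool,
                     prohibited_use_case: bool) -> str:
    """Return a risk tier used to decide the depth of assessment required."""
    if prohibited_use_case:
        return "unacceptable"   # do not develop or deploy
    flags = sum([affects_individuals_rights,
                 operates_without_human_oversight,
                 uses_sensitive_personal_data])
    if flags >= 2:
        return "enhanced"       # full impact and risk assessment required
    if flags == 1:
        return "standard"       # routine risk assessment
    return "low"                # lightweight review and periodic monitoring

# Example: an unsupervised 24/7 chatbot handling personal data
print(triage_ai_system(False, True, True, False))  # -> "enhanced"
```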
3.2.2 Perform and document a risk assessment and evaluation for the specific requirements, characteristics and documented use cases of each AI system, including systems developed or procured from third-party suppliers.
3.2.3 In undertaking AI system risk assessments, take the following steps to evaluate the likelihood and consequence of each risk, as well as the consequence of not deploying the AI system (an illustrative scoring sketch follows this list):
- Identify potential severity and likelihood of harms to stakeholders, drawing on the Stakeholder Impact Assessment (see 2.1.1 – 2.1.6).
- Identify legal, commercial and reputational risks, such as failing to meet legal obligations, organisational commitments to ESG, diversity, inclusion and accessibility, or programs supporting diversity, equity and fairness.
- Consider the potential amplified and emerging data governance risks across each phase of the AI system lifecycle, including before and after model training.
- Analyse risks systemically using risk models to identify the sources and pathways through which AI systems could produce the identified risks.
- Compare the estimated value or level of identified risks to pre‑determined organisational risk criteria (see 3.1.1) or those defined by regulatory bodies or stakeholders.
- Document any specific use cases or qualities that represent an unacceptable level of risk to stakeholders or the organisation.
- Communicate risk assessments in clear reporting formats to relevant stakeholders.
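To make the analysis and comparison steps concrete, the sketch below scores each risk with an assumed 5x5 likelihood-consequence matrix and checks it against illustrative organisational risk criteria. The scales and thresholds are assumptions, not values prescribed by this standard.

```python
# Illustrative risk scoring sketch: the 5x5 scales and the thresholds are
# assumptions standing in for organisational risk criteria (see 3.1.1).
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_rating(likelihood: str, consequence: str) -> int:
    """Combine likelihood and consequence into a single score (1-25)."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def classify(score: int, acceptable_max: int = 6, unacceptable_min: int = 15) -> str:
    """Compare the score to pre-determined organisational risk criteria."""
    if score >= unacceptable_min:
        return "unacceptable - redesign or do not deploy"
    if score > acceptable_max:
        return "treat - requires a risk treatment plan (see 3.3)"
    return "acceptable - document and monitor"

score = risk_rating("possible", "major")   # e.g. harmful chatbot advice to a customer
print(score, classify(score))              # -> 12 treat - requires a risk treatment plan (see 3.3)
```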
3.3 Implement controls for AI system risks
Where risks are identified, risk treatment plans make it clear how risks will be mitigated.
3.3.1 Create, document and implement a risk treatment plan to prioritise, select and implement treatment options (e.g. risk avoidance, transfer, acceptance, reduction) and controls to mitigate identified risks. Reassess risks after controls are implemented to verify their effectiveness.
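A treatment plan can be captured as a simple structured record so that the selected option, controls and reassessed residual risk stay linked to the original risk. The fields below are illustrative assumptions rather than a prescribed treatment-plan format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch: fields are assumptions, not a prescribed treatment-plan format.
@dataclass
class RiskTreatment:
    risk_id: str
    description: str
    option: str                  # "avoid", "transfer", "accept" or "reduce"
    controls: List[str] = field(default_factory=list)
    initial_rating: int = 0      # score before treatment (see 3.2.3)
    residual_rating: int = 0     # score reassessed after controls are implemented
    owner: str = ""
    review_date: str = ""

plan = RiskTreatment(
    risk_id="R-012",
    description="Chatbot gives incorrect advice outside business hours",
    option="reduce",
    controls=["Restrict after-hours answers to FAQ topics",
              "Escalate low-confidence queries to a staff member"],
    initial_rating=12,
    residual_rating=4,
    owner="Customer operations manager",
    review_date="2025-06-30",
)
print(plan)
```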
3.3.2 Communicate risk treatment plans in clear reporting formats to relevant stakeholders.
3.3.3 Create and document a deployment plan that includes the response, recovery and communication steps to follow if residual risks are realised.
3.3.4 Research, document and implement leading-practice safety measures as safeguards, as appropriate for the identified risks (DEV).
3.4 Monitor and report incidents
Reporting incidents when they happen and communicating the steps you’ve taken are essential to building trust with stakeholders and meeting regulatory obligations.
3.4.1 Track, document and report relevant information about serious incidents and possible corrective measures to relevant regulators and/or the public in a reasonable timeframe. Reporting near‑misses and corrective measures is good practice. Communication of corrective measures should consider privacy and cybersecurity risks.
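Incident tracking can be supported by a minimal structured log that records severity, timing and corrective measures, from which regulator- or public-facing reports can be drawn. The fields and the "serious incident" rule below are illustrative assumptions, not legal or regulatory thresholds.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch: fields and the "serious incident" rule are assumptions,
# not legal or regulatory thresholds.
@dataclass
class IncidentRecord:
    incident_id: str
    occurred_on: date
    description: str
    severity: str            # e.g. "near-miss", "minor", "serious"
    corrective_measures: str
    reported_to_regulator: bool = False

def requires_external_report(incident: IncidentRecord) -> bool:
    """Flag incidents that should be reported to regulators and/or the public."""
    return incident.severity == "serious"

incident = IncidentRecord(
    incident_id="INC-007",
    occurred_on=date(2025, 2, 3),
    description="Chatbot disclosed another customer's booking details",
    severity="serious",
    corrective_measures="Disabled after-hours mode; patched retrieval filter",
)
print(requires_external_report(incident))  # -> True
```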
3.4.2 Create and document a process to evaluate and fulfil reporting and disclosure obligations relevant to AI system usage, such as those under the Online Safety Act, including documentation of implemented safety measures such as notices and incident reporting.
3.4.3 Conform to and document data breach reporting requirements and liabilities under related standards, for example the Notifiable Data Breaches scheme administered by the Office of the Australian Information Commissioner.
3.4.4 Maintain two‑way communication between developers and deployers for incident reporting, sharing performance insights and coordinating responses to identified issues.
3.4.5 Monitor and evaluate risk assessments and treatment plans on a regular, periodic basis, or when a significant change to the use case or the system occurs, or when new risks are identified. This includes responding to the findings of impact assessments or to risk treatment plans that prove insufficient.
3.4.6 Monitor and evaluate the overall effectiveness of risk management processes and continually improve them.