Our department has summarised high-level learnings from these experiences.
What we heard
The Australian AI Ethics Principles are relevant to any organisation involved in AI (private, public, large or small). We heard this message consistently from pilot participants. They also expect the Australian Government to lead by example and implement the principles.
Businesses agreed that pressure for AI to be ethically designed and deployed will grow. Public demand and government or regulator policies will drive ethical expectations and practices over time.
By implementing the Australian AI Ethics Principles, businesses can:
- be prepared ahead of these drivers
- exemplify best practice
- be ready to meet community expectations or any changes in standards or laws.
Resolving ethical issues can be complex, and businesses may need more help. This help can come from:
- professional or industry bodies
- academia or experts.
Smaller businesses in particular need greater support through things like:
- simple online training or certification
- more examples such as case studies
- cost-effective methods and education.
Everyone can benefit from greater education on the value of applying the Australian AI Ethics Principles. This includes showing how to achieve them in practice and the role they play throughout an AI system's life cycle.
What businesses (and our department) learnt
Businesses found it helpful to implement the Australian AI Ethics Principles and compare them to their existing ethical practices. This enabled them to test and refine their internal processes or consider the need for new ones.
The pilot highlighted a number of complex challenges businesses face when applying the principles.
Responsibilities of AI purchasers and AI developers differ
Businesses buying AI solutions recognised that they could not outsource their accountability for AI ethics. To meet the principles, they relied on their internal due diligence processes and on information from their vendors about:
- how the model makes decisions
- what data it was trained on, to minimise the risk of bias
- how the model operates, and its limitations, across a range of applications.
Developers of AI found it challenging to manage the ethical impacts that occur after they sell their AI system and no longer control its deployment.
Key insights on these challenges:
- Businesses need to explore and discuss meeting ethics principles with vendors. Determining ‘Accountability’ is particularly important.
- AI developers need to understand their responsibility in setting up the system correctly for certain specified applications. They need to be transparent about any limitations of the system in untested or untrained applications, and share this with their purchasers (deployers).
- Where systems are not appropriate for the application intended by the purchaser, both parties should consider getting the AI system redesigned to suit the new application. They would also need to test the redesigned AI system to ensure it still meets the ethics principles.
Businesses need to influence culture and improve staff capabilities
Several participants raised the challenges staff face when working on AI systems, as well as the challenge of building staff capability around AI ethics.
Businesses need to:
- raise awareness of AI ethics
- educate staff on the benefits that implementing AI ethics has for the company and its customers
- improve training.
Some principles are more challenging to practically implement
Principles like ‘Fairness’ involve trade-offs. These can be harder to judge and measure. To implement fairness, it is appropriate and necessary for employees to make value judgements on:
- who is most at risk
- what protections would address potential impacts and meet legal requirements.
Businesses can, and should, continually monitor and re-assess whether these value judgements remain right. They should address any issues when they steer off course. They should also document this and refer any serious issues to relevant leaders to consider as needed.
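As one illustration of why fairness can be hard to judge and measure, the sketch below computes a single, narrow fairness metric (the gap in selection rates between demographic groups, sometimes called demographic parity). The metric, the group labels, the example decision data and the 0.2 threshold are all hypothetical, and choosing them is itself one of the value judgements described above.

```python
# Hypothetical sketch: measuring one narrow aspect of fairness
# (demographic parity) for a binary classifier's decisions.
# All data and thresholds below are invented for illustration.

def selection_rate(outcomes):
    """Fraction of positive (e.g. approved) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rates between any two groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Example: loan approvals (1 = approved) recorded per demographic group.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 = 62.5% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Selection-rate gap: {gap:.3f}")  # prints "Selection-rate gap: 0.250"
if gap > 0.2:  # the threshold is itself a value judgement to document
    print("Gap exceeds threshold - escalate for review")
```

A single metric like this cannot settle who is most at risk or what protections are adequate; it only makes one trade-off visible so it can be monitored, documented and escalated as the pilot participants described.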
What governance and solutions can support ethical AI
Businesses can support their ethical AI efforts in many ways:
- Set appropriate standards and expectations of responsible behaviour when staff deploy AI. For example, via a responsible AI policy and supporting guidance.
- Include AI applications in risk assessment processes and data governance arrangements.
- Ask AI vendors questions about the AI they have developed.
- Form multi-disciplinary teams to develop and deploy AI systems. They can consider and identify impacts from diverse perspectives.
- Establish processes to ensure there is clear human accountability for AI-enabled decisions and appropriate senior approvals to manage ethical risks. For example, a cross-functional body to approve an AI system’s ethical robustness.
- Increase awareness-raising activities and training on ethical AI for staff.
Some Australian Public Service agencies already have data ethics frameworks and governance processes to manage the ethical risks of AI applications.
Our department will continue to work with agencies to encourage greater uptake and consistency with the Australian AI Ethics Principles.
Read more on the government’s commitment to make Australia a global leader in responsible and inclusive AI.
Give us feedback
Are you using or interested in adopting the Australian AI Ethics Principles in your work? Let us know your feedback or get in touch with us by emailing Artificial.Intelligence@industry.gov.au.