Developing the AI Ethics Framework and principles

To develop the AI Ethics Framework, we consulted with stakeholders across Australia.

How we consulted with the public

The Minister released a discussion paper on 5 April 2019 to encourage conversations about AI ethics in Australia. This paper included a set of draft AI ethics principles.

During this consultation phase, our department:

  • received more than 130 written submissions
  • conducted stakeholder roundtables and targeted consultation in Sydney, Melbourne, Brisbane and Canberra
  • collaborated with a group of AI experts to develop the revised set of AI ethics principles

We consulted 272 stakeholder organisations and individuals:

  • 53 in the government sector (44 federal and 9 state and territory)
  • 96 in the private sector (83 large firms and 13 small to medium enterprises)
  • 104 in civil society (39 academics, 17 research organisations, 18 industry associations, 15 community organisations and 15 peak bodies)
  • 19 individual members of the public

Read the AI ethics discussion paper and published submissions on our Consultation hub.

What we heard from consultations

General insights

Stakeholders said:

  • They generally support a principles-based framework to guide the design, development, deployment and operation of AI in Australia.
  • The draft principles need to be more clearly defined to clarify how they would be applied, and how they interact with each other – see below for detailed feedback on each principle.
  • The framework needs to be iterative and flexible to ensure it adapts to technology change.
  • ‘Security’ is missing as a principle – including software, data and system security, cyber security, and physical and social security.
  • The principles need to be accompanied by pragmatic guidance on how to apply them – AI applications are highly context specific, which makes it difficult to apply one set of principles to all scenarios.
  • The framework may need to be supplemented with regulations, depending on the risks for different AI applications. New regulations should only be implemented if there are clear regulatory gaps and a failure of the market to address those gaps.
  • Government needs to engage with the community sector and general public on how AI is developed and used.
  • Diversity is an important part of ethical AI – including diversity in those working on AI, ensuring training data sets are inclusive and representative, and consultation with diverse stakeholders.
  • Human oversight is an important feature of ethical AI to ensure risks relating to diversity and inclusion are managed effectively. This includes the ability to monitor AI decisions and override those decisions if needed.
  • Australia’s framework should align with other international frameworks, where appropriate. AI is not geographically constrained and businesses operate in an international market.

Feedback on draft principles

Draft Principle 1 – Generates net-benefits

Stakeholders said this principle:

  • Is likely to be impractical, as calculating net-benefit is open to interpretation without a clear definition.
  • Assumes a level of ‘acceptable’ harm, conflicting with Draft Principle 2: Do no harm.

Draft Principle 2 – Do no harm

Stakeholders said this principle:

  • Needs improved clarity on the definition, threshold and scope of ‘harm’.
  • Contradicts Draft Principle 1 in intent – these two principles should be combined into a single revised principle to promote wellbeing and reduce harm.

Draft Principle 3 – Regulatory and legal compliance

Stakeholders said this principle:

  • May be redundant, as legal compliance is a baseline condition that should be met before ethical considerations arise.
  • Should be refocused to address issues like human rights, democratic values, diversity and the rule of law.

Draft Principle 4 – Privacy protection

Stakeholders said this principle:

  • Needs to have consistent terminology and be aligned with legal definitions.
  • Needs improved clarity on how it interacts with current statutes and common law principles.
  • Does not adequately capture the scope of the potential impact – it should also consider data governance and broader information privacy issues.

Draft Principle 5 – Fairness

Stakeholders said this principle:

  • Needs improved clarity on the definition of ‘fairness’.
  • Needs more emphasis on ensuring minority groups are not unintentionally discriminated against.
  • Should include the concepts of inclusion and accessibility.
  • Should not just be limited to algorithms and training data – fairness needs to be considered over the full lifecycle of an AI system.

Draft Principle 6 – Transparency & Explainability

Stakeholders said this principle:

  • May be challenging to apply in practice, due to the complexity of explaining AI systems and decisions in a way that is easy to understand.
  • Should ensure that people are provided with a reasonable justification of an AI system’s outcome in a user-friendly format.
  • Should ensure that requirements for explainability are applied in a way that is proportional to the potential impact and risks of a given AI system.

Draft Principle 7 – Contestability

Stakeholders said this principle:

  • Needs improved clarity on ‘impact’ and its threshold.
  • Needs further guidance on how the principle would be applied – including clarity on the process for contesting decisions.
  • Needs to clearly communicate that redress for harm is possible when things go wrong, as this is vital to building public trust in AI.

Draft Principle 8 – Accountability

Stakeholders said this principle:

  • Needs improved clarity on who would be considered accountable – particularly in relation to open source algorithms and using AI systems beyond their original intent.
  • May stifle innovation by holding developers accountable, particularly for unintended consequences.
  • Should focus on accountability for the outcomes of AI systems and on ensuring appropriate levels of human oversight.