Responsible AI Network

Bringing together experts, regulatory bodies, training organisations, and practitioners to focus on responsible artificial intelligence (AI) solutions for Australian industry.

There's a global race to build guardrails for AI development and deployment, aimed at ensuring responsible practices accompany technical progress. Worldwide, new standards and regulatory changes are coming, and organisations will need to upskill and adapt to meet them.

The National AI Centre is ready to respond to this global momentum and meet the need for clear industry guidance.

The Responsible AI Network (RAIN) is a world-first cross-ecosystem collaboration aimed at uplifting the practice of responsible AI across Australia's commercial sector. RAIN's advice and best-practice guidance falls into the following 7 actionable pillars:

  • law
  • standards
  • principles
  • governance
  • leadership
  • technology
  • design.

Knowledge Partners

NAIC is collaborating with a number of Knowledge Partners. Each partner brings a specific skillset to the collaboration and Australia's AI Ecosystem.

  • AIIA has joined RAIN to advocate for and uplift the practice of responsible AI, leveraging its influential and innovative technology company members.
  • The Australian Industry Group will support traditional, innovative and emerging industry sectors with the aim to uplift Australia's responsible AI practice.
  • CEDA brings the ability to position AI as a fundamental driver of our economic development, and advocates for its responsible and sustainable use.
  • The Gradient Institute brings skills in building ethics, accountability and transparency into AI systems, providing training and technical guidance for organisations operating AI systems.
  • Standards Australia seeks to democratise the ability of all businesses to use standards to deliver responsible AI.
  • Tech Council of Australia will bring the Australian technology sector together around responsible AI principles and practices.
  • The Governance Institute will provide expertise on corporate governance, risk management, and corporate accountability.
  • The Ethics Centre will provide vision and discussion about the opportunity presented by AI.
  • The Human Technology Institute at UTS will focus on skills, tools and policy advice for Australia's businesses.

Join the Responsible AI Network

The joining process is currently under development. If you are interested in being part of the Responsible AI Network, contact us by email for further instructions.

Responsible AI Network resources


Navigating AI: Analysis and guidance on the use and adoption of AI

This report provides a deeper understanding of the AI regulatory landscape globally and within Australia, and of the need to continue progressing the conversation around appropriate regulation.

Stakeholders from across government and industry contributed to the development of this report, including a selection of AIIA members who were interviewed by KPMG. This report is ideal for leaders interested in or tasked with creating policies, governance and oversight of AI technology.

Read the report [PDF · 3.3MB].

EU: AI Act

The world’s first comprehensive AI law was agreed by lawmakers in the European Union in late 2023. In collaboration with Reuters, the World Economic Forum has summarised some of the key elements of the EU AI Act in this article.


Artificial Intelligence Management System Standard

Published in late 2023, ISO/IEC 42001 is the world’s first international standard that specifies requirements for an Artificial Intelligence Management System (AIMS) within an organisation. This comprehensive resource outlines how an organisation might use AI responsibly and effectively.

The official ISO Standard can be bought and downloaded at ISO/IEC 42001:2023 Artificial intelligence Management system.

Our partner, Standards Australia, will soon offer a new course to help professionals and organisations understand and use this standard. Register your interest in this course.


Implementing Australia's AI Ethics Principles report

To help bridge the gap between the Australian AI Ethics Principles and the business practice of responsible artificial intelligence (RAI), the National AI Centre (NAIC) has worked with Gradient Institute to develop Implementing Australia’s AI Ethics Principles: A selection of responsible AI practices and resources.

The report explores some of the practical steps needed to implement the Australian Government’s 8 AI ethics principles, explaining each practice and its organisational context, including the roles that are key to successful implementation. Practices such as impact assessments, data curation, fairness measures, pilot studies and organisational training are some of the simple but effective approaches in this report.

Read the report to learn how to implement Australia's AI Ethics Principles to create responsible AI practices.

OECD: Artificial intelligence and responsible business conduct

Over the last decade, rapid advancements in artificial intelligence (AI) and machine learning have opened up new opportunities for productivity, economic development, and advancements in various sectors, from agriculture to healthcare.

While current and future AI applications have the potential to advance responsible business, they can also pose risks to human rights, the environment and other important elements of responsible business conduct as addressed in the OECD Guidelines for multinational enterprises.

For example, the use of AI in hiring, law enforcement, lending and other fields could lead to discriminatory outcomes through reliance on inappropriately biased data or algorithms.

Because AI systems rely on and collect increasing amounts of personal data, there is a risk that AI will adversely impact privacy. When used in autonomous weapons systems, AI could affect the human rights to life, personal security and due process.

This background paper provides an overview of the different types of AI applications, the ways in which humans and AI can interact, and potential adverse human rights and societal impacts that AI technology may introduce.

Read the paper [PDF · 2.5MB].


eSafety Commissioner: Generative AI – position statement

Recent advancements have rapidly improved generative AI due to the availability of more training data, enhanced artificial neural networks with more parameters, and greater computing power. Some experts now claim AI systems are moving rapidly towards ‘human-competitive intelligence’. This could impact almost every aspect of our lives, in both positive and negative ways.

The possible threats related to generative AI are not just theoretical – real world harms are already present. The online industry can take a lead role by adopting a 'safety by design' approach. Technology companies can uphold these principles by making sure they incorporate safety measures at every stage of the product lifecycle.

eSafety recognises the need to safeguard the rights of users, preserve the benefits of new tools and foster healthy innovation.

Read the full position statement.

Navigating AI governance: A comprehensive look at existing and new EU and US AI regulations

An overview of EU, US and China AI regulations and governance initiatives, including key action points to help organisations with AI compliance.

In this blog post, Daiki aims to offer pointers to both existing and new EU and US AI regulations and governance initiatives. It also provides some key action points to help organisations effectively prepare for regulatory changes and navigate the complexities of AI compliance.

Read the blog.


IEEE SA Standards Association: Prioritising people and planet as the metrics for responsible AI

What are the metrics of success for responsible AI? Defining how to be responsible with artificial intelligence systems (AIS) is critical for modern technological design. 

This report provides direction for business readers, from large enterprises to small and medium-sized businesses (SMBs), so they can use these metrics, while also informing policy makers of the issues these metrics raise for citizens and buyers.

While common business performance metrics focus primarily on financial indicators, organisations risk causing unintended harm when human wellbeing or ecological sustainability are not prioritised in their planning.

Read the report [PDF · 9.7MB].

Microsoft and Tech Council Australia: Australia's generative AI opportunity report (July 2023)

Generative AI (GAI) represents a substantial economic opportunity for Australia, with the potential to add tens of billions to the economy by 2030. But where to start? Australia’s generative AI opportunity report answers 3 crucial questions:

  • How will GAI impact occupations and the workforce?
  • What opportunities does GAI present for Australian industries?
  • How can Australian businesses seize this potentially billion-dollar opportunity?

Read the report [PDF · 2.2MB].

Responsible AI Index: a study of over 400 organisations

The study was conducted by Fifth Quadrant CX, led by the Responsible Metaverse Alliance, supported by Gradient Institute and sponsored by IAG and Transurban.

Key details:

  • 82% of respondents believe they’re taking a best-practice approach to AI, but on closer inspection only 24% are taking deliberate actions to ensure their own AI systems are developed responsibly.
  • 60% of organisations surveyed have an enterprise-wide AI strategy that is tied to their wider business strategy, compared with 51% in 2021.
  • Only 34% of organisations that have an enterprise-wide AI strategy have a CEO personally invested in driving the strategy.
  • Organisations where the CEO is responsible for driving the AI strategy have a higher RAI Index score of 66 compared with a score of 61 for those where the CEO is not taking the lead.
  • 61% of organisations now believe the benefits of taking a responsible approach to AI outweigh the costs, according to evidence from the second edition of the Responsible AI Index.

Read the report.

AI ecosystem report 2023

Commissioned by the National AI Centre (NAIC) and written by CSIRO's Data61, Australia's AI ecosystem: Catalysing an AI industry provides businesses, investors, government and research institutions with the most up-to-date analysis of Australia’s AI ecosystem and how to advance it.

Building on the findings of the NAIC's 2022 AI ecosystem momentum report, Catalysing an AI industry delves into the current landscape of Australia's AI companies and research institutes, revealing a nimble and growing AI ecosystem that's on par with some of the world's AI leaders.

Read the report to learn more about Australia's growing AI industry and how it can be accelerated.


Tools for trustworthy AI: A framework to compare implementation tools for trustworthy AI

As AI advances across economies and societies, stakeholder communities are actively exploring how best to encourage the design, development, deployment and use of AI that is human-centred and trustworthy.

This report presents a framework for comparing tools and practices to implement trustworthy AI systems as set out in the OECD AI Principles. The framework aims to help collect, structure and share information, knowledge and lessons learned to date on tools, practices and approaches for implementing trustworthy AI.

As such, it provides a way to compare tools in different use contexts. The framework will serve as the basis for the development of an interactive, publicly available database on the OECD AI Policy Observatory.

This report informs ongoing OECD work towards helping policy makers and other stakeholders implement the OECD AI Principles in practice.

Read the report.

Atlassian: Responsible technology review template

Leading Australian software company Atlassian has developed and shared a Responsible technology review template. With input and advice from the UTS Human Technology Institute, the tool can be used to help organisations review and avoid AI-related unintentional bias and harmful, unintended consequences.

To learn more about Atlassian’s approach to the development of responsible technologies, and to download the tool, go to Atlassian’s Responsible Technology Principles.


What is human-centred AI and why do you need it?

Discover the fundamentals of human-centred design for AI and why it's important in this short clip produced by the University of Technology Sydney’s Human Technology Institute (HTI) in collaboration with NAIC.

Learn about the significance of placing humans at the heart of AI development, and how human-centred AI design fosters transparency, ethics and adaptability in systems.

Watch the video What is human-centred AI and why do you need it?

Responsible AI pattern catalogue

Developed by CSIRO's Data61 and published by IEEE Software, this collection of patterns for the design of responsible AI systems can be embedded into AI systems as a product feature or as a piece of structural design across multiple architectural elements. In software engineering, a pattern is a reusable solution to a problem that commonly occurs within a given context.

The focus is on patterns that practitioners and broader stakeholders can adopt to ensure that AI systems are responsibly developed throughout the entire lifecycle, with different levels of governance. The current version of the Responsible AI pattern catalogue contains over 60 patterns to assist stakeholders at all levels in implementing responsible AI in practice.
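To make the idea of an embeddable responsible-AI pattern concrete, here is a minimal, hypothetical sketch in Python of one widely discussed pattern of this kind: a human-in-the-loop wrapper that routes high-risk model decisions to a human reviewer. The names, threshold and structure are illustrative assumptions, not taken from the catalogue itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str
    risk_score: float  # 0.0 (low risk) to 1.0 (high risk)

def human_in_the_loop(model: Callable[[str], Decision],
                      reviewer: Callable[[Decision], str],
                      risk_threshold: float = 0.7) -> Callable[[str], str]:
    """Wrap a model so that high-risk decisions require human review.

    Illustrative only: a real deployment would also log the escalation
    and record the reviewer's decision for audit purposes.
    """
    def wrapped(case: str) -> str:
        decision = model(case)
        if decision.risk_score >= risk_threshold:
            return reviewer(decision)   # escalate to a human
        return decision.outcome         # low risk: automate
    return wrapped

# Stub model and reviewer for demonstration
model = lambda case: Decision("approve", 0.9 if "loan" in case else 0.1)
reviewer = lambda decision: "needs human review"

decide = human_in_the_loop(model, reviewer)
print(decide("loan application"))  # high risk: escalated
print(decide("address update"))    # low risk: automated
```

The point of expressing such a practice as a pattern is that the wrapper is reusable: any model with a risk score can be composed with any review process, without changing either component.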

Access the Pattern Catalogue.

Towards responsible AI in the era of ChatGPT: A reference architecture for designing foundation model-based AI systems

The release of ChatGPT, Bard and other large language model (LLM)-based chatbots has drawn huge attention to foundation models worldwide.

There is a growing trend for foundation models to serve as the fundamental building blocks of most future AI systems. To address the new challenges of responsible AI, along with shifting system boundaries and evolving interfaces, the researchers propose a reference architecture for designing foundation model-based AI systems.

Towards responsible AI in the era of ChatGPT: A reference architecture for designing foundation model-based AI systems provides readers with the fundamental building blocks to design future AI systems.

Discover them and read the research [PDF · 227KB].

Microsoft: HAX Toolkit

Developed by Microsoft Research and Microsoft’s advisory body on AI ethics, the Human-AI Experience (HAX) Toolkit is a comprehensive library of practical resources and tools for technology and design teams.

Explore it on HAX Toolkit.

Google: People + AI guidebook

Google’s People + AI guidebook is a set of methods, best practices and examples for designing with AI, incorporating key elements of responsible AI throughout.

These resources have been developed by Google, with input from industry experts and academics. The materials include a series of practical tools and templates that teams can use to run their own responsible AI workshops and discussions.

Find them at People + AI guidebook.
