
Deploying and using artificial intelligence (AI) products and services transcends geographic boundaries and national borders, yet different countries have their own AI governance rules. This case study looks at opportunities to align governance to better manage risk and realise opportunities.

Companies looking to deploy AI across regions could face the complexity of meeting different or conflicting conditions. Compatible AI governance frameworks ease cross-border deployment of AI by lessening regulatory burdens. They could also reduce compliance costs for companies.

Australia and Singapore AI governance frameworks

Governments around the world are supporting the responsible development and deployment of AI by creating: 

  • regulations
  • principles
  • frameworks
  • guidelines
  • practices. 

Australia and Singapore are no exception. Both countries have detailed AI ethics principles and frameworks. These promote the responsible use and design of AI while preserving public trust in AI technologies.

Australia’s 8 voluntary AI Ethics Principles are part of the Australian Government’s plan to make Australia a global leader in responsible and ethical AI. The principles cover: 

  • human, societal and environmental wellbeing
  • human-centred values
  • fairness
  • privacy protection and security, including for data
  • reliability and safety
  • transparency and explainability
  • contestability
  • accountability.

Singapore’s voluntary Model AI Governance Framework (Model Framework) guiding principles are available on the Singapore Government’s Personal Data Protection Commission website. Published in 2019 and updated in 2020, the framework’s guiding principles are that:

  • decisions made by AI should be explainable, transparent and fair
  • AI systems should be human-centric and safe.

The Model Framework gives guidance on 4 main areas for organisations to consider when developing and deploying AI:

  • internal governance structures and measures 
  • determining the level of human involvement in AI-augmented decision-making 
  • operations management
  • stakeholder interaction and communication.

Singapore and Australia collaboration

To help digital trade and collaboration, Singapore and Australia entered into a Digital Economy Agreement (DEA) in December 2020. One of the aims of this DEA is to encourage cooperation between organisations in Singapore and Australia in growing areas such as AI. The DEA gives organisations the scope to trial use cases and technologies across the 2 countries. 

Our department developed a use case with Singapore’s Infocomm Media Development Authority (IMDA) to show the compatibility of each country’s AI governance framework. 

We invited National Australia Bank (NAB), and IMDA engaged Tookitaki, a Singaporean tech start-up, to be industry partners for the use case.

  • Tookitaki brought experience applying AI governance frameworks and offering AI-enabled compliance-related products for the finance sector.
  • NAB found a suitable AI application and applied both countries’ AI frameworks to the software.
  • As the AI framework policy owners, IMDA and our department gave the industry partners advice on each country’s framework.

About the use case 

NAB released a new machine learning-based solution, the Financial Difficulty Campaign (FDC), in October 2019 to meet its obligations under Australia’s Banking Code of Practice.

The code gives individual and small business customers, and their guarantors, safeguards and protections not set out in law. Under the code, banks have obligations covering how they serve their customers and help those experiencing financial difficulty.

NAB aimed to meet these obligations in part through the FDC, which uses:

  • a model to identify customers at risk of falling into financial difficulty
  • an SMS marketing campaign to actively engage identified customers and offer them support options.

The AI system was designed to predict financial difficulty and indicate if a customer is at risk of facing financial hardship. NAB’s Assist team also have business as usual (BAU) processes for identifying and helping these customers. These BAU processes remained unchanged when the FDC model started. 

The impacts of the COVID-19 global pandemic led to the decision to disable the original machine learning model in March 2020. Instead, NAB put in place emergency measures to help all customers at risk during the pandemic, including the home loan deferral program. The wide promotion of these programs significantly decreased the need for a model during this period of economic flux, when the bank was already in frequent contact with customers. Rather than relying on advanced tech, the most effective approach was to call customers directly to check on their circumstances. NAB did this by ramping up banker numbers.

NAB decided to put the FDC model back online when emergency measures ended in March 2021. Before restarting the model, NAB refreshed the dataset to reflect the changed economic environment and the impact of COVID-19. NAB released the updated FDC model in August 2021.

NAB considered the FDC model ideal for the use case because of the direct customer benefit and the model’s use of demographic information. The use case was an opportunity to identify and mitigate potential data biases that could lead to suboptimal model performance.

Putting AI ethics principles into practice

NAB invested substantial effort in mitigating the risk of unintended bias by choosing suitable fairness metrics to assess model outputs. Governance and policy also ensure that human decisions are made aptly and fairly. NAB aims to achieve a consistent and governed approach to using data for AI systems through its own Data Ethics Principles. Its Data Ethics Assessment:

  • gives guidance on whether an initiative is using data responsibly
  • highlights unintended results
  • helps lift awareness of ethical issues in data use.

NAB’s view is that decisions on what is fair, equitable, ethical and unbiased are human decisions that technology can support. NAB considered it too sensitive for AI alone to identify customers and send SMS support options. NAB needed a form of human intervention for validation. 

NAB also considered the FDC model against the Singapore Model Framework’s 4 areas of responsible AI development and deployment.

In this collaboration, Tookitaki shared the following with NAB:

  • experience of applying its AI governance framework in its flagship AI-enabled product, the Anti Money Laundering Suite (AMLS), with its financial institution clients
  • best practices from the model validation framework used to successfully apply its product in line with strict anti-money laundering regulations
  • experience in performing tests and assessments to ensure AMLS behaved according to its standards and values, including having an independent auditor conduct a risk assessment.

Tookitaki also shared its Explainable AI (XAI) framework which covers global, local, and contextual explainability. The following main XAI framework aspects helped build a transparent and fair AI model: 

  • Ensure the features selected for the AI model do not correlate with sensitive data such as race, gender or ethnicity.
  • Where there is sensitive data, ensure the AI model does not base a decision on it.
  • Conduct extensive counterfactual testing to ensure the transparency and fairness of the AI model.
  • Build a sound model validation framework covering a review of conceptual soundness, data management and outcomes analysis.
  • Build a multi-level explanation of the AI model that is simple, decomposable and interpretable in plain English.
  • Explain the decision-making of the AI model in human-readable English. This is essential to build trust in the AI model among end users (including internal and external audit teams), leading to greater adoption.
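Tookitaki's actual counterfactual tests are not described in detail here. The basic idea, however, can be sketched: swap only a sensitive attribute in each record and flag any case where the model's decision changes. The function, attribute names and toy model below are all hypothetical:

```python
# Hypothetical sketch of the counterfactual-testing idea; names, attributes
# and the toy model are illustrative, not Tookitaki's implementation.

def counterfactual_fairness_check(model, records, sensitive_key, alt_value):
    """Return records whose prediction changes when only the sensitive
    attribute is swapped — each one is a potential fairness issue."""
    flagged = []
    for record in records:
        counterfactual = dict(record)
        counterfactual[sensitive_key] = alt_value
        if model(record) != model(counterfactual):
            flagged.append(record)
    return flagged

# Toy "model" that (wrongly) uses gender — the check should catch it.
def biased_model(r):
    return 1 if r["arrears_months"] >= 2 or r["gender"] == "F" else 0

records = [
    {"gender": "F", "arrears_months": 0},  # decision flips if gender flips
    {"gender": "M", "arrears_months": 3},  # decision driven by arrears only
]
print(counterfactual_fairness_check(biased_model, records, "gender", "M"))
```

An empty result suggests the model's decisions do not hinge on the sensitive attribute for the tested records; any flagged record points to a decision the model based on sensitive data.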

Stakeholder interaction and communication

In this collaboration, there was useful sharing on the principles of transparency and offering channels for feedback from NAB customers. 

NAB is aware of the sensitivity of the FDC, as the model’s target audience is customers in danger of falling into financial hardship. While the goal is to help these customers as soon as NAB becomes aware, the messages offering help need tactful crafting. NAB has assigned teams to give different types of support to the identified customers.

Tookitaki’s client is a financial institution. Interaction and communication were through its governance structure, with weekly cadence calls and regular reporting.

Learning from the collaboration

The overall collaboration experience was positive and constructive. Open exchanges between the government agencies and private sector companies helped them learn from one another’s expertise and experience.

Here is a summary of the main takeaways:

  • The AI governance frameworks from Australia and Singapore align and are compatible. There were no specific obstacles that could prevent an Australian company from applying both Australia’s AI Ethics Principles and Singapore’s Model AI Governance Framework.
  • The definition of fairness could vary between cultures and from industry to industry. It will also be bound by prevailing regulations in the jurisdictions.
  • The selection of an appropriate fairness metric is not straightforward. In this respect, there are opportunities for industry to develop customised or industry-specific methods and tools to help with choosing suitable fairness metrics.
  • Organisations could consider AI governance-by-design by building governance processes and measures into the workflow of AI model development and deployment. This ensures critical AI governance processes are not inadvertently omitted. It also helps reduce the complexity of applying AI governance.