Flamingo AI is a small Australian start-up specialising in artificial intelligence (AI), collaboration and employee experience. The company applies an ethical approach when partnering with clients to deploy its machine learning product.
Flamingo AI’s existing ethical processes
Flamingo AI shared some of the ethical processes it had in place before participating in the pilot. The company is certified to securely manage client data (to SOC 2 level) and is building on principles of security, availability, processing integrity, confidentiality and privacy.
Ethical processes include:
- building an ethical culture on the back of the SOC 2 accreditation, especially on privacy and security
- training staff extensively on potential security issues so that its clients also meet SOC 2 requirements
- analysing data sets to ensure they don’t contain unfair bias, skewed or discriminatory data
- inducting and onboarding staff in ethics
- communicating to clients that Flamingo AI would use ethical AI approaches when working with them
- training and equipping quality assurance engineers to identify any ethical issues in the code
- training staff to monitor and report ethical or unethical client use of software.
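The dataset-analysis process listed above could take many forms. As a minimal illustrative sketch (not Flamingo AI's actual tooling), a first pass might flag groups whose share of a data set deviates sharply from parity; the attribute name and tolerance below are assumptions for the example only:

```python
from collections import Counter

def representation_skew(records, group_key, tolerance=0.2):
    """Return the share of each group whose representation deviates from
    parity by more than `tolerance` (an illustrative threshold, not a
    standard)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    parity = 1 / len(counts)  # expected share if groups were balanced
    return {
        group: count / total
        for group, count in counts.items()
        if abs(count / total - parity) > tolerance
    }

# Toy data: 'segment' is a hypothetical sensitive attribute.
data = [{"segment": "A"}] * 80 + [{"segment": "B"}] * 20
print(representation_skew(data, "segment"))  # {'A': 0.8, 'B': 0.2}
```

A real review would go further, for example checking label balance within each group, but even a simple skew report makes under-represented groups visible before training begins.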
Flamingo AI’s pilot research
Flamingo AI approached the pilot as a qualitative research project to:
- explore the practical challenges small AI vendors like Flamingo AI would face in implementing the AI ethics principles
- gather feedback on things that may help smaller organisations overcome any challenges.
Flamingo AI interviewed its staff on their reactions to the 8 Australian AI Ethics Principles.
Applying the AI ethics principles
Flamingo AI staff shared the following observations during the pilot. Staff commented on challenges and opportunities they saw for small businesses applying the Australian AI Ethics Principles. They highlighted 'transparency and explainability' and 'accountability' as the principles most relevant to their AI product, Smart Hub.
Challenges and opportunities for small businesses
- Resolving ethical issues can be complex. Businesses may need more tools or help.
- Resolving these issues can require extensive testing with large data sets and many clients, for example to prove that a product does not result in unfair bias.
- A simple online training or certification process may allow businesses to understand the principles and what they mean in practice.
- Continuous monitoring is needed; cost-effective monitoring methods or training would help businesses sustain it.
- Businesses should integrate ethical checkpoints into the AI product or service design stages and continue this throughout and after deployment.
- Without cost-effective methods or training, businesses may be limited by their existing resources when implementing the principles.
- Examples of implementation can be useful, through case studies and guidance based on the AI lifecycle.
- Case studies of what companies have or haven't implemented can show how ethical AI is realised in practice, for both small and large companies.
- It is helpful to structure guidance on how to apply the principles based on the AI lifecycle. However, it is prudent for businesses to have all 8 principles in mind from the start.
- Public pressure for businesses to design and deploy AI ethically will grow.
- Implementing the ethics principles should not become a box-ticking exercise. Complying with privacy, safety and other laws is important, and businesses shouldn’t see it as a regulatory burden.
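One common, lightweight way to approach the bias-testing challenge noted above is a disparate impact check on model outcomes. The sketch below is illustrative only, assuming labelled outcomes and group membership are available; the 0.8 "four-fifths" threshold is a widely used convention, not a legal standard:

```python
def disparate_impact_ratio(outcomes, groups, positive="approved"):
    """Ratio of the lowest group's positive-outcome rate to the highest's.
    Values below 0.8 are commonly treated as a warning sign (the
    'four-fifths rule'); the threshold is a convention, not law."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = selected.count(positive) / len(selected)
    return min(rates.values()) / max(rates.values())

# Toy example: group X is approved 90% of the time, group Y only 60%.
outcomes = ["approved"] * 9 + ["denied"] + ["approved"] * 6 + ["denied"] * 4
groups = ["X"] * 10 + ["Y"] * 10
print(round(disparate_impact_ratio(outcomes, groups), 2))  # 0.67
```

A ratio of 0.67 falls below the conventional 0.8 threshold, signalling that the outcomes warrant closer investigation. At realistic scale such checks need far larger samples, which is exactly the resource burden the staff identified.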
Principles 6 and 7: Transparency and explainability, and Accountability
‘Transparency and explainability’ and ‘Accountability’ can be more challenging when businesses buy AI products. Determining responsibilities for transparency and accountability can be more complex when AI developers like Flamingo AI sell products to other businesses. Where does a developer’s responsibility start and end?
Flamingo AI staff reported examples where their clients have chosen not to inform their customers that they are engaging with a virtual assistant rather than a human. The Australian AI Ethics Principles guide businesses to use AI ethically and to ensure customers know when they are engaging with an AI system. This is especially important where the impact of the AI system on the customer is significant.
Benefits and impacts
Flamingo AI’s staff believe that the Australian AI Ethics Principles are important for any businesses involved with AI, large or small.
‘The ethics principles are relevant for any company involved in AI.’
‘It is important corporations are educated on ethics, even if their intentions are not bad, given the risks of bias in the way data is treated.’
‘Businesses must be careful their product isn’t biased to advantage one group over another.’
They anticipate that pressure on businesses to design and deploy AI ethically will grow as public demand increases.
‘Why do ethical AI? Because it’s the right thing to do. Organisations all over the world are paying attention to their carbon footprint – the same will happen with AI. It will begin to be expected by the client/customer…’
‘Incentivisation is key. Client demand will drive organisations to have ethical frameworks. I would advocate for sanctions for non-compliance as not all sectors have a culture that prioritises ethics.’
Education and incentivisation will be key in helping businesses to commit to developing and deploying AI ethically.
‘While some consider there to be a lack of legal accountability for violating ethical principles, there are growing consumer expectations and public pressure for businesses to ensure they design and apply AI systems ethically.’
‘In some instances, there is already accountability by law but education is needed to get businesses more familiar with their obligations.’
Contact Flamingo AI
If you have any questions about this example or are interested to learn about Flamingo AI’s ethical processes, please contact email@example.com.