
Hear about challenges that different organisations face in ensuring ethical development and use of AI, including how they are putting Australia’s AI Ethics Principles into practice.

Moderator: Bill Simpson-Young, Chief Executive, Gradient Institute

Panel:

  • Chris Dolman, Director, Data and Algorithmic Ethics, IAG
  • Dr Michelle Perugini, Chief Executive Officer, Presagen
  • Mark Caine, Lead, AI and ML, World Economic Forum

Transcript

(MUSIC PLAYS)

Description: The Australian Government Coat of Arms. Black text reads Australian Government Department of Industry, Science, Energy and Resources

Text: TECHTONIC 2.0 – Australia’s National AI Summit 18 JUNE 2021

Below, blue text on white reads ‘This session will commence shortly’.

On the right, colour images arranged on tiles. A woman operates a drone over sun-kissed wheat fields; a robotic arm grips a small computer chip; a robotic arm in a factory welds; a large yellow mining truck; the Mars rover; a dark-haired man wears a grey coat in a factory and holds a tablet computer; a man wears a yellow hard hat and sits at a computer.

On webcam, a middle-aged man has short brown hair. He wears thin framed circular glasses and a dark suit jacket over a grey jumper.

Text: Bill Simpson-Young, Chief Executive, Gradient

Bill Simpson-Young: Welcome, everybody. Welcome to stream one, this is putting AI ethics principles into practice, and I hope you've enjoyed the summit so far. I certainly have. I'm Bill Simpson-Young, I'm Chief Executive of Gradient Institute. Gradient Institute, for those who don't know it, is a not-for-profit research institute specifically working in responsible AI. A registered charity trying to improve the way in which people develop and use AI systems. We'll be joined by three panellists who I will introduce in a minute.

First, I'd just like to acknowledge the people of the Gadigal nation, where I'm based right now. I'm actually on the University of Sydney campus. The Gadigal people have been here for a long time, and we acknowledge their ownership of the land that we are using.

I'll just introduce the topic. So, as I said, this is all about putting AI ethics principles into practice. People may remember, back in November 2019, the then Minister for Industry, Science and Technology, Karen Andrews, released the AI ethics framework. This was an acknowledgement that AI is really powerful and, you know, we're building AI systems that operate at scale, making decisions that influence people's lives. It's really important when you're designing or deploying AI systems that there's some governance of those systems: both from a technical point of view and a management and governance point of view. The AI ethics principles were put out by the Department as a voluntary set of eight principles, promoted within industry to encourage industry to develop AI responsibly.

At the same time as the principles were announced, there was also a pilot announced, in which six companies operating in Australia took on running pilots of those eight principles within their organisations. You know, as the minister mentioned just before, the case studies of those six companies have been released today. They've just gone out immediately before the summit, and I'm looking forward to reading them myself. So the companies that were involved in that trial were Commonwealth Bank of Australia, Flamingo AI, Insurance Australia Group (IAG) - we'll hear from them in a minute - Microsoft, National Australia Bank and Telstra. So, if you go on the industry.gov.au website, you'll be able to read through their case studies of how they have used those principles.

Now, of course, these are fairly high-level principles - they include, you know, principles such as fairness, reliability and safety, transparency and explainability. The principles do provide some sort of high-level guidance on what is expected in industry and government. But obviously there's a big challenge in trying to get those principles into operational use - you know, when you're actually writing code and it's going out into use, there's a big gap between high-level principles and action. And this is a big focus of what this session's about, you know: how do you actually take these principles and put them into action? And hopefully we'll explore that with the panel members shortly.

I'll just give a little bit of a sense of some of the outcomes from the pilots, some of the key findings. I'll just mention three of the key findings from those pilots. One is the differing and shared responsibilities between AI purchasers and developers. So, businesses who are buying AI solutions often recognised that they couldn't outsource the accountability for how they operate the system, and that they needed to get relevant information from the vendors. You can't just outsource ethics when you outsource your system development. It's really important that organisations own the responsibility for their AI systems, so that was a key finding.

Second key finding: there were often common governance mechanisms. The businesses' approaches to managing AI ethics often built on appropriate policies they already had in place; in some cases they broadened their data governance to cover AI applications. In some cases they had compulsory ethics checks when engaging with AI vendors. And they also had cross-functional bodies to approve the ethical robustness of systems - you know, ethics committees within the organisations, for example.

And third finding, the broader organisational challenges. You know, effectively implementing AI ethics principles isn't just about, you know, how you build your AI system. You're talking about how to have a companywide culture to educate staff, educate customers, and make sure that you have appropriately skilled staff on AI ethics. So, if you wanna know more about that, look at the industry.gov.au website and you'll find more detailed case studies from the six companies.

And of course, Australia isn't alone in our implementation of AI ethics. There's work going on across the world, and we'll hear shortly from one of our panellists who is from the World Economic Forum. Of course, for any AI development being done in Australia, or for companies who are using the technology, you have to be thinking globally. If you're a company developing AI, you're probably developing it for a global market. If you're deploying AI products on the Cloud, you're doing it in a global market. If you're using third party products that are based in the Cloud, you know, you care about global markets.

So, things like regulations that are starting to happen around the world are gonna be important for Australian companies regardless of whether they're in Australia or overseas. And it's really important that Australia is involved in international discussions about ensuring that AI gets used responsibly. And so, we'll talk about that as well.

So, that's enough for an intro. Now I'll introduce the panel.

Description: The shared screen adds three more webcams around Bill’s now smaller window.

In the top right, a middle-aged man has thinning brown hair and wears thin framed rectangular glasses. White text on an aqua background reads ‘Chris Dolman – Director, Data & Algorithmic Ethics, Insurance Australia Group (IAG)’.

Below, on the left, a woman has brown hair tied back and wears dark framed glasses and a black and white patterned dress. White text on aqua reads ‘Dr Michelle Perugini – Co-Founder & CEO, Presagen’.

In the bottom right, a man has black hair and a full beard. He wears a lemon yellow business shirt. White text on aqua reads ‘Mark Caine – Lead, AI and ML, World Economic Forum’.

Bill Simpson-Young: So, we have three panellists, as I mentioned. First we've got Chris Dolman. Chris is from IAG, Insurance Australia Group, and is currently the director of data and algorithmic ethics at IAG. He's helping to ensure that modern decision-making algorithms and other advanced uses of data are designed and implemented in an ethical, responsible and thoughtful way. So, clearly very relevant to this topic.

Next, I'd like to introduce Michelle Perugini, Dr Michelle Perugini. She's got a PhD in medicine and worked as a postdoctoral research scientist in oncology for a decade. She's founded two global AI tech companies - the first of which was acquired by EY in 2015. Michelle is now the co-founder and CEO of Presagen, an AI healthcare company building the social network for healthcare. The first product used in Presagen is Life Whisperer, which uses AI for embryo selection to improve pregnancy outcomes in IVF and is being sold in IVF clinics globally.

And our third panellist is Mark Caine. He's the lead for artificial intelligence and machine learning at the World Economic Forum, the WEF. He began his career as a research academic at the London School of Economics, and now he's focusing on this area.

So, I will now start off... We're sorta gonna be talking about a range of different topics, but we'll start off with the challenges of practically implementing responsible AI. You know, we've talked about the high level principles that are out there. How do you actually practically implement those into real AI systems? So, let's start with Chris. Chris, IAG was involved in the pilot of the Federal Government's AI ethics principles, can you tell us how that went, you know, what were the sort of challenges, what was easy, what was hard?

Chris Dolman: Sure, thanks - and thanks for a very comprehensive intro and for setting the topic up. As you said, Bill, it's definitely the case that the high level principles are valuable, but it is sometimes a little unclear what exactly one should do underneath. Sorry, I'm gonna apologise - I have a bit of a cold, and my voice is breaking up a little. Might happen from time to time.

So, what we did prior to the pilot, we had a framework that we'd been using for a little while, which was very much aligned to the development cycle of a project. So from, sort of, conception, project definition, problem definition to system model building, deployment, monitoring and so on, and so forth. Very much in line with some of the ideals that the principles spell out. You know, obviously we were inspired by other frameworks that had been published prior to that, which are fairly similar as well.

So, we saw the pilot as a good opportunity to test that framework that we built already with a few other people from industry that were also part of the pilot, engage in a sort of two way dialogue with government about how that was going and what could be done in a practical sense to implement these principles in business and also be able to publish case studies at the end. I mean, one of the things that's quite lacking in this space is sorta real-world case studies of actual implementation of principles and practice.

So, as you mentioned earlier, the case studies have been published today. We also published ours in a sorta longer form at a recent actuarial summit for a sorta professional audience as well. So, it's something that we want to see more of, that will really help people sort of understand how to actually apply these things in practice, cause it's somewhat unclear how to go from the very abstract to something that's a bit more practical.

Bill Simpson-Young: (INAUDIBLE).

Chris Dolman: You're on mute, Bill.

Bill Simpson-Young: Sorry, of the eight principles, were there particular principles that were difficult and challenging to implement?

Chris Dolman: I don't know if sort of difficult and challenging is the way to think about... I mean, we think about projects as individual sort of projects, right? And so, different principles are going to be more important than others depending on what actually it is you're doing. And some of the principles, for sort of insurers and others that are in such high-stakes sectors, are kind of dealt with by regulation already.

And so, things like contestability is very much embedded into insurance regulation already. So, I mean, the case study we published earlier was... there's a sort of claims related case study. You can complain about your claims experience today. There's an ombudsman to go to if things get really bad, and the fact that you're using AI doesn't change that. So that from a principle's perspective is quite easy to implement. There's already something there and AI doesn't change it particularly.

Other things are a bit more challenging. You mentioned fairness earlier. Fairness... There's philosophers in the room, I'm sure they'll tell you it's an essentially contested topic, so we can debate endlessly what 'fairness' might mean and different people will argue it can mean different things. So, that's quite challenging to implement practically. We go about doing that by thinking about potential harms and potential benefits with systems, trying to make sure that... whilst you might try and minimise harms, you maybe can't remove them entirely. There's going to be mistakes, errors, and things like that sometimes. So, it's about making sure that those are dealt with fairly and distributed fairly across the population and aren't skewed towards some populations rather than others. So, that's sort of how we conceptualise fairness, typically, in these sorts of projects.
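To make Chris's framing concrete, here is a minimal, illustrative sketch of checking how a model's mistakes are distributed across population groups. It is only a hypothetical example - the column names and the choice of error rates are assumptions, not IAG's actual tooling.

```python
# Illustrative sketch only: per-group error rates as a rough check that harms
# (wrong decisions) aren't skewed towards some populations rather than others.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str = "actual",
                         pred_col: str = "predicted") -> pd.DataFrame:
    """Return false-positive and false-negative rates for each group."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]
        positives = sub[sub[label_col] == 1]
        rows.append({
            group_col: group,
            "n": len(sub),
            # harm from wrongly flagging people in this group
            "false_positive_rate": (negatives[pred_col] == 1).mean() if len(negatives) else float("nan"),
            # harm from wrongly missing people in this group
            "false_negative_rate": (positives[pred_col] == 0).mean() if len(positives) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: compare groups and review any that diverge noticeably.
# rates = error_rates_by_group(decisions_df, group_col="age_band")
# print(rates.sort_values("false_negative_rate", ascending=False))
```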

Bill Simpson-Young: So, Michelle, you're coming from a very different area, working in healthcare. What sort of challenges have you found in implementing the AI ethics principles?

Michelle Perugini: Yeah. I think in healthcare, we actually have a little bit of an advantage in that we have to comply with many of these principles just by virtue of having to meet medical and health regulations in different countries. So, the AI principles, from my perspective, are a guide around the things that we need to consider and be conscious of when we're creating products that are going to impact people - that we make sure they've been based on datasets that are fair and unbiased, and that we can adequately test those products to the extent that we're meeting the needs of those principles.

I guess one of the big challenges is the implementation and going a little deeper than the principles and into an actual operating framework that's practical for companies to be able to, you know, I guess, put into action these principles. And that's where I think the next step will be in terms of fleshing out these principles.

But if I look at healthcare as an example, when we're looking at regulation, it's very risk-based. How many people are going to be affected by the AI product that you're building? Is there a, you know, possibility of serious harm being done? What types of populations and demographics are you trying to impact with those particular products? And that kind of creates a risk profile of these different products or applications in AI that can be used to kind of stratify risk and then determine how much or how deep you need to go with each of these principles and actually monitoring and governing them.

Because one of the things that we're always conscious of is you want to regulate, but you don't want to overregulate. You don't want to over-govern. Because then, you know, low-risk products and services, you know, will be hampered in their development and utility if we go too far.

Bill Simpson-Young: Can you tell us a bit about implementing some of the specific principles? So, for example, in your context, do you consider, say, fairness or transparency? How do you consider that in your systems?

Michelle Perugini: Yeah, fairness and transparency. I mean, all of the AI principles we consider. One of the big ones is actually privacy protection and security, and that comes very front of mind in terms of the data play. You know, how do you leverage data from different countries and different sources and make sure that the data of those individuals to whom it relates is kept private and confidential, and that it can't be used to kind of negatively impact outcomes of the AI.

And, you know, as we all know AI development can be...you know, has the potential to be quite biased by the data that it's built on. And so, having protections around those data that are being input is really important. And there's a lot of laws already governing data privacy, like GDPR, for example, and HIPAA compliance in the US. And there's a lot of ways in which we, as a company, are kind of overcoming some of those challenges by not moving private patient data and using de-identified data, for example, and that goes a long way to building protections.
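As a hypothetical illustration of the kind of protection Michelle mentions - using de-identified data rather than moving private patient records - here is a toy sketch. The column names are assumptions, and real HIPAA or GDPR compliance involves far more than this (quasi-identifiers, legal bases, data-sharing agreements and so on).

```python
# Toy de-identification sketch: drop direct identifiers and replace the record ID
# with a salted one-way hash before data leaves its source. Not a substitute for
# a proper privacy and compliance review.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "date_of_birth", "address", "phone", "email"]  # hypothetical columns

def pseudonymise_id(record_id: str, salt: str) -> str:
    """One-way salted hash so records can still be linked without exposing the raw ID."""
    return hashlib.sha256((salt + str(record_id)).encode("utf-8")).hexdigest()

def deidentify(df: pd.DataFrame, id_col: str, salt: str) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    out[id_col] = out[id_col].map(lambda rid: pseudonymise_id(rid, salt))
    return out

# Hypothetical usage:
# clean = deidentify(patient_df, id_col="patient_id", salt="per-project-secret")
```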

But, I guess, some of what I would call the softer ethics principles around fairness and transparency - I think, again, it's not necessarily about explainable AI and trying to be transparent around how the AI's built. Because that's incredibly difficult, actually, because AI is inherently kind of a black box technology and it's quite difficult to be really transparent. But what you can be transparent about is what it's been built on. So, the data it's based on and how it's been tested, and making sure that the testing of the products and services is, you know, well-published and scrutinised, and that, you know, you're providing an explanation of the results or the outputs of the AI.

So, not so much how it's built, but which data it's based on, how it's been tested, and what the outcomes actually mean. And that's where we've focused heavily - on the transparency piece.

Bill Simpson-Young: Now, Mark, you're from the World Economic Forum. Tell us a bit about the challenges that you're... or what you're seeing happening globally, where people are trying to operationalise AI ethics principles, in general. Is there anything that you can share that you think might be relevant to Australian businesses and government?

Mark Caine: Yeah, sure. Thanks for that great introduction, and thanks for having me at this wonderful event, and congratulations also to all of the Australians involved in the pilots that have just been published, and the whole process, really. We, at the World Economic Forum, focus on AI governance broadly around the world, and we work with governments, we work with industry, we work with civil society groups and academia really to, sort of, figure out, in essence, kind of how to move from the principles that have been so widely talked about, and I think largely agreed upon, to actually operationalising them in the development and use of AI across a number of different industry sectors by a number of different actors.

One of the things that we've seen is... And I come into this, I think, fairly optimistic to begin with, I should say. Because one of the things that we have been seeing in a lot of different places, including Australia, is exactly the kind of hard work of prototyping and actually testing and developing these approaches that Chris and Michelle have just walked you through in their own practice, as well as the pilots and others that are happening. And what we are seeing is that... you know, I think it goes without saying probably at this point that it is easier to come up with your set of principles than it is to put them into practice. And I think the kind of global conversation is now really pivoting from one to the other. And I think that's a good thing.

We did a survey about two years ago of all of the different sets of AI principles that had been published. And we counted, at the time, over 175 different sets of principles from different business associations, different companies, different governments, the OECD put one out, a number of international organisations. And what was really interesting actually was how much overlap there was between them all. And I think that this was actually helpful to realise at the time because it very quickly pivoted a lot of different companies and industry sectors and governments from thinking about what the principles should be - because they largely fell into the same sort of 10, 12, 15, depending on how you slice and dice them - to say, "OK. What are the sort of operational policies, business practices, product design and engineering, and customer relationship practices that need to actually happen to make this work?"

And I think that the kind of combination of that move in a lot of different jurisdictions from principles to practice as well as the development of risk-based approaches, which were mentioned by Michelle in particular, and which we've seen really animating the European Union's new draft AI regulations, is really focusing on what are the specific risks of different use cases and what level of governance needs to happen based on what kind of data is being put into the system? What is the range of possible outcomes and harms that could accrue to people? And those two things, we see a lot of experimentation being done on and, so, I think that, generally, the outlook is good right now. But the proof is going to be in the pudding. And I think that there's more work that needs to be done in terms of developing concrete tools.

The work that we do, we do a lot of work with government on the kind of policy and regulatory side, but also procurement. We've found that government can sort of lead the charge by setting out its standards and kind of its expectations for the market in the areas where government is a large buyer. We've done some work with the UK government on that, trying to sort of exercise soft governance in possible anticipation of, but perhaps in lieu of, a sort of harder edged regulation, depending on how the market develops.

And on the industry side, similarly, we've seen and we've been involved in work really just up and down the stack. Starting at the board of directors to say, “What are the roles and responsibilities of different parts of the corporate governance structure in exercising oversight and discretion and judgement in the different areas of...the different principles that we're talking about, in the case of Australia?” And that goes all the way down through sort of legal and compliance, which has a role. C-suite executives which have a role. And then really down into the actual product design and deployment.

I think a couple of things that I think have already been mentioned around contestability and redress - putting in place the mechanisms so that if something does go wrong, there is a way for people to see that, to flag it, to address it. That has been a really positive development that we've seen in a lot of different domains. And so, I would say, yeah, the kind of high-level outlook globally that we're seeing is we're not there yet, but we're starting to go in the right direction and there's a kind of Cambrian explosion of different prototyping, experimentation and iteration. And the task now is to sort of sort through that, figure out what are the best practices, what are the best models, and how can those be scaled up to make it easier for companies and for others using AI to sort of know where they need to be, and to be successful... you know, on a firm foundation.

Bill Simpson-Young: Thanks Mark. So, I love the idea of this Cambrian explosion of AI ethics frameworks, and tools, and processes. And so, given that complexity and the wealth of information that's out there, Michelle, how best can business leaders and government help companies operationalise AI ethics, you know, given this... Everyone knows it's an issue, everyone's got a great intent, everyone wants the AI systems to be responsible. How do they actually... you know, what should people be doing to maximise the likelihood that businesses can actually ensure that their actions reflect their intent? Michelle.

Michelle Perugini: Yeah. I think there's a couple of different ways. I mean, firstly, we need kind of education around, you know, the actual impacts that AI can have, as opposed to the somewhat negative narrative around the technology that's sometimes floating around. So, I think that goes a long way to helping people to understand actually what this technology is capable of, what it's probably not capable of, and what you probably don't need to be fearful of. But I think some practical implementation steps and maybe practical guidance around how to self-assess and how to risk-profile what it is that you're trying to do. I think that's the main thing.

So, when I look at medical regulations, in which we do similar risk profiling, if we're looking to file for regulatory approval in a particular country, we go through a series of steps. It's almost like a wizard, you know. What are you trying to achieve? If you're going for FDA approval for a medical product and you're creating a new toothbrush, obviously, that's a very different risk profile to if you're creating a clinical decision support tool. And so it steps you through a practical set of questions that help you to understand what the risk profile of your particular use case is. And I think that's a really practical way of doing it. And actually, I think that can apply across different industries quite easily and quite, you know, practically, and that will give people a sense of where they sit in that kind of process or risk profile.

And then from there, depending on the risk profile, I think there needs to be different kind of governance frameworks or different steps that need to be taken by those companies or, you know, whoever it is that's creating that AI for that particular use case that guides them through what should be required and what is reasonable for that level of risk.

So, you know, I quite like the idea, and I'm probably biased because I've come from the medical industry. But I am quite keen on modelling this on the sort of medical regulation framework, because I think it really fits and suits well with the risk profile of AI products in general. And I guess one of the other things that we need to be cognisant of is if you don't kind of stratify on something like risk, then it can be really burdensome to go through the same process for use cases with very different levels of risk. And you don't want to do that because it's going to hold back the industry.
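As a purely hypothetical sketch of the wizard-style triage Michelle describes - a few questions whose answers map to a risk tier that then determines how much governance is applied - the questions, weights and tiers below are illustrative assumptions, not any regulator's actual criteria.

```python
# Illustrative risk-triage "wizard": answers to a few questions map to a risk tier,
# which then determines the depth of governance applied. All criteria are hypothetical.
from dataclasses import dataclass

@dataclass
class UseCase:
    affects_health_or_safety: bool   # could an error cause physical or other serious harm?
    fully_automated_decision: bool   # is there no human in the loop?
    people_affected: int             # rough scale of people impacted
    uses_sensitive_data: bool        # e.g. health, biometric or demographic data

def risk_tier(uc: UseCase) -> str:
    score = 0
    score += 3 if uc.affects_health_or_safety else 0
    score += 2 if uc.fully_automated_decision else 0
    score += 2 if uc.people_affected > 10_000 else 0
    score += 1 if uc.uses_sensitive_data else 0
    if score >= 5:
        return "high: full ethics review, external validation, ongoing monitoring"
    if score >= 2:
        return "medium: documented assessment and sign-off"
    return "low: standard development controls"

# A clinical decision support tool and a 'new toothbrush' land in very different tiers.
print(risk_tier(UseCase(True, True, 50_000, True)))    # high
print(risk_tier(UseCase(False, False, 500, False)))    # low
```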

And I guess from a narrative perspective, I think Australia is actually really uniquely positioned to take a lead role in creating what is kind of a trusted and implementable framework. And I think the rest of the world does trust us from a research and innovation perspective. I think many of the products that come out of Australia are built with the trust of other countries in the world. But we need to be a serious player. If we're not going to invest in this space, there's no point in kind of creating the best framework in the world around ethics and governance for AI technologies. We need to kind of invest. We need to change the narrative. We need to put a practical implementation plan in place so that we can show that we're serious about, you know, making sure that we're creating products and services that are globally relevant and that meet the guidelines of the ethics framework and principles.

Bill Simpson-Young: Yeah. Chris, what are your thoughts on what business leaders and government should be doing to help companies operationalise responsible AI?

Chris Dolman: I certainly agree with Michelle that some detailed guidance is definitely needed. You can see the early stages of that from various places around the world already. But it's very early days, I think, at this point. The sort of principles frameworks are very well-established but practical guidance is not. So, we definitely need to push on and do that quite quickly. I, perhaps, take a different view to Michelle on the need for sector-specific considerations. I do think some sectors are a bit odd. I always say I work in one. So, perhaps, you need to consider the nuances of particular sectors or particular decision forms in how you construct that guidance.

And also, existing regulation where it exists. So, a lot of high stakes decisions are already fairly heavily regulated. I work in one of those sectors. There are plenty of others. And what we don't want to have, I don't think, is a sort of two-tier process where you've got a sort of set of AI standards over here and a set of sector rules over there for a high-risk decision form, with the two potentially conflicting - that would not be ideal. So, when we get to practical implementation, we really do need to figure out those sector nuances.

The other aspect of that was so what can government do? Government can certainly lead by example. And there's, you know, recent communications of that ilk, I guess. So, take principles and certainly implement them within government and take a leadership position that way. That can definitely be done and that will obviously inspire more to then adopt them.

Bill Simpson-Young: Now, both Michelle and Chris, you've both mentioned in passing the... as part of what you’ve said, that there are already regulations here. Now, so we've got...as we've said already, the AI ethics principles are voluntary. They're not regulations, but there are other regulations that already exist.

Take, for example - I know the Gradient Institute worked with the Australian Human Rights Commission to publish a report with Data61 and others which looked at algorithmic bias in AI systems, and one of the key things that the Human Rights Commission was really pressing when the report was released was that it's already the case that there's anti-discrimination law in Australia and it's already the case that it is very easy with an AI system to unknowingly discriminate, and a lot of companies may well be doing that. You know, there are anti-discrimination laws. There are privacy laws. You both talked about laws within healthcare and insurance.

How is AI at the moment... you know, how is innovation being affected by these existing regulations, and to what extent do those regulations need to be changed or improved to support... to allow AI to be used, and for AI to be used for all the great things it's going to be used for, while still protecting against the sorts of harms that AI can cause, such as discrimination? So, Michelle, would you like to just describe that?

Michelle Perugini: That's a very complex question, but also a really good point, Bill.

(LAUGHS)

So, it's actually really challenging because, to the extent that you can create a single kind of regulatory framework globally, I just don't think that's a near-term possibility. I think it's going to take quite some time. And what we're seeing is fragmented regulation of different parts of the AI and technology process, which sort of sit outside of the traditional regulatory processes, if that makes sense. And that makes it incredibly difficult actually for companies like ours that are trying to get products and services approved in many different regulated countries around the world. We have medical device regulation bodies, like the FDA and the TGA, and CE marking in Europe and elsewhere.

There's so many different compliance standards. There's different medical device regulators. There's different data requirements with HIPAA and GDPR. And then, you know, to think about layering on top of that kind of an AI ethics governance structure or a regulation is kind of... We need to work out how that is going to fit within those different regulatory frameworks. And of course, then, as Chris mentioned, there's also sector-specific requirements, both legal and regulatory, which makes it even more difficult.

But I think what is happening is some of those regional regulators of different industries are starting to get more savvy around what AI means for their industry. And I think some of this will be subsumed into their regulatory purview because it kind of has to be. So if you look at our Life Whisperer product, for example, we're using AI to image embryos during the IVF process. Now, when we go through the FDA approval process, we're going through the obstetrics and gynaecology department - they actually have no idea about AI.

Some of their other departments have a better idea about AI, like radiology, for example, but this particular one doesn't have any idea about AI. Now, that makes it really difficult because they don't know how to regulate it, they don't know what sort of testing they really need for those types of products. And so, I think it's kind of on the regulators of those different industries to understand how AI applies within their sector and therefore how they're going to regulate it. But at the moment, it's quite fragmented and disaggregated, and I think everyone's just playing the game of catch up because the technology is here and now. And, yeah, we need to find a way to manage and govern.

Bill Simpson-Young: And Chris, what are you noticing in your part of the world?

Chris Dolman: I mean, I think the financial services regulators are fairly well aware of AI as a thing and the way it can influence financial services, and that's not just Australia - across the world that seems to be the case. And I mean, financial services has used data-driven processes for a long time, right. So, you know, when you get a home loan or when you get an insurance policy, or things like that, there's a degree to which that's data-driven and has been for decades or longer. So those regulators are already fairly well up on how to manage these systems in particular instances.

What I think they'll probably be doing is just reviewing the extent to which the sort of broader use of AI in customer interactions might require further guidance or tweaks to existing rules. I don't necessarily take the view that there's going to be material change as a result of AI to existing rules. They are usually fairly broad and sort of well-encompassing and principles-based anyway. But there probably needs to be some specific guidance here and there to sort of tell companies exactly what it means for sort of... areas where it's not been used perhaps too much to date but might be in future - so, like, insurance claims is an example of that. There's not been too much use of either traditionally, but there might well be in future. How do the existing rules apply in light of that? Are there any areas where there's a lack of clarity and there might be some need for further guidance? That's a question that will need to be answered.

I do think there'll be certain areas - maybe not financial services, because there's heavy regulation already - but perhaps there'll be areas where we find out, through this sort of AI ethics discussion we're having around the world, that there are bits of the consumer environment where harms can occur, where maybe there isn't heavy regulation today, and maybe that might inspire regulation to come about. I don't necessarily think that should be AI-specific though. It should be focused on the particular interaction and the harms that can occur because of it. But I think that might be a benefit that we get from the current debate.

Bill Simpson-Young: You know, it's interesting that... I was quite surprised when the EU draft regulation on AI came out. I had been assuming that most countries and most jurisdictions would be going for sector-specific regulations. For those who aren't aware, the EU regulation on AI, or the draft, covers AI as a whole. It's very much taking into account the risk-type framework that Michelle was referring to, starting with the high-risk areas. And saying, here are some very high-risk areas that are just prohibited for certain applications, like certain types of applications of facial recognition, and so on, and a sort of social credit score type system being operated by governments. So, you know, these are prohibited. And then there's a whole lot of other classes of high-risk applications that have certain requirements for transparency. I'd like to get people's opinions.

And starting with Mark… Obviously, this is a draft regulation, but obviously GDPR, when it came out as a regulation, had a major impact - even though it's a European regulation, it had a big impact on Australian companies, and on anyone with a website that people are using from around the world. Are we going to have something similar happening for AI, where Europe is saying, you know, the extent to which different applications have to have transparency of different types? What's going to be happening for regulation around the world, and how does Australia make sure that it's in a good position to both influence it, but also for companies to be ready for it? So maybe start with Mark.

Mark Caine: Yeah, it's a great question. I think it's sort of a million-dollar - Australian dollar, I guess - question, because the truth is that I think we're several years earlier than where GDPR is. And I think there have been some lessons that have been learned from the GDPR process. Looking back to when GDPR was first published in its draft form and then when it was first getting rolled out, there was still a lot of lack of clarity amongst business actors. And I think actually there still is in a number of different domains and jurisdictions. And really a lot of it ended up coming down to, what is the enforcement going to look like? And that's, I think, a huge consideration here.

And so what I would say about the draft EU regulations is it is early days. I think it is a draft. We will see what it gets shaped into. I think that we had a similar observation that it was interesting that it was so sort of vertical in its AI focus as opposed to sort of being more distributed across different use case application areas, industry sectors. But I think that the direction of travel is relatively clear, irrespective of what the final details are and what the final enforcement looks like.

And I think that what we're seeing is that it's not just the European Union who are thinking about, in particular, what are these high-risk use cases, and what are these sort of serious harms that can happen to citizens which need to be mitigated for. And even if the mitigation mechanisms haven't exactly been figured out fully, or even if those use cases haven't been fully conceptualised, let alone exploited or developed for, I think that there will be certain things that companies would be well-advised to do to put themselves in a good position to be ready for whatever it is exactly that comes down from the EU in terms of the final regulations and then the enforcement.

And additionally to that, what comes down from other jurisdictions that they may be subject to the rules of. So in Australia, there will obviously be the Australian regulations that may or may not come. There will also be elements of this that get brought up in trade and digital trade discussions in the years ahead, I think.

And I think what we hear from businesses that we see as being on the front foot and kind of approaching this in a proactive way is that they're starting to put in place the structures and the systems that they need to be able to do the documentation of how systems are being built, what kinds of data are coming into it, how it's being used, what kind of notice and consent customers are getting - these are the sort of basic building blocks of any kind of compliance regime that's going to come. It may also be useful for other things, for reporting out to investors about how the company is handling customer data and addressing the potential risks, whether legal, reputational or otherwise, related to fairness or bias or unjust outcomes.

And so, to the extent that Australian companies want to be on the front foot, that would be my advice: just sort of look at and help co-design and develop those tools and practices to do those steps of governance, which will be helpful anyway to understand what's happening inside of your companies, to be able to be on the front foot and able to explain, when something goes wrong, why it went wrong, and to make it right. And I think those are the key building blocks. Irrespective of where the regulations land and, crucially, how they end up being enforced, and how they're enforced across jurisdictions - a lot of that has to be figured out - I think it will be easier to be in a good position once those things are figured out if one starts earlier doing the things that are sensible to do anyway for a number of other reasons.
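As a hypothetical illustration of the documentation 'building blocks' Mark describes - recording what a system is for, what data goes into it, what notice and consent applied, and the known risks - here is a minimal sketch. The field names and the example record are assumptions, not a prescribed standard.

```python
# Minimal, illustrative record kept alongside a deployed AI system so the organisation
# can later explain how it was built and on what basis. Field names are hypothetical.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class AISystemRecord:
    system_name: str
    purpose: str                       # the specified objective of the system
    data_sources: List[str]            # where the training and input data came from
    consent_basis: str                 # what notice/consent the affected people received
    known_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)
    accountable_owner: str = ""        # person or team who owns the system

record = AISystemRecord(
    system_name="claims-triage-model",
    purpose="Prioritise straightforward claims for fast-tracking",
    data_sources=["historical claims 2015-2020", "policyholder details"],
    consent_basis="Privacy notice provided at policy purchase",
    known_risks=["false negatives could delay payouts for some customer groups"],
    mitigations=["quarterly review of error rates by customer segment"],
    accountable_owner="claims-analytics-team",
)
print(json.dumps(asdict(record), indent=2))
```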

Bill Simpson-Young: So, Chris.

Chris Dolman: Yeah, I mean, I was a bit like you, Bill, when I first saw the EU rules. I was somewhat taken aback. I mean, my general impression with this is if you regulate something called AI, we're going to end up in a perpetual debate about whether the thing you're looking at is AI or not, whenever anything gets challenged, and that's a very unproductive discussion - when the thing that you're actually caring about is the impact that a decision's had on people and whether or not that decision was appropriate, or needs redress or what have you.

So my general view on the EU rules is that if you took out the word 'AI' and replaced it with 'high stakes decisions', or something like that, it might be a considerable enhancement, and it would actually make it a lot clearer how you'd apply it in the real world. I mean, you used the example of sort of social credit systems, and banning those makes sense for a lot of reasons. But if I create a social credit system with a pen and paper and a filing cabinet full of notes, it's still a pretty awful thing. It doesn't need to be AI - however defined - to be an awful thing. It's just something that we don't want to have.

And so it would make sense to me, I think, to sort of take a step up and say, what are we trying to achieve? Well, high stakes decisions being made appropriately with proper amounts of consideration for redress when things go wrong is what we want to get to. So I was a little surprised with how it was designed sort of structurally. Whether that'll be the final form, who knows. We will have to see, I guess.

Bill Simpson-Young: Now, just, Michelle, I'll come to you in a minute, but just before that, just please ask any questions. We'll be going to the audience for any questions shortly, and if there are any questions there, please upvote them if you want to ensure they're answered.

Now, just continuing on from that... coming back more to Australia as well. The Australian Human Rights Commission has just recently, a couple of weeks ago, released their final report into human rights and technology, which they spent about three years working on. There are a lot of recommendations, including recommendations for regulations and, as you would expect, one of the areas they push is for there to be human rights impact assessments - that's the term they use, 'human rights impact assessments'. And they are sort of calling for those to be encouraged in business and to be required for certain types of high-risk applications by governments, and so on.

Could we hear a bit from, say, starting with Michelle, then Chris and Mark - in the fields of work that you are working in, do you see human rights impact assessments being something that is a good thing? Is it something where it's clear how it would be done? And, Michelle, I like your approach to the importance of risk first, you know, coming up with your risk framework first. In the case of healthcare applications, when different companies are doing their risk assessments, are they going to end up with the same categorisation into high risk and low risk, or is it going to be a lot of the Wild West out there? So, Michelle.

Michelle Perugini: Yeah, I think it probably will be a bit of the Wild West. And, you know, there's always grey in every type of application of those types of principles. And I don't know the answer to your question, to be honest. I mean, it's kind of inherent in the risk profile from the healthcare perspective that the consideration is entirely around human rights and ensuring that there is adequate kind of protection from a utility perspective of the data, from how that data is packaged into AI technologies or products and then utilised for the benefit or... to the detriment of those people that it's impacting. But I don't really have a good answer to whether it's sort of practically being implemented or whether it's an actual thing that is being followed in a prescribed way, so I might defer to Chris or Mark for an answer to that.

Chris Dolman: I'll give it a go, but I'm probably similar to you, Michelle. I mean, it depends on what's going on in the detail I think of that. I mean, to a degree, we probably do some of this stuff already, maybe all of this stuff, but it really depends on what such an impact assessment might entail. I mean, the way we implement the principles today, we go through a sort of an impact assessment sort of process, I guess. The systems get designed and built. So, is that a human rights impact assessment? Well, maybe. I don't know, but to some extent it probably is.

Bill Simpson-Young: OK, well, I might actually just jump into the questions now. We've got a number of questions, some of which are really hairy and will take a while to answer. So let's get into those now. I'll start with… So the first question, one that's top voted - and this is a paraphrase of the actual question because there have been a couple of similar questions: ethical AI should really be seen as the way to develop the best possible product, rather than ethics as a 'hurdle' - how can we change the framing?

So I think what that question is getting at is: just because we think we can build something, we shouldn't just build it and then try to do that ethically. We should consider whether we should be doing that at all - that's part of it. So how do we build the best possible products for the human race, I think, is what one of the questions asked.

Chris Dolman: Well, I 100% agree.

(CROSSTALK)

So, like, the first thing we do when we develop a product is say, well, what's the actual intent, what are we trying to achieve, what are we trying to maximise, what are the goals, and then what are the constraints around it? You certainly don't want to have ethics as a constraint. We want to have ethics as an embedded goal in the design of your system. So I agree with the premise of the question, I guess.

Bill Simpson-Young: Michelle?

Michelle Perugini: Yeah, I completely agree as well, and it kind of goes to Mark's point earlier, is...there's all of these regulations and there's AI ethics principles and there's data privacy laws, compliance requirements, but at the end of the day, I think if you're creating products and services that are built in the right way with the right considerations of who the end beneficiary is going to be and making sure that you're ethical in the way you approach the actual build of the products and diligent in the way that you manage the compliance and manage the… whatever it is, the data requirement and the access and utility, then you will already be solving all of these issues as a company.

So I think that's kind of where the challenge is: where do you encapsulate what should be specified as AI principles, versus an AI framework, versus what should reasonably be done anyway for these products, particularly if they're impacting human life - or insurance decisions, which is obviously not a health issue but it is an issue, right, that can inherently create sort of unintended consequences. And so, I completely agree. I think it's just... to Mark's comments, again, I think it's just... We should be building these systems with that in mind already, and then we'll be 80 to 90% there.

Mark Caine: If I may just jump in on that quickly, I would say I similarly endorse the premise of the question - I think it's a good question - and also Michelle's answer there. What I will say additionally is that I think that there is a lot of capacity-building that needs to happen to actually get us there. We've talked about building the systems, we've talked about using the tools, we've talked about putting in place practices; the reality in a lot of the domains in which AI is now being deployed is, understandably, that the people in those domains don't have as much experience or history or expertise in some of these ethics and AI issues.

And I think that this is something that is now really interesting to watch, because AI is really percolating out of the tech sector. Even five, seven years ago, so much of the AI that we were seeing developed was in technology, in social media, advertising, etc. And really, I think the story of the last couple of years is health - every sector, really: metals and mining, agriculture, consumer finance, government services, etc. And these are increasingly domains where the sort of baseline skill set and understanding and knowledge and experience with the issues that we're talking about on the panel just get lower and lower. And so, there's a lot of capacity building that needs to happen there.

Fortunately, there is good practice that's being developed out there. And so this is a big part of our work is to try to create these sort of real-time communities of practice and of exchange and learning between whether it's policymakers and regulators, or decision makers and companies, or people kind of on the front line of product design and deployment, because I think that's an area where there is a public good to sort of greater collective understanding and sort of skill set. And we don't have a lot of time given how fast the actual sort of technology use cases are being developed and kind of coming out into the market.

Bill Simpson-Young: OK, now, next question. There's a couple of questions here that I'll, sort of, combine. One is, to what extent do you think open source is a requirement to actually achieve transparency, safety and fairness with critical applications of AI - for example, in health, Michelle. And another related question is, what types of practical examples can the panellists provide on how they practically implement the transparency that was mentioned in the context of their own businesses? Do they release their code under open source licences? Do they release their models, do they release test code, do they release their training data? So, Michelle, shall we start with you on that one?

Michelle Perugini: Yeah, I don't think so. I think there's a lot of kind of commercial complexity in those kind of discussion points. Open source data I think is good for exploration. But what we're finding more and more now is it's not about big data, it's actually about the right data and it's about getting globally diverse datasets that meet the need of the AI that you're actually trying to train. And more often than not, that will not come from open source datasets. They are very specific data that need to be collected to solve very specific problems.

And so, I think to...from our perspective, open source data is often not that useful. It's kind of very generalised, and it's big data, but it's not necessarily the fact that we just want to mine data for outputs. We want to be able to actually build really intentional products based on very specialised data that we then go out and seek to collect from the right stakeholders to build something of practical use within the clinical environment.

In terms of releasing code and models - and this is a huge problem - it's quite difficult to do that, actually. But I think what should be more... or I think what's happening, particularly in the healthcare sector, is when you're going to publish your data and publish your clinical studies, there is actually now a requirement to release at least some information on the methodology with which you developed that AI, some of the frameworks and algorithms that were used in that process, and how you rigorously tested the outcomes.

I'm not sure just releasing the model is useful, and it's proprietary information of these companies that have spent a lot of time, effort and investor funding to be able to build these products. So I don't think it's a commercial reality to expect everyone to release those, and I don't think you need to. I think you just need to be transparent about how you've tested and validated. Have you used the right approaches? Have you tested in enough different environments, and does it work effectively?

Bill Simpson-Young: And one thing that we come across a lot is some organisations we work with talk about transparency in their systems, but they're not transparent about the actual specified objective of the system - and if you don't have that clarity about the objective of the system, any other type of transparency is pointless. So being transparent about what you're actually trying to achieve with a particular system or algorithm, I think, is critical. Chris, were you gonna add to that? No, OK.

OK, so let's move on to the next question. Mark, do you have anything to add to that? Or should we move on? We'll move on. OK, the next question is, are we regulating the ethical code or developing the ethical code? I'm not sure how to interpret that.

Chris Dolman: I'm not quite sure what that means.

Bill Simpson-Young: And I think just a bit of context - I'm not sure where that question's coming from, but when we're talking about regulation here, we're not sort of assuming that regulation needs to exist for companies to build ethical software. It's more that, as a society, it makes sense for the regulation to exist, so we want to make sure that regulation is the right regulation that's going to avoid the types of harms that are actually out there. One of my big concerns in the past - I'm less concerned about it these days - was that it would be easy to regulate AI in a way that really stops some very positive, useful uses of AI that would make our world a better place, including a fairer and more sustainable place.

If the regulations are the wrong regulations, they can easily end up making it hard to do good things. So it's really important that when there is regulation, the regulation is indeed reducing harms and increasing goods. Now, I think all of us are working in organisations that have ethical goals to do work beyond what the regulation requires - so not just be lawful but also be ethical. And, so, I think that's a bit of a given. Let's just look at the next question.

A tricky question: "don't you think relying so heavily on regulations to address ethics and safety is just an open door towards companies training an AI with the goal of being compliant rather than actually being safe and ethical?" I'm a bit... I can see what you're saying. But, Chris, would you like to say anything more about that?

Chris Dolman: I mean, I think your regulation should be sufficient that that shouldn't be a thing that's possible. And if it turns out that it is, you need to enhance your regulation so that it's not possible. Obviously, there's always going to be movements and things happening over time, but I don't see that necessarily as a barrier. I just think it's...yeah, you need to have regulation that's sufficient to do what you want it to do.

Mark Caine: If I could just add to that, I think we talk a lot about what regulations we're likely to see, what regulations should be there, what's the right level, what's the right amount, etc. We don't talk as much about the 'how' - sort of, how regulations ought to be developed, how they ought to be promulgated and communicated to the regulated parties, how the actual regulatory process and compliance and review and assessment kind of mechanisms work. And I think that it's in those conversations that we can find the answers to that question and find a path forward that allows us to avoid, I think, the very credible risk that was flagged in it. But I think that it's through a little bit more emphasis and focus on the mechanics of it.

And it does need to be a little bit different for AI, I think. And this is some place where, even when we're talking about the application of existing laws and statutes to AI, we actually do need different ways of assessing the AI system than we assessed, in some cases, other forms of discrimination - gender discrimination, for example, or lending discrimination. We actually have to be able to do the assessment and monitoring and evaluation aspects of that regulation differently. But these are all things that I'm confident are doable, and I'm happy to see that there are some really exciting efforts, including just close by to you all, in New Zealand, to really think about what the regulator of the future ought to look like. Less focus on the 'what', more focus on the 'how'.

Bill Simpson-Young: OK, well, let's finish with one last question, and I want each of you to give your view on this one. So you might need to think a little bit about it. But we've only got just over two minutes, so you won't get much of a chance. OK, so the question is, "what will be the most prevalent ethical AI-related issue in the next ten years?" So maybe think about it for a second and let me know when you're ready to say something. OK, let's go to Michelle.

Michelle Perugini: It's gonna be around data diversity and bias for sure. I think that people are building these products in the way that they've typically built software and it doesn't work like that for AI. You can't iteratively build AI. It needs to be based on a sufficiently diverse and unbiased dataset from the beginning to make sure... or not unbiased but representative dataset from the beginning to ensure that it's going to apply in practice. And I think there's a lot of challenges doing that, and managing data quality, managing data diversity, and being able to access these globally diverse datasets is quite a big challenge. In fact, that's what our entire company is premised on.

Bill Simpson-Young: OK, Chris, what would be the...

Chris Dolman: I'll go indirect discrimination, given that every single piece of data you've got is correlated with something or other that's protected by discrimination law, like without question - whether the use of that data is reasonable or not is often unclear, and the protected attributes are often not observed. So we certainly...I mean, this is a point I've made virtually every public meeting I've had in the last couple of years, so not a new point. But it's one that we certainly are gonna have to grapple with over the next decade or probably hopefully sooner.
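As an illustration of the indirect discrimination point Chris raises, here is a hedged, hypothetical sketch of screening features for correlation with a protected attribute in the easy case where that attribute happens to be observed. As he notes, protected attributes are often not observed at all, which is precisely what makes this hard; the column names and threshold are assumptions.

```python
# Illustrative proxy screen: how strongly is each numeric feature associated with a
# protected attribute? Only covers the simple case where the attribute is recorded.
import pandas as pd

def proxy_screen(df: pd.DataFrame, protected_col: str, threshold: float = 0.3) -> pd.Series:
    """Return features whose absolute correlation with the protected attribute exceeds the threshold."""
    protected = df[protected_col].astype("category").cat.codes  # crude numeric encoding for the check
    numeric = df.drop(columns=[protected_col]).select_dtypes("number")
    corr = numeric.apply(lambda col: col.corr(protected)).abs().sort_values(ascending=False)
    return corr[corr > threshold]  # candidate proxies that warrant human review

# Hypothetical usage:
# suspects = proxy_screen(applications_df, protected_col="gender")
# print(suspects)
```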

Bill Simpson-Young: OK, and for the last, last word, Mark, so what do you think will be the biggest AI ethical issue?

Mark Caine: It's a great question. I'm gonna go a different direction. I think the two that were mentioned are up there. I think that the question of how the vast majority of ordinary people all around the world who do not spend their days in the Techtonic conference, and thinking about AI and working on AI, how they are brought along or not with this AI future that is being created right here and right now and which is already touching their lives in far more ways than they're aware of or that indeed we may be aware of.

And, I think, whether citizens can be brought along in a democratic and consultative way is ethical in its own right, as a sort of matter of being a democratic country that holds those values, as Australia is. But I think also in terms of, are there gonna be other sort of fallouts around ethics issues because people don't feel like they were brought along? They don't understand what's happening. They're angry. They don't know why they were denied that loan. They don't know why they didn't get insurance, etc. So... hard to choose, though, between the many, many options of what could be a big ethical issue over the next ten years, unfortunately.

Bill Simpson-Young: So that's a great way to finish. And thanks for that, Mark. Really good. And so thank you very much to Michelle, Chris and Mark. I really enjoyed that discussion. And we had...oh, this is the end of the session. We'll close up now, and then the main summit will resume in 15 minutes. OK, bye.

Chris Dolman: Thank you.

Description: TECHTONIC 2.0 screen from the beginning.

Below, blue text on white reads ‘This session has now concluded. Thank you for joining.’

(MUSIC)
