
See how AI can benefit Australia and deliver better outcomes for regional and marginalised Australians.

Moderator: Mark Pesce, leading futurist, author, entrepreneur and innovator

Panel:

  • Dr John Flackett, Head and Co-founder, AiLab (Artificial Intelligence Laboratory)
  • Dr Anand Rao, Partner, Advisory, PwC
  • Rachel Howard, Head of Impact Technology, Minderoo Foundation

Transcript

DESCRIPTION:

A title slide. On a white background: The Commonwealth coat of arms: Department of Industry, Science, Energy and Resources. Text: Techtonic 2.0. Australia’s National AI Summit. 18 June 2021. This session will commence shortly. A montage of images, including a woman in a field operating a drone, a microchip, mining and manufacturing machinery, a rover, and people in workwear. A countdown timer appears and counts down from 1 minute.

(MUSIC PLAYING)

DESCRIPTION:

Within a window on screen, a suited man appears. The title slide is his background. Text below him reads: Mark Pesce – Leading Futurist, Author, Entrepreneur and Innovator. Text at the top of the screen: Techtonic 2.0 – Australia’s National AI Summit. The Commonwealth Coat of Arms – Department of Industry, Science, Energy and Resources.

MARK PESCE:

Welcome back to Techtonic 2.0, Australia's National AI Summit. This is breakout session number four, which is all about using AI to deliver for citizens. So, we're exploring all of the opportunities that are available for AI technologies to benefit individuals and communities. We're looking at the barriers that would prevent adoption of AI in Australia. We're looking at the actions that Australian government and businesses and communities can take, and looking at this comprehensively, it's not a minor issue, this is not just some sort of frosting on the top, because we have to understand that AI is becoming synonymous with our daily lives. Alright. It is helping us search the web, Google's got some of the most sophisticated AI capacity in the world. It's improving our health and our well-being, you know, I wear a smartwatch and this smartwatch is now increasingly connected to all sorts of systems that can detect whether I have a heart arrhythmia or what my temperature is and whether I may be catching COVID. It's making our homes safer. It's making our cities safer. The clever use of AI is actually helping us confront national challenges. So, you have the CSIRO and the National Council for Fire and Emergency Services developing Spark, that's the AI-powered bushfire modelling and prediction tool, which is being used to help save lives during the darkest days Australia's experienced. You have in Kakadu the Indigenous owners and rangers, the CSIRO, Microsoft Australia, the University of Western Australia, Parks Australia and Charles Darwin University, all collaborating to mix AI and Indigenous knowledge to solve complex environmental challenges. And so, there are clearly enormous opportunities here to help us deliver better services, to realise benefits for communities, for citizens, for regions, for addressing the grand challenges that we have as a nation, and there is no shortage of those. But there are big questions here: is it going to be safe? Is it going to be equitable? Does it make the world a better place? When it comes to AI, do we know what good looks like? And we have a stellar group of panellists here to help us answer that question today. I will introduce them one at a time and I'll ask them to share from their own experience and their own vision how AI affords benefits to individuals and to communities. And so, let's start with Dr John Flackett. John is head and co-founder of the Artificial Intelligence Laboratory, AiLab.

DESCRIPTION:

Three more video feeds appear on screen, alongside Mark Pesce. They are identified as: Dr John Flackett – Head & Co-Founder, AiLab (Artificial Intelligence Laboratory); Rachel Howard – Head of Impact Technology, Frontier Technology, Minderoo Foundation; Dr Anand Rao – Global Artificial Intelligence Lead & Partner, PwC Advisory. Rachel Howard and Dr Anand Rao’s video feeds both feature plain white backgrounds, while Dr John Flackett’s features a blurred image with the text: www.ailab.world, followed by a Twitter logo, then @_AiLab. An AiLab logo is in the top right-hand corner of his video feed. 

MARK PESCE:

John, share with us some of your experiences, where AI has been delivering benefits for individuals and for communities.

DR JOHN FLACKETT:

Thanks Mark. It's really great to be with you and my fellow panellists today. Obviously, I'm joining you remotely, so I'd first like to acknowledge that the land that I'm on today in Adelaide is the traditional lands of the Kaurna people. I'm fortunate to live on their country and it's an honour to be here. Look, as you mentioned in your introduction, AI-based systems are already deeply embedded in most people's lives.

DESCRIPTION:

The text in Mark Pesce’s video feed changes to read: Breakout Panel 4: Using AI to Deliver for Citizens. Dr John Flackett, AiLab. Dr Anand Rao, PwC. Rachel Howard, Minderoo Foundation.

DR JOHN FLACKETT:

When we're shopping online, AI powers suitable recommendations for other products, forcing us to buy stuff that, you know, we do want really, don't we? Most of our email inboxes are pretty much spam free nowadays, because AI algorithms automatically detect junk mail. We navigate easily between places with incredibly, incredibly precise AI route planning on online maps. We talk to our phones in natural language and we even get them to perform searches or play music for us, which, by the way, learns our preferences so it can recommend other music or alternative tunes that we may not have heard before. Like you said, there's my smartwatch that's monitoring our health, well, it doesn't so much monitor my health as nag me to get up and do stuff when I've been sat down for too long. And in the wider community, artificial intelligence is being used to monitor and control traffic flows in cities by adjusting traffic lights, and that's not only to reduce queues, but also to stop the build-up of dangerous levels of carbon monoxide. Critical infrastructure such as electricity grids or wind turbines is being monitored in real time by AI-based systems that can identify faults and either shut down the equipment before it fails completely or automatically call out a maintenance team. We've got drones equipped with cutting-edge image processing being used to recognise (UNKNOWN) or unstable roads after storms, or even keep track of endangered species to help protect them, and artificial vision systems are even being used by local councils to identify potholes in roads and invasive weeds in (UNKNOWN). And I guess my point, with all of these examples, is that AI doesn't stick within certain industry verticals, but rather it's a technology that cuts right across all sectors, and because of this, you know, there's huge opportunities for countries that embrace AI and invest in developing homegrown capabilities to design and develop AI solutions for all citizens.

MARK PESCE:

So, then are we saying that AI is something more like electricity? Where you don't talk about electricity being a benefit, but it's really all of the ways that people use electricity?

DR JOHN FLACKETT:

Yeah, sure. It's hidden, right? We don't think about it, and it's becoming more and more like that now, and it just powers our lives.

MARK PESCE:

Yeah. Alright. Now, we'll turn to Dr Anand Rao. Anand is a partner in PwC's advisory practice. He's the global artificial intelligence lead and innovation lead in PwC's emerging technology practice. So, Anand, you've no doubt seen numerous organisations implement AI technologies to provide better services to consumers, to citizens. How commonplace do you think AI will be in the day-to-day lives of all of us in the next five to 10 years?

DR ANAND RAO:

Thanks Mark. Great to be here with you and the other amazing panellists up here. It's a very tricky question, alright? So, I've been in AI for 35 years, and asking what will happen in five years and 10 years and predicting it with a crystal ball, maybe the AI can do better, but as humans, we have always failed in our predictions. So, with that, I'll still answer your question. I've been a big believer, I would say, in systems thinking and also scenario planning. So, I'll answer you by essentially taking maybe two scenarios, alright, so two extreme scenarios: one, a utopian scenario and the other, a more dystopic scenario. I know so far we haven't really talked about any of the negatives of AI, so maybe I'm the first one to start that, but let's start with the more positive view, right? So, we just heard John mention AI being in every industry sector. So, AI is, as we call it, a general purpose technology, so it permeates every industry, whether it's manufacturing, high tech, financial services, agriculture, mining, all of those, and every functional area as well, whether it's marketing, operations, finance, research, you name it, and AI is embedded already, and all the companies are using it, countries are using it. So, if you just project forward maybe a five year, 10 year span, I think AI will be even more embedded, I mean, you gave some great examples as well, right? So, we have been using AI for a long time in our email systems, in our search systems, so that's going to continue, if anything, more so. We look at AI very much as sensing, thinking and acting. If you look at all three of those areas, AI is going to get better and better at sensing. So, we are already attaching sensors everywhere, we talked, I think, at the manufacturing panel about IoT sensors and the industrial IoT, and that's essentially going to enable all kinds of interactions, so conversational interfaces, gesture-based interfaces, natural language processing, real-time translation. All of those things are going to come from a sensing perspective. Now, if you go to the next area, in terms of thinking, again, the thinking that's being done by AI so far has been largely about automating tasks, maybe a little bit about making decisions, but I think that is going to get better and better, better reasoning, making better decisions. So, we talked a lot, I think, in the previous sessions about augmenting human decision making. There are some things that humans do very well, and some things that AI does very well, right? So, taking lots and lots of data and then trying to synthesise from it, AI does very well, and we can apply that, right? So, in that sense, there's going to be a lot more in the thinking space that AI is going to be doing. And finally acting, right? So, acting in terms of either physical or digital, right? So, again, whether it's robotics, drones, autonomous vehicles, autonomous flying vehicles, or in the software sense, right? So, automatically sending out emails and scheduling our trips and so on. So, those are all the great things that are very likely to happen, AI essentially going to be embedded and ubiquitous. Now, let's get a little bit to the dark side, the dystopic view, right. So, all of the things that I just mentioned would obviously lead to a better society, so abundance in terms of what is available, more spare time for people, all of those are there in the utopian scenario. But again, one of the reasons why we talk about responsible AI and inclusive AI is because of some of the risks, right.
So, we see a continued escalation, as we are all very aware, in cyber wars powered by AI now, right? So, ransomware, again, being very targeted, using AI to decide who we go after and how we go after them, all kinds of bad actors getting engaged, big disasters being triggered by just faulty AI or even malevolent operation of AI. So, the list goes on, and information is getting very personalised, tailored by AI, and that's also leading to there being no one notion of truth or fact, it's my version of truth versus your version of truth. So, it could go on, but I think that's also impacting how corporations are behaving, I mean, corporations are hoarding personalised data and they could be manipulating customer behaviour. So, if you look at the other side of it, yes, it could be very beneficial, but we are all being led by maybe authoritarian governments, or it could be companies owning the data and pushing us, or, as they call it, nudging us into what they want us to do as opposed to what you and I really want to do. So I think it's all leading to maybe a potential scenario of a widening wealth gap, education gap, health gap, all of those things. That is the more dystopic view, and I think the reason why we're all talking here about some of these things is to prevent that, right? So that we can learn and then make good moves towards the utopian world, and not really dismiss this sort of dystopic view.

MARK PESCE:

And you set things up perfectly for our final panellist, Rachel Howard. Rachel is the impact technology director for the Frontier Technology Initiative at Minderoo Foundation. So, Rachel, the Frontier Technology Initiative is building a digital ecology that empowers people, and I love it when we use that word. How do you see AI empowering people today and in our future?

RACHEL HOWARD:

Thank you Mark for your question, and it's great to be here with my fellow panellists, John and Anand, for this really important discussion on AI for citizens. Frontier Tech is an initiative of the Minderoo Foundation, which is a philanthropic foundation. We exist to arrest unfairness and create opportunities to better the world, and that's a really, you know, inspiring and challenging mission. For Frontier Tech, trying to apply that in the technology space, and if we particularly think about the context of AI, this major instrument for change and what you might say is the raw material of the Internet, then thinking about how we arrest unfairness and create opportunities is, you know, a really big concept and question. We're going through an industrial revolution, but unlike the steam engine revolution, the difference with AI is that it can make decisions about people's lives and livelihoods. So, I think AI that empowers people, in its best possible version, upholds things that people value in their lives and livelihoods, and the way that we think about that is, you know, things that we often take for granted: physical and mental safety, freedom from coercion and persecution, freedom to participate in the democratic process, high levels of privacy, equal opportunity to have a say, get access to a job, get access to quality health care or education, and even having social equality and being free from discrimination. So, these are things we take for granted in our society today, but what we need to remember is that it's not a given that those things translate when we build, you know, our emerging technology. So, in terms of big themes around AI that empowers people, there are three that, you know, I'll share today. I think a lot of people are thinking about healthy social spaces online. This is the translation of the conduct and the values we hold in our physical spaces, like our schools and our libraries and our workplaces and our play spaces, so that that sort of conduct and those values are translated into our digital spaces, you know, we don't wanna see racism and misogyny, we don't wanna see bullying or misinformation in our digital spaces. And so, the things I see people working on are how to have humanising language, how to cultivate belonging, how to have reliable information, how to build strong local ties and, you know, reverse polarisation. So, that's one big theme. The second big theme I see is around inclusive growth, and I think this is the real megatrend post pandemic, as we digitise and rebound our economies. I see a lot of investment in advances towards equal economic opportunity, and that is things like access to financial services, quality education and training, investment in human capital and in tech, and diversity and inclusion in building AI, because we know diversity and inclusion in the design and build builds better products. And the third one that I'll leave you with is around something people are calling the new care economy. I think this is another macro trend, and it's AI that facilitates quality care. This is things to support ageing in place. We think that senior caregiving is a 180 billion dollar industry globally. There's solutions around training for carers and nurses, we see technologies around, you know, EdTech for school attendance, home schooling and disability therapy. We see things that support working families, and we see talent, diversity and inclusion solutions being devised out there.
So, these are things, I think, post COVID, you know, as we rebound our economies, to make sure that there's high access, quality caregiving and high inclusion. I think they're big topics.

MARK PESCE:

OK. So, it sounds like from the three of you that there are really enormous opportunities for AI to deliver tangible benefits for Australians, but we've got to be clear here, realising the AI future is gonna require equity, it's gonna require that all Australians are gonna be able to participate in and realise those benefits. This isn't just a revolution for some. But to do that, Australians are gonna need to be able to trust that the technology is gonna be used responsibly, it's gonna be used safely, it's going to have to promote inclusivity. And public acceptance of AI, sure, it's trending in the right direction, but there are still a significant portion of Australians who are hesitant here around a technology that can present some fairly confronting aspects. So, I'm gonna open this up to the whole panel. Why do you think trust is low in AI in Australia? And what do we risk by not building that trust? Let me begin with you, Rachel.

RACHEL HOWARD:

Alright. Thanks Mark. A great question. So, I think, I mean, I can understand why there is some concern and worry. There's a lot of unknowns, it's fairly new territory and the extent to which AI is impacting our lives is really becoming quite significant, I mean, you know, Anand and John, you mentioned loads and loads of use cases. We know that our commercial lives are monetised through hyper-personalisation. Our physical spaces are monitored, our human service interactions are automated. There's predictive systems making decisions about our lives that are fairly material, about whether we get loans, whether we get job ads, whether we get public benefits, and our lives are increasingly online, and I think that people don't feel quite in control of that just yet. So, if I look at the (UNKNOWN) information commissioner's survey last year around privacy, we see that two thirds of people don't feel in control of their privacy online and that, you know, 90% want more control and choice about what's collected, and we see the digital inclusion index saying that the gap between the digitally included and the digitally excluded is fairly substantial, and it's widening in some areas, particularly the regions. So, I think that, you know, as I mentioned Mark, the AI that empowers people delivers on what people value, and I sort of laid a foundation for what I think people value, you know, that participation, that privacy, that agency and control, and I think industry and government can build trust by demonstrating that, you know, by saying we value what you value and, you know, we're not gonna be opportunistic or extractive, we're not gonna wait to be caught out, we're gonna be on the front foot of this and really be demonstrating how we're building those trust measures in, and I think absolutely that industry, business and government will be rewarded for that. They would get rewarded through brand equity, customer loyalty, repeat business and trust in government, and I don't think trust is soft stuff. I think we're talking about hard business value drivers: intangible assets, including social licence to operate, are up to 50% of enterprise value for public tech leaders. Trust is a globally traded commodity, and Anand, I was really interested and inspired to see the pivot towards trust that PwC is making, with the big announcement recently around the business pillar of trust and helping your clients build trust in the market.

MARK PESCE:

So, Anand, this is perfect, you're coming to us from America right now. How is the landscape there? Because, yes, as Rachel just said, PwC put out this big statement, there was a big article in The New York Times about how you're really pivoting to be able to help businesses establish trust as one of their core values, because it's seen now as being very important. Are the Americans in this area more or less trusting? How are they working with this?

DR ANAND RAO:

Yeah, very interesting question there Mark, and I think there are, as Rachel was saying, there are sort of two interrelated issues here. One is people trusting AI, and the other one is, I think, people trusting other people, and what AI is doing is essentially surfacing the latter. Of course, you need to trust the AI and you need to have certain things in place, but if you look at all the debate that's been going on around bias and fairness, to a large extent it is more centred around people not trusting other people, again, maybe triggered by AI and other technology, but that's sort of what's going on. So, with that, your question was very much how is it in the US. Now, I would say the notion of trust in institutions in the US is much lower than in Australia. Again, having spent more than a decade in Australia and a decade or more in the US, I can see that as well, right? So, just the trust in institutions, trust in employers, and a recent trust index essentially shows, especially during the pandemic, the trust that people have in their employers actually went up in Australia by 3 percentage points, whereas in the US it went down by 2. I was surprised, actually, in the UK it went down by 6%, so a substantial drop there. And if you look at the trend in the US here, the trust index has been dropping, and over the past seven years it has actually dropped 14 points, from 62 to 48. So, there's a diminishing level of trust, and I think it's really reaching that sort of crisis proportion, if you like, and this loss of trust is not just in businesses, right? So, it sort of transcends business to government bodies, institutions, so anything that is federal, state, all of that, and also media sources, right? So, people are glued to their own version of the truth, as I said earlier, and media sources are very polarised as well, and the trust in business and political leaders, that's also divided. And in the US, I think there's another, I guess, worse situation where there really is a big divide, I think, between sections of the population because of that polarisation of views. So, one of the reporters has called this the mistrust doom loop, and it's become a vicious cycle. People see, I mean, institutional biases and therefore they don't trust, and rather than building that trust and working together, what we have done is essentially lock ourselves into smaller and smaller groupings, and I love what Rachel was saying in terms of the healthy social spaces, right. So, there are no such social spaces now where things are fact driven or truth driven, right? So, that's what is sort of driving this notion of trust actually going down, and I think there is light at the end of the tunnel. I know, I think, Rachel spoke briefly about ESG, right, so environmental, social and governance, that's something that companies are adopting, countries are adopting, various groups are adopting, and that is, again, looking at getting much more inclusive growth going, getting companies to focus not just on shareholder value, but on a much larger set of stakeholders, to be able to talk about sustainability, inclusivity, diversity, inclusion, all of those. And as I think Rachel mentioned, we at PwC have launched this trust institute, essentially looking at how we can educate businesses, how we can bring them in and how they can change the way they think about society at large in addition to their customers, and build that trust.
I think that's really critical and I agree, it's very much a megatrend as we move forward, and I think everything depends on that trust: trust between people as well as trust in the technology like AI.

MARK PESCE:

So, John, in order, I guess, to either stop this doom loop or to get us out of it, we've heard a lot about the role of increasing awareness and understanding, and we saw a lot of that in the consultations for the AI Action Plan, the idea of engagement as a way to do that. So how do we see awareness fostering trust?

DR JOHN FLACKETT:

That's a really good question Mark and, you know, I think, actually, I'm very lucky in my work to help introduce concepts of AI to decision makers, and sometimes, when we're doing that, there's apprehension in embracing and adopting AI because there's been quite a lot of fear mongering, I think, about machines, especially about machines with this superintelligence that's gonna decide that humans are no longer required, and, you know, there's obvious parallels here with the depiction of killer robots in films. But, you know, a much more concrete example is the fear that AI will replace all human jobs, and, you know, that was mentioned a little bit earlier. I might have this wrong, but I think there was a recent study from the University of Queensland that said about 72% of people distrusted AI, and their biggest concern was AI's perceived effect on employment. However, I want to be upbeat as well, and the reality here, if we're talking about awareness, the reality is that AI and other emerging technologies are actually generating jobs. So last year, the World Economic Forum published its Future of Jobs report, and that actually estimated a net increase, I think, of 12 million jobs due to AI by 2025. So I think this is the whole point: awareness about AI and its impact on the world is absolutely critical to trust. So with our clients, as we gradually workshop use cases that are suited to AI solutions and show that really the current AI tools and techniques are actually only really good at a single specific task, something we call narrow AI, people start to realise and get a bit more comfortable, actually, that not all jobs are at risk. In fact, you know, in most cases (UNKNOWN) automation sits alongside the human worker and it frees them up from repetitive tasks. So overall, AI helps automate tasks, supercharges productivity, and it gives humans more time to spend on other areas of their work and personal development. And that, in turn, enables innovation, new business ideas, and opens up a completely new way of dealing with business processes as well.

MARK PESCE:

And are we missing a trick here by telling people, 'cause we heard in the opening panel a lot of examples of how products are built better, they're built more safely, they're built more reliably, because there's AI in the loop looking for defects. Are we maybe missing a trick by not communicating that? That that's another way to be able to build trust?

DR JOHN FLACKETT:

Yeah, definitely. I think the whole communication around AI is quite tricky for a start. You know, it's 70 years now since the term was coined, and we're still trying to figure out what it means. You know, I hear it all the time when people say, you know, an AI or the AI. You know, that doesn't exist. We don't have an AI in that way, right? It's not a thing, it's a field of study. But obviously, when it hits the mainstream, the words get changed a little bit, you know, and obviously when we're reporting and talking about AI, we need new things to talk about, right? So AI is not good enough, we need artificial general intelligence, where computers are as smart as us. But that's not enough either. We need artificial super intelligence, where they're like, you know, (LAUGHS) but going back to trust, though, I think people are genuinely right to worry about algorithms, AI or not. Don't forget that we've been automating and using machines with statistics to make predictions and make decisions for people for a long time, to determine outcomes for us, especially in terms of autonomous weapons and using facial recognition for surveillance, for instance. And I think those kinds of things make people lose trust in AI, in fact, when machines are used to make decisions but we don't know that they're being used, right. And I can't remember whether it was Rachel or Anand who mentioned this a bit earlier, that transparency really matters to people. And that's actually something the Human Rights and Technology report from the Australian Human Rights Commission, and everyone should read that report, it's really, really good, actually does talk quite a lot about: that transparency of us knowing that decisions are being made by machines.

MARK PESCE:

Alright. So, that brings up the question of the collective steps and actions that we actually need to create this trusted, secure, responsible AI future. So, Anand, are organisations having the right discussions about trusted and responsible AI? And how can they start asking the right questions here when it comes to AI?

DR ANAND RAO:

Yeah. I would say that (UNKNOWN) are still very much learning. I think John gave some great examples of people not really understanding AI, right? So the whole education and awareness piece. I would say most of the business leaders, not all of them, but most of the business leaders are still learning what exactly this is, what value it brings and therefore what the risks are. So, what I would say is that, as you all know, AI has been the latest cool technology. There's so much hype around it, and usually it's some executives, even at board level nowadays, wanting to do AI, right? So they start off exactly as John said: if we need to do AI, can we hire someone to do AI? And I think that's a very wrong premise to start with. You don't want to be doing AI, you need to be solving business problems or taking care of whatever strategy you have, increasing customer retention, selling new products. So that's your core business, and AI can be used for it. The first question I would say that a company should be asking is, do I really need AI to solve this particular problem? Right. So if you don't need it, just don't go there. If there are better, maybe cheaper ways to solve the problem, solve it that way. Use AI only where you cannot do without it. That's the first thing. And then, once you get past that, is this really a problem that can be addressed, or better addressed, with machine learning, natural language processing, computer vision, again, the areas of AI? Right, it's not one blanket AI. Then I think you start looking at the risks of errors. Where is the bias coming from? Right. So in its development, in its data, explainability, robustness, safety, security, privacy, transparency, accountability. There's a whole dozen or more things that people need to worry about, and that's when it starts happening. Now, when we have looked at organisations in our surveys, it varies by geography, right? So some countries are more advanced than others. Roughly only around 20% of the large companies are essentially deploying AI, ie machine learning models and natural language processing models, at scale, at an enterprise level. Only around 20% of them are really doing it. For them, this has become a concern and they are addressing it. They've just started addressing it, because now it is very much used at scale with their customers, and if something goes wrong, it is a big issue. It's a headline in the newspaper, right? So now they are addressing that. Of the remaining 80% or so, I think 30% are still exploring and roughly 50% are doing proofs of concept. They're just testing out various things. So for them, this is really not such a big deal yet, because they are much more worried about, hey, is there really real value here? Is it all hype or do I really get some value? Let me figure that out and then worry about all the risks that you guys talk about. They just want to see whether it's valuable to use, so they don't worry about it. But I think that's also a bit dangerous. You can have a good ROI without considering the risk, but as soon as you consider the risk, maybe the ROI goes out the window. So I think you need to be conscious of that as well. I think there are a number of issues here that organisations should be thinking about. I think they're getting on that path, but there's still a long way to go.

MARK PESCE:

So Rachel, the Frontier Technology Initiative is doing a lot to ask for and ensure accountability in the tech ecosystem. And of course, this is a very hot topic, particularly in the United States this week, what with a raft of bills going through Congress and a new FTC commissioner who is very antitrust oriented. And, of course, all of this is about protecting the public from harm, right? And of course, we saw what happened earlier this year with Google and Facebook and all of that. Do you think that Australians are informed enough around the benefits and the harms of AI? And what do we need to do to make sure that this is a publicly accessible discussion that brings the nation together to have that discussion?

RACHEL HOWARD:

Thanks Mark. No, I don't think Australians are informed enough. I think there's a lot of unknowns, actually, for people, and I think right now it ranges from instinct, like when something you talk about shows up on your feed and it feels creepy, to blind trust that the credit scoring data that's used to make a decision about whether or not you get a loan doesn't have errors in it. So I think a really good place to start is actually more transparency for people. I think that will enable people to participate in the discussion in a more informed way. And I think there's sort of four big things that you can be transparent about. The first is verification that the product works to spec and it kind of does the right thing by you. So that's building trust on entry with plain English terms and conditions, collecting only information that's essential for the service, giving people the ability to object to collection but still access the service, or giving them options to move their data. I think the second thing is validation that there's no unintended consequences, that you've risk assessed, you've looked at your AI and thought about all the intended and unintended consequences, and you've got good oversight on the outcomes of your AI. And I think in this category it's also saying what accountability or responsibility you're gonna take for the displacement consequences. And I see, really, that the leaders in this who are driving automation in their businesses aren't really making serious upskilling efforts to transfer the displaced workforce, and even at a micro scale that's upskilling for the components of the job that are automated out. I mean, is the transferred (UNKNOWN) added to work, the promise of that, is it happening? It actually takes a lot of effort, and in my consulting days I spent quite a bit of time thinking about that problem, so that's been a problem. I think the third category is security, and people wanna see strong protocols that you're preventing malicious attacks, and in particular, where we see a bit of bad press is when people don't have enough accountability and due diligence on the third parties that handle the data. And I would say the fourth area is around human control: people want to see that there's an accountable person for an AI system. So I think that organisations who are sharing this type of information in the product, through explanations and choice, are being transparent with people, and that is really informative and it helps build literacy for people. And I think they can do it through feedback, and I think they can do it through reporting and insights on how that oversight is going. I think in the public sector there's quite a lot of transparency mechanisms that we've used for all kinds of things. There's registers for government data sharing, for example, that just give people transparency and allow them to be informed. I think the raw ingredients of getting public debate and participation in the debate are things around: how do you drive people to participate? How do you give them access to authority? How do you elevate shared concerns? How do you build bridges between groups? And some practical things that I see as inspiring ways of doing this are things like leaders having hashtag campaigns, just asking people, you know, #AskTheMan, you see participatory budget processes.
You see things like The Responsive City project using data-smart governance to pre-empt problems, you see SMS polling, you see local journalism doing neighbourhood stories and experience sharing. So there's lots of offline and online ways to encourage that participation and debate.

MARK PESCE:

Alright. And on that topic, 'cause in about ten minutes we're gonna take questions from all of you on this session. So keep in mind, you can ask these questions on the right side of the window. There's a Q&A tab, you can pop your questions in there. Some have been sent along now, which is good. Thank you. That's timely. So you guys have actually been participating if you're on the channel, thank you very much. We will get to your questions in a moment. I have a question for John. John, you founded AiLab to assist organisations in navigating the AI landscape and building awareness. You've talked about your process. What experiences have you had with organisations that are new to the domain? I mean, is it like appliances? Are they like, oh, let's order an AI, we need an AI, we need to get the AI in here and figure it out, then we can ask it what it can do?

DR JOHN FLACKETT:

They bring their AI with them in a box, and then how do we use this... No. Yeah. Look, you know, AiLab was born out of this need, I think, for a greater awareness about what AI really is and what the capabilities really are. You know, we've been very lucky to be able to educate thousands of people around the globe, actually, learning about AI, doing research into AI and providing masterclasses to industry and government, and that really focuses them on being able to approach vendors in the right way as well, to ask the right questions. If you think about the way that people run their businesses, you know, they do that with technology, and it wasn't that long ago that industry couldn't even ask the right questions about which server to buy. They would rely on, you know, third party providers. They quickly learned that they needed to know at least the minimal amount so that they could have good conversations with them and strike good deals. And I think that's probably where we are with AI as well. So, you know, that's a big part of what we do. And I think many organisations that we've met have really been led to believe that the application of AI means end-to-end automation of complete processes, right, using tools and techniques, or that AI is just for research labs or big tech companies. So in other words, that applying AI is really, really hard to do, whereas the reality is that AI tools and techniques are actually surprisingly accessible now. And I think this was really interesting, because in the first session, on AI in manufacturing, this was picked up as well, and this is something that we say to our clients too: you need to kick goals early on in your journey. It's really important to do that, because that leads to other successes. And so we advise organisations that are looking to get started with AI that they should start really small and focused and pick some of that low hanging fruit, because there is an awful lot of that. And so, in other words, look for time consuming, repetitive tasks that demand some level of human decision making, and like I said, that was mentioned previously as well. So I think, you know, for a concrete example, we've worked with councils that have wanted to start their AI journey, and they've started really small by developing a simple machine learning chatbot to help citizens know when to put the bin out and, more importantly, what colour bin they need to put out, right? 'Cause we all struggle with that. And although that sounds really simple, you still have to build a knowledge base, you still have to work with new technologies and new design methodologies, you have to decide what voice the chatbot's gonna have and when to hand off, when there's no knowledge or the chatbot doesn't understand the question that it's given. And then once you have that general framework, you can build on it to progressively enhance the system, so you are scoring goals quite early on. And I think what's really interesting here is that even seemingly straightforward prototype problems like the chatbot I mentioned require good data management, because at that point you find out that the data you need is actually siloed across all different departments or it's not in computer readable form, so you have to do all this extra work. So whatever happens, it's a great learning experience.
And at the very least, it leads to really good discussions about data governance. And to be honest, at the moment, that's the foundation for building good AI systems.
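To make the shape of a project like that concrete, here is a minimal illustrative sketch in Python, not the councils' actual system: a tiny FAQ-style chatbot with a small knowledge base and a hand-off rule for questions it doesn't recognise, as John describes. The questions, answers and confidence threshold are all invented for illustration.

    # Minimal sketch only (not the councils' actual system): a tiny FAQ-style
    # chatbot with a small knowledge base and a hand-off rule for questions
    # it doesn't understand. All content here is invented for illustration.
    import difflib

    KNOWLEDGE_BASE = {
        "which bin goes out this week": "This week it's the yellow recycling bin.",
        "when is bin collection day": "Bins are collected every Tuesday morning.",
        "how do i report a missed collection": "You can report it via the council website.",
    }

    HANDOFF_MESSAGE = "I'm not sure about that one, I'll pass you to a council officer."

    def answer(question: str, threshold: float = 0.6) -> str:
        """Return the best-matching answer, or hand off if nothing matches well enough."""
        question = question.lower().strip("?! .")
        # Fuzzy-match the question against the knowledge base keys.
        matches = difflib.get_close_matches(question, KNOWLEDGE_BASE, n=1, cutoff=threshold)
        if matches:
            return KNOWLEDGE_BASE[matches[0]]
        return HANDOFF_MESSAGE

    if __name__ == "__main__":
        print(answer("Which bin goes out this week?"))      # knowledge base hit
        print(answer("Can I keep a goat in my backyard?"))  # hand-off to a human

Even a toy like this surfaces the issues John raises: someone has to assemble and maintain the knowledge base, choose the hand-off threshold, and decide the chatbot's voice before the system can be progressively enhanced.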

MARK PESCE:

And it feeds into one of the operating principles that we've learned, which is that the future often begins as an expensive toy. Alright. Final question for all of you. If you can give me sort of a minute answer on all of this. What, in your own opinion, is the number one action needed to ensure that all Australians can benefit from AI? I'll come back to you, John. Start off.

DR JOHN FLACKETT:

Well, I'm gonna stick with education. I think we're still in a massive education piece, whether that's formal education or self directed. Collectively, we need to gain a good common understanding about AI. I'm actually very happy that all of us are not in the right places all at the same time. So that's a really good start, I think, for the wider community, for citizens. There's so many good resources out there at the moment. Just Google 'what is AI' for a start; you might even find some free AiLab resources up there as well. The other really good source of information is community groups around the country; there's lots of AI start-up groups. We run one in South Australia as well. We build AI awareness by showcasing use cases and getting businesses to show applied AI.

And I think that once we get that understanding, we can have a better collective conversation. I will just go very quickly back to what was said, though. I think it's really, really important to remember that not everybody has access to the technology that we do. There's still many in Australia and across the world that don't have smartphones. They don't have access to the Internet. They can't just Google what is AI or travel to the nearest meetup group, for instance. So we've got to double our focus, I think, on reaching those people to make sure they've got the same opportunities to embrace AI.

MARK PESCE:

Alright, Rachel, what's your number one?

RACHEL HOWARD:

I'll keep it brief and just repeat a theme that I think has been powerful today and it has been a weaving thread through this conversation, which is trust building measures. I think that's good for citizens and I think it's good for business.

MARK PESCE:

And, of course, a good trust building measure is something that happens over time. That trust is a product of continuous, repeated interactions with institutions, with individuals, with artificial intelligence. Alright, Anand, what about you?

DR ANAND RAO:

I would agree with both my panellists: education and trust building. Especially on the trust building, I would say government can set the bar in terms of how people or other companies should behave, facilitate the coming together of different communities, the public-private partnerships and what I think people are calling living labs, and then also provide the right guardrails, right. And again, in terms of the guardrails, I would say some of the discussion that we had, and what the minister announced in terms of bringing research very much together with industry, right, academic research and industry together to solve problems that are facing the community, I think is something very, very apt for AI to get into.

MARK PESCE:

Alright. So here is the first question from the audience. I like this one because it's gonna turn everything we've been saying on its head. Is trust, and by the way the word trust is in scare quotes here, is trust the right framing? If people trust AI because they don't understand the risks, is that the right outcome? Should we be working towards systems instead that are trustworthy or deserve trust? And both of those are in quotes, and I like the way it reframes that. Who wants to take a whack at that one?

DR ANAND RAO:

I can take a whack at this. I mean, when we kind of settled on responsible AI, there was a lot of debate at that point, as far back as four, five years ago, that we should call it trusted AI. And the primary objection to it is that trust is not something that you name and therefore it is trusted; it's something that needs to be earned, right? So what companies can do, what governments can do, is be very responsible, and that trust is something that is earned from the people, right? So if it is really what it's meant to be, with all the other attributes, then the customers or the users will, quote unquote, label it as trustworthy, right? So you can't label it, and you can't define it and say, here are the ten things, and if you did these, you are trustworthy. No, I think that needs to be earned, and maybe there is an 11th criterion that people will come up with. So I agree with whoever had that question that it's not about labelling something trustworthy; I think we should be looking at all the other attributes that will lead to the trust.

MARK PESCE:

Rachel or John, anything you add to that?

RACHEL HOWARD:

I think that's a good summary, Anand. Also, I have a bit of trouble with trustworthy AI; in some senses it's like, does it perform? Does it do what you told it to do? And it can be trustworthy in that sense, but it could be performing a function that's not a healthy function, not a good function. So I tend to use responsible AI quite a lot, and I think that's a better framing, and I really like your framing that you've got to earn the trust. We're constantly reminded, I mean, as a philanthropic organisation standing up for human rights, we do this in far more areas than just tech, but when we talk about responsible AI, personally I don't wanna forget that underpinning anything we do in society is the legal system, and we have to remind ourselves of this. There's a legal angle to AI, and it's not just intending to do the right thing: it has to meet the legal frameworks and the regs that we have in place, including, as John mentioned, our human rights laws, which are all about removing discrimination and giving everyone an equal chance, regardless of gender, race or religion. And so I don't want us to forget that there is an obligation to meet the frameworks that we already have in place, and it's not just at the choice and discretion of the designers, builders and owners of the systems.

MARK PESCE:

Anything to add John?

DR JOHN FLACKETT:

I'm not sure I can better those two. (LAUGHS)

MARK PESCE:

Alright, this is another, I think, really good question. It's getting to one of the hard issues here. There are a lot of challenges, very clearly, regarding fairness, and I mean, you can take a look at the troubles that Google has had in its senior AI team around this now, right? And if anyone watching this doesn't know, just Google 'AI firings', you'll see the whole story. So there's challenges regarding fairness, trust, biases for business and biases that are built into citizen services. Do you find these are blockers for the adoption of AI and policymaking? In other words, are we so afraid that we're gonna build in these biases that we won't use AI? And if so, then what can we do to overcome those challenges?

RACHEL HOWARD:

I can start with that one. We think a lot about this and have done quite a bit of research around government use of AI. I think that, particularly in government services, you can be quite fearful, because getting it wrong is a big downside, and I'm sure the private sector feels like that too. So, like, is there a risk it's gonna deteriorate our ROI here? What's at stake? Is it all too hard? What I think is that unless you're intentional about looking for your risks and what the intended and unintended consequences are of the systems that you put out there, then you're gonna have a lot of blind spots. And so, as to what can be done, I think people who look at this can see we had the Human Rights Commissioner's report, including the technical report, and he's certainly saying, after three years of work, there's a big risk that there's discrimination in AI, and it's usually indirect discrimination, where you've removed the sensitive, protected variables like gender and race, but the system finds other ways to infer them. And so what I think can be done about it is being intentional about looking for the risk, and beyond being intentional, having standards in the industry is better than it being at the discretion and choice and skill and will of the teams and the individuals involved. I mean, sometimes these decisions about how to structure the model to avoid these pitfalls of discrimination are made at very deep levels of the organisation, and the other side of that is they're not elevated to senior levels, certainly not the board. And so I think it's about putting more standards in place, so there's clear practice guidelines around how to assess discrimination risk and other risks in your AI systems, and then having very good models of oversight. And that's the oversight of the outcomes: what outcomes do the systems actually produce against those variables that we consider protected.

DR ANAND RAO:

Yeah, I would say, Mark, the word fairness is, very much like AI, a very misused word, right? So when we say fairness, we mean very different things, and in fact, it's well documented that there are more than 30 mathematical definitions of fairness. So when we see the headlines, AI is unfair, right, unfair according to what? Which definition? An algorithm might very much agree with my definition of fairness, but may not agree with your definition of fairness, and it can mathematically be shown that there can be no algorithm that is fair across all of those 30-plus definitions, right? So I think we are taking something which is very much a human-level issue, right? We don't agree on what fairness means, or we interpret it in different ways in different circumstances, right? So it's very circumstantial as well, it's not one thing. So, again, I think we are making the same mistake as with AI, that we are using a word that's in common parlance and then taking it to AI and then saying AI is biased. No, we don't agree as people which definition to use; let's sit down and work it out and convey that this is what we mean and this is what the algorithm is doing, right? So that needs to be done, and I fully agree with Rachel, that needs to be done with not just the data scientists, but with the business sponsors, with the community, with the legal, with the ethics people, right? So a much broader group of people looking at what is the right thing to do there. Then I think some of these things disappear. And also, I know there is always a higher bar for AI compared to people, right, and that's well known. Again, I think we should use some of these things and refine them as opposed to not using them at all.
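To illustrate Anand's point that different mathematical definitions of fairness can pull in different directions, here is a small, purely illustrative Python sketch, not something discussed on the panel: the same set of predictions satisfies one common definition (equal true positive rates across groups) while violating another (equal rates of positive predictions). The group labels, outcomes and predictions below are invented toy data.

    # Illustrative only: two common statistical fairness definitions can
    # disagree on the same predictions. Toy data, invented for this sketch;
    # each group is assumed to contain at least one positive case.

    def demographic_parity(preds, groups):
        """Rate of positive predictions per group (equal rates = parity)."""
        rates = {}
        for g in set(groups):
            idx = [i for i, grp in enumerate(groups) if grp == g]
            rates[g] = sum(preds[i] for i in idx) / len(idx)
        return rates

    def equal_opportunity(preds, labels, groups):
        """True positive rate per group (equal rates = equal opportunity)."""
        tprs = {}
        for g in set(groups):
            idx = [i for i, grp in enumerate(groups) if grp == g and labels[i] == 1]
            tprs[g] = sum(preds[i] for i in idx) / len(idx)
        return tprs

    # Toy predictions from a hypothetical loan-approval model.
    groups = ["A"] * 4 + ["B"] * 4
    labels = [1, 1, 0, 0, 1, 0, 0, 0]   # who would actually repay
    preds  = [1, 1, 0, 0, 1, 0, 0, 0]   # who the model approves

    print(demographic_parity(preds, groups))         # {'A': 0.5, 'B': 0.25} -> violates parity
    print(equal_opportunity(preds, labels, groups))  # {'A': 1.0, 'B': 1.0}  -> satisfies equal opportunity

Here the model approves everyone who would repay, so true positive rates are equal across groups, yet approval rates differ because the groups have different base rates; which of the two numbers counts as "fair" is exactly the kind of definitional choice Anand is describing.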

MARK PESCE:

John, we have about 30 seconds.

DR JOHN FLACKETT:

Well, very quickly, I mean, I think we actually have a long history of looking at risk with automated or algorithmic systems, back in the 90s. I mean, I know a couple of people that have been doing automated credit scoring, for instance, for decades, and they have a massive focus on bias and risk. And, you know, maybe it's time to dip into a bit of history as well; there is a wealth of knowledge out there. And so this issue of collaboration is absolutely key now.

MARK PESCE:

And it reminds us that some of the new problems we think we have with AI are, in fact, very old problems. Huge thanks to John Flackett, to Anand Rao, to Rachel Howard, thank you so much for your wonderful contributions on this panel.

DESCRIPTION:

All video feeds except for Mark Pesce disappear from screen. The text in the background of Mark Pesce’s video feed changes to read, We’ll be back at 4:00 pm!

MARK PESCE:

Now, everyone, we're gonna take another 15 minute break. We will be back at four o'clock. Remember that you're changing live streams again. So you have to go tap on that live stream panel as soon as this stream ends, which is going to be momentarily. We will see you back for the closing session at 4pm. Thank you very much.

DESCRIPTION:

A closing slide. On a white background: The Commonwealth coat of arms: Department of Industry, Science, Energy and Resources. Text: Techtonic 2.0. Australia’s National AI Summit. 18 June 2021. This session has now concluded. Thank you for joining. A montage of images, including a woman in a field operating a drone, a microchip, mining and manufacturing machinery, a rover, and people in workwear.
