
Watch an in-depth panel discussion on Australia’s AI future. It includes pre-recorded questions from Year 7 students.

Moderator: Mark Pesce, leading futurist, author, entrepreneur and innovator

Panel:

  • Dr Cathy Foley AO PSM, Chief Scientist of Australia
  • Liesl Yearsley, Chief Executive Officer, A-kin
  • Jeremy Howard, Founding Researcher, fast.ai

Transcript

(UPBEAT MUSIC PLAYS)

Description:

Text: TECHTONIC 2.0 - Australia’s National AI Summit - 18 JUNE 2021. This session will commence shortly.

Above the text, the logo for the Australian Government Department of Industry, Science, Energy and Resources. A kangaroo and an emu balance a shield between them.

On the right, still colour images surrounded by pink, white and teal squares: a woman looks at a drone hovering over a wheat field. A man in a hardhat works at a laptop. A mine dump truck. A robotic arm.

Mark Pesce: And welcome back, everyone. We're here for the closing sessions in Techtonic 2.0, Australia's national AI summit.

Description: A webcam of a middle-aged man in a navy-blue suit. 

Text: TECHTONIC 2.0 - Australia’s National AI Summit - 18 JUNE 2021. Welcome back!

Above the text, the logo for the Australian Government Department of Industry, Science, Energy and Resources.

Under the text, the still colour images surrounded by coloured squares.

Text at the bottom of the webcam: Mark Pesce - Leading Futurist, Author, Entrepreneur and Innovator.

A transcript of Mark’s speech runs along the bottom of the screen.

Mark Pesce: So, we've all been off to our breakout sessions. I was in breakout session number four. I don't know which ones you were in. But in session number one, putting AI ethics principles into practice, they took a look at transparency. Not just about revealing the black box, but actually taking a look at how the AI is being built, how the AI is being tested for the applications in which it's going to be used, and what kinds of positive developments are happening around the world as we learn more about how to identify the best practices there, and then scale them into responsible AI.

So, that was stream one. Stream two, the next wave of AI technologies. And this was a popular one. So, we heard, as you'd expect, that the pace of AI development is accelerating rapidly. It's changing. That we have, of course, the natural human tendency to overestimate the impacts in the short term and underestimate the impacts in the long term. But, of course, it's the human capital, the skills, the capabilities, the plant that we have to work with, all of that, which is really going to be critical, because that's what allows us to navigate any rate of change.

Alright. Stream three, how do we AI-proof the workforce? For us to build the skills domestically that we're going to need to succeed as a nation in AI, we're going to need to think about the ways to transition, and the skills we're going to need in translating and implementing AI. We're going to need to figure out how to educate Australian workers and businesses and students so that they feel empowered to do this. And that brings us to stream four, which was very much a theme around empowerment, taking a look at all the ways that AI is touching our lives.

But now we actually need to think about it critically. It's like, how can we make sure that it is good? And that, of course, falls into a definition of, well, what is good? What is trust? What is fairness? And it's also then taking a look at an approach that allows us to say, OK, we know where we want to go. How do we build systems so that we can constantly ask questions about them, so that we can understand the path they're on and adjust their path to make it more fair and more equitable for all Australians? And listen, if you really like something you heard about in another stream, don't worry, because all of the streams will be available for you to watch on the Department website after the event.

So, we have already spent three hours at the coalface and we're getting a sense of what is happening, what needs to happen, what's the best that could happen, and what we plan to do to make it happen. But it is time now for us to get a bit of a view from a height.

Text: Panel: Future Opportunities for AI in Australia – Dr Cathy Foley AO PSM, Chief Scientist of Australia – Jeremy Howard, fast.ai – Liesl Yearsley, A-kin

Mark Pesce: And we're going to hear from three global leaders who are going to reflect on the future possibilities for AI here in Australia. And again, I have the opportunity to talk to a stellar set of panellists to guide us through this very rich topic. So, Liesl Yearsley is the Chief Executive Officer of A-kin, Jeremy Howard is the Founding Researcher at fast.ai, and Cathy Foley is the Chief Scientist of Australia.

Description: A split screen of four webcams. Mark, Cathy and Jeremy sit in front of a white background. Liesl sits in front of a window with a view of sunlit trees.

Text in the top left: TECHTONIC 2.0 – Australia’s National AI Summit

In the top right, the logo for the Australian Government Department of Industry, Science, Energy and Resources.

At the bottom, a transcript of the participants’ speech runs across the screen.

Mark Pesce: So, Cathy, at the first Techtonic we brought in secondary school students. There was a photo of that in the opening, because we wanted them to pose their own questions about a future in which we know that artificial intelligence will be looming very large. And so, in the run-up to Techtonic 2.0, we again went to a local school in Canberra and asked the students what was front of mind when they hear about and think about artificial intelligence. We have to keep in mind, of course, that these kids will be spending their entire lives in a world where AI is increasingly a part of every aspect of their lives. So, let's go to the first question here.

Description: A girl wearing glasses and a black hoodie in front of a board covered with illustrated posters. On the posters: The text, ‘Metal Tech’ underneath a smelting pot. A computer monitor. A man wearing a welding helmet and apron.

Text: Sanvi – Year 7 student.

Sanvi:

My question is, will the advance of artificial intelligence in the future affect our safety?

Description: A split screen of four webcams.

Mark Pesce: Alright, Cathy, what do you reckon? I mean, this is one of the core questions that we're asking, is this going to make for a safer world?

Cathy Foley: Look, it's so important that this has been raised. And it's fantastic it was the first question, because I think the safety of any new technology is absolutely critical. And I'd have to say that, in general, I think there are good things. So, for personal safety, imagine we get to a point where our Fitbits, or any of the devices we wear now, gather information, and AI is able to go through and analyse that and say, you're going to have a heart attack in half an hour's time.

Before I have it, the ambulance is at my door coming and giving me a trip to the hospital so I could get my triple bypass. So, that sort of thing is something which I think is going to have enormous impact on us. At the moment it sounds a bit like science fiction. But these things we're beginning to see happen already. And it's something where, when you were saying before, we overestimate the current and underestimate the future, I'm not sure where that sits at the moment. But I think we're going to be seeing this coming to our lives in so many ways.

We're also going to see it helping, and we've heard in some of the other talks, just safety in work. If we've got automated systems, we're going to start seeing the ability to have greater safety at work. I think we heard one of the speakers talk about AI being able to identify sleepiness and the cognition of drivers when they're driving. And it's already made a 90% safety improvement on the road. How good is that? So, they're the sorts of things where we're going to see things happen that will make us a safer world, I think, in many ways. Like with any new technology, there are always downsides. And I think the thing which we're beginning to realise now, just from the current development of digital technologies and social media, is that people are thinking, what is this data doing? Who's controlling it? Is it safe? How is it being used?

And this is where we as a society need to think now, and prepare now, for that social licence and agreement as to how we want to manage data, and how we're going to have the rules and regulations about how it's collected and used. I think we might have seen Apple just advertising an app now where you can choose whether or not you're followed on your mobile devices. So, we're seeing technology already beginning to respond to that. And I suppose the final thing is data security. And I think what we're seeing at the moment is that cyber issues are huge.

But what we're also seeing is the evolution of quantum technologies, which I think will be an add-on to, and an accelerator into the future of, AI technologies, making them that much better and more impactful. But what that will also bring with it is things like quantum key encryption, which will allow the development of unbreakable security codes and things, which will mean that we'll be able to have secure data. But we have to get that all ready.

Mark Pesce: Jeremy, Liesl?

Liesl Yearsley: I'll go next. Just a little bit of context, I've actually been building and coding and deploying AI systems for a really long time. A-kin is a public benefit corporation. We're making AI systems for the habitat. They're going to run your whole home as a complex matrix. We're building them for NASA already. And we're starting to deploy them in tens of thousands of homes. Our previous company became a global AI powerhouse and was acquired by IBM, and we were delivering about 60 million live AI human interactions at the time of acquisition. We had about 6-10 giant Fortune 100 companies using it, and tens of thousands of developers.

So, I've been around a little bit in AI. Around safety, I think there is, you know, to reference your opening comment there, there's the near term and the 5-7 year term. Absolutely, would I rather have an AI tracking if I'm OK, or some random person who turns up once a week? If I was crossing a street at 10:00 on a Saturday night in a busy suburb, would I rather be in the street that's 100% AI powered cars, or 100% humans? Absolutely 100% AI powered cars. Humans are buggy, we're emotional, we do dumb stuff. So, I think in the near term, AI is going to power a lot more. We're already using AI in our braking systems in our cars, and all sorts of things that we don't even realise we're using it. Long term, I think it's a little bit more oblique and a little bit more dark for me.

In that right now we are deploying AI systems that are able to pervasively shift how our population behaves, and how they do things. You look at what's happened with news in America, and these sorts of algorithms that find your biases and amplify them, or you look at things like how we're purchasing things in the home, and what that does to the environment. We're going to sit and eat a pizza and watch a movie on demand because AI has figured out we're going to want pizza and a movie, or that we're going to take the dog for a walk. So, all of these micro decisions that we make are increasingly going to be shaped by AI systems. Even how we treat other people.

We're moving into a world where about a third of our relationship time is going to be with some form of AI. And if they're trained to be submissive, if we end up with an army of robots in our home doing our bidding, what sort of people are we going to become? What sort of life choices are we gonna make? And there the safety question becomes a little more about who we will become. Our persuasion, our decision making, is going to slowly be battened down into this minimal-cognitive-effort, maximum-pleasure kind of world if we keep going with current trends. So, I'm really interested in us thinking about AI not just around regulation and data bias, but really thinking about optimisation.

There's a classic thought experiment in AI, the paperclip problem, where someone said, what if you train the world's best AI to go out and make paperclips? And it got so good, it just kept going, and it mined every single thing on Earth. It melted that frying pan, and it melted my pen, and we ended up with a mountain of paperclips and nothing else in the world. So, that's a classic case of AI optimisation gone wrong. So, I think these early days are when we really should be thinking, as companies and people who use AI, which is increasingly every one of us, about how this is optimising our lives. If you want me to talk about singularity, I can, but we don't have a lot of time.
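
To make the optimisation point concrete, here is a minimal toy sketch in Python; the resource names, quantities and conversion rate are invented for illustration and are not from the panel. Because the objective counts only paperclips, the "best" policy is simply to consume every resource within reach.

```python
# A minimal, invented sketch of the "paperclip problem" Liesl describes:
# the objective counts only paperclips, so the "optimal" policy is to melt
# every metal object available. Names and numbers are illustrative only.

world = {"frying_pan": 1.0, "pen": 0.01, "car": 800.0, "iron_ore": 1_000_000.0}  # kg of metal in each resource
CLIPS_PER_KG = 1_000   # how many paperclips one kilogram of metal yields (made up)

clips = 0
while any(kg > 0 for kg in world.values()):
    # Greedy step: melt whichever remaining source yields the most metal.
    # Nothing in the objective says a pen or a frying pan has value, so nothing is spared.
    resource = max(world, key=world.get)
    clips += int(world[resource] * CLIPS_PER_KG)
    world[resource] = 0.0

print(f"{clips} paperclips; resources left: {world}")
# A mountain of paperclips and an empty world, precisely because the
# objective never mentioned anything except paperclips.
```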

Mark Pesce: That's alright. Jeremy?

Liesl Yearsley: I think it's close with everything. That's all I'm gonna say.

Jeremy Howard: One of my biggest concerns is that the answer to the question of, will we be safer, might be, it depends who you are. AI has the ability to increase inequality in ways that are deeper and more substantial than most people realise. And it's because of something called positive feedback loops. Let me give you an example. I've spent the last 10 years in the US, so apologies if most of my examples are a little US-centric.

But in the US, Black Americans are arrested for marijuana possession about seven times more often than white Americans, despite using marijuana at about the same rate. So, there's an underlying bias in data around arrests based on race. Furthermore, in the US, it's increasingly common for police departments to use something called predictive policing algorithms, which is where they use machine learning to try to figure out where people are likely to be arrested. And that's trained by looking at arrest rate data. Based on that data, many American police departments send police to the places that the machine learning algorithm predicts there will be arrests. As a result of which, there are more arrests.

That data then goes back into the algorithm to predict where there's going to be more arrests. And that then says, oh, there'll be more arrests in the places previously we told you there'd be arrests. This is an example of a positive feedback loop. And this happens everywhere in machine learning unless you very intentionally try to avoid it. So, any biases caused by history or culture or data issues get enhanced. And this is one of the reasons that I'm concerned that AI may make you more safe if you are already in a position of privilege. And the only way to stop this is to ensure that there's a large, you know, there's a really good diversity of people working on the technology because they're the people who understand the capabilities and limitations of the technology, but also the culture and society in which they operate.
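
As a rough sketch of the feedback loop Jeremy describes, the following toy simulation (invented figures, not drawn from any real predictive policing system) gives two districts identical underlying behaviour but a biased historical arrest record. Because patrols are sent where past arrests were recorded, and recorded arrests follow the patrols, the initial bias keeps confirming and widening itself.

```python
# Two districts with identical true behaviour, but district A starts with a
# biased, higher historical arrest count. Patrols follow predicted arrests,
# recorded arrests follow patrols, and the new data feeds the next prediction.
# All numbers are invented for illustration.

arrests = {"district_A": 700, "district_B": 100}   # biased historical record (roughly the 7:1 skew mentioned)
BASE_PATROLS, EXTRA_PATROLS = 10, 80               # extra officers go to the predicted hotspot
ARRESTS_PER_PATROL = 2                             # identical in both districts: behaviour is the same

for year in range(1, 6):
    hotspot = max(arrests, key=arrests.get)        # "predictive policing": more past arrests => predicted hotspot
    patrols = {d: BASE_PATROLS for d in arrests}
    patrols[hotspot] += EXTRA_PATROLS
    for d in arrests:
        arrests[d] += patrols[d] * ARRESTS_PER_PATROL   # what gets recorded depends on where police are sent
    print(year, dict(arrests))

# District A is predicted to be the hotspot every single year, and the gap in
# recorded arrests keeps widening, even though both districts behave identically.
```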

Mark Pesce: Absolutely. Alright, next question from our students. And, Jeremy, I want you to take the first crack at this one. Although I can tell from Liesl talking about the singularity she's gonna want to answer this one, too. So, next question coming in.

Description: A boy sits in the chair vacated by Sanvi. He wears glasses and a black polo shirt with an embroidered white shield logo.

Text: Anders – Year 7 student

Anders:

My question is, will AI have an effect on the development of humankind?

Description: A split screen of four webcams.

Mark Pesce: So, that is a deep future question. Where is it going with the development of humankind, Jeremy?

Jeremy Howard: Well, as it happens, I started the data science group at Singularity University. So, I have something to say about that as well. But what I will say is, I want to focus on the timeframe the asker of this question is likely to care about, the next 20 or 30 years. And the answer is, it's going to make a big difference. And the reason why is that there's been a recent step change in human capability. Back in the late 50s and early 60s, scientists started developing something called neural networks. And the first one was deployed in the early 60s. And it was a huge machine with lots and lots of wires. And it didn't do very much at all.

Neural networks continued to do very little for decades and decades and decades. So much so that people again and again decided, this is hopeless, this is stupid. And we had these things called the AI winters. Then something happened in 2012 that almost nobody noticed. In 2012, one of these neural networks outperformed humans at the very human task of recognising traffic signs. Now, that's not a very exciting problem. But the very idea that a computer could suddenly be better than humans at looking at things told us that something had happened. And what had happened is that finally, after over 60 years, these neural networks, which had just been gradually increasing in capability at an exponential rate, passed humans.

And since 2012, they've really taken off. We've seen neural networks now beat the world's best Go player, totally smash the protein folding problem, and they're used at Google to optimise data centre electricity usage. They're rapidly going all around the world. So, from time to time humanity has a step change in capability. This is one of them. We had a similar thing with steam power and electricity, where things that previously required human or animal inputs to provide energy to processes could suddenly use steam or electricity, and the world changed. We now have a technology which is able to do certain types of human intellectual behaviour better than humans. And we're gonna see the same step change.

Now, what happened in the industrial revolution, many historians believe, is that most people were worse off for the next few decades because of the inequality it created, the child labour issues, and so forth. Eventually, things came around, and we're all happier that it happened than not. But the question of what life is going to look like in 20 or 30 years is going to depend a lot on how we utilise this technology. I think in Australia, as a country, we are dramatically underestimating the impact it's going to have on society, and we are massively under-investing in it. If you look at the US, already the top five largest companies by market cap are all tech companies, and are all very strongly AI-driven tech companies. None of ours are.

So, we're in danger of falling behind economically, but also as a society, unless we invest heavily here. But I do think we've got the right skill set and the right foundations to be extremely successful if we put our mind to it. And so, I think Australian society could go in one of two directions. We could become an absolute world leader, a huge economy dramatically better than our size would suggest. Or we could become a net importer of intellectual capability. That's what it would end up being, and that would be a disaster for us as a country.

Mark Pesce: Cathy, do you want to have a go at that one?

Cathy Foley: Yes. So, I want to add two things to that. The first one is just looking at the opportunity to really see how we can make AI into something which creates a more balanced and even society. And that's where we've heard already that there's a potential to go in a couple of different ways. But can we actually look at how we make the decisions on the algorithms we design? And I think that's where I'd like to come back to thinking about the equity, diversity and inclusion of those people who are creating the algorithms and creating the actual technologies themselves. If you think about, for example, we've heard that gender is an issue, and that we don't have enough women wanting to become AI scientists or data scientists.

So, that's one thing. It's a very small proportion of women in the industry, and we need to change that, because that will impact how decisions are made and how algorithms are developed. And then there's also the cultural thing. We were hearing even about the safety examples before. Apart from just cognition and the ability to make better decisions, there are also deep cultural decisions that we make which depend on where we've come from. I think the classic one is, if an autonomous vehicle is going to run into something, is that the child or the older person? The granny or the child. And it very much depends on your culture. In some cultures, the child is everything; in other cultures, our elderly are the highest priority.

So, that's something where it's going to be really tricky to know how to manage and engage with that as we develop AI capabilities. But the thing that we need to understand is, what is that going to mean for us as a society? So, we're talking about, how is this going to develop humankind? I'm wondering whether it's, again, got two ways to go. One is a sort of separating out of cultures because of the biases that are created, and can we actually work out ways to improve on that? So, I guess that's one aspect. And then I just want to go back to our brains themselves, as they develop. I don't know about you, but when I was in high school I knew about 200 phone numbers. I'm lucky to know one or two now.

So, our mobile phones are actually our brain extensions. My brain's got really lazy at learning certain sorts of things. And I don't know what that means for my brain cognition as I get older. But I also noticed with my children when they were little, they're in their 30s now, but when mobile phones were first invented, and they had screens and menus and things like that, I was sort of dithering around trying to figure out how to use them. My son, who was at that time, I think, six or seven years old, just picked it up and was programming it and doing it straightaway. How did he learn that? And so, I think we're going to be seeing quite a differentiation across the age groups between those who just engage and pick it up because it's, you could sort of crudely say, in their DNA because of what they're exposed to, by whatever means, from the moment they're born.

And the school children asking those questions now have actually identified that when they're our age this is going to be just everyday stuff; they'll have navigated it. And then how do we, the older generation, well, we're going to live for a lot longer, so how are we going to make sure that we're part of this emergence of a new technology that's going to be so pervasive it'll change all our lives in many ways?

Mark Pesce: Liesl, should I just be giving up now and throwing in the towel, because I'm getting up there? Or, can I have some hope that AI development in humankind will actually affect me as well?

Liesl Yearsley: I'm actually a bit of an AI pessimist, quite frankly. But that's why I'm doing something about it and building a public benefit corporation that's an AI company. We aim to have enormous gravity in the world and be in about a third of households. I just want to touch on a few points, starting with Cathy's point about gender. If we are indeed moving into a world where more than half of our decisions are made by AI at a government, corporate or household level, then we can't have half of our population not be part of that development. And it's not just about getting young women into STEM and studying AI.

AI touches our lives when it gets out of the lab, gets into the world and scales into the world. As a female CEO, and you can look at this any way people slice or research or dissect it, we raise about 2.8% of venture funding. 2.8. In the COVID pandemic it dropped to 2.2. Why? Because women were taking on a third more household load. I raise more money in the US than I do in Australia. We are behind the US on gender bias in venture capital. In my career I've listed a company, I've had an exit to IBM, and yet I still today raise more money out of grant funding than I do out of venture funding. Why? Because grants are a meritocracy.

You know, it's not, we've got this great ambitious thing we're gonna do, it's too risky, it's crazy. It's, who's got the best tech, the best management team, the best capability and the best ability to get through this project? So, I think it's not about government giving more grants, it's about really looking at our venture industry and our board representation, and financially supporting risk-taking more, not just women who play it safe. Just one more point on that: there was a big study done. I think it was 500 Startups, who have like an annual, big hackathon pitch thing where hundreds of companies go and pitch.

And on the whole, blokes got asked company opportunity questions, like how you're going to own your market. And women got asked, all the time, things like, how are you going to defend yourself? So, that's all I'm going to say about gender; I run my AI company differently. My last company was the first company in the world to put human-level interactive agents on the banking frontline. Nowadays, you see them everywhere. You go to your bank, and there's a pretty skilled chatbot that'll talk to you. We were the first. This was like 10, 12 years ago in Australia, right here. And you know what would happen? People would be mean to these typically female bots. They would say, you're stupid, you're dumb, you don't know what you're talking about. And our clients would say, well, you know, the customer is always right. And I'd say, actually, no.

We are teaching people to behave badly to a submissive section of the population. And then we're transferring that behaviour. So, I used to build in rules. We called them the nice guy, nasty guy score. And if they started behaving badly, we'd go, blap, blap, blap and end the call. And I'm not bashing anyone here, because women are just as bad. Probably sounds like I'm bashing, so, I'll just put it on the table. What I'm saying is I think about the problem differently. I think about how we're training a population. I used to be a teacher.

So, I think about the world my son is going to grow up in, and what sort of decisions are going to be made around his life. I know we don't have a ton of time. So, I just want to talk quickly about the original question, where is AI going? There is a third wave coming. We've heard a lot about neural networks, and they actually did come about in the 80s and 90s. Back in the 50s and 60s, we had expert systems, or symbolic reasoning. They could reason, but they didn't learn well. Neural nets can learn a lot, but they don't reason well. Even AlphaGo that we talked about earlier, that state-of-the-art system beating the world's best at the game of Go. Think about AlphaGo for a minute.

So, the AI is cracking it, yeah, but AlphaGo has a defined goal. It knows what winning looks like. It has defined variables, black and white pieces. It's got 20,000 games that it's been able to watch being played. When did you last walk into any event in your life, a new room or a new scenario, where you had a defined goal, perfectly articulated variables, and 20,000 data sets of how it had been won or lost in the past? We don't reason like this, OK.

We have what's called adaptive reasoning. If you want to look more at this, DARPA has done some very thoughtful pieces on this third wave of AI. If you take the average neural net today and you try to put it in a human brain, it would melt our cortex. So, it's inefficient. We're just throwing enormous amounts of biased data and compute power at it. But there is a third wave of AI coming. And we're working on it in our lab. I'm absolutely sure there are other labs around the world working on it. And it's going to bring about a new generation of AI that will be almost indistinguishable from humans in how we act and reason, which makes it even more important for us to think about. It won't be knowable.

We can't know everything about how it's thinking. I don't know why my adaptive brakes have decided to slow down, because they know more than I do. So, that's not the answer. The answer is, what is the outcome? Do we end up with a better society, a better life, safer roads? My last point is on singularity, and then I'll stop.

Mark Pesce: Well, actually, hold that, because you get the next question. And trust me, this is a setup for you. Alright. So, can we play the final question, please?

Description: A girl sits in the chair vacated by Anders. She wears a black shirt with white shoulder tabs underneath a black jacket.

Text: Darcy – Year 7 student

Darcy:

My question is, how much do you think artificial intelligence will advance in the future?

Description: A split screen of four webcams.

Mark Pesce: Alright, Liesl, so how much will AI advance in the future?

Liesl Yearsley: AI is already smarter than us in many, many ways. One of the most interesting papers I've read is one where somebody decided to go and poll about 200 or 300 academics and scientists and commercial people in AI and ask them, when are we going to reach singularity? When are our computers going to be smarter than humans?

Mark Pesce: Wait a minute, for everyone who's listening, could you please just in a sentence tell them what you mean when you say singularity? Because I feel like everyone here does, but maybe not everyone listening does.

Liesl Yearsley: OK. So, colloquially we think of singularity as a point at which computers or AI, A, become smarter than us, and B, are able to get smarter and smarter without humans telling them how to do it. So, it's like a runaway effect. OK. So, if you took a poll of a couple of hundred scientists and people working in the field, yes, some will say 100 years, some will say next year. But the bell curve centred around 20 years, which is kind of interesting. So, I want you to think for a minute about what it actually takes to reach singularity. When we see singularity in, like, the dystopian movies, it's kind of this moment where the computer goes rogue.

We think about this moment where, again, A, it's got smarter than us. But we seem to think that the other point is when it's got past where... We seem to think that the thing about getting smarter than us is that we can turn it off before that point. I don't think that's singularity. I think singularity is where it could be dumber than us, it could have a brain the size of a newt, but if it's able to improve itself, and we don't turn it off, that's theoretically singularity. I would like to challenge you to imagine, and I find this a perfectly viable scenario, that somewhere in a lab, somewhere in the world, someone is training an AI to read all of the academic literature on AI, and all the papers on AI, and design 100 experiments, maybe 1,000, maybe a million experiments, and then build them and test them, and then figure out what's the next way to improve itself. I find that a very feasible scenario.

And I also don't see anything in our commercial world that says we're gonna stop that progression, because everyone wins a shit ton of money. Excuse me, a lot of money, and a lot of power in the world. So, I think singularity is less about this evil AI. It's gonna be a different form of intelligence. I think we're gonna have to co-evolve, and we're gonna have to accept that we're moving into a world where our technology is going to become equivalent to us. How do we treat it? How does it shape us as a society? And how do we co-evolve? And how do we make sure we're optimising as parallel species towards the same end goal?

Mark Pesce: OK, Jeremy, how far does it go? How far does it advance? How do I know that you're not actually doing the experiment that Liesl just outlined?

Jeremy Howard: No comment. Look, I worry a lot more about short to medium term issues, frankly. I mean, the first thing I'll say is I think we are only scratching the very tip of the iceberg when it comes to what deep learning can do. And I think in our lifetimes it's going to change our economy and society dramatically. It already is. But I think hundreds of thousands of times more. I'm pretty sure it'll be more substantive than the impact of the internet. And it could be as substantive as the impact of electricity. So, any future technological breakthrough is going to sit on top of something that's already done a lot.

But my concern is, before we, you know, even if it's possible we get to a point where there's a technological singularity, before we get to that point, all these resources, this data, this compute, these algorithms, are in the hands of people. And generally speaking, they're actually in the hands of corporations. And corporations, by their nature, are sociopathic entities. And I worry that corporations with a lot of capital, putting that into data and compute and algorithm development, become more and more powerful. And what actually happens is, we end up in a situation where it's not so much a super powerful AI, but super powerful corporate entities that we actually can't control and that definitely don't have an off switch.

And, yeah, my issue is like, OK, how do we deal with that? How do we actually make it so that this technology is really being used for society's benefit? Because it is a very natural monopoly creator. And people that run companies love monopoly creators. They flock to them, they invest in them, they want to be the first to create them, and then they want to extract from them. So, yeah, I think AI can go a long way. But we have to be careful that society can handle it.

Mark Pesce: Cathy, you got the last word on this, and the last word on the panel.

Cathy Foley: Well, look, I'm gonna say that I think this is where we need to make sure that we engage with the social sciences and the social licence, and that we bring in the HASS part, so we don't just think of STEM, but we think of the humanities, arts and social sciences. Because I think they need to be deeply integrated; they need to provide us with a moral compass, so that what we decide as a community, and what our elected officials realise they have the power to decide, controls things in a way that allows society to have good AI, not bad AI.

And to maybe understand what those controls are. Now, we always say the market drives an awful lot of the outcomes. But we do have government setting the policy and the regulation, and they're sort of like the rules of engagement. And so, maybe that's what's going to be absolutely critical in the future: to grow the HASS side of things so that we actually end up with an AI future that is a utopia, not a dystopia.

Mark Pesce: OK, and that brings us back to that idea of responsible AI as a centrepiece of the AI Action Plan. Thank you so much, Cathy, thank you so much, Liesl, thank you so much, Jeremy. This has been a very fun panel that went places I had no idea it was going to go. And now, it is my pleasure to invite the Minister for Superannuation, Financial Services and the Digital Economy, and Minister for Women's Economic Security, Senator the Honourable Jane Hume, to speak.
