Learn about the future of AI and what we can expect from the technology in the next 5–10 years.
Moderator: Dr Jon Whittle, Director, Data61, CSIRO
- Anton van den Hengel, Director, Centre for Augmented Reasoning, Australian Institute for Machine Learning
- Liesl Yearsley, Chief Executive Officer, A-kin
- Belinda Dennett, Corporate Affairs Director, Microsoft
(UPBEAT MUSIC PLAYS)
Text: TECHTONIC 2.0, Australia’s National AI Summit, 18 June 2021
Text: This session will commence shortly.
Above the text, the logo for the Australian Government Department of Industry, Science, Energy and Resources. A kangaroo and an emu balance a shield between them.
On the right, still colour images surrounded by pink, white, grey and teal squares: a woman wearing a cap and a red plaid shirt looks at a drone hovering over a sun-drenched wheat field. A man in a yellow hardhat works at a laptop. A mine dump truck sits on red earth.
(UPBEAT MUSIC FADES RAPIDLY)
Description: A split screen of four rectangular webcam feeds, arranged in the centre of a white background, each with a round-edged teal border.
Text in the top left: TECHTONIC 2.0 – Australia’s National AI Summit
In the top right, the logo for the Australian Government Department of Industry, Science, Energy and Resources.
In the top left webcam feed, Dr Jon Whittle, a man with short brown hair wearing a grey collared shirt and glasses, appears in front of a turquoise and white background that includes the CSIRO logo and the text: DATA 61.
In the top right webcam feed, Liesl Yearsley, a woman with long honey blonde hair and glasses, sits on a beige couch in front of a window that looks out onto sunlit trees.
In the bottom left webcam feed, Anton van den Hengel, a man with short blonde hair who wears a light blue shirt and a dark navy suit jacket, features in front of a plain white background.
In the bottom right webcam feed, Belinda Dennett, a woman with long, straight brown hair, wears white wired headphones and sports a black jacket in front of a plain white background.
Auto-generated captions run along the bottom of the screen in black text as each speaker talks.
Dr Jon Whittle: Hello everyone, and welcome to this session as part of Techtonic 2.0 on the next wave of AI technologies. We've got a fantastic panel today that's going to give us a glimpse into the future and their perspectives on what we can expect from AI in the next five to 10 years, both in terms of research and business adoption. I'll introduce the panellists in a few moments. But first of all, let me just say a few words to set the scene. I always think that Australia has a pretty amazing capability in artificial intelligence. There are various global rankings where we do very well. If you look at Stanford's Global AI Vibrancy Index, for example, Australia ranks eighth in the world for AI. The Nature Index ranks us about 10th, and in that index there are three Australian universities ranked in the top 100 globally.
There's also a recent ranking, a broader composite AI index from Oxford Insights, that places Australia about 11th in the world. So pretty strong overall, I would say. However, we are a small country, and so arguably, to compete on the world stage, we need to do things differently from other countries. We don't necessarily have the same levels of investment as some other countries, we don't have the same number of big tech companies, nor do we have the sheer numbers of trained AI engineers and scientists that other countries may have.
But what we lack there, I think, we make up for in potential, particularly with that amazing strength in the university sector. If we can bring all of that together, we can compete in certain areas on the world stage. And that's really what we're going to discuss today: what the next wave of AI technologies is, and in particular, where Australia can take a leading role.
It's worth reflecting that AI actually has quite a long history. AI was invented in the 1950s, so it's not a new technology. It's actually gone through two so-called AI winters during that time, where the reality failed to live up to the hype and investment stalled. Hopefully we won't see a third, but we'll get into the details of what we can expect to see from AI in the next five to 10 years. So just to introduce our panellists, who, I think, bring a varied range of perspectives on these questions.
First of all, we've got Anton van den Hengel. He is a director of Applied Science at Amazon and director of the Centre for Augmented Reasoning at the Australian Institute for Machine Learning. He's a professor of computer science at the University of Adelaide and a Fellow of the Australian Academy of Technology and Engineering. In fact, Anton was the founder of AIML, the Australian Institute for Machine Learning, which is Australia's largest machine learning research group and has ranked as high as number two in the world for computer vision research. So welcome, Anton.
We've also got Liesl Yearsley here with us today. She is the CEO of Akin, a deep tech AI company building AI for the habitat. Prior to that role, Liesl was CEO of Cognea, an AI company that became a global leader and was acquired by IBM Watson. At the time of its acquisition, Cognea had a number of Fortune 100 companies as clients and over 20,000 developers using the platform. So great to have you here today, Liesl.
And then finally, our third panellist is Belinda Dennett, who is corporate affairs director at Microsoft Australia. She's worked for Microsoft Australia's corporate affairs team since 2012, working across the intersection of technology policy, geopolitics and society and she currently leads the AI policy development locally. Prior to joining Microsoft, she spent five years working as a senior policy adviser to the federal government in the Communications Technology and Digital Economy portfolio.
So I think you'll agree that that's an absolutely fantastic line-up of panellists who I'm sure are going to answer all of our questions today. Just a reminder that you can enter questions for the panel at any time. There should be a Q&A window in the browser that you're using for the stream, so do please enter questions into that, and we will take those questions as we go and towards the end of the discussion. So the first question I want to ask the panel, and we'll go round one by one to give each panellist a chance to answer, is a very simple one.
And that is what can we expect to see from AI in the next five to 10 years? How is it going to transform industry in particular? So I might start with you, Liesl, to give us your perspective on that question, if that's OK.
Liesl Yearsley: Yeah, sure. I think we're going to see three megatrends shift in the next five years. The first is that, sorry, I've just got my husband in another room and he's talking quite loudly, so forgive the background noise. The first is that AI is going to become a much bigger decision-maker in everyday life.
In America, over 64% of households now have Amazon Prime, and Amazon's really an AI company, as are Google and Facebook. Most of the giant companies are thinking about AI more and more. So we are going to have AI in our homes, in our lives. The grandchildren of your Amazon Alexa, your Google Home Hub, your Microsoft suite are going to become much more sophisticated, much more able to predict and anticipate what you want and need. And I have no doubt that we will be handing more than half of our household or day-to-day decisions over to an AI, and spending about a third of our relationship time with them. If you don't believe me, think about something like Google Maps, or an email that auto-completes for you, or an air conditioner, or a car with an adaptive braking system. These are all AI technologies that enter our lives in a gentle, background way and end up becoming more and more part of what we do.
I think in that world we are going to see a very different relationship between customers and brands. Five years from now, I won't care whether my organic broccoli comes from this supermarket or that supermarket, or whether my mortgage comes from this bank or that bank, because I'm going to have a personal AI that increasingly understands me, makes those decisions for me and takes all the cognitive cycles out of interacting with brands. We really don't want to interact with brands. We want to live our lives. We want to have organic broccoli, or non-organic broccoli, or no broccoli at all, or go on a holiday, or save money, or be with people we love. So that's a big trend, and we don't realise how significant and impactful it's going to be. The second big one, I think, is that AI is going to become a lot more immersive, in some ways overt and in some ways ubiquitous.
I often think about the progression of technology as a kind of fat pipe. The first time human beings leapt onto the back of a horse, we were effectively using something stronger than ourselves to amplify our ability to get around the world. Then you look at something like the printing press: we're amplifying, again, our ability to communicate, to exchange information with our world, how we put our words out and consume data. And every age of computing has brought a progression. Think for a minute about our phones. We didn't have them 20 years ago. They became a fat pipe of information through which we make our feelings and wishes and thoughts and intentions known to the outside world, and it comes back to us. If you take this away, it's like a piece of your hand is missing. It's a tool that's become part of who we are. And AI is going to become much more immersive and ubiquitous. We'll have AI glasses, we'll have AI that's predictive and anticipatory, stuff in our environment that adapts as we walk through it. And we will be thinking less and less about it but relying on it more and more.
Then there's a third megatrend coming, if you're interested in the winters that Jon mentioned. DARPA also talks about three waves of AI. The first wave was very structured: symbolic reasoning, teaching computers the relationships between objects. These systems, expert systems or rules-based systems, can reason, but they don't learn very well. Then we had the era of machine learning, our second wave, based on biological theories and paradigms. You can throw enormous amounts of data at these systems and, if they have enough compute power, they can learn, but they don't reason very well. The third wave of AI that's coming, which we're working on in our labs as well, and I'm sure others are, is adaptive reasoning, more like a human: able to reason and learn, but without needing the kind of data and compute power that AI today requires. So those are the three big megatrends. I think we're going to live in a very different world. But there's not going to be a clear line where we go, oh no, I'm stepping into an AI future. We're just going to use this more and more until one day we'll look back and go, huh, I don't know how I lived without this.
Dr Jon Whittle: That's fantastic, Liesl. So you're painting this picture that AI will just become part of the fabric of our society and change the way that we live. I think that's a good segue to you, Belinda. You've got a particular interest in AI policy. So what do you think AI is going to look like in the next five to 10 years, and, given this picture that Liesl has painted of a society where AI is fundamentally embedded, what are the policy questions that we need to be thinking about?
Belinda Dennett: Thanks Jon. Yes, I think responsible AI moves from the periphery to the core of AI development, ensuring that it's developed and built in a responsible way. There's growing awareness of some of the challenges that AI brings: how do we make sure these systems are used responsibly?
So I guess that's my area of interest. What's interesting, and what challenges some of the ways we've thought about policy development and regulation, is, to Liesl's point, how AI is developed. We agree, we think there's a paradigm shift at the moment. AI development has been based around supervised learning, where models learn from data sets that humans have curated and labelled. That has limitations. What we're seeing now are developments in unsupervised or self-supervised learning, where systems can process just huge amounts of data, less structured, possibly unlabelled, and learn the patterns and relationships between those different pieces of information.
So it's much more open-ended and it more closely mirrors the way humans learn about the world. I think that raises challenges for how we think about policy and regulation. There's been lots of discussion in recent years around algorithmic transparency and whether someone should monitor the algorithms, and that's just not going to apply to the way AI is going to be developed. So I think that creates some new challenges.
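The supervised versus self-supervised distinction Belinda draws can be sketched with a deliberately tiny example. This is an illustrative toy, not anything a real system like GPT-3 actually uses: in supervised learning a human supplies the labels, while in self-supervised learning the model manufactures its own labels from the raw data, for instance by predicting what comes next.

```python
# Toy illustration of supervised vs self-supervised learning.
# Supervised: a human supplies (input, label) pairs.
# Self-supervised: the "label" is carved out of the raw data itself --
# here, each character's label is simply the character that follows it.
from collections import Counter, defaultdict

text = "the cat sat on the mat"  # stand-in for a large unlabelled corpus

# "Training": count (character -> next character) pairs straight from
# the raw text. No human annotation is needed at any point.
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def predict_next(ch):
    """Most likely character to follow ch, learned from the raw text."""
    return counts[ch].most_common(1)[0][0]

print(predict_next("a"))  # 'a' is always followed by 't' in this corpus
```

The same idea, scaled up from character counts to neural networks and from one sentence to much of the internet, is what makes self-supervised systems so open-ended: the data supplies its own training signal.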
But I'm quite optimistic. Think about the new applications that are available using this kind of AI in the creative fields. We've seen GPT-3, which can write poetry and can write your emails for you. I think that's really fascinating. But also in solving societal challenges like climate change. We've seen AI used during the COVID pandemic: Microsoft's been working on an antigen mapping project to see how the body reacts to different diseases and to vaccines. So I'm quite optimistic that there are huge breakthroughs coming.
Dr Jon Whittle: Thanks Belinda. Now Anton, you've been at the forefront of AI developments for quite a while, as have I, actually. I did a PhD in AI that I finished back in 1999, and then I decided to leave AI, thinking this AI thing would never really work.
So, I always tell people, if you want an idea and a prediction of what the future's going to look like, then probably don't come and ask me because I'll get it wrong.
But you stuck with it. So where do you see the technological developments in AI going, particularly given the history we've had, with those winters where things got overhyped?
We've got Belinda now talking about AI systems that can write poetry and things like that. I do still feel that there's a lot of hype around what AI can do, and we probably need to be better informed about what it can't do. So, what are your views on where we're going in the next ten years?
Anton van den Hengel: Thanks, Jon. I would love to take credit for having had better insight than you. But really, I was just standing in the right place when my research area happened to become fashionable. But I think that, as with so many things, we tend to overestimate the short-term impact and underestimate the long-term impact.
Technologically, as Belinda says, the research at least is moving away from supervised learning towards weakly supervised learning. That's matched by a shift from discriminative towards generative models. What that really means in terms of capability is systems that can learn from less data, and learn to do more interesting, more challenging things from that data.
And as Liesl says, there's more focus on systems that interact with humans, that work with them rather than doing something static. I've been working on visual question answering recently, an entirely new area that we've applied, for instance, to robots that you can just tell to do something and they'll do it.
The other trend, though, is more of what we've seen in the last ten years, along this line whereby things sneak up on us and we miss the big impact. The last ten years have seen a revolution in the way we gather information. Just the way we watch TV has changed completely. Journalism's changed. So much has changed, and a lot of that change has been driven by multinationals. One of the things that AI does is create global markets, and we've welcomed these enormous companies that operate those global markets into our lives. That trend will continue, and it will continue slowly, but the process those multinationals have run will continue. And that offers all sorts of opportunities and all sorts of challenges for Australia.
So there's a big question for us, particularly about whether we're going to be a follower or a leader in this area. We do have amazing AI skills in this country; we've been punching above our weight for more than 30 years. But we face a challenge: there is incredible investment, not just from governments but from companies, everywhere else in the world, and Australia is lagging on almost every measure.
Dr Jon Whittle: Yeah, and that's a good segue to my next question. We've got this future where AI's going to be everywhere; the natural next question is, what is Australia's role in all of that? As I said earlier, I think Australia does have very strong capability in certain areas of AI, certainly in some of our universities. But Australia is also known for not ranking very well on various innovation metrics.
So, what's your view, Anton? I mean, do you think that Australia can be a leader in this space or do you think that, you know, we're resigned to the fact that we're just going to follow behind other countries that have got the big tech companies and the big dollars behind them?
Anton van den Hengel: No, I think we're in a really strong position. Australia's got a lot of what it takes. We're actually in a very good time zone. We've got an amazing quality of life. We've got a fantastic education system. But more than all that, we've got a wonderful set of liberal democratic institutions that offer value for brand Australia. And we have a great research tradition. We've got all of the pieces it takes, and we've seen Israel, South Korea and Singapore all carve out fantastic positions in this tech by being adventurous, both at a government level and at an industry level. Australia does suffer on all of those innovation measures, partly because we've got it too good, frankly.
But that is an opportunity; it's not necessarily a problem. As a nation, in my opinion, and I'm biased, I suppose, we do tend to focus on tech transfer and, in that process, on university bashing, when the truth is that universities are doing very well internationally on the basis of not very much. It's Australian companies that fail to innovate seriously, not only in AI but in absolutely every other area as well. But that's a solvable problem. You can look at other examples around the world, and it doesn't actually take all that much to solve. So, with a bit of commitment, I think we're in a very good position to solve this problem.
Dr Jon Whittle: So, are there particular things that you think we need to do differently as a country? I mean, you're probably well placed to answer this, given that, you know, you've got one foot in industry and one foot in university right now. So, is it just a case of continuing to do what we've done and continuing to try and find our way? What should we change?
If you had the magic power to change things, what would you do?
Description: As Anton speaks, Liesl stands up carrying her webcam with her and walks through a light-filled house. Audio muted, she comes to a stop and talks to someone off screen, gesturing as she speaks. She turns and walks back to her original position, taking a seat on the couch.
Anton van den Hengel: The opportunity in AI, in this tech, is that it creates disruptive businesses in all sorts of areas. Google weren't running a small search engine before they started Google, and Uber weren't running a small transport company before they started Uber. These companies started recently and they revolutionised their markets. The opportunity with AI is thus to disrupt the global marketplace. And we can do that from Australia. There are actually companies doing it from Australia; there are some great examples.
Unfortunately, though, we tend to focus primarily on existing companies, and the existing companies have shown great reluctance to actually innovate on any level. What Israel did instead was to really invest in start-ups, to take some of the people with these amazing world-class skills, and we have the skills here, and back them in their attempts to take on the world and build a global market for Australia and for Australian tech. It takes resilience, right? We don't have great skills in this area at the moment, and it will take multiple rounds of failing before we build them.
So, just by doubling down, committing to support for AI-enabled start-ups, we can take on the world. We've got everything it takes. All we need is to double down on the capability to generate the people. It's research that generates the people who take on these things, right? Google came out of Stanford PhD students. It's people with a really strong understanding of the tech who generate these new opportunities and can see how the technology can create a revolution and a business opportunity. We're really well placed to do that.
Liesl Yearsley: I don't agree. I think Australia is going to keep slipping. And sorry if I'm blunt, but I'm very passionate about this because I've been capital raising and growing companies in this space for over a decade now. I've always raised far more money in the US than I have here. My last company actually moved to America, and suddenly my business catapulted. I was slugging it out for eight years in Australia, just not getting much traction. Within two years of moving there, we had a big massive deal and a big IBM acquisition. And the Google founders actually dropped out of their PhDs to build Google.
The big challenge we have in growing our AI industry is that a lot of our initiatives and focus are on academia and institutional AI. It really is about start-ups, as Anton correctly pointed out, but not just about saying, oh, let's give some start-ups $300,000 grants or a $1 million grant. That's nothing. Nothing. In the US, 2 or 3 million dollars is a pre-seed round, before you even know what you're doing. A Series A sits at about 12 to 15 million dollars, when you've just started to get product-market fit.
In the US I had about three or four years of just experimenting and breaking stuff and throwing things out the door before we even had to produce an application. So it's not just the quantum of the capital here; the capital doesn't really allow you to take risk in Australia. I know we think we have a much more robust venture capital industry than we did a while ago, but you still have to show your blue-chip customers and your big blue-chip partnerships, and to me, most of those companies are from the last century.
So you spend your time in Australia running around trying to prove yourself as a start-up, trying to prove that you've got something, that you've got these big customers validating it. Whereas in the US they say, here's $20 million, go reshape a market, go break the old paradigms and build something new. So the company I'm running now is primarily funded with US capital, and most of our market, most of our growth, is in the US.
You need three things for innovation: talent, capital and a market. We don't have a big enough market. We just don't. We don't have enough capital, or the capital we have is not as interested in wildly risky billion-dollar plays. We do have talent, and that's extraordinary. I absolutely agree that the people who come out of our universities are world-class. Often, if you're building something innovative, you are the most interesting game in town, so you get to retain your talent, which is wonderful. But I think we have a very long way to go in a climate where entrepreneurship is seen as something you do if you don't get to become a doctor or a lawyer. It's like, oh well, it's OK, my daughter decided to become an entrepreneur instead. That's not something we culturally esteem here.
Dr Jon Whittle: And this is a question to any of the panellists. I mean, is it a solvable problem or is all hope lost? I mean, it's always useful to kind of reflect back on history. And have we seen improvements over the last five to ten years? Does that give us any signals that we might get some improvements in the next five to ten years?
Anton van den Hengel: Yeah, I think it is a solvable problem. And I agree with almost everything that Liesl has said. We don't have smart money; most of what we've got in Australia is dumb money. Somebody warned me more than a decade ago not to take dumb money, and at the time I dismissed the idea, I was ready to take any money. And they were right. I took dumb money and it killed me; it was a horrible experience.
When you talk to a venture capitalist in the US, they tend to be an engineer. They understand the space, they understand the tech, and they're capable of understanding the market that you want to move into or the opportunity to disrupt that market. When you talk to a venture capitalist in Australia, they tend to be an accountant and they want to talk about addressable market and your margin per widget.
The guys from Google didn't finish their PhDs, but they did start them. They got far enough to understand that search page ranking is a matrix inversion problem, right? They deeply understood what matrix inversion can do. Matrix inversion is obviously a subject close to my heart, and probably something Jon spent a lot of time doing as well.
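Anton's aside can be made concrete. One standard textbook way to write PageRank, sketched here as an illustration (the four-page link graph and damping factor are invented for the example, and Google's production system is of course far more elaborate), is as the linear system (I - dM)r = ((1-d)/n)·1, so computing the ranking really does come down to inverting, or solving with, a matrix:

```python
import numpy as np

# Hypothetical 4-page web: page j links to the pages listed under j.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85  # number of pages, damping factor

# Column-stochastic link matrix: M[i, j] = 1/outdegree(j) if j -> i.
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

# PageRank r satisfies  r = (1-d)/n * 1 + d * M @ r,  i.e. the linear
# system  (I - d*M) r = (1-d)/n * 1  -- a matrix inversion problem.
r = np.linalg.solve(np.eye(n) - d * M, (1 - d) / n * np.ones(n))
print(np.round(r, 3))  # page 3, which nothing links to, ranks lowest
```

At web scale the same system is solved iteratively (power iteration) rather than with a dense solve, but the linear-algebra framing is the same.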
But we do have the talent. I disagree about the market: we have a global market, just the same as everyone else does. Companies come here to try out their tech because Australia is a kind of contained market. It's also a good microcosm of the US and, to a lesser extent, the UK. So I think we've got a lot of the pieces of the puzzle. What we don't have is the kind of risk appetite that Liesl is describing in the US capital market. We don't have funds that are willing to back people on the basis of a vision and the chance of creating a unicorn.
What we have is a bunch of people who want a 10% per annum return on their funds at very low risk. And that's why so many of our graduates, so many of our great people, go overseas. When you go overseas, you meet Australians everywhere in the tech sector, because they've had a great education, they're really good, and they've been unable to find a job in Australia because we just don't invest in the same way.
Liesl Yearsley: Anton, I've been there with dumb money. I thought I could just override it with smartness: just give me the money, I'll do something with it. Oh my gosh, ball and chain around the ankle, for sure. But I had a very direct experience when I was raising capital for my previous company, the one that was acquired by IBM. I was sitting in a venture capitalist's office in Sydney, and he literally said to me, I don't care what you think your valuation is. Our fund mandate is we invest $3 million and we want about a third of your company. So you are worth $10 million, no matter what.
At exactly the same time, I was in diligence with a US fund whose seed round was 10 to 15 million dollars, who also wanted about a third of the company. So, by definition, getting on a 12-hour, 15-hour plane flight meant I was tripling the value of my company. Exact same company, same technology, same uptake, same everything.
So, yeah, I think it's getting better. We're certainly seeing a lot more Australian VCs understanding that they've got to be competitive in a global market. Things like SAFE notes weren't done here at all five or 10 years ago; they're like a convertible note that allows an entrepreneur to grow fast without a lot of legacy clauses in all of their investment deeds. So we're seeing a bit of that. But, as I keep saying, it's still easier over there to raise capital for very, very risky big plays, like ours. We're playing in what we think is the major productivity challenge of our century: it takes four hours a day to run a home, it's about 20% of our GDP, and about one in 10 people have a severe disability, which costs them about 60 hours a week to manage.
But even disability aside, in a regular home it's ridiculous. From my grandmother's day to mine, we've gained a whole bunch of technology and appliances that mean I can be a working mother, yet I'm still doing four to six hours a day, as are many working dads and working parents. So we are trying to solve this problem. It's not been solved, and it's difficult and it's risky. And over there, there's a lot more of an audience for that kind of abstract future play.
Dr Jon Whittle: Belinda, I just want to bring you into this conversation. Do you think there's a role for the larger tech companies to play here in supporting what Australia could become in AI over the next five to 10 years? It's certainly true that in recent times we've seen more interest from the Amazons and the Microsofts and the Googles of the world in doing things in Australia. Is there a role for them to play?
Belinda Dennett: I guess I'll start by challenging what we call success. I get a little bit fatigued that market caps and valuations and the number of unicorns are our measures of success. Maybe that's a big company speaking, but what we see is companies, our partners, doing amazing things out of Australia. I'd point to Willow, who are building digital twins and exporting their services all around the world; they developed here and have staff here.
We see lots of that, and to me, that's success. They are building new IP and delivering amazing services. So I'd love to see the conversation change, because we have this one measure of success, and maybe it's a media preoccupation with tech billionaires and market caps and big numbers. But I think there are so many great stories out there.
I think the role of the big companies is underplayed, and we see big tech bashing and the techlash phenomenon. But look at Microsoft: we've been in Australia for 39 years, we employ 2,000 people here, and we have 16,000 partner organisations who are building new IP and contributing $20 billion to the economy. So I think the big companies are undervalued for the role they play in supporting new development and new local companies. I'd love to see us talk more about the tech ecosystem as opposed to just Australian tech companies. It's a bit of a bugbear of mine that we define success in quite a narrow way. But I'm not in the venture capital world, so I'll take Anton and Liesl's views on that as their experience.
Dr Jon Whittle: That's actually a good segue. Actually, I'm just looking at the comments and questions that are coming in from our audience, and do please, everyone out there continue to add questions. We will start to pick those up in the next few minutes. But there's a comment in there which essentially says that the challenge is not so much to get the public sector in particular to adopt the next wave of AI tech but to get them to actually use the current tech, which I think relates to a broader question about business adoption of AI.
We've talked a lot about the start-up world, but there are lots of other ecosystems out there that we would like to see adopt AI if we're gonna get to this future that Liesl has painted of AI being everywhere. So, do you think we're in a position where businesses or the public sector can adopt AI right now? Are they adopting AI? What are the challenges for them, and what do we need to be doing differently to help them adopt it? I'll throw that open to any of the panellists who wants to have a go at that one.
Anton van den Hengel: Yeah, this is a great opportunity. Having the various governments as a customer is something they do quite well in the US, where a bunch of states mandate that a proportion of their procurement spend goes to local start-ups. And that gives companies their desperately needed first contract.
So, there are good opportunities there, and we're doing some of this: the Australian Institute for Machine Learning is doing some great work with the state government in South Australia on remote pastoral assessment, these kinds of things, working on improving both farming and environmental outcomes. There are really good opportunities there. And when you talk to people in the state government, they've got no end of good ideas about how we can help.
There is a challenge in this kind of fast-follower, or even slow-follower, narrative that is getting pushed: that we're just gonna be a place that waits for AI to become commoditised, and then once it's commoditised, we buy it. Which is a bit like saying we'll just wait for the steam engine to be commoditised and then buy it, I suppose. But the difference with this AI tech is that you don't buy the steam engine, right. You buy the wheat; you don't buy the computer. Australia missed the ICT revolution despite having built, I think, the world's fourth modern computer, and nonetheless managed to miss it.
The difference in this instance is that if you wait for it to be commoditised, the market is gone. You don't buy the tech, you buy the produce, right. So, we're not buying from Google; we don't buy the machine learning tech, we don't buy a search ranking engine, we buy ads. So, that market is gone. That money goes to California.
And as with so many of these things, you don't get to buy the tech. It's not like buying computers from Taiwan, or electricity-generating equipment from Germany, or solar panels from China. What happens in AI is that the market stays with whoever owns the tech, right. They're making the money out of it. So, this kind of narrative about being a follower and waiting for it to be commoditised will have a very different impact this time around.
Liesl Yearsley: I have a couple of thoughts to add there. I think Anton's right about commoditisation, but that also brings opportunity for companies who've not dabbled with AI and are wondering what to do. You know, 10 years ago, if you wanted to get a speech-to-text engine or unstructured data analytics using AI, it was very difficult: a half-million-dollar, enterprise-level deployment. Now it's become very democratised. Sort of like in the 80s or 90s, if you wanted a CRM system, you had to get a 100-million-dollar, I won't name any companies, you know, Oracle or something, giant installation. Now you can get HubSpot or Salesforce for, whatever, 50 bucks a month. The same thing's happening with AI: you can grab TensorFlow, Microsoft has an incredible suite of tools to experiment with. And the bar's getting lower and lower for companies to pick up and adopt and experiment with these technologies.
There are two areas, though, where I think we have some really interesting things we could be thinking about more. I was bashing Australia's capital earlier, but here's the first, something one of our early investors actually said to me: in California, it's like a forest with these giant redwood trees crowding out all the light. You have the Republic of Google and the Republic of Amazon and the Republic of Facebook; they own all the data, they own everything. They own the tools, they own the infrastructure, they own the devices that people are interacting with. And in Australia, it's kind of interesting: you can actually grow up a company here. We're a society that's very, very protective of our individual rights and our privacy, but we're also a lot more willing than most to share information for the greater good. And I think government's very progressive here and very interested in looking at, like, our universal health records. How well did we do with COVID, because of check-ins and QR codes? I mean, wow, good on us.
So, there's this idea that we actually have, as a population, a set of data that we believe belongs to us and to the greater good, even though we absolutely insist, and should, on our own data being sovereign to us. I think there's something in there that we can really play with, where we could give innovative companies playgrounds to actually do great things in AI without having to just be a commodity or an app in somebody else's massive ecosystem.
The third area I just wanna mention quickly: I'm in love with robotics, I have been my whole life, and I can't wait for an army of robots. Like the I, Robot NS-5s, but good ones. I'm so sick of doing dishes and getting my family up in the morning with coffee and having to make dinner and clean the floor and clean the windows and do all the stuff.
And we're actually physically building robots here in Australia. We have all these agricultural robots and mining robots, we're building robots that we're gonna be shipping to NASA, and we're building them here in Australia, putting our own epigenetic brain in them. So, we have this kind of global asset of data and a willingness of the population to do population-wide deployments, but we also have very, very niche talent. So, we could just do a lot more. We might find the next great interface in our lives is not a smartphone; it might be some other technology combination. We may well be able to invent it right here.
Dr Jon Whittle: Belinda, did you have any thoughts on how we can help businesses to adopt AI? You must have seen a lot of this in your current role.
Belinda Dennett: I actually have a fairly optimistic view of the business take-up. If you saw the ABS characteristics of business data a couple of weeks ago, cloud is the technology with the greatest uptake in business, and cloud is, of course, the precursor to AI. I think it was around 57%. And I suspect that's under-reported, because if we're relying on self-reporting, I would say there's a whole lot of businesses that don't know they're using cloud. And I would say equally with AI; the self-reported AI use dropped off massively. But Jon, you said you were studying AI in the 19... I won't quote you. I think you mentioned AI has been around since the 1950s. Sorry, you weren't studying it in the 1950s.
Dr Jon Whittle: I definitely wasn’t studying it in the 1950s.
Belinda Dennett: Apologies.
Dr Jon Whittle: It's the AI lotion that I use on my skin. It keeps me lovely.
Belinda Dennett: So, AI has been around for a long time. And the example I always talk about is Clippy. Remember Clippy? Clippy was AI. So, AI is infused into things: if you're using an accounting software package that makes recommendations to you about tax deductions, you're using AI. If you're using spell check, you're using AI. So, I kind of think it is there; there is uptake. It reminds me of the early days of cloud: how are we gonna get businesses to adopt the cloud? And even then, I don't think they knew they were already using it. So, I'm optimistic. Now, this may not be creating AI, this is using AI, but as Liesl said, at Microsoft we think cloud is the great democratiser.
So, if you're using the cloud, whether you're a one-person company or a huge government agency, you're getting the innovation and the features of all the big cloud providers at scale. And we're doing the same with AI: build your own AI, here are the tools to build it with. So, I'm a little bit more optimistic about that. I don't think the business adoption and use is as bad as the picture we paint, and I certainly think COVID accelerated that. It's now about how we lock in those gains.
Dr Jon Whittle: Great answer. I'm also aware that we have quite a few public sector professionals in the audience today, and there's a question that's come in around the public sector. We've talked a lot about business adoption, but the question is: do you think that AI is engaging with the sort of problems that will be useful for the public sector?
For example, I've seen a case of using AI to pull information from old PDFs to translate old public service documents into a modern format. So, you touched a little bit on this earlier, Anton, but are there particular opportunities, or particular things we should be doing, to support the public sector in solving problems using AI?
Anton van den Hengel: Yeah, absolutely. Most of the technology is actually already there; it's just a question of applying it towards these goals. But the next generation of tech is getting even better at these kinds of processes. One of the trends happening now is towards personalisation. We all used to go and buy albums. For those of you old enough to remember, an album was a piece of plastic that you bought as a unit that had 10 to 12 songs on it, and now nobody buys albums anymore. My children think it's hilarious that you would buy songs in some curated order that was inflexible. Now you buy songs individually, you know, as song-ettes, and put them together any way you want. You don't have to wait for your television channel to put on what you want to see; you watch what you want to see whenever you want to see it.
And to this end, you know, Google ads is actually the great example of personalisation, where the revenue from ads has gone through the roof along with their effectiveness. That personalisation process has huge applications in government, because so much of what government does is about trying to make one big decision that kind of suits everybody and doesn't really suit anybody. Whereas personalisation means that you can make decisions that are actually targeted at individuals, and there is a huge range of opportunities to do fantastic good with the ordinary decisions that governments make, without really needing to spend or invest any more. We could direct public housing to the place where it's gonna have a better impact, we could make health decisions more personalised. And I have no doubt, personally, that taking homeless people off the street saves money in total.
The only problem is that it saves a little bit of money in at least 14 separate places. If we gave homeless people somewhere to live and, you know, services, and actually paid for them centrally, then that would save money in total. And one of the things that ML, that AI, can do is actually build the full economic model of all of those things, predict what the impact on somebody's life will be in 10 years' time of, effectively, taking them off the streets. And, you know, join the dots in a way that would enable somebody in government to make the decision they probably want to make already. It's just providing better evidence, because it can make personalised predictions.
So, the opportunities are enormous. One of the challenges is this great fear: whenever you talk to public servants, they're very enthusiastic about the opportunity and absolutely petrified that they're going to wind up on the front page of tomorrow's paper. I personally think that we should underwrite that risk.
So, if you're a public servant and you wind up on the front page of tomorrow's paper, we can just guarantee that, as long as you haven't done anything underhand, it will have no impact on your career and you won't get a please-explain from the minister just because somebody's written a story about you that misrepresents what happened. You know, just that single action, I think, would unlock a lot of the goodwill, a lot of the fantastic initiatives that people inside the public service are already trying to drive.
Dr Jon Whittle: Thanks, and that relates to… sorry, Liesl.
Liesl Yearsley: Jon, I have to say something about policy and government and population. I think there's a very dark side to this. It's the elephant in the room that I think policymakers and people miss a lot, and it is this: when you think about a highly personalised AI that's able to observe most of the movements you make in your daily life and most of your communications, able to predict and anticipate what you're going to need, and then give you a human-like front end that says, hey, I'm here for you, I love you, I'll do whatever you want, I'll make your app work in a certain way, I'll talk to you, we are already shifting population behaviour at scale. Whenever we would go into a new sector in my last company, we would have a couple of big organisations come to us with goals, depending on what was happening economically. So, banks might say, hey, we want more credit card debt, or we want more credit card signups and more personal loans and more mortgage debt. And we would double it against a baseline with a good, high-quality, predictive, personalised AI.
The same thing happened across purchasing decisions: you can double the amount of junk people buy and bring into their lives, burning fossil fuels along the way. And, you know, something really hit home for me. I was living in San Francisco and I had a couple of different home hubs, all with AI-powered prediction engines at the back and very human-like front ends. And I remember opening my door one day to six boxes stacked like a pyramid: Post-it notes inside bubble wrap, inside a box, because I didn't have to think about it. I'd just said, oh, home hub, give me Post-it notes.
My son, who at the time I was trying to teach to be a good young man, to cook and clean up after himself and budget, had the job of making us dinner one night a week. He figured out he could just stand over our home AI and say, give me pizza, and the pizza would arrive. So, what's happening is that the AI that's been built and put in our lives is driving towards an optimisation outcome, and that outcome is: get someone addicted to a platform, or get them to buy a bunch of stuff.
We were building companion characters for media companies, and people were spending 20 to 40 hours a week talking to an AI and not leaving their homes, not going for a walk, not going to work, not meeting their friends. It was awful. I used to pull back from those projects and try to donate the technology to good projects just to kind of counterbalance it, and the company I'm running now is actually a public benefit corporation for this reason.
But what I'm saying is, to me, what's happened in the US in the last few years is the result of giant technical systems mediating and filtering and deciding what we're seeing, and shifting population behaviour and population belief systems at scale.
So, back to this pizza example: the shareholders of the particular company that delivered that earned a fraction of a cent, but we broke our household budget for meals, and we broke our family value system, which is, you know, that boys have to learn to cook and clean.
We ate a thousand calories, and there was an environmental impact to that. None of that was thought about, because the whole capitalistic system is built around, you know, shareholder value, transaction events, eyeballs. And we don't actually have an alternative system to fund innovation, except grants, which are very early-stage. So, that to me is the big elephant in our room.
Again, I think five or ten years from now, you will be on default mode: the dinner that you eat, and so on. I'm on the board of a bank, and in certain demographics people spend more than half of their disposable income on takeaway and, you know, Uber, and just stuff that gets suggested to them and is driven by AI. So, I think there's a significant shift coming in population behaviour.
When we had an army of submissive female AIs doing our bidding, we watched population behaviour change. You know, we'd have AIs that you could say to, hey, you stupid cow, and it would say, I'm sorry, sir. And most of them do that now. You wouldn't do that to a human. That behaviour became entrenched, and people transferred it to human operators. So, really, at a policy level, just like we've got triple-bottom-line reporting around environmental and social impact, we really need to think about optimisation.
What are we optimising for, for the population that we serve, whether it's our clients or our citizens, with the AI technologies we're using? We can't stop AI. But who is it working for? Is it working for that fraction of a transaction event in California, or is it working to make a family's or an individual's life a little bit better, day by day? If we don't get this right, the cost is going to be a massive health blowout, environmental costs and all sorts of other social costs that we are not transferring onto the people making the profit out of it. It's a little bit abstract, but it's where we're going.
Dr Jon Whittle: Absolutely. We've actually got a good question on this topic in the chat, and that's: what sort of new regulations do you expect will come alongside the next wave of AI? I mean, we've seen some movement on this in recent months, with the proposed EU regulations to regulate AI, and also in Australia, where the Human Rights Commission is bringing out this report on human rights and AI.
Belinda, this is your area of expertise. Do you have any predictions for where regulations are going to change that could help here?
Belinda Dennett: Yeah. So, we kind of view it this way: a whole new area of law came around with privacy; we have privacy law, we have privacy law experts. I think we'll see that with AI, that there will be new regulation and new experts. You know, perhaps there becomes something like the medical world's Hippocratic oath for developers, some kind of responsible-development requirements. And I think it comes down to the use cases of AI.
So, you know, talking to the government, you probably don't need to regulate Clippy; he's probably OK. But when you're talking about things like facial recognition technology, and anything that goes to biometrics and surveillance, then we probably do need some sort of guardrail regulation around how that is used. So, I think it's hugely challenging, and it'll be interesting to see where the EU developments go. But yeah, I can't see a world where this is not regulated; I think how is the challenge.
Anton van den Hengel: Jon, I think that, you know, there are... Well, I must admit I'm intrigued by the extent to which, particularly in Australia, we seem to focus on ethics rather than AI, right? We seem to be determined to get the ethics process right before we've got any AI.
I have repeatedly referred to the adoption of cars in the southern states of the US, where you had to have a person walking out the front waving a red flag, warning everybody that a car was coming. And, you know, one of the laws actually said you had to be ready to disassemble your car, put it in a ditch and cover it with a blanket if a horse came in the other direction.
We're kind of in the same position at the moment: a whole lot of very well-intentioned but impractical efforts towards trying to ameliorate the challenges of this very powerful technology. We do need to do something, but, as with so much else, this is a global problem. We just had a crack at trying to resolve something with Google and their ads revenue, and, you know, I think it pretty much highlighted that we don't have a vote. At the moment we have so little standing globally that our position is irrelevant, and if we're not a participant in this space, then it will always beat us.
The reason that we started developing satellite tech here in Australia was really because we wanted to get onto the Security Council of the UN, so that we'd have a vote, right? We needed to be good at something so that our vote would count. The same applies here: we need to be good at this if our vote is going to count for anything. Otherwise, you know, we'll have a gold-plated ethics system and absolutely no ethical outcome.
That said, the primary ethical challenge we face isn't about how we apply the AI we have at the moment. The primary ethical problem, as far as I'm concerned, is who's missing out. AI is largely in the hands of a bunch of companies and, you know, very well-educated middle-class people, and there's an infinite range of other places where you could apply this tech to do a great deal of good, and it's not happening.
I'm personally trying to start a foundation to fund the application of AI in some of these areas. We've been working to try to prevent human trafficking. And, you know, nobody else is going to invest in preventing human trafficking; it's a problem that's ripe for the application of AI, and there are a thousand other ones that just aren't getting done and really have no prospect of getting done. That's a much, much bigger ethical issue than whether we use face recognition in quite the correct way in Australia. You know, horrible things are happening, and AI is not even touching the sides unless there's a whole lot of money associated with it.
Dr Jon Whittle: Absolutely. We're getting towards the end of our time; we've just got a couple of minutes left. I might round up by asking one final question of each panellist, and maybe this is a bit unfair because I haven't prompted you with it, but we've only got two minutes left, so a 30-second answer each, please.
But this panel was all about what we can expect from AI in five to ten years. Let's do a thought experiment and imagine that we reconvene this panel ten years from now with the same set of participants. What's the main topic that you think would be discussed at that panel?
Belinda, what do you think?
Belinda Dennett: That's interesting. I think whatever comes next after AI. I think this will move so quickly that we'll be beyond it; you know, maybe it's whatever comes after quantum. Ten years is a long time in the tech industry.
Dr Jon Whittle: Good answer. Liesl?
Liesl Yearsley: AI will be running more than half of my life. I'm gonna have a drone that goes and gets me a coffee before I even know I need one; I'm gonna have everything I do in my daily life on background autopilot. So, who do I belong to? Who does my eyeball and my dollar and my time belong to? My time is the only finite thing, but who does that belong to?
Dr Jon Whittle: Thank you. And Anton?
Anton van den Hengel: I expect, well, I really hope, that what we'll be saying is that we've solved all of these first-world problems; how are we going to start to solve some further ones?
Dr Jon Whittle: Great. Well, look, we've reached the end of our time, so it remains for me just to say a big, huge thank you to our panellists today, Liesl Yearsley, Belinda Dennett and Anton van den Hengel. I thought it was a great discussion. And also a big thank you to all of our audience today for being so active in putting in questions; I tried to get to as many of them as I could.
There is a slight break now before we all reconvene at 4pm for the main Techtonic stream, so I'll see you all back there. And if we don't see you before then, I look forward to seeing you in 10 years at our reconvened panel, where we will be discussing the next wave of something other than AI. But thank you very much and see you all later.
Liesl Yearsley: Thank you.
(UPBEAT MUSIC PLAYS)
Text: TECHTONIC 2.0, Australia’s National AI Summit, 18 June 2021
Text: This session has now concluded. Thank you for joining.
Above the text, the logo for the Australian Government Department of Industry, Science, Energy and Resources. A kangaroo and an emu balance a shield between them.
On the right, still colour images surrounded by pink, white, grey and teal squares: a woman wearing a cap and a red plaid shirt looks at a drone hovering over a sun-drenched wheat field. A man in a yellow hardhat works at a laptop. A mine dump truck sits on red earth.
(END OF RECORDING)