Use a model, if you know what we mean

Sometimes in order to understand a theory or something complex, you need to build a model of it. Sometimes these models can teach you something surprising.

UK band Right Said Fred, the excellent inspiration for our headline. Photo by Sven Mandel/CC-BY-SA-3.0, via Wikimedia Commons.


Models are everywhere, including:

  • Prototypes in human-centred design and service design are models of possible solutions. Designers make a model (out of cardboard, virtual reality pixels, LEGO, etc.). They then test it with people who would use that solution.
  • Economic models simplify complex systems (like, y'know, the economy). These models predict what might happen and act as a proxy for human behaviour.
  • Strategic foresighting/futures uses models to play out different scenarios. Strategic foresighting games and war games are essentially models of complex things (in this case, the future and wars). The CSIRO’s Australian National Outlook is a great example of foresighting. It uses a number of models to suggest three scenarios, with policy guidance on how each might be achieved.

There are even a few people who are convinced we're actually living in a model right now (hi Elon!).

Learning models

As computing power and the use of artificial intelligence (AI) increases, we're able to produce more complex models. This in turn increases what we can learn about the thing being modelled.

A team from Google's DeepMind research lab showed that a learning pathway that works in AI also works in our brains.

One way AIs learn is by doing a task (like making horrifying pictures of dogs). Another layer of the AI then assesses the result (the layers are where the 'deep' comes in). If the result is good, the first layer treats that assessment as a reward. It then adjusts its approach and repeats the task.

By repeating this again and again and again (and again), eventually the AI makes slightly less horrifying pictures of dogs (or wins at chess, or makes pizza, etc).
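That try / assess / adjust / repeat loop can be sketched in a few lines of code. This is a minimal, hypothetical example (a "two-armed bandit", not DeepMind's actual system): the agent tries one of two actions, gets a reward signal, and nudges its estimate of each action toward what it observed.

```python
import random

# Hypothetical 'task': two actions with unknown average rewards.
# Action B is genuinely better, but the agent doesn't know that yet.
TRUE_REWARDS = {"A": 0.2, "B": 0.8}

def do_task(action):
    """Do the task once; the reward is noisy feedback on how 'good' it was."""
    return 1.0 if random.random() < TRUE_REWARDS[action] else 0.0

def learn(steps=5000, epsilon=0.1, seed=42):
    random.seed(seed)
    estimates = {"A": 0.0, "B": 0.0}  # the agent's model of each action
    counts = {"A": 0, "B": 0}
    for _ in range(steps):
        # Mostly pick the best-looking action, sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(["A", "B"])
        else:
            action = max(estimates, key=estimates.get)
        reward = do_task(action)
        counts[action] += 1
        # Adjust: move the estimate toward the observed reward (running average).
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

print(learn())  # after enough repeats, B's estimate ends up clearly higher
```

After thousands of repeats the agent has learned which action earns more reward, purely from trial, feedback, and adjustment.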

Neuroscientists have wondered whether or not our brains might learn the same way.

Turns out, they do (read the whole thing - there's a lot in there). Usually people are at pains to stress how different AI is to our brains; in this case we're more like the machines than we thought.

Testing your theory

But the real key here is how useful a model can be for testing a theory. As Helmuth von Moltke (and a lot of people after him) pointed out – no plan survives first contact with the enemy (or even friends, as you’ll remember from the last dinner party you tried to organise).

Next time you've got a theory of how something might work, try building it out as a model. Then test it with the people you’re building it for. You might just learn something surprising.

See also

The Public Sector Innovation Network (PSIN) was an Australian government network helping public servants understand and apply innovation in their daily work. PSIN ceased on 8 January 2021.

See more PSIN resources or read about PSIN on the National Library of Australia Trove archive.