Agent-Based Models and the Ghost in the Machine

October 7, 2019

Originally published on Economics from the Top Down

Blair Fix

In the opening post of this blog, I described my ‘top-down’ approach to studying society. This means studying groups of people without trying to reduce everything to the actions of individuals. It’s not that I think individual actions are unimportant. Of course they are important. The problem is that we are hopelessly far from understanding the complexities of the human psyche. In this post, I’m going to dive into that problem.

Let’s talk about the difficulties of modelling human behaviour. To model human behaviour, we need to create little simulated ‘agents’ that make decisions just like a human would. We call these ‘agent-based’ models. We create a whole bunch of little simulated humans, each with an algorithm for making decisions. We then let these agents loose and see what happens. The results can be very interesting.
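
To make the recipe concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the ‘agents’ are bare wealth values, and the decision rule (hand one unit to whoever you bump into) is a deliberately crude stand-in for real human choice.

```python
import random

# A minimal agent-based model. Each 'agent' is just a wealth value,
# and its decision rule is a toy invented for illustration.
agents = [100.0] * 1000  # 1,000 agents, each starting with 100 units

for step in range(100_000):
    # Two agents meet at random; the first hands one unit to the second.
    a, b = random.sample(range(len(agents)), 2)
    if agents[a] >= 1:
        agents[a] -= 1
        agents[b] += 1

# The 'interesting' part: the aggregate pattern that emerges.
print(f"poorest: {min(agents):.0f}   richest: {max(agents):.0f}")
```

Even this toy produces something worth staring at: wealth spreads out even though every agent started identical and follows the same rule.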

But ‘interesting’ results are not what science is about. Science is about the search for the truth. So ultimately we want to know if these models mimic actual human behaviour. The problem is that this is not easy to test. Modellers typically let loose their simulation and then look at the aggregate results. If the aggregate results are consistent with real-world data, the modellers celebrate. The model ‘explains’ human behaviour!

But not so fast. A model can be completely wrong and still produce results that are consistent with observations. Consider the distribution of human height. It follows a bell curve, centered around some average height. But there are many different mechanisms that can produce a bell curve. So suppose we create a model that produces the correct bell curve for human height. It does not mean that our model is correct.

Suppose we assume that adult height varies randomly through time. This is easy to model mathematically, and we can tune it to produce the correct distribution of human height. But the model is completely wrong. Adult height does not change (except in old age).
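
A quick numerical sketch of this trap. Both mechanisms below generate a bell curve with roughly the same mean and spread, yet they describe completely different worlds. (The numbers are arbitrary, chosen only so that the two outputs match.)

```python
import random
import statistics

N = 10_000

# Mechanism 1: each adult's height is fixed, drawn once from a bell curve.
fixed = [random.gauss(170, 7) for _ in range(N)]

# Mechanism 2 (the wrong model): each adult's height takes a random walk
# over 50 'years'. The cross-section is still a bell curve.
walk = [170 + sum(random.gauss(0, 1) for _ in range(50)) for _ in range(N)]

for name, data in [("fixed at birth", fixed), ("random walk", walk)]:
    print(f"{name:14s}  mean = {statistics.mean(data):5.1f}  "
          f"sd = {statistics.stdev(data):4.1f}")
```

Judged only by the aggregate distribution, the random-walk model looks just as good as the true one.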

So the hard part of modelling is not testing the model results. That’s easy. The hard part is testing the assumptions. You see, testing the assumptions is something that has to be done independently of the model. It’s part of experimental and observational science. It’s actually really hard, and few economic modellers try to do it. Almost 50 years ago, Wassily Leontief went on an epic rant about this very problem. His complaint: economists modelled without testing their assumptions.

Sadly, most economists did not change their ways. Was it because Leontief was just some crank? No! Leontief was the President of the American Economic Association at the time. Economists ignored him for a far more perverse reason. If they had actually tested their assumptions, they would likely have found that most of these assumptions were wrong. Or worse yet, that they were impossible to test.

But enough about the sorry state of mainstream economics. Let’s look at how hard it is to model human behaviour.

The Ghost in the Machine

To model human behaviour, we need to simulate the ‘ghost in the machine’. This is the (metaphorical) demon in our brains that drives our behaviour. I’m going to break the ghost down into three parts, based on what is accessible to our conscious minds. I call these the ‘three ghosts in the decision-making machine’:

The Three ‘Ghosts’ in the Decision-Making Machine:

  1. Awareness that we are making a decision
  2. Awareness of the trade-offs that we are weighing
  3. Awareness of the decision-making algorithm

For any given decision, we can be aware of 0, 1, 2, or all 3 of the ghosts. We can use this awareness to define four types of decisions:

The four types of decisions:

  1. Instinctual decisions
  2. Intuitive decisions
  3. Quasi-rational decisions
  4. Rational decisions

Let’s go through each one.

Instinctual Decisions

Instinctual decisions occur when we don’t know any of the three ghosts. We don’t know the decision-making algorithm, we don’t know the trade-offs being weighed, and most importantly, we don’t even know we are making a decision! This may seem absurd, but these instinctual decisions probably outnumber all the others.

Take the act of walking. It requires decisions. How long should your stride be? How high should you lift your feet? How hard should you push off? How far apart should your feet be?

Are these really decisions? Yes! We figured this out when we tried to teach robots to walk. It’s remarkably hard. The robot must be given an algorithm for making these decisions: it must explore the possible ways to walk, judge which way is best, and make a choice. We make these decisions unconsciously. Some people — dancers — receive special training so that they gain conscious control over all these choices. But that takes years of practice. The rest of us remain happily oblivious. We can thank evolution for this instinctual ability.
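
The structure of the robot’s problem is easy to sketch, even if real walking is not. In the toy below, the gait parameters and the scoring function are pure inventions (a real score would come from a physics simulation, or from hardware falling over), but the explore-judge-choose loop is the point.

```python
import random

def gait_score(stride, lift, push):
    # Toy stand-in for 'how far did the robot get without falling?'.
    # The 'ideal' gait (0.7, 0.1, 0.5) is invented for illustration.
    return -((stride - 0.7) ** 2 + (lift - 0.1) ** 2 + (push - 0.5) ** 2)

# Explore possible gaits, judge each one, keep the best: a decision.
best, best_score = None, float("-inf")
for _ in range(1000):
    candidate = (random.uniform(0, 1.5),   # stride length
                 random.uniform(0, 0.5),   # foot lift
                 random.uniform(0, 1.0))   # push-off force
    score = gait_score(*candidate)
    if score > best_score:
        best, best_score = candidate, score

print("chosen gait (stride, lift, push):", [round(x, 2) for x in best])
```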

Intuitive Decisions

Intuitive decisions occur when we are aware that we are making a decision, but we don’t know the other two ghosts. We remain blind (either totally or partially) to the trade-offs and to the decision-making algorithm.

Choosing a life partner is an intuitive decision. We are fully aware of this decision (most of us, anyway). But we are largely blind to the second ghost — the trade-offs we are weighing. We’re aware of some of the trade-offs when choosing a mate, but certainly not all of them. And here’s an unsettling thought: are the criteria that we think we are using the ones that we are actually using? This is hard to know and hard to find out. But I suspect we often delude ourselves.

Lastly, when making intuitive decisions we are oblivious to the decision-making algorithm at work. We just get an intuitive feeling for what to do. When choosing our life partner, we call the algorithm ‘love’. It remains happily beyond our understanding. Again, we can thank evolution for this ability.

Quasi-rational Decisions

Let’s move up the ladder to quasi-rational decisions. Here we know that we are making a decision, and we know the trade-offs being weighed. But we do not know the decision-making algorithm.

This happens in settings like courts, where we clearly define the decision to be made and the trade-offs to be weighed. A judge is presented with a specific set of facts, and is supposed to consider these and only these facts. The judge’s job is to weigh them and reach a verdict. You may protest that we do know the decision-making algorithm: it’s dictated by the law, judicial precedent, and so on. But alas, this is an illusion.

If the decision-making algorithm were determined by the legal system, then all judges would render the same verdict when given the same facts. But we know this is not what happens. Judges have different opinions and different values. In the language used here, they have different quasi-rational algorithms for making decisions. These algorithms are not known to us, and it is doubtful they are known to the judges themselves. Thanks to evolution, we are equipped with mental machinery that can weigh different facts. But we do not understand how this machinery works.
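
One way to picture this is to give two simulated judges the same facts but different hidden weights. Everything here (the facts, the weights, the threshold) is made up for illustration; the point is that identical inputs fed through different algorithms yield different verdicts.

```python
# Same facts, different (hidden) weighting algorithms -> different verdicts.
# All numbers are invented for illustration.
facts = {"forensic_match": 0.9, "alibi": 0.6, "motive": 0.4}

judges = {
    "Judge A": {"forensic_match": 0.8, "alibi": -0.5, "motive": 0.3},
    "Judge B": {"forensic_match": 0.4, "alibi": -0.9, "motive": 0.2},
}

for judge, weights in judges.items():
    score = sum(weights[f] * strength for f, strength in facts.items())
    verdict = "guilty" if score > 0.3 else "not guilty"
    print(f"{judge}: score = {score:+.2f} -> {verdict}")
```

Real judges are like this toy with the weight table deleted: the weighing happens, but the weights are hidden, even from the judges themselves.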

Daniel Dennett calls this “competence without comprehension”. It’s one of the miraculous things about life. Organisms are little robots that are good at reproducing. We don’t need to understand why we do what we do. We just need to do it.

Rational Decisions

Let’s climb to the final rung of the ladder: rational decisions. Here we know all of the ghosts in the machine. We know we are making a decision, we know the trade-offs, and we know the decision-making algorithm. In humans, this type of decision is confined to very artificial scenarios, such as mathematical logic.

The beauty of formal logic is that it provides the decision-making algorithm. It defines how we weigh different trade-offs. For instance, logic dictates that 5 is greater than 4. So if we need to decide which number is greater, there is only one correct answer.

Economists use the term ‘rational’ with wild abandon. But I use the term quite restrictively. I reserve ‘rational choice’ only for those decisions where the algorithm is known to us. And because the decision-making algorithm is known to us, we can prove that there is only a single correct answer.

Defining the ghost by what the machine can do

Having defined four types of decisions, let’s return to agent-based models. The curious thing is that virtually all such models live in the rational decision-making space. In fact, we are really good at making these models. Look at the success of chess-playing programs such as Deep Blue. These programs are little computer agents that make rational decisions about how to play chess. They weigh every move by the probability that it will lead to winning the game. And they are spectacularly successful. They crush us humans.
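
Chess itself won’t fit in a snippet, but the structure of such a program will. Here is a bare-bones minimax search on a toy game of my own choosing (players alternately take one or two stones; whoever takes the last stone wins). Because the rules fully define the decision, the algorithm can be written down completely:

```python
def minimax(pile, maximizing):
    # Exhaustive minimax: the decision-making algorithm is fully known,
    # so a machine can play this game perfectly.
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    values = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(values) if maximizing else min(values)

def best_move(pile):
    # Weigh every legal move by the outcome it leads to, then choose.
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))

print("with 7 stones, take:", best_move(7))  # leaves 6, a losing position
```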

But before we celebrate, we need to reflect on the goal of agent-based models. The goal is not to beat humans at chess (or any other game or task). No, the goal is to play the game like a human would play. This means the simulation must make all the same mistakes that humans are prone to make. The program must make quasi-rational decisions.

This task is more difficult than building a rational chess-playing computer that can trounce humans. Only in the most contrived situations do humans make purely rational decisions. As the logic gets harder, we make mistakes. The algorithm of formal logic becomes too hard and we start to rely on algorithms given to us by evolution. These algorithms make mistakes. A correct model of human behaviour must simulate these mistakes.

So why do most agent-based models live in the rational decision-making space? It’s simple. Rational decisions are the ones where we know the algorithm. In games such as chess, the algorithm is defined by the rules of the game. Because we know the decision-making algorithm (we defined it when creating the game), we can teach it to a machine. We may not be able to do the computations ourselves, but this is what computers are good at.

The problem is that the real world is not a game. We don’t get to define the decision-making algorithm. Most of the decisions that matter in our lives are either intuitive or quasi-rational. This means the decision-making algorithm remains unknown to us. Of course, this algorithm should be scientifically studied. But we don’t need agent-based models to do this. We need experiments.

If we want to build agent-based models, it’s far more fruitful to go to the lowest ghost in the machine: instinct. Why? Because here the decision-making algorithm is easiest to mimic. Take the behaviour of people in crowds. In crowds, humans behave instinctually. We move with the crowd, but also protect our personal space. This makes our behaviour fairly easy to model. Other animals, such as penguins, show similar behaviour.
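
Here is a sketch of that kind of crowd model, in the spirit of Reynolds-style flocking rules. All the constants are arbitrary. The two ‘instincts’ are: drift with the crowd’s average motion, and push away from anyone inside your personal space.

```python
import random

# Toy crowd model. Positions and velocities are complex numbers (x + yi).
pos = [complex(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
vel = [complex(1, 0)] * 50  # everyone starts drifting in the same direction

for step in range(100):
    new_vel = []
    for i, p in enumerate(pos):
        align = sum(vel) / len(vel)  # instinct 1: move with the crowd
        repel = sum(p - q for q in pos
                    if q != p and abs(p - q) < 1.0)  # instinct 2: keep space
        new_vel.append(0.8 * vel[i] + 0.1 * align + 0.1 * repel)
    vel = new_vel
    pos = [p + 0.1 * v for p, v in zip(pos, vel)]

print(f"average speed after 100 steps: {sum(abs(v) for v in vel) / len(vel):.2f}")
```

A handful of lines, because the instinct itself is simple.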

There is another philosophical reason to model instinct. It guarantees that our models will not overstep their bounds. To model instinct, we need to find very specific situations where our instincts take over. Crowd movement is a good example. Having created an agent-based model of crowds, no one would dream of using it to explain state formation. Its limited applicability is clear. This guards against economic imperialism.

Introspection: Does the ghost play tricks on us?

Microeconomics is a form of ‘armchair psychology’. The early microeconomists used self-reflection to postulate algorithms for how humans make decisions. For these economists, introspection indicated the self-evident truth of their theories. Here is Lionel Robbins making this claim with bombast:

[T]he chief … postulates [in economics] are all assumptions involving … simple and indisputable facts of experience …

We do not need controlled experiments to establish their validity: they are so much the stuff of our everyday experience that they have only to be stated to be recognised as obvious.

We are lucky that engineers don’t think like Robbins. Imagine an engineer saying: “We don’t need controlled experiments to establish the strength of a bridge. Its strength is the obvious stuff of everyday experience.” This doesn’t pass the laugh test. Yet this kind of thinking forms the basis of microeconomics.

My point is that we need to question the validity of self-reflection. Is it a valid tool for understanding how we make decisions? Suppose I make a decision, and then reflect on how I made it. Here is the problem: how do I know if my reflection is correct? I can’t just trust that it is. Self-delusion is one of the hallmarks of the human experience. (I know this because I am the greatest scientist on Earth.) Having self-reflected, we must test the hypothesized algorithm to see if it predicts future decisions.

This takes difficult experimental work. We need to set up decision-making scenarios and test how humans behave. Then we make an algorithm that mimics this behaviour. In other words, agent-based modelling must be an outgrowth of experimental psychology. Cue shudders from economists.
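
In code, that research program looks like fitting rather than assuming. The sketch below uses invented choice data from a hypothetical experiment: subjects pick between two payoffs, a candidate decision rule (a logistic choice curve) is proposed, and we ask which version of the rule best predicts the observed choices, mistakes included.

```python
import math

# Hypothetical experiment: each row is (payoff_a, payoff_b, chose_a).
# The data are fabricated for illustration; note the 'mistake' in the
# last row, where the subject picked the smaller payoff.
trials = [(10, 8, 1), (10, 9, 1), (10, 11, 0), (5, 9, 0),
          (7, 6, 1), (4, 8, 0), (9, 8, 0)]

def log_likelihood(beta):
    # How well a logistic choice rule with sharpness 'beta' fits the data.
    ll = 0.0
    for a, b, chose_a in trials:
        p_a = 1 / (1 + math.exp(-beta * (a - b)))
        ll += math.log(p_a if chose_a else 1 - p_a)
    return ll

# Grid-search for the sharpness that best predicts the observed choices.
best_beta = max((b / 10 for b in range(1, 51)), key=log_likelihood)
print(f"best-fitting sharpness: {best_beta:.1f}")
```

The fitted rule is noisy by construction, because the behaviour it must mimic is noisy too.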

When the ghosts talk to other ghosts

To make things even harder, it is not enough to model the decisions of individual humans. The model must also mimic how humans behave in groups.

The problem is that human groups are more than the sum of their parts. When humans interact, we update our beliefs. So not only must we model how individuals make decisions; we must also model how the decision-making algorithm gets updated as individuals interact.
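
Here is a minimal sketch of that extra layer, in the spirit of DeGroot-style opinion models. The update rule and the ‘persuadability’ constant are arbitrary. The point is that what we are modelling is no longer a fixed algorithm, but one that changes with every interaction.

```python
import random

# Toy model of interacting beliefs: each meeting nudges both agents
# toward each other. The 0.3 'persuadability' is an arbitrary choice.
beliefs = [random.uniform(0, 1) for _ in range(100)]

for meeting in range(5000):
    i, j = random.sample(range(len(beliefs)), 2)
    midpoint = (beliefs[i] + beliefs[j]) / 2
    beliefs[i] += 0.3 * (midpoint - beliefs[i])
    beliefs[j] += 0.3 * (midpoint - beliefs[j])

# The spread collapses: a crude, mechanical version of groupthink.
print(f"belief spread after 5000 meetings: {max(beliefs) - min(beliefs):.3f}")
```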

In groups, we adopt shared ideologies that influence our decision-making algorithms. We’re prone to groupthink. The resulting group behaviour can go in all sorts of directions. It can lead to military atrocities, to great humanitarian works, to triumphs of science, to mass suicides, to vain monuments, to creative visions, and so on.

When you stop to think about how complex this is, it boggles the mind. Are we ready to model this complexity? I think not. We should be modest and admit our ignorance on this front. It’s better to admit we know little than to stubbornly hold on to theories that are wrong.

Having outlined all of the difficulties with agent-based models, I admit that they have a place in science. They are useful as thought experiments. We suppose that humans have a specific decision-making algorithm. We then use an agent-based model to explore the consequences. This is a perfectly valid tool for investigation.

But we don’t get to pretend that this thought experiment says anything about actual human behaviour. To do that, we need to test our assumptions. We need to show that our decision-making algorithm is how humans actually behave. As I’ve tried to show here, this is extremely hard to do.

If agent-based modellers want to make strides in understanding human behaviour, they will have to become a sub-discipline of experimental psychology. Oh the horrors!