We’re all Bayesians

Human decision-making can be viewed as a probabilistic problem: in a world full of uncertainty, how do we make sense of what we observe and respond rationally?  Decision-making, inductive reasoning, and learning have all been modeled using Bayes’ theorem, which says that the probability of event A given that event B occurs (the posterior) depends on certain known or estimable probabilities: the probability, absent any other information, of event A happening (the prior), the probability of B occurring given that A has occurred (the likelihood), and the overall probability of B occurring (the model evidence).  In an article published today, Acerbi et al. looked at factors that make our probabilistic inference suboptimal.
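
In symbols:

```latex
% posterior  =  likelihood x prior / evidence
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```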

For example, suppose we want to decide whether to wear a dress or pants.  The decision probably depends on whether we believe it will be a hot day (event A).  In practice we’d check weather.com, or simply choose the outfit we like best regardless of the weather, but suppose all we have at our disposal is our memory of last summer’s temperatures (event B).  What we want to know is the chance that today will be hot given what we remember summer weather being like.  We have a prior belief about today’s temperature, a sense of how likely our remembered summer is if today really is hot (the likelihood), and some overall confidence in that memory (the evidence).  That’s every quantity needed to apply Bayes’ rule and pick out our outfit for the day.
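
As a toy illustration (the numbers below are invented for this post, not taken from the paper), the whole update fits in a few lines:

```python
# Toy Bayesian update for the outfit example.
# All probabilities here are made up for illustration.

prior_hot = 0.6          # P(A): prior belief that today will be hot
p_mem_given_hot = 0.8    # P(B|A): chance our memory of last summer looks
                         # the way it does if today really is hot
p_mem_given_cool = 0.3   # P(B|not A): same memory, but today is cool

# Model evidence P(B): total probability of the memory over both hypotheses.
evidence = p_mem_given_hot * prior_hot + p_mem_given_cool * (1 - prior_hot)

# Bayes' rule: posterior = likelihood * prior / evidence
posterior_hot = p_mem_given_hot * prior_hot / evidence
print(f"P(hot | memory) = {posterior_hot:.2f}")  # 0.80 -> dress it is
```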

Of course, nobody sits down and computes the posterior probability of a hot day every morning by thinking about the prior.  And this is where we have trouble making decisions:

if prior experience has complex patterns, we might require more trial-and-error before finding the optimal solution. This partly explains why, for example, a person deciding the appropriate clothes to wear for the weather on a June day in Italy has a higher chance of success than her counterpart in Scotland.

In this example, the prior distribution of temperatures on a June day in Italy might be centered around a high temperature, whereas the distribution for Scotland might have two peaks or be somewhat flat.  We’d expect it to be easier to learn the average of a Gaussian distribution than of a more complicated or skewed one.  To see how this plays out in rational decision-making, the authors tested people’s ability to predict the location of a target under a variety of prior distributions.  They concluded,

This finding suggests that human efficacy at probabilistic inference is only mildly affected by complexity of the prior per se, at least for the distributions we have used. Conversely, the process of learning priors is considerably affected by the class of the distribution: for instance, learning a bimodal prior (when it is learnt at all) can require thousands of trials, whereas mean and variance of a single Gaussian can be acquired reliably within a few hundred trials.
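
The authors’ subjects learned priors over target locations; the sketch below is not their experiment, just a toy simulation (with invented temperatures and mixture weights) of why a bimodal prior demands so much more data: the mean and variance that fully characterize a Gaussian stabilize quickly, but for a bimodal prior they converge to a value that almost never occurs.

```python
# Toy simulation: summary statistics suffice for a Gaussian "Italy" prior
# but mislead for a bimodal "Scotland" prior. Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

def italy_june(n):
    # Unimodal prior: one warm peak around 30 C.
    return rng.normal(loc=30.0, scale=3.0, size=n)

def scotland_june(n):
    # Bimodal prior: two weather regimes, cool (~12 C) and mild (~20 C).
    cool = rng.random(n) < 0.5
    return np.where(cool, rng.normal(12.0, 2.0, n), rng.normal(20.0, 2.0, n))

for n in (50, 500, 5000):
    italy, scotland = italy_june(n), scotland_june(n)
    # Mean and sd pin down the Gaussian completely, and they stabilize
    # after a few hundred samples...
    print(f"n={n:5d}  Italy mean={italy.mean():5.2f}, sd={italy.std():.2f}",
          end="   ")
    # ...but for the bimodal prior the sample mean (~16 C) falls in the
    # valley between the peaks, a temperature that rarely occurs. The
    # learner must instead resolve two modes, which takes far more data.
    print(f"Scotland mean={scotland.mean():5.2f} (between the peaks)")
```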

We might have a much harder time learning the patterns of Scottish weather than Italian weather, but the paper suggests that once the prior is learned, we’d use it about equally well in either country.  Or perhaps we simply disregard Bayes’ rule altogether and make irrational decisions based on our emotions and personal biases.