Leeds Business Week: Behavioural Science of Forecasting and Predictions
Notes are provided below; alternatively, you can download a PDF of the slides and notes by clicking here.
Welcome everyone, my name is Kash Ramli, and I am the Marketing and Behavioural Science Director at Needle Partners. Needle is a full-service international law firm with a strong focus on corporate and commercial work. We have offices in Leeds, London and Malaysia. Aside from the usual responsibilities of marketing, my role also includes applying behavioural science to the way the firm operates and strategises.
Behavioural science, or the more popular term, behavioural economics, is the combination of psychology and economics research with a focus on decision making. Traditional economics assumes that people are rational decision makers who predictably act to maximise their utility. However, psychology research has found many examples where this rationality does not prevail and people make irrational decisions. This will be the overarching theme of today's seminar.
I won't be going into the nitty-gritty of mathematical formulas for economic forecasting or the analytics of making predictions. Rather, I will be using the tools of behavioural science to first show you the flaws of forecasting and of experts. I will then discuss the cognitive biases that affect us all when making predictions, and decisions in general. I'll then be a bit more positive: I will highlight the traits of good forecasters and share some strategies that help you avoid these biases when making predictions and decisions in your business. To begin, I would first like you all to make a prediction in a quick little game.
Your objective is to try to outwit each other. Your mission is to predict the behaviour of others. I would like you to pick a number between 0 and 100. To win, your number must be two-thirds of the average of all the numbers chosen by everyone else. So, for example, if the average of everyone's guesses is 90, you would win by choosing 60 (which is two-thirds of 90). If the average is 10, you would win by picking 7. Now write that number down, or just remember it.
So how many of you made a random guess? What numbers did you guess? So the process people go through in coming up with an answer for this is usually as follows. If everyone chooses a random number, the average should be around 50. Two-thirds of 50 is 33. Some people might go a step further and think that if everyone is going to say 33, they should state two-thirds of 33, which is 22. The global average is usually around 28 (an average of those choosing 33 and 22). So the winning answer would be two-thirds of that, which is 19.
But why don't people go further and say two-thirds of 22 is 15, and two-thirds of 15 is 10, all the way down to 0? An answer of zero is what is known as the Nash equilibrium, and in a world where everyone is hyper-rational, this is the answer everyone would give. Instead, most of us use what is called "K-step" thinking, where K represents some small number: 1, 2 or 3. Those who just pick a random number are 0-step thinkers. What I'm trying to demonstrate here is that people often fail to think more than a couple of steps ahead. This is obvious in many areas.
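As a quick illustration (my own sketch, not part of the original slides), here is what K-step thinking looks like if you start from the naive guess of 50 and apply "two-thirds of that" repeatedly: each extra step of reasoning shrinks the guess towards the Nash equilibrium of 0.

```python
# K-step reasoning in the two-thirds-of-the-average game: a k-step
# thinker starts from the naive average of 50 and applies "two-thirds"
# k times. As k grows, the guess converges towards the Nash
# equilibrium of 0.

def k_step_guess(k, start=50.0):
    """Guess of a k-step thinker (k=0 is the naive 'random' guesser)."""
    guess = start
    for _ in range(k):
        guess *= 2.0 / 3.0
    return guess

for k in range(6):
    print(f"{k}-step thinker guesses about {k_step_guess(k):.1f}")
```

Real audiences mostly stop at one or two steps, which is why the winning answer in practice sits around 19 rather than 0.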
Similarly, it is this K-step thinking that leads to market bubbles forming. Part of thinking multiple steps ahead is anticipating what may or may not be coming. And that is where good predictions and forecasting come in.
Where the ancients focused their energy on turning lead into gold using the science of alchemy, today's businesses try to create success using the science of predictions. I'm sure most of you are well aware of the importance of making predictions when it comes to the success of your business. However, predictions can have many applications in your business, so it's worth briefly pointing them out:
Business planning: This is the more obvious application, but a good business plan always includes accurate estimates of what the future holds, both for the performance of the business and for the market it is entering.
Controlling stocks: Predictions can help you take control of inventory or production. Getting an accurate picture of future sales will help you maintain a “leaner” production process.
Counter-factuals: Even if you are unable to make accurate predictions, whether due to the complexity of the system or a lack of data, you can still use a prediction as a baseline for measuring any campaign or special project you develop. You make a prediction for your business without the intervention, then use that to determine the impact of the intervention.
Predicting competitor reaction: Knowing how your competitors will react to whatever action your company takes is almost as important as predicting how the market will respond. Are they going to replicate your new product or service? Are they going to respond by cutting prices on other core products and services to make up the difference in your gains? Knowing how your competitors will react will help inform your contingency plans and allows you to stay multiple steps ahead. I once saw a documentary on one interesting but very creepy application of predictions: predicting which children would grow up to be beautiful, for a modelling agency.
There are traditionally three methods of forecasting: qualitative methods, time series analysis and projection, and causal models. Choosing the right one depends mainly on the context and content of the forecast. You can't do a time series analysis if you have no historical data, you wouldn't use a qualitative method if you need very specific and accurate predictions, and you wouldn't try to build causal models for abstract concepts. The focus today is not on these methods themselves but on the general mental processes that come into play when using any of them. So let's have a look at some of the shortcomings…
Forecasting is a huge industry. And the industry that profits most from predictions, accurate or not, is… finance. However, there is growing evidence that the accuracy of many predictions in finance is random at best. For example, a study by S&P Dow Jones last year showed that many mutual funds performed worse than pure chance over the past five years. That is, flipping a coin would have led to a better-performing fund. Furthermore, the funds that did manage to outperform their peers one year were likely to be poor performers the following year.

This is what the Princeton professor of finance Burton G. Malkiel calls "a random walk". The term "random walk" is seen by those on Wall Street as an obscenity: an epithet created by academics to discredit them. In his class, Professor Malkiel once had his students create charts of fictional stocks by flipping a coin. The stocks started at a price of $50. Each day, a stock either gained or lost fifty cents in value depending on the coin flip. As the class soon realised, their stocks' "investment history" looked realistic. Malkiel even showed one to a "chartist", an investor who picks stocks solely by analysing stock market charts on the assumption that certain patterns repeat themselves. Malkiel describes the analyst's reaction: "One of the charts showed a beautiful upward breakout from an inverted head and shoulders (a very bullish formation). I showed it to a chartist friend of mine who practically jumped out of his skin. 'What is this company?' he exclaimed. 'We've got to buy immediately. This pattern's a classic. There's no question the stock will be up 15 points next week.'"
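You can reproduce Malkiel's classroom exercise in a few lines. This is a sketch of my own (the exact parameters of his exercise are assumed from the description above): a fictional stock starts at $50 and gains or loses 50 cents each "day" on a fair coin flip. Plot the result and it tends to look like a real price history, complete with apparent trends and "breakouts", despite being pure noise.

```python
# Malkiel-style fictional stock chart: $50 start, +/- 50 cents per day
# on a fair coin flip. Any "pattern" in the resulting series is an
# artefact of randomness.

import random

def random_walk_prices(days=250, start=50.0, step=0.50, seed=1):
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    prices = [start]
    for _ in range(days):
        prices.append(prices[-1] + rng.choice([+step, -step]))  # coin flip
    return prices

prices = random_walk_prices()
print(f"start={prices[0]:.2f}, end={prices[-1]:.2f}, "
      f"high={max(prices):.2f}, low={min(prices):.2f}")
```

Hand a chart of `prices` to a chartist without context and, as Malkiel found, they may well spot a "classic" bullish formation in it.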
In 2010, a Russian circus chimpanzee named Lusha picked an investment portfolio that "outperformed 94% of the country's investment funds", to great acclaim. Given 30 blocks, each representing a different company, and asked "Where would you like to invest your money this year?", the chimp picked out 8 blocks. An editor of a Russian finance magazine commented that Lusha "bought successfully and her portfolio grew almost three times". He suggested that "financial whizz-kids" be "sent to the circus" instead of being rewarded with large bonuses. Now, I'm not saying that all fund managers have no skill. There are some outliers, including Warren Buffett. But what the S&P study and these examples do show is that the chance of any outperformance persisting for an extended period is very low. And it's extremely unlikely that you can pick an outperformer based on past performance.
And it's not just finance, though it is more fun to point out the flaws there. There are examples of "experts" in various fields who have been shown to perform no better than chance in their predictions. In one study of apparent experts across various fields, scientists demonstrated that those who identified as experts were more likely to claim that they knew of and understood the meaning of a set of made-up terms. For biologists, the made-up terms included "metatoxins", "bio-sexual" and "retroplex". They did this with experts in various fields, including finance. The scientists were even able to create this effect amongst non-experts by making them feel like experts, using a knowledge test that was easy but made to seem difficult.
So why do we listen to experts anyway? One reason is that, as humans, we cognitively try to avoid uncertainty. We are naturally drawn to anyone claiming to know what the future holds, and when they have some sort of complicated method or system, it becomes a lot more convincing. Not only are we attracted to experts and forecasters, but we are more attracted to forecasters who make positive forecasts. A recent study demonstrated that when forecasts are higher rather than lower (e.g. a 70% vs. a 30% chance of team A winning a game), consumers infer that the forecaster is more confident in her prediction, has conducted more in-depth analysis, and is more trustworthy. The prediction is also judged to be more accurate. This occurs because forecasts are evaluated based on how well they predict the target event occurring (team A winning). Higher forecasts indicate a greater likelihood of the target event occurring and signal a confident analyst, while lower forecasts indicate a lower likelihood and lower confidence in the target event occurring. But because, with lower forecasts, consumers still focus on the target event (and not its complement), lower confidence in the target event occurring is erroneously interpreted as the forecaster being less confident in her overall prediction (instead of more confident in the complementary event occurring: team A losing). Before I explain anything, I just want to play a quick game with you.
So I've chosen a rule that some sequences of three numbers obey… and some do not. I want you all to have a go at guessing what that rule is. I'll start you off with an example of a sequence that does obey the rule: 2, 4, 8. Now I want you to try to figure out what the rule is by testing different sequences, and I'll tell you whether each one obeys it.
Well, the rule is simply that each number has to be larger than the previous number. Easy, right? Tested on kindergarten students, they usually get it pretty quickly. The reason why you may not have got it comes down to something called the confirmation bias. We quickly develop a theory in our minds of what the rule could be, and when exploring for answers, we simply ask questions that would support our theory. It never occurs to us to ask a question that may yield a no, even though there is no penalty for getting a no. When making predictions, we often only seek out information and evidence that supports the prediction we most prefer. Yet when you seek to disprove your idea or prediction, you sometimes end up proving it, and other times you save yourself from making a big mistake. But we'll explore this further later.
The point here is that neither experts nor the rest of us are cognitively perfect; we all fall prey to various mental quirks that can lead us away from optimal decision making. So the main aim of this seminar is to help you all become aware of these deviations from what economists would deem rational behaviour. Collectively, these deviations from rationality are labelled cognitive biases. Here is a more formal definition: a cognitive bias is a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion. Individuals create their own "subjective social reality" from their perception of the input. An individual's construction of social reality, not the objective input, may dictate their behaviour in the social world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. So let's explore some of the cognitive biases that affect people's judgment and decision making about future events.
A great way to demonstrate this in respect of forecasting is with optical illusions. Take a look at these examples. Our brains create reality on the fly, based on educated guesses about what is probably out there. The reason is that it takes much less processing power, and burns far fewer calories, for the brain to crudely sketch out what it sees based on a few simple rules, such as "shapes that appear fragmented usually aren't, so fill in the gaps", than it does to rigorously process and compute complete and accurate pictures. Such shortcuts not only conserve energy (our brains consume 20% of our daily calorie usage) but also speed up decision making. So, going back to the illusions: when we look at them with a bit more conscious effort, it is clear that those shapes aren't actually there. Applied to forecasting, when we try to make assumptions about the future based on limited information, our minds will automatically try to fill in the gaps. We may also experience apophenia, the tendency to see patterns in random data.
Perceiving patterns is not limited to our visual world. We try to apply it, even at a conscious level, to many things, including gambling. I went to a casino a few weeks ago during a friend's stag do and had a go at roulette for the first time. While I was deciding where to place my money, some of the people I was with kept making suggestions, pointing out that certain numbers or a certain colour had not come up yet and so the ball was "due" to land there that round. This was further encouraged by a screen next to the table showing the outcome of all the previous spins that night. It was as if some magical force of justice ruled over the table, making sure that every number had its time to shine: numbers that had come up should be avoided, and numbers that hadn't were due. However, as I'm sure you all know, that isn't how probability works. This cognitive bias is aptly named the gambler's fallacy. Each spin of the roulette wheel, or more simply each flip of a coin, is independent of previous turns. And we see this behaviour not just amongst gamblers but in many other areas.
Similarly, another bias often found amongst gamblers but applicable to the general public is the hot hand fallacy: the belief that because you've been winning at a random event, you will continue to win. You see this a lot amongst stock traders. The lesson to take away from this, in regard to predictions, is simply that past information about seemingly random events does not reliably predict future outcomes. History is useful in predicting the future only if it is dependent or relevant. And even then, there is always the risk of what has been termed a black swan event: an event that comes as a surprise, has a major effect, and is often inappropriately rationalised after the fact with the benefit of hindsight. You shouldn't try to predict these; rather, you should build robustness so you can respond to them.
An alternative explanation for the hot hand fallacy and the gambler's fallacy is the illusion of control. Rather than seeing patterns that aren't there, some people believe they have control over factors that can't really be controlled. You see this a lot in sport, when people feel that if they wear their lucky underwear, their team will win. Applied to predictions, people can be very optimistic simply because they think they have some magical control over a factor.
Does anyone have an example of an absolutely ridiculous idea for a start-up or product that they, or a "friend", has come up with? You know, one they were convinced everyone would want. Well, that is known as the false consensus effect: the tendency for people to assume that their own opinions, beliefs, preferences, values and habits are normal and that others think the same way they do. So when your business tries to predict the market size for a product that doesn't yet exist, it becomes tempting to conjure up a figure based on how much you think people want that product.
The answer is B.
Recognition heuristic: This is called the recognition heuristic, and it suggests that people tend to perceive recognisable information as being of greater significance. So the fact that Reebok, Hilton and Starbucks are familiar to us leads us to believe that they would be the ones with the greatest sales revenue. It is therefore important to keep an open mind when making predictions about the market. The pieces of information you think are relevant may only seem so because they are more salient in your mind. For example, the company you think is your biggest competitor may only seem so because you drive past their office every day.
Availability heuristic: This effect extends not just to things we recognise, but also to thoughts or ideas that are easily brought to mind. This is known as the availability heuristic. For example, there was a post going around recently about the fact that in the last year more people were killed taking selfies than by sharks. This is interesting because we naturally see sharks as predators that would kill us if we encountered them. Similarly, when we think about start-ups, we always think about the big successes, because those are the ones reported by the media. People don't talk about the failures. But the truth is that around 80% of start-ups fail. So when making forecasts, make sure that the probability you assign to an event occurring is estimated with an objective, broad view and not just a subjective expectation.
As I mentioned earlier, many of these cognitive biases are the result of our brain using mental shortcuts, or heuristics, to make decisions. Heuristics are useful because they reduce effort and simplify decision making. Despite the examples above, these heuristics can also help us make efficient judgements or predictions in the face of uncertainty. We often do not have all the information or data required to make mathematically perfect judgments. Gerd Gigerenzer, a professor of psychology in Germany, believes that many of these heuristics are actually adaptive tools that can help us make better judgments and decisions than conscious, deliberate thought. This is especially the case when we are faced with uncertainty or complexity. And when making predictions about your business, there can be a lot of uncertainty.
One of the most famous examples of a heuristic being used in place of complex calculation is the story of Harry Markowitz. He won a Nobel Prize in economics for developing Modern Portfolio Theory, which is widely used in finance to help maximise the returns of portfolios. Despite having created this rather elaborate theory, when asked how he chose his own investments he professed to simply using the 1/N heuristic: he allocated his money equally across all N funds, N being however many funds there were. And remember the recognition heuristic, the placing of greater value on things we recognise? Despite the fact that choosing a group of familiar companies was not the optimal choice, it was probably the most reliable choice in the absence of better information. So while it is always important to make sure that the formulation of your (or your expert's) predictions has not been affected by cognitive biases, sometimes, when faced with uncertainty and risk, listening to your gut can be the best option. It makes life a bit more fun as well.
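The 1/N heuristic described above is trivially simple, which is the point. A minimal sketch (the fund names and amount below are made up for illustration):

```python
# The 1/N heuristic: whatever funds are on the table, split the money
# equally among them. No covariance matrices, no optimisation.

def one_over_n(total, funds):
    """Allocate `total` equally across the N funds given."""
    share = total / len(funds)
    return {fund: share for fund in funds}

allocation = one_over_n(9_000, ["Fund A", "Fund B", "Fund C"])
print(allocation)  # each of the 3 funds gets 3000.0
```

Compare that single division with the estimation burden of a full mean-variance optimisation, and it is easier to see why, under genuine uncertainty, the naive rule can be the more reliable one.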
So, in the face of all these biases, it becomes hard to trust anyone who provides a prediction or forecast in the absence of irrefutable, concrete data. Well, after a somewhat dire first half, I'm now going to turn to the factors that make good forecasts, and after that, some tools you can use. To answer the question of what makes a good forecaster, or superforecaster, which is the proper technical scientific term, I would like to introduce you to Professor Philip Tetlock from the University of Pennsylvania. He is probably THE eminent psychologist of forecasting, and he created the Good Judgment Project in collaboration with the Intelligence Advanced Research Projects Activity (IARPA), an agency within the Office of the Director of National Intelligence. The project, which ran from 2011 to 2013, got volunteers from all walks of life to make predictions on a range of geopolitical and economic issues, e.g. whether Greece would still be in the EU, or whether Mugabe would still be president.
The project was run like a tournament, and the top 2% of forecasters, those able to make consistently good forecasts well above chance, received the title of superforecasters. Throughout the study, Tetlock observed the superforecasters and was able to determine the traits that made them good at forecasting. The obvious points were: Intelligence: You didn't need to be a genius, just a bit above average.
Experience: Like anything in life, the more you practice, the better you get at it, so those that had experience forecasting performed better.
Domain expertise: Intelligence analysts working with the CIA, who had secret information and knew a lot about the subject, were bound to be better than a dentist.
But the points that are of most interest include:
People who try to be more open-minded about ideas typically perform better. Open-minded forecasters aren't afraid to change their minds, are happy to seek out conflicting views, and are comfortable with the notion that fresh evidence might force them to abandon an old view of the world and embrace something new. They're much more willing to consider unorthodox ideas or results, and to stray from the theories and beliefs they're comfortable with.
Most important is what Mr Tetlock calls a "growth mindset", which is related to open-mindedness. It is a mix of determination, self-reflection and a willingness to learn from one's mistakes. The best forecasters were less interested in whether they were right or wrong than in why they were right or wrong. They were always looking for ways to improve their performance. In other words, prediction is not only possible, it is teachable.
The study also found that teams performed 10% better than individuals. This can be attributed to the fact that many of the biases we've discussed do not affect groups in the same way, although groups do experience their own set of biases, for example groupthink. One strategy you can use when sourcing the opinion of a crowd is simply to gather everyone's estimates and take the average. One of the earliest findings on the wisdom of the crowd came from the British statistician Francis Galton in 1906. At a country fair, around 800 people tried to guess the weight of an ox. Most guesses were either far too high or far too low. However, when Galton took the average of everyone's guesses, he found it to be 1,197 pounds. The ox weighed 1,198 pounds. And there are many more examples of this.
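Galton's result is easy to reproduce in simulation. This is my own illustration with made-up numbers: if 800 people guess independently, scattered widely around the true weight, most individuals are far off, yet the crowd average lands remarkably close.

```python
# Wisdom of the crowd, simulated: independent noisy guesses average
# out to something close to the truth, even though most individual
# guesses are poor.

import random

TRUE_WEIGHT = 1198  # pounds, as in Galton's ox story
rng = random.Random(0)

# 800 guesses, each independently off by a typical ~150 lb
guesses = [TRUE_WEIGHT + rng.gauss(0, 150) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
worst_error = max(abs(g - TRUE_WEIGHT) for g in guesses)
print(f"crowd average off by {crowd_error:.1f} lb; "
      f"worst individual off by {worst_error:.1f} lb")
```

The key assumption, worth flagging, is independence: if the guessers talk to each other and anchor on one loud voice, the errors stop cancelling and the crowd loses its wisdom.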
Finally, a short lesson in probabilistic reasoning also led to a significant improvement in forecasting. These lessons essentially taught people the basics of probability theory and how to apply them when looking at and analysing evidence before inferring conclusions. They mostly used Bayesian statistics, which describes the probability of an event based on conditions that might be related to the event.
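To make that concrete, here is a toy worked example of Bayesian updating (the scenario and all the numbers are invented for illustration): start with a prior probability for an event, then revise it as evidence arrives.

```python
# Bayes' rule as a one-line update: revise a prior probability in the
# light of how likely the observed evidence is under each hypothesis.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(event | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior: a 30% chance a competitor launches a rival product this year.
p = 0.30
# Evidence: they post several relevant job ads -- something they do 80%
# of the time when a launch is coming, but only 20% of the time otherwise.
p = bayes_update(p, 0.8, 0.2)
print(f"After seeing the job ads: {p:.0%}")  # the prior rises well above 30%
```

The discipline the training instils is exactly this: state your prior, ask how diagnostic each new piece of evidence really is, and move your forecast by that amount rather than by gut feel.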
Some participants in the Good Judgment Project were given advice on how to transform their knowledge about the world into a probabilistic forecast, in an attempt to make them better forecasters. This training, while brief, led to a sharp improvement in forecasting performance. The advice was summarised with the acronym CHAMP:
● Comparisons are important: use relevant comparisons as a starting point.
● Historical trends can help: look at history unless you have a strong reason to expect change.
● Average opinions: experts disagree, so find out what they think and pick a midpoint.
● Mathematical models: when model-based predictions are available, you should take them into account.
● Predictable biases exist and can be allowed for: don't let your hopes influence your forecasts, for example, and don't stubbornly cling to old forecasts in the face of news.
A lot of these small tips can help you improve your general predicting strategies. But now let's take a look at some formalised methods that you can apply to the way your business makes predictions and decisions. The point of these two methods is that they help you avoid the cognitive biases we looked at earlier. They avoid the sluggish, meeting-based decision making and the yes-men mentality that are so often prevalent in some of the biggest business failures in the world.
Prediction markets: Prediction markets are used by several major companies, including Google, HP and Microsoft. They are based on the idea of the wisdom of the crowd. Prediction markets essentially allow participants to stake bets on the likelihood of various events taking place. The Good Judgment Project mentioned earlier is an example of a type of prediction market: you browse various questions on potential political events and buy shares based on whether you think they will occur or not. In a business setting you probably wouldn't bet real money; Google employees, for example, would bet using a made-up currency called Goobles, which could be exchanged for gifts. People can bet on any internal or external events. Some examples include:
- The likely success of a new product
- When a competitor is likely to release a new product
- The emergence of a new market
- When the company's project will be complete
One of the great things about prediction markets is that you can make them anonymous, providing a channel where people can voice their concerns about the predictions and planning of a project.
For example, when Boeing was first developing the 787 Dreamliner, which ended up delayed by four years, the company had an ongoing prediction market in place. However, it wasn't used by the CEO, who every three months kept saying, "Yes, of course we will meet the deadline", across the entire four-year delay. The prediction market, meanwhile, put only a 5% probability on that first flight attempt happening on schedule.
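To show how a market turns individual bets into a single probability, here is a sketch of one common mechanism, Hanson's logarithmic market scoring rule (not necessarily the one Google or Boeing used; the share quantities below are invented). The price of a "yes" share doubles as the crowd's current probability estimate, and it rises as more "yes" shares are bought.

```python
# A minimal logarithmic market scoring rule (LMSR) price function:
# the implied probability of "yes" given the outstanding yes/no share
# quantities and a liquidity parameter b.

import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Current 'yes' price (= crowd probability) under Hanson's LMSR."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

print(f"no trades yet:       P(yes) = {lmsr_price(0, 0):.2f}")    # 0.50
print(f"yes buying pressure: P(yes) = {lmsr_price(150, 50):.2f}")
print(f"no buying pressure:  P(yes) = {lmsr_price(20, 300):.2f}")
```

The liquidity parameter `b` controls how much trading it takes to move the price: a larger `b` means a deeper market whose probability shifts more slowly per share bought.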
We've all heard of a post-mortem, where a corpse is analysed to determine the cause of death. Well, a pre-mortem is the hypothetical opposite. At the beginning of a project, once everyone has been briefed on the details, you ask your team to imagine that the project has catastrophically failed. You then ask everyone to think of every possible reason for this failure and to write them down. Encourage them to include the things that wouldn't usually be mentioned simply because they seem too silly or impolitic. For example, there was a report last week about a radio station in Zimbabwe that went off air for a few hours because baboons chewed through its cables. If you mentioned that in a meeting, you would definitely be laughed at, but it still happened. Once you have a good number of reasons, you can go through them and identify possible ways to prevent each from happening. Obviously, some of the problems may be so unlikely that they are not worth addressing, but you can run a quick cost-benefit analysis to determine how easy or cheap each problem would be to fix. The pre-mortem doesn't just help teams identify potential problems early on. It also reduces the kind of damn-the-torpedoes attitude often assumed by people who are over-invested in a project. Moreover, in describing weaknesses that no one else has mentioned, team members feel valued for their intelligence and experience, and others learn from them. The exercise also sensitises the team to pick up early signs of trouble once the project gets under way.