How To Think About The Future Differently
The world is getting more uncertain. The bad news is that uncertainty has been rising steadily over the past decade. One telling indicator is the growing use of the word "uncertainty" in the country reports of the Economist Intelligence Unit. And it's not just there: it was recently noted that the word "uncertainty" appears more than 60 times in the IMF's latest global outlook, about twice as many as in the 2022 editions. So everyone trying to make sense of the world is increasingly hedging their bets and referencing this greater uncertainty. The reason is that disruptive events are becoming more frequent, and they are arriving faster, with less lead time between them. There are convergence loops between the environment and technology, and these are occurring with higher and higher frequency.
The boom of generative AI is making this many times worse. Generative AI amplifies the kind of intelligence that underlies every human activity, providing superhuman capabilities across a range of tasks, from producing content (code, documents, images) to supporting decisions. This means its effects are broad-ranging. And since it is, in effect, creating machine-generated truth, as we saw with the AI-generated picture of the pope, we are losing touch with what objective reality is.
We used to say that the map is not the territory, but now it seems there is no longer even a map. So it is getting harder and harder to form a view of the future, and harder to set a goal and a vision and navigate toward them. This is a problem, because we need a clear view of the future to orient teams and organizations toward a goal and a vision.
The good news is that we are not only living in a time of increased uncertainty, we are also living in a golden age of forecasting. Forecasting is the best method we have for creating more knowledge about the future. And there is currently a lot more really smart forecasting happening.
This golden era of forecasting got its start in 2011, when IARPA – the US intelligence community's equivalent of DARPA, the Defense Advanced Research Projects Agency we have to thank for the internet – launched a massive competition to identify cutting-edge methods for forecasting geopolitical events. The Good Judgment Project – led by Philip Tetlock and Barbara Mellers at the University of Pennsylvania – emerged as the undisputed victor of the tournament. GJP's forecasts were so accurate that they even outperformed intelligence analysts with access to classified data. They did this by setting up an online platform where amateur forecasters joined and were then trained (I was one of those who joined, and later became a superforecaster).
In addition to the work we do, there are now plenty of other online platforms where more and more people make forecasts. Examples include Good Judgment Open, run by Good Judgment Inc, Metaculus, and many others.
So given my experience as a superforecaster, I believe there are ways to think more clearly about the future and build more knowledge about it. And I think everyone could benefit from thinking about the future in new and different ways. Here are three approaches for doing so, inspired by my work as a superforecaster.
Thinking About the Future like an Architect
The first approach is that of an architect. What do I mean by thinking about the future like an architect? Two things. The first is to treat information about the future as scaffolding, like the kind you see on buildings. It can be very rough scaffolding, but you need something that new information can be slotted into. You should establish your beliefs about something as early as you can, i.e. decide what you think is likely to happen based on the information you currently have. Then you have something to work with when new information comes along. It is really hard to make use of new information if you don't have that prior scaffolding to put it into.
In her book Mastermind: How to Think Like Sherlock Holmes, Maria Konnikova describes Holmes's approach as an attic, where he stores each piece of information alongside similar pieces. That is a similar idea, but to stress the interconnected nature of events, I think scaffolding is the better image. Once you have that scaffolding, you can weigh each new piece of information against your existing beliefs and see whether you should update your view. The technical term for this is Bayesian updating. There is a formula for it, but you don't necessarily need to use the actual formula. The idea is simply that you have an estimate of the likelihood of something happening; when new information arrives, you weigh it against your existing information and update based on both. This ensures you give appropriate weight to each set, otherwise there is a risk of overreacting to the new information.
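For readers who do want the mechanics, Bayesian updating is easiest in the odds form of Bayes' rule: posterior odds equal prior odds times the likelihood ratio of the new evidence. Here is a minimal sketch in Python, with made-up numbers for illustration:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update a probability given new evidence, via the odds form of Bayes' rule.

    prior: current probability of the event (between 0 and 1).
    likelihood_ratio: how much more likely you'd expect to see the new
        evidence if the event is going to happen than if it isn't.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)


# Hypothetical example: you think an event is 30% likely, then see a news
# item you judge twice as likely in worlds where the event is coming.
updated = bayes_update(0.30, 2.0)
print(round(updated, 3))  # 0.462
```

Note that the update moves you up, but nowhere near certainty: both the prior scaffolding and the new evidence get their appropriate weight.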
The second thing is to think of this scaffolding as modular, built up piece by piece, like IKEA furniture. Thinking this way means you can construct and deconstruct your view of the future as changes occur. The future is basically the whole scaffolding: the combination of all these different events interacting. So you build up your initial view like a scaffold, with beams and ledgers. Then, when something changes, it is easy to adjust: you just make one beam thicker or thinner, or more or less load-bearing. This works in both directions: you can construct your view of the future from a set of disparate events, and you can also deconstruct it the same way. By breaking things down into smaller components, you can forecast even amorphous future trends, because small components are a whole lot easier to assign likelihoods to. This gives you more clarity on the issue. The technique is related to Fermi-izing, after Enrico Fermi, one of the creators of the first nuclear reactor. In superforecasting, we take a very broad question and pose a set of sub-questions that examine the overall trend from multiple perspectives, such as political, economic, and geopolitical, and that together give an answer to the broad question. Take, for example, how the China-Taiwan relationship will evolve in the coming years. Sub-questions could include "Will Taiwan accuse China of flying a military aircraft over Taiwan without permission before 31 December 2021?" and "Will Taiwan send any athletes to compete in the 2022 Winter Olympics in Beijing?"
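One simple way small components combine is when a broad outcome requires several steps to all occur. The sketch below uses entirely hypothetical step names and probabilities, and it assumes the steps are independent (a real forecast should question that assumption), but it shows why decomposition clarifies: each small probability is easier to judge than the headline number.

```python
# Hypothetical decomposition of a broad question into necessary steps.
# Names and numbers are made up for illustration only.
steps = {
    "technology matures": 0.6,
    "regulation allows deployment": 0.7,
    "costs fall below incumbents": 0.5,
}

# If the outcome needs every step, and we (naively) treat the steps as
# independent, the combined probability is the product of the parts.
p_all = 1.0
for name, p in steps.items():
    p_all *= p

print(f"Combined probability: {p_all:.0%}")  # Combined probability: 21%
```

Notice how three individually plausible steps compound into a much lower headline probability, which is exactly the kind of clarity decomposition buys you.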
Thinking About the Future like a Mathematician
The second approach is that of a mathematician. What do I mean by thinking about the future like a mathematician? I mean having a view of the future that you are able to quantify. When we talk about the future and our views on it, we often use a very limited vocabulary: things that may happen, that we think could happen, or that are perhaps very unlikely to happen. This lack of precision is highly damaging when it comes to planning for the future. Staying in the realm of maybe/possibly/unlikely doesn't give you enough guidance to go on. We need to turn to numbers to plan better. Expressing the likelihood of something happening as a probability between 0% and 100% makes it far more actionable.
Just consider the story of JFK, in the midst of planning the Bay of Pigs invasion, asking the Joint Chiefs of Staff about the likelihood of success. They told him it had a "fair chance of success". He took this as a positive assessment, but they actually meant it to signify odds of 3 to 1 against. The invasion went ahead, and it failed disastrously. Moving to probabilities doesn't need to be very difficult. As a starting point, you can simply translate, say, "improbable" to around 15%, or "expected" to 75%. Then you can get more precise when possible. It turns out that superforecasters make much greater use of precise probabilities such as 21%, rather than round numbers like 20%. But even going from "unlikely to occur" to, say, 30% is a great improvement in precision that will pay off. This leads to the second aspect of thinking like a mathematician: thinking in a majority scenario and tail-risk scenarios.
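As a starting point, you can keep a personal translation table from verbal hedges to numbers. The 15% and 75% values below come from the text above; the other entries are purely illustrative choices of mine, not any official standard, and you should calibrate your own over time.

```python
# Illustrative mapping from verbal hedges to starting-point probabilities.
# "improbable" and "expected" follow the values in the text; the rest
# are example calibrations, not a standard.
VERBAL_TO_PROB = {
    "almost certain": 0.93,
    "expected": 0.75,
    "fair chance": 0.50,
    "unlikely": 0.30,
    "improbable": 0.15,
    "remote": 0.05,
}

print(VERBAL_TO_PROB["improbable"])  # 0.15
```

The point is not the exact numbers but the discipline: once "fair chance" is pinned to a figure, a JFK-style misreading becomes much harder.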
I have found it useful to start a forecast by developing three scenarios: the continuation of the status quo, and the two tail risks. The status-quo continuation is what will happen if current trend lines continue, taking into account where we are now and known upcoming events. When in doubt, you should go with the status quo. Some things in the world change quickly, like technology, but the fundamental things, such as human behavior, preferences, and biases, change very slowly; they take millennia to evolve. Recognizing these fundamental, deeply human forces helps inform the middle path. It doesn't mean that things will look exactly as they do now, but that the rate of change stays the same.
Here we can utilize something called the base rate: how often something has happened in the past. Establishing a base rate for an event is often one of the first things you should do when forecasting. Say you want to ascertain the likelihood of a political coup in a country in the coming year. A great starting point is to look at how frequently coups have occurred in the past. If there have been 5 coups in the last 80 years, that means an average of one coup every 16 years, which translates to a probability of 1/16 for any given year, or about 6%. Without knowing any other information, you can take 6% as the starting-point likelihood of a coup in any given year. That is the base rate. Then you can adjust the number up or down based on specific factors that make this year different from others, but with just the starting point, you already know that we are likely not in the realm of, say, 85%.
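The arithmetic is simple enough to write down, using the coup numbers from the example above:

```python
def base_rate(events: int, years: int) -> float:
    """Historical frequency of an event, as a starting-point annual probability."""
    return events / years


# Five coups in the last eighty years, as in the example above.
p = base_rate(5, 80)
print(f"Base rate: {p:.1%}")  # Base rate: 6.2%
```

That 6% (more precisely 6.25%) is only the anchor; the adjustment for this year's specific factors is where judgment comes in.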
Does this only work for events that have happened before? No. Even when something seems on the surface to be a completely new kind of event, something similar will have happened in history. History repeats itself, and everything has an analogue. Take the adoption of nuclear fusion. This clearly has not occurred before, but there are analogues. The process of developing the technology has its own characteristics, but those may resemble other technological breakthroughs, like quantum computing. And the process of reaching widespread adoption has its own characteristics, but those may resemble other energy adoption processes, such as solar. Putting those together, you can get pretty far.
After you have established the base rate, you can think through the edge scenarios to get a sense of the tail risks. These are events that are much less likely, in the range of 5-10%. Take the Russia-Ukraine war: the status-quo continuation is a continuation of the stalemate we have seen over the past months. Then there are positive and negative tail risks. On the positive side, there is perhaps a 5% likelihood (probably less) of a peace accord within a year, say; on the negative side, there is around a 5% or lower likelihood of Russian use of nuclear weapons. Thinking in these three buckets instead of across the whole spectrum greatly simplifies the effort of deciding on one's forecasts.
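One sanity check on the three-bucket view is that the scenarios should cover the whole space, i.e. their probabilities should sum to roughly 100%. A sketch, where the two 5% tails come from the example above and the 90% status-quo figure is simply what remains:

```python
# Three-bucket scenario view. The tail probabilities follow the example
# in the text; the status-quo figure is the remainder, for illustration.
scenarios = {
    "status quo (stalemate continues)": 0.90,
    "positive tail (peace accord)": 0.05,
    "negative tail (nuclear use)": 0.05,
}

# The buckets should exhaust the possibilities.
total = sum(scenarios.values())
assert abs(total - 1.0) < 1e-9

for name, p in scenarios.items():
    print(f"{name}: {p:.0%}")
```

If the buckets don't sum to one, you have either missed a scenario or double-counted one, which is itself useful to discover.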
Thinking About the Future like a Meditator
The third approach is that of a meditator. What do I mean by thinking about the future like a meditator? It means staying calm and avoiding overreactions to things that don't matter. In meditation, we practice noticing everything: any sensory impression, from a bodily sensation to a thought in the mind, a smell, or an emotion. We train ourselves to notice the smallest things, such as a change in the feeling of temperature on the skin. Importantly, however, we also train not to cling to these. The art lies in noticing, but not holding on to anything.
This is highly relevant to how we form our views of the future. It is very important to notice all the signals that are out there, but not to over-update on them. You should read widely and skim lots of sources, but treat each individual piece of information with scepticism and not jump to conclusions. This is what we practice as superforecasters: looking for knowledge inside the torrent of information. A good recent example is AI timelines, and especially AGI (artificial general intelligence). Recent progress in AI has been astounding; even experts have been taken by surprise by the capabilities of GPT-3.5, GPT-4, and other models. So clearly the timelines for AGI have moved closer. But the timelines in a Metaculus question on this topic moved closer by decades in just a few months, and the community prediction for weakly general AI is now in 2027. That looks like potentially over-updating on new information.
The second thing is that you can use the clarity of the meditating mind to look for a surprise factor. When you read the news, ask yourself: does this surprise me? Does this in any way lead me to update my views? I looked at a Wall Street Journal newsletter the other month and realized that most of what was in it had zero knowledge value. An article about the misuse of funds in some government agency: no surprise. An article about the iPhone disrupting another industry: no surprise. But then there was a piece on sudden progress on the Brexit Northern Ireland issue, which might actually be worth paying attention to. So always look for knowledge value in the overflow of information, and don't overreact.
Those are three potential approaches for thinking about the future differently – the architect approach, the mathematician approach and the meditator approach.