Reading the 2023 Batch of Forecasting Reports – Part 1
This is the time of year when we are awash in reports full of predictions, forecasts, trends and risks for the year ahead and beyond. Having now gone through these, this is the first post in a two-part series. This one is on process, i.e. what makes some of these more valuable than others; next week’s will be on content, i.e. what commonalities there are across all of the forecasts.
It is striking how much these reports differ in the amount of knowledge they provide. I therefore wanted to analyze the “Knowledge Quotient” of these different reports, i.e. how much useful knowledge they actually contain, and what makes some of them more valuable than others. These reports all share the common goal of giving the reader more certainty about the future so they can make plans with more confidence. But there are plenty of differences.
First there are the surface-level differences in terms of format, source, etc.:
· Some make use of probabilities, like Vox and Scott Alexander, which is of course preferable and value-adding, while most (e.g. Quartz) don’t
· Most of them are for one year out, but a few do take longer timeframes, which is laudable. WEF has both 2 and 10 years out, Atlantic Council goes 10 years out and NC State/Protiviti go 1 year and 10 years out.
· Some are surveys conducted by a publication, such as the FT’s predictions from UK economists or the Atlantic Council, while some have predictions from readers (Axios) or are a collection of forecasts from elsewhere (Nonrival). Most, however, provide predictions from a set of the analysts and writers at the publication, like Vox’s predictions, and those are probably the most valuable.
· Some hold themselves accountable and look back at earlier years’ predictions (Scott Alexander, Scott Galloway), while most just provide a new set of predictions without giving any basis for how much credence we should attach to their forecasts.
· Some limit themselves to a certain domain, such as tech (Deloitte) or markets (A VC), while others span all domains (like Scott Alexander). Both approaches are of course valuable.
Then some other interesting differences arise:
Trend ≠ Risk
An interesting finding is the confusion in terminology between trend and risk. They really should be complete opposites – risk is the absence of certainty, while a trend is something about which we have some certainty, since it’s a pattern we’re currently observing in the world. Some use these fairly interchangeably, however, like Eurasia, which lists things that are trends, such as the US getting more divided or Generation Z growing more activist, and refers to them as risks.
Prediction ≠ Forecast
I find that a prediction may not always be the same as a forecast. The words seem to be used mostly interchangeably, but there is more knowledge in the forecasts. Predictions tend to be the more sweeping, higher-level statements, while forecasts are more precise. When Vox or Scott Alexander, for example, apply actual probabilities to their statements, they are forecasts.
Timeframe Insensitivity
There is significant timeframe insensitivity. Some of the pieces looking further out, like NC State/Protiviti and Atlantic Council looking at 2032 and 2033, show a lack of foresight and/or imagination, since they project a world very similar to the one of today. In the NC State/Protiviti report, for example, the top risk in 2033 is the same as in 2023 – a lack of talent. While I’m sure there will be lots of skills needed in organizations in 2033 that the workforce may not possess, the current progress in AI means that work, the workforce and the labor market should look very different a decade from now, and it would be surprising if talent were still the top concern. Similarly, a 2032 top risk list that includes nothing on either climate or AI, arguably the two main influencing factors of the century, suggests a lack of foresight.
This kind of timeframe insensitivity, of failing to take into account the increased uncertainty of a later date, is quite common. If forced to give a percentage probability, people will say 10% whether the window is between now and 2024 or between now and 2025. Metaculus has a good set of question series where we can check for this and see whether later dates get appropriately more uncertain forecasts (which doesn’t always mean they have to move closer to 50%; it could just be that the confidence intervals widen).
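The horizon effect described above can be shown with a minimal sketch. The 10% annual figure and the independence-across-years assumption are mine for illustration, not taken from any of the reports: if an event has some chance of occurring each year, the probability that it happens by a later deadline must be strictly higher than by an earlier one, so quoting the same number for both windows ignores the horizon.

```python
# Illustration only: assume a constant 10% chance of the event
# occurring in any given year, independently across years.
def prob_by_horizon(annual_prob: float, years: int) -> float:
    """P(event occurs at least once within `years` years)."""
    return 1 - (1 - annual_prob) ** years

p_by_2024 = prob_by_horizon(0.10, 1)  # one-year window: 0.10
p_by_2025 = prob_by_horizon(0.10, 2)  # two-year window: 0.19
```

Under these assumptions the two-year probability is necessarily larger, which is the check one would run against a forecaster who quotes the same number for both horizons.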
Accuracy-Editorializing Tradeoffs
Some seem to trade off accuracy for editorial bias. The WEF report, for example, while being a survey, suggests some editorializing in its focus on environmental risks. While it is of course completely necessary to draw more attention to the environmental risks we face, including not a single economic risk in the top 10 for the next two or 10 years suggests at least some blinders or survey design errors.
Conclusion
Overall, then, there is knowledge about the future to be gained by reading some of these, for sure, but it is limited. The easiest way to model the world is as a series of baseline trends that will continue into the future unless they are disrupted. Superforecasting is all about understanding how likely the status quo trend is to prevail and what the probabilities are of disruptions that would interrupt it. Most useful, therefore, are pieces that summarize the current world in terms of trends, coupled with pieces that provide specific forecasts on disruptive events that could interrupt those trends or shift their direction. Probabilities on trends continuing are nice to have, but not essential, while forecasts of disruptive events really benefit from precise probabilities. Unfortunately, most top risk/top 10 trends/2023 predictions lists are neither fish nor fowl, but rather something in between, such as loose speculation (e.g. Meta may split up) or high-level summaries trying to do justice to the dynamics of whole countries (Eurasia’s discussion of everything going on in Iran or China).
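The trend-plus-disruptions framing above can be sketched in a few lines, assuming for simplicity that the disruptive events are independent. The event names and probabilities here are invented for illustration, not drawn from any of the reports: the baseline trend prevails only if none of the disruptions occurs.

```python
# Invented, illustrative disruption probabilities for a single trend.
disruption_probs = {
    "major_policy_reversal": 0.15,
    "technology_shock": 0.10,
    "economic_crisis": 0.20,
}

# Assuming independence, the status quo prevails only if no disruption
# occurs, i.e. the product of the complements of each probability.
p_status_quo = 1.0
for p in disruption_probs.values():
    p_status_quo *= 1 - p

# Here: 0.85 * 0.90 * 0.80 ≈ 0.61
```

This is also why precise probabilities on the disruptive events matter more than probabilities on the trends themselves: the trend's survival probability falls out of the disruption estimates.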
Appendix
Here is the full list of reports I reviewed for this piece:
· Vox
· World Economic Forum/Marsh McLennan
· FT
· Quartz
· Eurasia
· Nonrival
· CNN
· Deloitte
· A VC
· CFR
· Axios
· PGIM
· GS
· HBR
· Nesta
Next week, I will look at whether there are any conclusions to be drawn about the majority opinion among these for major 2023 events.