1. Evolutionary Factors
C. Uncertainties, Unknowns, Opportunities, Wildcards and Risks
Mapping potentially relevant uncertainties (variables that are known, but with poorly bounded values) and unknowns (variables that are unknown or unbounded), by surveying or doing a Delphi with a cognitively diverse group of stakeholders, will offer the foresighter many additional possibilities. We can ask our stakeholders what outcomes or issues they worry about or fear, and what things they know little about but nevertheless suspect may turn out to be relevant.
Uncertainties and unknowns can often be narrowed by doing some research or learning. We can brief others on our findings and step through a quick survey or Delphi to see if there is any consensus on them, or take them mentally through a Do loop (learn, see, do, review) to see what kinds of action items and feedback they generate. As our foresight grows, we can subdivide and better characterize many of them into other foresight categories (opportunities, risks, wildcards, etc.).
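To make the consensus-checking step a bit more concrete, here is a minimal sketch, in Python, of one way to summarize a Delphi round: collect stakeholder estimates for a single uncertainty and use the median and interquartile range as a rough signal of convergence. The question, the estimates, and the consensus threshold are all hypothetical placeholders, not prescriptions from this guide.

```python
# A minimal sketch of checking for consensus after a Delphi round: gather
# stakeholder estimates for one uncertainty (e.g. "years until X"), then use
# the median and interquartile range (IQR) as a rough consensus signal.
# All names, numbers, and the threshold below are hypothetical placeholders.
from statistics import median, quantiles

def consensus_summary(estimates, iqr_threshold=3.0):
    """Summarize one Delphi round: the median answer, the spread (IQR),
    and whether the spread is narrow enough to call a rough consensus."""
    q1, q2, q3 = quantiles(estimates, n=4)   # quartiles of the estimates
    iqr = q3 - q1
    return {
        "median": median(estimates),
        "iqr": round(iqr, 1),
        "consensus": iqr <= iqr_threshold,
    }

if __name__ == "__main__":
    round_one = [5, 8, 10, 12, 15, 20, 30]   # wide spread: no consensus yet
    round_two = [8, 9, 10, 10, 11, 12, 13]   # estimates converging
    print(consensus_summary(round_one))
    print(consensus_summary(round_two))
```

In a real Delphi, the summary (and the reasoning behind outlying estimates) would be fed back to the group before the next round, rather than simply averaged away.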
Opportunities are possibilities that we value. Discussing these, we’ve moved into preference foresight. The way we generate opportunity lists and maps today is typically very evolutionary: bottom-up, contingent, and subjective. That kind of approach probably makes the most sense in most circumstances. Developmental approaches to opportunities include methods like real options analysis, which seek to quantify the relative probability of outcomes and the relative value of competing business investments. These are much harder to do in our current computationally and quantitatively weak state of culture. Once we have reasonably smart personal AIs, we can presume that more quantitative and predictive approaches to opportunities will grow rapidly.
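To illustrate what a more quantitative, developmental treatment of an opportunity can look like, here is a minimal sketch, in Python, of the simplest real-options idea: the option to defer an investment until some uncertainty resolves. All of the figures (project values, probability, cost, discount rate) are hypothetical placeholders, not numbers from this guide, and a full real options analysis would of course be considerably richer.

```python
# A minimal sketch of a one-period "option to defer" comparison, the simplest
# building block of real options analysis. All inputs are hypothetical.

def npv_invest_now(value_today, cost):
    """Net present value if we commit to the investment immediately."""
    return value_today - cost

def value_of_deferral(value_up, value_down, p_up, cost, discount_rate):
    """Expected value of waiting one period and investing only if the
    project turns out well (the 'option to defer')."""
    payoff_up = max(value_up - cost, 0.0)      # invest only when worthwhile
    payoff_down = max(value_down - cost, 0.0)
    expected_payoff = p_up * payoff_up + (1.0 - p_up) * payoff_down
    return expected_payoff / (1.0 + discount_rate)

if __name__ == "__main__":
    now = npv_invest_now(value_today=100.0, cost=95.0)
    wait = value_of_deferral(value_up=140.0, value_down=70.0,
                             p_up=0.5, cost=95.0, discount_rate=0.10)
    print(f"Invest-now NPV:          {now:.1f}")
    print(f"Defer-and-decide value:  {wait:.1f}")
    print(f"Option value of waiting: {wait - now:.1f}")
```

Even this toy comparison shows the developmental flavor of the method: it forces explicit estimates of probabilities and payoffs, which is exactly the part that is hardest to do well today.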
The same is true for risks, which are possibilities we want to avoid. For most of us, risk mapping and insurance are quite evolutionary, bottom-up, and subjective. See Hubbard’s The Failure of Risk Management (2009) for more. The more imaginative and cognitively diverse our group, the better we can see the risk landscape. But risk management also has a large number of formal methods, many of which involve estimating probabilities. While the practice is predominantly evolutionary at present, it may become predominantly developmental once the AIs arrive.
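As a small illustration of what a probability-estimating formal method can look like, here is a minimal Monte Carlo sketch, in Python, of annual-loss estimation across a short risk register. The two risks, their occurrence probabilities, and their loss ranges are hypothetical placeholders; the point is only the shape of the method, not the numbers.

```python
# A minimal Monte Carlo sketch of quantitative risk estimation: simulate many
# possible years, and in each one randomly decide which risks occur and how
# much they cost. The risks, probabilities, and loss ranges are hypothetical.
import random

def simulate_annual_loss(risks, trials=100_000, seed=42):
    """Estimate total annual loss across a risk register.
    Each risk is (probability_of_occurrence, low_loss, high_loss)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0.0
        for prob, low, high in risks:
            if rng.random() < prob:              # does this event occur this year?
                total += rng.uniform(low, high)  # if so, draw a loss in its range
        totals.append(total)
    totals.sort()
    mean = sum(totals) / trials
    p95 = totals[int(0.95 * trials)]             # 95th-percentile annual loss
    return mean, p95

if __name__ == "__main__":
    risk_register = [
        (0.05, 200_000, 1_000_000),  # e.g. major supplier failure
        (0.30, 10_000, 80_000),      # e.g. minor service outage
    ]
    mean, p95 = simulate_annual_loss(risk_register)
    print(f"Expected annual loss:  ${mean:,.0f}")
    print(f"95th-percentile loss:  ${p95:,.0f}")
```

The hard part, as Hubbard argues, is not the simulation but calibrating the input probabilities and ranges, which still depends on the imagination and diversity of the group doing the estimating.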
Recall that we discussed wildcards, or low-probability, high-positive or high-negative impact events, in Chapter 6. By definition, they are a special subset of probable futures. So why list them as an important factor in evolutionary foresight? It turns out that people who are very good at looking at the world from creative, evolutionary perspectives are the best at finding and mapping wildcards. Because they are low probability, it often takes a creative thinker, willing to sift through many possibilities, to uncover them. At that point we need a developmental frame of mind to categorize them as low probability (a wildcard) or higher probability (an opportunity or risk). But looking for wildcards typically starts in an evolutionary manner. Finding wildcards and choosing to take them seriously (rather than dismissing them) seem to me to be the hardest parts of evaluating them. Evolutionary thinkers tend to do both of these things very well. As mentioned, two good books on wildcards are futurist John L. Petersen’s Out of the Blue (1997) and Nassim Nicholas Taleb’s The Black Swan (2010).