Chapter 1. Introduction – Our Emerging Foresight Field

Challenge 4 – Making Critical Judgments

Beyond what we’ve just said about soft trends, we won’t say much more about possibility foresight in this introductory chapter. Our current challenge, as a profession, is to move away from the increasingly exclusive focus on exploring possible futures that has dominated since the 1980s, and to return to doing much more of the other two fundamental foresight types, probability foresight and preference foresight, as we did in the 1960s and 1970s. We’ll say more about the changing nature of the foresight profession in a mini-history of our profession, Six Eras of Foresight Practice, later in this chapter. So let’s move on now to preference foresight, the third fundamental foresight type in the Three Ps model.

Foresight that changes our strategy, plans, and actions always involves critical judgment. We express particular values, goals, models, and strategies in these judgments, which influence what we think should happen next. Historically, our field has called this judgment-based work normative foresight. Unfortunately, there has been a noticeable trend away from this kind of foresight work in recent years. Perhaps because it involves conflict and criticism, two critical processes that can make people uncomfortable at first, foresight professionals have become less interested in making explicit value judgments in their work. This reluctance to make judgments has weakened the influence of our field. It’s time for futurists and foresighters to get more comfortable making explicit value judgments, and to have those judgments continually assessed and critiqued.

Remember that in the Three Ps model, strategic foresight always ends with preference foresight. That means it delivers a particular point of view or set of views, in competition with other views. Good preference foresight strives to make its assumptions and judgments explicit, to provide evidence and argument for those judgments where possible, and to subject those judgments to good criticism from stakeholders. It also tries to influence others toward those views using whatever ethical means seem most likely to be effective. It isn’t done from the sidelines, from a falsely “unbiased” perspective. Preference foresight tells us there are no spectators in the judgment process, only participants.

As the futurist Buckminster Fuller liked to say, you can’t get an unbiased (value-free) education, so the next best thing is a multi-biased education: one that exposes you to a range of strongly held and often conflicting preferences and world views, the evidence for each as it exists (or does not) today, and the areas of agreement and conflict between those views, so that we can best form our own preferred values, goals, models, and strategies. The best way to form our own adaptive judgments and world view is to understand the competing and cooperating preference maps, agendas, and investments that exist in any domain, to empathize with all of the actors, and to communicate with them in their own terms, even when we disagree with them.

Now it is true that foresight professionals sometimes only frame the issues, and we often hold back our opinions at first, challenging our clients to come up with their own solutions. We are also particularly motivated to uncover a range of alternative futures, to show the choice and uncertainty ahead. But even when we withhold our personal recommendations for action, which makes sense with some clients, particularly early in our engagements, don’t fool yourself into thinking that we professionals aren’t taking a position.

The ways we frame, define, and filter the issues, who is in the room, and the ways we facilitate discussion all involve unavoidable critical judgment. There is no escaping making personal judgments in foresight work, based on our implicit or explicit goals and values. Taking a position in a competitive world requires the courage of your convictions, is always based on imperfect information, and invariably creates conflict. Get used to it. Dealing with the future is always a social and political process. Try to be humble, compassionate, and evidence-based in your judgments, and with the right clients, those judgments will go far. When they don’t, you probably should switch clients.

A neutral point of view, like that found on Wikipedia, or proposed in the written values (neutrality, imagination, expertise) of the World Future Society, is a good starting point in learning about the world. But you’d better not end up there with your foresight work. If you do, you will fail to develop your own unique strategic viewpoint and specific recommendations for change. Good foresight always goes beyond a survey of the probability and possibility landscapes to recommend continuation or change in our or our client’s strategy, plans, and actions.


A good Wikipedia strives for a neutral point of view. A good Futurepedia must be more like a prediction market.

Unlike Wikipedia, a good Futurepedia would offer many competing judgments, or Schools of Thought, on issues of future importance, with the best evidence, argument, and testable hypotheses on offer for each. With such a platform we’d all be better able to distinguish between the interesting but improbable future ideas that we often like to discuss for their entertainment or philosophical value, and that special subset of high-value and high-probability outcomes. We’d also be able to discover the preferred subset that is technically, economically, and politically feasible and promises to solve truly important human problems.

A good Futurepedia’s pages would first strive to take an evidence-based, probabilistic, predictive, and partly quantitative approach to each issue, using a cognitively diverse, scientifically minded, and critical crowd. It would next strive to outline possibilities within that probable framework. Finally, it would offer starter maps of preferences, agendas, and investments in relation to that issue, as they presently appear. It should also be as open, cognitively diverse, digital, and globally accessible as possible, with both reputation and financial incentives for those who take winning positions on future outcomes. Its users would self-educate on both expert and lay crowd foresight using the platform, and develop stronger, ongoing critical judgments.
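Rewarding those who take winning positions on future outcomes implies some way of scoring probabilistic forecasts once outcomes are known. One standard choice in forecasting tournaments is the Brier score (lower is better). The sketch below, in Python, is only an illustration of how such scoring might work; the forecasters and forecasts in it are hypothetical:

```python
# Brier score: mean squared error between probabilistic forecasts
# and binary outcomes (1 = the event happened, 0 = it did not).
# Lower scores indicate better-calibrated judgment.

def brier_score(forecasts, outcomes):
    """Average squared distance between forecasts and what actually occurred."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters taking positions on the same three questions.
# Forecaster A is confident and mostly right; B hedges at 50% on everything.
a = brier_score([0.9, 0.8, 0.1], [1, 1, 0])  # 0.020
b = brier_score([0.5, 0.5, 0.5], [1, 1, 0])  # 0.250
print(f"A: {a:.3f}  B: {b:.3f}")  # A scores lower (better) than B
```

A platform tracking such scores over many questions could build the reputation incentives described above, separating well-calibrated judges from confident but inaccurate ones.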

Physicist, futurist, and author David Brin is fond of saying that “criticism, or reciprocal accountability, is the only known antidote to error.” By inviting and engaging in constructive criticism, a form of mental conflict, and listening with humility and an open mind, we make our judgments more accountable. If you are lucky, your future views, plans, and actions will receive a good deal of well-meaning criticism. Ideally, criticism will come first from colleagues, who will be constructive and on whom you can test your ideas, but you also want to seek criticism from clients and from the general public. The better you listen, the more you’ll learn all the ways you are wrong, how to improve and qualify your views and language, where you need to rethink your position, where you need more evidence for it, and how to advocate more persuasively for whatever views and ideas remain standing.

I have come to a number of my own critical judgments in my years as a practicing foresight professional, some of which my coauthors and colleagues may not share. I do hope that some of these are “good judgments”, as forecaster Philip Tetlock, director of The Good Judgment Project, might define them. We shall see. I welcome you to criticize my judgments, so we can both benefit from your criticism. I hope you will find some value in my judgments, even if you do not share them, and in my stories, even if you are not convinced of them. I think it would have been less valuable to you as a reader if I had tried to write from an “unbiased” perspective. It would certainly have been less valuable to me.

One great benefit of continually making our judgments explicit, both our tentative ones (most of our judgments) and our firm ones (a much smaller set of assumptions, beliefs, and visions), and of soliciting criticism of them from our peers, is that it helps us distinguish between weeble (probable, well criticized, yet still standing), possible (plausible, experimental, still untested), and faulty (popular but very improbable) ideas and stories. That in turn allows us to craft much more adaptive visions (motivating, aspirational futures). Let’s turn to that challenge next.
