2. Developmental Factors
A. Associations, Trends, Dependencies, and Constraints
Associations are simply correlations between two things. As we all know, correlation is not causation, but it starts us on the trail of causal relationships. Longstanding or high-probability associations in variables that seem critical to the system or future in question may even lead us to find causes, forces, or relationships that appear broadly or even universally optimal or developmental (see Systems Laws in the next section). Foresighters who cultivate a data-driven, investigative approach (Skill 1: Learning) will find many potentially relevant associations.
Trends are quantitative associations between variables, followed over time. Time series analysis and forecasting are foundational to all good foresight work. When we do societal and technical forecasting, it is always hard to know how long the association may continue to hold. Both investors and futurists know the phrase “The trend is your friend, until it ends, or bends.” Any trend, particularly a short-term trend like we see in entertainment, consumer culture, or fashion, may bend or reverse itself at any time.
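As a toy illustration of trend extrapolation, here is a minimal sketch that fits a straight line to a hypothetical annual series and projects it forward; the data, the linear model, and the variable names are illustrative assumptions, not a recommended forecasting method.

```python
# Minimal trend-extrapolation sketch (illustrative assumptions only).
# A real forecast would compare models and report uncertainty, since
# any trend may bend or end at any time.
import numpy as np

years = np.arange(2015, 2025)
values = np.array([10.2, 11.0, 11.9, 13.1, 14.0, 15.2, 16.1, 17.3, 18.2, 19.5])

# Fit value = slope * year + intercept by ordinary least squares.
slope, intercept = np.polyfit(years, values, deg=1)

# Naively extrapolate a few years forward.
future_years = np.arange(2025, 2030)
forecast = slope * future_years + intercept

for year, value in zip(future_years, forecast):
    print(f"{year}: {value:.1f} (naive linear extrapolation)")
```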
The 95/5 rule reminds us that the vast majority of social processes are evolutionary, and that evolution has no long-term predictable direction, other than greater diversity over time (think of the ramification of organismal diversity over billions of years, starting from a single cell type). Evo devo processes also have no easily predictable direction, other than greater adaptability over time. Whenever our trend describes a process in which evolution or adaptation is the primary driver, it may change as soon as the selective environment changes. Alternatively, when we suspect a trend is more developmental, like globalization, liberalization, dematerialization, densification, transparency, the number of internet nodes on the planet, Moore’s law, etc., we have reason to predict that it will last much longer, operate over a wider range, and continue even when the environment changes, because, like a developing organism, it contains its own internal drivers, stabilizing and manifesting it.
Foresighters have collected many rules of thumb for doing trend work. Here are three to start you off (a rough sketch of how they might be encoded follows the list):
- The longer any trend has run, and the more places we can find it, the higher the likelihood that it will continue.
- When looking for hidden trends and their drivers, start by looking back at least twice as far as you want to look forward.
- When doing trend extrapolation, don’t expect any current trend to hold for longer than half the time it has held to date.
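Here is a rough sketch of how these three rules might be encoded, assuming only that we know roughly when a trend began and in how many independent places it has been observed; the class, function names, and simple confidence score are hypothetical illustrations, not established foresight formulas.

```python
# Toy encoding of the three trend rules of thumb (illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Trend:
    name: str
    start_year: int       # roughly when the trend became visible
    current_year: int
    places_observed: int  # independent domains or regions showing it

def duration(t: Trend) -> int:
    return t.current_year - t.start_year

def confidence_hint(t: Trend) -> int:
    # Rule 1: longer-running, more widely observed trends deserve more
    # confidence. This score is an arbitrary illustration, not a probability.
    return duration(t) * t.places_observed

def min_lookback(forward_horizon_years: int) -> int:
    # Rule 2: look back at least twice as far as you want to look forward.
    return 2 * forward_horizon_years

def max_extrapolation(t: Trend) -> int:
    # Rule 3: don't expect the trend to hold for longer than half the
    # time it has held to date.
    return duration(t) // 2

moores_law = Trend("transistor density doubling", 1965, 2025, places_observed=4)
print(min_lookback(10))               # a 10-year forecast -> look back 20+ years
print(max_extrapolation(moores_law))  # 60 years observed -> at most ~30 more
```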
Dependencies, also called path dependencies, are system conditions that begin as free evolutionary choices, but which quickly become sunk costs, predictable constraints on future possibilities, due to the high cost of switching after the choice has been made. In biological evolution, many initially contingently discovered functionalities become reusable components of more complex systems. They become harder to change with time, and eventually become modules on which hierarchical complexity is built. Modularity is thus a key example of path dependency.
In a small minority of cases, some of these modules are developmentally optimal from the beginning. Recall the 95/5 rule, and our discussion of portals. Organic chemistry and RNA, for example, may represent ideal developmental modularity. But in the large majority of cases, modules which started as evolutionary choices become developmental, sometimes for long periods of time, even though they are not ideally optimal. “Lock-in,” or path dependency, can occur in both evolutionary and developmental systems, once they become integral parts of larger systems.
For human biological development, think of male nipples, and their vestigial milk ducts (which leak milk in up to 5% of all babies when they are born, even in a few males). Males no longer use this module, which was evolutionarily discovered. At some point in our distant past, when food supplies were scarcer, this evolutionary choice became part of development, and it has been hard for evolution to reverse out of this module in recent millennia. Some kind of lock-in occurred at the genetic level (perhaps these genes are used in other ways in the human body), and so the module persists in development.
For human social development, which side of the road we initially choose to drive on is a classic example of an evolutionary choice that becomes a developmental path dependency. Either side works, in different cultures, but in each culture, once we have a certain number of drivers, vehicles, and roads adapted to one of the two, path dependency occurs. There may also be a standards war as different regions have to integrate, as happened when independent railroad lines all eventually linked up and had to standardize their track and locomotive sizes, or when VHS and Betamax finally had to collapse to one standard, as the cost of producing for two became increasingly prohibitive as the number of video titles grew.
For technology development, think of the typewriter keyboard layout we first chose to mass produce, and many other social, economic, political, and legal choices that soon become imposed or de facto standards. Path dependencies tell us why we won’t leave the QWERTY keyboard, get everyone in the world driving on the same side of the road, or easily move off of the Windows and Apple operating systems, or the various internet and web protocols, anytime soon.
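One way to see how an initially free choice hardens into a dependency is with a toy increasing-returns simulation of two competing standards, in the spirit of the well-known urn models of technology lock-in; the reinforcement rule and all parameters below are illustrative assumptions, not a model of any actual standards war.

```python
# Toy path-dependency simulation: two competing standards, where each new
# adopter favors the one with the larger installed base (network effects).
# With more-than-proportional attractiveness, one standard eventually locks
# in, even though the earliest choices are essentially coin flips.
import random

def simulate_standards_war(n_adopters: int = 50_000, seed: int = 0) -> tuple[int, int]:
    rng = random.Random(seed)
    a, b = 1, 1  # each standard starts with one early adopter
    for _ in range(n_adopters):
        # Superlinear reinforcement: attractiveness grows with the square of
        # the installed base, a crude stand-in for network effects.
        p_a = a**2 / (a**2 + b**2)
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b

for trial in range(5):
    a, b = simulate_standards_war(seed=trial)
    print(f"trial {trial}: standard A ends with {a / (a + b):.1%} of adopters")
```

Across different seeds, the early, essentially random choices tend to push different runs toward different winners, which is the essence of path dependency: the general outcome (one standard dominates) is predictable, but which standard wins is an accident of early history.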
But as natural intelligence (deep learning, etc.) continues to grow in our machines, the behavioral and engineering cost of trying new standards, and of integrating to the most efficient and useful standards, will keep going down, allowing some of our longstanding local dependencies to disappear. With some dependencies, we’ll see more evolutionary free choices again, while with others, we’ll see a predictable convergence to a (at least near-term) global optimum (see next section) instead.
For example, it is easy to predict that English will be the dominant winner in language learning, in the early days of naturally intelligent agents on our wearable computers. Children will be able to learn any foreign language from their smart agents, from birth, at the same time that they are learning their local language. English being the global language of business, having far more words, and being much easier to learn than the closest economic contender, Chinese, all ensure English will get the lion’s share of new learners, even as all the major languages get new learners in coming years. But once agents get really smart, it isn’t at all clear that English will continue to dominate. Very smart agents may eventually invent and teach us a new Global Language from birth, something as logical as Latin, with words expressing far more diverse and precise concepts, and phonemes drawn from all the various languages, optimized for our anatomy. At that point, English will finally be retired, a developmental dependency replaced with an even more optimal one. An NI-built Global Language will eventually allow us all richer, faster, and more precise communication, both with each other and with our machines, than anything that exists today. Those of us who do performance work with our computers, “programming” them by our conversations, may all use such languages, which will be the programming languages of an NI world. The vision of the Polish ophthalmologist and peace futurist Ludwik Zamenhof, who invented Esperanto in 1887, will have finally arrived, more than a century after he started work on it, by a path he may not have anticipated.
Will the US be able to go metric by 2050? Or will we still be stuck with English units over that timeframe? Your answer may depend on when you expect intelligent machines to emerge, as that will greatly lower the difficulty of getting out of this and many other suboptimal (evolutionary) path dependencies. By 2050, how many people will have overcome the suboptimal path dependency of using their little-spoken national language as their primary language, having switched to English or another major language as their most-spoken, and with their traditional language increasingly used only occasionally? Will anyone be using AI-developed languages by 2050? Fascinating questions.
Constraints are functional or structural limitations that appear to exist on the behavior of a system, limiting its dynamics or outcomes. They may eventually be understood in a quantitative, predictive, or causal manner, as probabilities, mathematical relationships, or scientific laws, but in the interim, they are simply limitations that we propose exist. Many examples of constraints have been argued in this book. Our biases and preferences form constraints on what we can see. There are social and moral constraints on our behavior. It’s hard to quantify and predict them, but we all know they exist, and we can also anticipate conditions when they are likely to fail. I’ve argued that NIs will be constrained to be hyperethical relative to biological humans, and that they will strive to maximize evo devo purposes like the Five Goals and Ten Values even more than we do.
The concept of developmental constraints allows us to entertain speculative ideas like the Transcension Hypothesis, which proposes that a Moral Prime Directive, a diversity-maximizing policy, emerges in all developing civilizations, due to the wisdom of allowing other civilizations to take their own unique evolutionary paths toward a common developmental destiny: meeting all other intelligent civilizations once they’ve advanced enough to transcend this universe, through some mechanism (black holes, wormholes, hyperspace) that we can only guess at today.
Constraints are particularly important and dangerous. Getting them right is a key to great foresight. See false constraints, or don’t see enough of the true constraints, and you’ve constructed a fantasy world. You can’t see what’s coming.
Recall the false constraints on human population and prosperity that several doomsaying environmental futurists of the 1960s and 1970s saw, in books like Paul and Anne Ehrlich’s The Population Bomb (1968), and the Club of Rome’s Limits to Growth (1972). By completely factoring out scientific and technological innovation and entrepreneurship, they could not anticipate the Green Revolution, even though it started in the 1930s in the US, and had yet to spread to the developing world when they wrote their jeremiads. They couldn’t see the obvious acceleration of prosperity and performance, due to their acceptance of false constraints. The Club of Rome’s systems model didn’t even have a factor for technological innovation. Fortunately, there were other futurists at the time, like the economist Julian Simon, who did not put such constraints on human ingenuity. See his amazing book, The Ultimate Resource (1981/1996), available in full text online.
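To illustrate why leaving out an innovation term matters, here is a toy comparison (emphatically not the actual World3 model) of resource-limited growth against a fixed carrying capacity versus one in which innovation slowly raises the effective capacity; the equations, rates, and starting values are illustrative assumptions only.

```python
# Toy illustration (not World3): logistic growth against a fixed carrying
# capacity vs. one that innovation expands a little each year.
# All rates and starting values are illustrative assumptions.

def project(years: int = 100, p0: float = 1.0, k0: float = 10.0,
            growth: float = 0.05, innovation: float = 0.0) -> float:
    """Population (arbitrary units) after `years`, with the carrying
    capacity growing at `innovation` per year (0.0 = no innovation term)."""
    p, k = p0, k0
    for _ in range(years):
        p += growth * p * (1 - p / k)  # logistic growth toward capacity k
        k *= 1 + innovation            # technology expands usable resources
    return p

print(f"fixed capacity:    {project(innovation=0.00):6.1f}")  # saturates near k0
print(f"2%/yr innovation:  {project(innovation=0.02):6.1f}")  # keeps climbing
```

Hold technology constant and the toy model saturates at what looks like a hard limit; let capacity grow even modestly and the "limit" keeps receding, which is roughly the disagreement between the doomsayers and futurists like Simon.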
Recall all the space futurists of the 1960s and 1970s, so utterly convinced that we would soon be going to the stars. They wouldn’t allow themselves to see a large set of natural constraints, like how incredibly expensive, dangerous, barren, and of little use going to space, or to other planets as they currently exist, will be for biological humans. They persisted with their visions, even as we learned all of these things intimately during the Apollo missions. Some futurists anticipated how much better adapted robots would be for space, but very few were willing to realize that machines would inherit space, not us, given all our biological constraints. Even today these spacefaring fantasies continue, championed by a new generation of visionaries like Elon Musk. I am confident, however, that as constraints like STEM compression, the vastly greater suitability of space for postbiological life, and the vast differential in learning ability between human and NI systems become clearer, we’ll see these visions refined. Mr. Musk will discover that all his earthbound ventures are vastly more profitable and useful than going to Mars with a handful of humans, which will have adventure, inspirational, and scientific value, but will never get us a Second Earth. By the time NIs can terraform Venus and Mars into Second Earths, we’ll have very different priorities as a species. Such terraforming might eventually happen, but it would be done as a science experiment by NIs, or as a favor for that small fraction of humans who choose to remain biological. It would have nothing to do with creating a backup of biological life. That function will be entirely taken over by the NIs. It seems clear that in just a few generations the vast majority of minds in our solar system, and soon after that the vast majority of humans, will be postbiological, no longer planet-bound, existing in new realms of inner space, not outer space.
We need to be very careful in our thinking about constraints.