4. Superexponential Growth (J-curves)
If you think long-term exponential growth is interesting and disruptive, there’s another kind of growth that is even more curious and potentially disruptive: faster-than-exponential (superexponential) growth. Unlike exponential growth, where the curve looks the same at every point, superexponential growth has one or more “knees” in the curve, places where growth suddenly switches from one exponential mode to a faster (or occasionally slower) one.
This kind of growth has been called “Hockey Stick” or J-curve growth, as, like a capital J, there is a period of low growth, a knee of the curve, and then a period where growth goes almost vertical for a time. Human population growth is a good example of superexponential growth. Unlike bacterial growth, which is a standard exponential curve (constant bend, constant doubling time), human population grew superexponentially from the advent of agriculture ten thousand years ago right up to the 1960s. Notice in the curve at right that there are a few “knees” in this kind of growth, places where the curve switches to an even faster exponential growth mode. The sharpest knee is seen from the 1700s to the 1900s, due to the great efficiencies unleashed by the Industrial Revolution. Another knee occurs in the mid-twentieth century, when growth became so fast it started to look vertical, tending toward infinity in a finite time. That state of affairs led eminent systems theorist Heinz von Foerster to publish a short essay in Science in 1960, Doomsday: Friday, 13 November, A.D. 2026, which extrapolated human population to infinity by 2026. That kind of extreme J-curve growth is called hyperbolic growth (H-curve), asymptotic growth, or a finite-time singularity. Von Foerster knew that couldn’t happen, but he wanted to point out the superexponential nature of human population growth at the time.
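To make the distinction concrete, here is a minimal mathematical sketch (an illustrative model, not von Foerster’s exact fit). In ordinary exponential growth the per-capita growth rate stays constant, while in hyperbolic growth the per-capita rate rises with the population itself, which is what produces a finite-time singularity:

```latex
% Exponential growth: constant per-capita rate r
\frac{dN}{dt} = rN
\quad\Longrightarrow\quad
N(t) = N_0 e^{rt}
\quad \text{(finite at every finite time, constant doubling time)}

% Hyperbolic growth: per-capita rate proportional to N itself
\frac{dN}{dt} = kN^2
\quad\Longrightarrow\quad
N(t) = \frac{N_0}{1 - k N_0 t}
\quad \text{(blows up at } t_s = 1/(k N_0)\text{)}
```

Any growth law whose per-capita rate increases with N, the signature of stacked positive feedback loops, sits between these two cases and is superexponential.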
Why was human population growth superexponential right up to the 1960s? Because as human civilization became more developed, additional positive feedback loops were layered on top of the basic exponential growth process of humans creating new adult humans in a relatively constant number of years. As mortality rates dropped with much better sanitation, and as food became far more readily available, surviving family sizes kept growing larger. That growing number of surviving family members was the main reason the standard exponential growth curve became superexponential.
Finally, by the middle of the 20th century, around the time of von Foerster’s article, social pressures to reduce family size and to have children a bit later in life began to be felt in almost every country. Causal factors include greater freedom for women, greater access to birth control, more availability of education, and increasing access to and competition for well-paying jobs. As we’ve mentioned, human population is now widely expected to top out at between 9 and 11 billion this century. Some analysts, like Deutsche Bank’s Sanjeev Sanyal, expect population to peak at just 9 billion by 2050, followed by a long decline. We can hope the maximum arrives that soon, but we can also take action to make these causal factors more widely prevalent in developing nations.
We’ve occasionally seen periods of J-curve growth in technologies as well. Consider human transportation technologies. In 1953, the US Air Force Office of Scientific Research commissioned a study to plot successive maximum speed curves for transportation technologies, each of which followed its own S-curve, and to use the “envelope curve” of all of these to see whether it could extrapolate where transportation speeds would go next. Technology forecaster Joseph P. Martino covers this study in An Introduction to Technological Forecasting, 1972. Martino asks us to note a few things. First, note the shape of the envelope curve on the way to 1953. Chemical rocket missiles had recently delivered far greater speeds than gas turbine (jet) aircraft engines, and those had gone much faster than earlier reciprocating aircraft engines, all the way back to horse power and the Pony Express. The envelope curve was superexponential (curving upward on a log plot) from 1750 to the 1940s. There is also a noticeable “knee” (a switch to even more rapid growth) in its superexponentiality at the transition to jet aircraft and missiles. Second, note that the extrapolated missile S-curve accurately predicted that we would reach satellite velocity in 1957 and escape velocity by 1959. Both of these events (Sputnik in 1957 and Luna 1 in 1959) happened right on time. If the Air Force had believed its J-curve, we might not have been so badly surprised by the Soviet space program. Third, note that the envelope curve mistakenly indicated, when extrapolated beyond missiles, that spacecraft should reach speeds of one thousandth of the speed of light by 2000 and even faster speeds afterward. Commercializable technology beyond missiles didn’t materialize, and that superexponential growth has ended, for now. We ran into physical and economic limits with our current generation of technologies. But again we see that superexponential growth can happen for a long time in our technologies, and it can be quite disruptive while it is occurring.
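To illustrate what an envelope curve is, here is a brief sketch with invented parameters (not the Air Force’s data or method): each transport technology is modeled as its own logistic S-curve in log speed, and the envelope is simply the maximum across technologies at each date. On a log plot, an envelope built from successively faster, more steeply rising S-curves bends upward, which is what superexponential looks like.

```python
import numpy as np

# Illustrative sketch only: three hypothetical transport technologies, each
# following its own logistic (S-shaped) curve in log10(top speed, mph).
# All parameters below are invented for illustration, not the study's data.
def logistic(t, lo, hi, midpoint, steepness):
    """S-curve rising from 'lo' to 'hi' in log10(speed), centered at 'midpoint'."""
    return lo + (hi - lo) / (1.0 + np.exp(-steepness * (t - midpoint)))

years = np.arange(1750, 1961)
techs = {
    "horse and rail era":  logistic(years, 1.0, 2.0, 1850, 0.05),
    "piston aircraft":     logistic(years, 1.0, 2.7, 1925, 0.15),
    "jets and missiles":   logistic(years, 1.0, 4.3, 1950, 0.30),
}

# The "envelope curve" is simply the fastest available technology at each date.
envelope = np.max(np.vstack(list(techs.values())), axis=0)

# Successively faster, steeper S-curves make the envelope bend upward on a log plot.
for y in (1800, 1900, 1945, 1957):
    print(f"{y}: envelope is roughly 10^{envelope[y - 1750]:.1f} mph")
```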
Let’s return to Kurzweil’s observation that Moore’s law has been gently superexponential when we look at computer performance over the last century. In 2011, Bela Nagy et al. at the Santa Fe Institute published a paper, Superexponential Long-term Trends in Information Technology, in Technological Forecasting and Social Change, 73:1061-1083, which looked at performance metrics for information technology back to 1850 and independently confirmed the observations of Kurzweil and others who have made this claim. Why superexponentiality is happening with Moore’s law is unclear. The growth of the global computer and memory chip market has been exponential since 1965, and standard rates of chip shrinking in manufacturing would also predict only exponential growth in performance. On top of these baseline exponentials there must be additional positive feedback loops or other factors that have converged to make performance growth superexponential.
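One simple way to check whether a performance series is exponential or superexponential, sketched here in rough form rather than as the specific statistical models Nagy et al. used, is to fit the logarithm of performance against time and ask whether a curvature term is needed: pure exponential growth is a straight line in log space, while superexponential growth bends upward.

```python
import numpy as np

# Synthetic, illustrative data: a performance series whose doubling time slowly
# shortens, i.e., gently superexponential growth (not real benchmark numbers).
rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
t = years - years[0]
log_perf = 0.1 * t + 0.001 * t**2 + rng.normal(0, 0.1, t.size)

# Fit a straight line (pure exponential growth) and a quadratic
# (superexponential growth) to the data in log space.
lin = np.polyfit(t, log_perf, 1)
quad = np.polyfit(t, log_perf, 2)

lin_rss = np.sum((log_perf - np.polyval(lin, t)) ** 2)
quad_rss = np.sum((log_perf - np.polyval(quad, t)) ** 2)

print("curvature (t^2) coefficient:", quad[0])
print("residual sum of squares, linear vs quadratic:", lin_rss, quad_rss)
# A clearly positive curvature term that substantially improves the fit is the
# signature of an upward-bending (superexponential) trend on a log plot.
```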
Andrew McAfee of MIT’s Center for Digital Business, co-author of The Second Machine Age, 2014, notes that in 1970 corporations spent about 7% of their total budgets on IT. By 2008, that share had climbed to 30%, while average IT spending per employee jumped from less than $100 per year to around $3,000 per year. UNIDO estimates that the value-added share of ICT and high-tech machinery more than doubled from 1970 to 2006, rising from 10 to 22 percent of the global economy. Microprocessors and memory chips now have roles in most of our industries, products, and services. Do these or other indicators of ICT’s increasingly pervasive role in human activity help make its performance growth superexponential? Are we spending ever more mental and physical effort contemplating and aiding the development of computing technology? Perhaps, but such temporary additional positive feedback loops may provide only a minor contribution to superexponentiality.
In 2009, Christopher Magee at MIT published a carefully researched paper proposing that two thirds of the total progress in computational performance over the last 40 years has come from materials/process innovations. In other words, most of the exponentiality and superexponentiality of Moore’s law has come directly from the technology and its innovation processes. The further we shrink our technologies today, the more we see superexponential efficiency advances emerge.
When will the exponential performance growth of Moore’s law finally end? We might continue to see exponential or superexponential growth dynamics in Moore’s law and other nano and info technologies for many decades yet, though as physical systems they must one day saturate. Over the last several decades many people have predicted an end to Moore’s law, in the sense of shrinking transistor size, and so far such predictions haven’t been correct. But within fifteen years transistor features will approach the atomic scale, so the shrinking must eventually end.
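A rough back-of-envelope calculation, using assumed round numbers rather than roadmap data (roughly 14-nanometer features as a starting point, linear dimensions shrinking by about a factor of 1.4 every two years, and a silicon lattice spacing of about half a nanometer), shows why fifteen or so years is a reasonable horizon:

```python
import math

# Back-of-envelope sketch with assumed round numbers, not roadmap data.
feature_nm = 14.0                      # assumed current minimum feature size, in nm
linear_shrink_per_node = math.sqrt(2)  # assumed linear shrink per ~2-year node
years_per_node = 2.0
atomic_scale_nm = 0.5                  # roughly the silicon lattice spacing

nodes = math.log(feature_nm / atomic_scale_nm) / math.log(linear_shrink_per_node)
print(f"about {nodes * years_per_node:.0f} years until features reach atomic scale")
# With these assumptions the answer comes out in the range of fifteen to
# twenty years, which is why the shrinking cannot continue indefinitely.
```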
But when transistor shrinking finally ends, new freedoms will emerge in chip design, options that are uneconomical today. As one example, massively parallel, brain-like chip designs will be feasible to build for the first time. Today, such chips become obsolete by the time they are built, because our standard serial chips keep shrinking. We also don’t yet know how best to build or program those brain-like kinds of systems. Aside from a few brilliant pioneers with small budgets, we still don’t spend much basic R&D money as a species trying to figure out how neurons think (basic and computational neuroscience), or how to make massively parallel, biologically inspired designs (neuromorphic computing, neural networks, deep learning). For one of the rare exceptions, see this inspiring 30-minute video on Dharmendra Modha’s SyNAPSE research project at IBM Research. Once the easy gains from the MOSFET transistor shrinking game end in coming years, there will finally be a compelling financial reason for us to go after these next-level gains. In the meantime, only enlightened foresight takes us there.
Biologically inspired or alternative-design chips, when they emerge, will very likely follow their own Moore’s law doubling curve, for example doubling the number of brain-like circuits operating in a computer per fixed time period, as long as such doubling gives us better machines. We might then move from a market condition of exponentially more transistors and faster processing per dollar to one of exponentially more brain-like circuits and smarter processing per dollar. That could be a much better version of Moore’s law, waiting patiently for chips to stop their shrinking game. The bottom line is that we aren’t yet smart enough to know where or when Moore’s law performance growth will end. But as long as computing provides financial advantage, it makes sense to expect a new, parallelized version of Moore’s-law-like exponential performance growth to race ahead, at least for a while, after the miniaturization-based Moore’s law comes to an end.
Economic growth has been another fascinating example of superexponential growth. Over a few decades, economies look like they are engaged in simple exponential growth. But as we saw with computing technologies, when we look at economic growth over a long enough time period, it appears superexponential. Consider the J-curve to the right, plotting GDP per capita in Western Europe over a thousand year period, done by the Economist Intelligence Unit in 1999, using data from economist and historian Angus Maddison.
As with human population earlier, we see the knee of the J-curve of GDP growth in the 1850s, during the Industrial Revolution. From that point forward, GDP growth has looked increasingly vertical, when it is considered on a global scale, and plotted over a very long time period.
But while human population growth started saturating in the 1960s, global economic growth switched to a steeper curve in 1960 (picture below). From 1950 to 2000, global GDP increased by 800% while human population increased by less than 300%. Even with our recent recession, global economic growth is faster today than it was in 2000. We have recently seen more than a decade of economic growth rates above 20% per year in some Chinese cities, and many developing-world economies are growing far faster than the 2-3% annual global rate we saw in the mid-20th century.
Will global GDP growth per capita soon saturate, as human population is now doing? On first consideration, it seems reasonable to expect it would. But if such growth is driven primarily by technical productivity (recall the Solow model), and particularly by nano and info technologies, and if those special technologies remain on fast exponential or superexponential growth modes for the foreseeable future, then economic growth may also remain superexponential for some time to come.
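As a quick reminder of why technology-driven productivity matters here, a standard textbook form of the Solow production function (recapped for convenience) makes output depend on a productivity factor that multiplies the contributions of capital and labor:

```latex
Y = A \, K^{\alpha} L^{1-\alpha}
% Y: output, K: capital, L: labor, \alpha: capital's share of income,
% A: total factor productivity (the technology term)
```

If the productivity term A is driven largely by nano and info technologies, and A itself keeps growing exponentially or faster, then output per capita can keep rising rapidly even as the labor force and population level off.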
As automation and machine intelligence grow within nano and info technologies, they can increasingly engage in their own technical, economic, and intellectual explorations, competitions, and activity. Such activities may increasingly take over from the technical, economic, and intellectual competitions driven by biological humans. As growing populations of increasingly smart machine minds are added to society in coming decades, we will very likely see a “decoupling” of biological human population growth and economic growth.