Chapter 7. Acceleration – Guiding Our Extraordinary Future

The Puzzle of Meaning: We Have No Einstein of Complexity Yet

As we’ve seen, scholars of top-down causality argue that solving this problem of understanding informational causality, or meaning, is the key to solving major problems like the origin of life, and more generally, to turning complexity research into a science that is an equal partner to physics, one able to explain how information structures its environment. I agree with this perspective. More generally, I think understanding how the most “meaningful” information, as it grows, constrains the future of intelligent systems is the most interesting and important of all unsolved puzzles in science. We will argue shortly that an evo devo approach to this problem will be needed. We need a theory that tells us how any replicating evo devo system, whether a star, a cell, a person, or a technology, is under the causal influence of the information that develops within it and around it.

1927 International Solvay Conference on Electrons and Photons. Back: Auguste Piccard, Émile Henriot, Paul Ehrenfest, Édouard Herzen, Théophile de Donder, Erwin Schrödinger, JE Verschaffelt, Wolfgang Pauli, Werner Heisenberg, Ralph Fowler, Léon Brillouin. Middle: Peter Debye, Martin Knudsen, William Lawrence Bragg, Hendrik Anthony Kramers, Paul Dirac, Arthur Compton, Louis de Broglie, Max Born, Niels Bohr. Front: Irving Langmuir, Max Planck, Marie Curie, Hendrik Lorentz, Albert Einstein, Paul Langevin, Charles-Eugène Guye, CTR Wilson, Owen Richardson.

Unfortunately, as the astrophysicist Timothy Ferris says in a memorable phrase, science doesn’t yet have an Einstein of Information Theory, or more generally, of Complexity. In the early to mid-20th century, clever folks like those pictured above (note the sole female, the irrepressible Marie Curie) made great leaps forward in our physical theory. But as we will see, our understanding of information has not yet made the same kind of progress.

Current physics recognizes just four forces, though some have proposed a fifth force, and still others may exist that are presently beyond our science. Our physical models range from the very smallest scales (particle physics) to the very largest (cosmology). But these models say little about the informational content of the systems involved. Today’s information theory, especially our algorithmic information theory, which tells us the computational usefulness of generated information, is still quite immature.

A good way to summarize these shortcomings, as I learned from my mentor James Grier Miller, is that we do not yet have an information theory that tells us about the various kinds of informational meaning, how intelligence processes that meaning, and why meaningful information has a causal effect on its environment. These are the kinds of nuts we will have to crack to understand information’s obvious power to control the physical world.

Davidson (1983)

The philosophers who created general systems theory struggled with this concept of meaning from the start, beginning with the theoretical biologist Ludwig von Bertalanffy in the 1930s and 1940s. Von Bertalanffy’s popularization of systems theory, a careful search for commonalities and differences across all complex systems, made him one of the titans of 20th century thinking, though he never received the acclaim he deserved. Mark Davidson’s lovely Uncommon Sense (1983) is a good introduction to his deeply integrative and timeless way of thinking. The Bertalanffy Center for the Study of Systems Science is a small Austrian research institute that continues to carry on his legacy and general approach to knowledge construction.

Cybernetics and system dynamics pioneers like Norbert Wiener and Jay Forrester also struggled with the definition of meaning from the 1940s to the 1960s, and various nonlinear science and complex systems researchers have considered it since the 1970s. Meaning has also been considered by scholars of the life sciences, in such disciplines as molecular biology, genetics, biosemiotics, psychology, and neuroscience. It has been addressed in cognitive science, computer science, and artificial intelligence. Physicists, mathematicians (especially in probability and statistics), and information theorists, attempting to understand the universe as a computational system, have also tried to construct better models of meaning and order.

In the late 20th century, general systems theory gave way to the complexity sciences, the leading current discipline seeking to interrelate all the above and other perspectives. The complexity sciences became a globally recognized field in 1984, with the creation of the Santa Fe Institute. Unfortunately, though SFI remains the leading complexity research group in the world, its annual funding is still quite small, hovering around $10M since 2008. So its ability to tackle the meaning / intelligence / informational causality problem, just one of several research problems on its agenda, has been quite limited to date. There are only a few dozen academic programs in complexity sciences in the US, and just a few hundred around the world. The nonlinear maths and simulations on which complexity science is based are still little taught in most graduate schools, as they quickly get conceptually difficult. Two thirds of SFI’s budget comes from private sources, as public funding is so scant for this kind of work. For comparison, the US National Science Foundation, which publicly funds research across all our institutions, has an annual budget of $7.5B. Obviously, our social priorities do not yet include making serious progress in understanding complexity.

When we take the Big Picture view, thermodynamics gives us a clue that both information and entropy are equally important, as we’ve seen. Information is actually the negative of the entropy, in classic mathematical descriptions of each. One old and simple idea is that energy potential, as it disappears at an accelerating rate in our universe, is somehow transferred directly into the growth of information potential, which we might call complexity, intelligence, and meaning, depending on our perspective. Physicists like Schrödinger in the 1940s and philosophers like Teilhard de Chardin in the 1950s both made this suggestion. But the science, if it even exists, remains to be developed. Scholars in maximum entropy (“maxent”) thermodynamics think the growth of entropy globally drives the growth of order multi-locally. But maxent thermodynamics can’t presently explain or predict any of the key natural phenomena we observe, including accelerating change, or the rise of intelligence. It is one of many interesting hypotheses that may never become science.
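
To make “information is the negative of entropy” concrete, here is the standard textbook relationship, in a minimal sketch (the notation is a common modern one, not drawn from Schrödinger or Teilhard). Shannon’s entropy of a source $X$ with outcome probabilities $p_i$ is

$$H(X) = -\sum_i p_i \log_2 p_i,$$

and Brillouin’s negentropy is the gap between the maximum possible entropy and the actual entropy,

$$J(X) = H_{\max} - H(X).$$

Gaining information about a system is thus, mathematically, the same thing as reducing our entropy (uncertainty) about it.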

Prigogine and Stengers (1984)

Many scholars have also attempted to extend evolutionary theory to physics, chemistry, and society, in an attempt at a general model or theory of adaptive order, negentropy, meaning, or intelligence. In one particularly intuitive approach, the chemist and Nobel laureate Ilya Prigogine attempted to show that the global thermodynamic dissipation of energy, in an expanding universe, must drive information creation and multilocal evolutionary structuring, as smaller structures combine to create larger ones, in ever more intricate and information-rich systems. Prigogine and Stengers’ Order Out of Chaos (1984) examines all complex adaptive systems, from the BZ reaction in chemistry to our solar system and its ecosystem, as far-from-equilibrium, energy-dissipating “tornados” of varying lifespans. It was a major influence on 20th century systems thinkers, and is a favorite book of some in our Evo Devo Universe community. The physicist Jeremy England at MIT is an heir to Prigogine. His 2014 paper Statistical Physics of Adaptation proposes that “dissipation-driven adaptation” explains the way molecular precursors self-organized to produce life, and how self-organization proceeds generally in the universe.

The late Jeffrey Wicken was another scholar who took this general approach. His 1987 book Evolution, Thermodynamics, and Information: Extending the Darwinian Program attempted to synthesize these three processes, each of which surely lies somewhere near the heart of reality. For more on Wicken, see his writeup in the Encyclopedia of Human Thermodynamics (EOHT). EOHT is one of those neat philosophical treasures on the web, a small independent online effort to reconcile thermodynamics with the rise of human-scale order. Today, Professor England notwithstanding, most of this kind of work is done either by a few complexity groups around the world or by independent scholars. There is still very little funding for it, perhaps because progress has been so slow for decades.

Physicists continue to work on the puzzle as well, and they are also making progress. One of the things most folks agree on is that information, like space and time, has no absolute value, but is relational. This relational view of nature is central to quantum mechanics, so that gives us one great place to start to construct a better theory. Theoretical physicist John Archibald Wheeler, who popularized the phrase black hole and coined such exotic physics terms as “wormhole”, “quantum foam”, and the “one-electron universe”, tried hard to understand how the role of the observer, inherent in quantum theory, would cause classical reality to arise out of quantum possibilities. He sought a quantum approach to the meaning problem, a quantum information theory that would explain emergence. He called this problem “it from bit”, the need for a description of how discrete observations (“bits” of quantum information) must compute emergent physical states (the “it” of discrete emergent entities). One of his students, Wojciech Zurek, developed a theory he called Quantum Darwinism, in which most possible quantum states are selected against in the process of “computing” the classical world. Quantum computers are built on the theory of the interaction of informational states called qubits, or quantum bits, a term coined by another of Wheeler’s students, Benjamin Schumacher. So Wheeler’s physics legacy has been even broader than Einstein’s in some ways, as he touched both general relativity and quantum theory.
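
For readers who want the smallest formal picture of Schumacher’s qubit, here it is in standard Dirac notation (a textbook sketch, not anything specific to Wheeler’s or Zurek’s papers):

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.$$

A qubit holds both classical bit values in superposition, and an observation selects one outcome, with probability $|\alpha|^2$ or $|\beta|^2$. That observer-dependent selection is the “bit” side of Wheeler’s “it from bit.”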

It seems too early to know if or when a mature quantum information theory will emerge. But there are some amazing ideas in this domain. As science writer Tom Siegfried notes in Birth of the Qubit, Science News (2017), a leading theorist in quantum gravity, John Preskill, has shown that if we live in a holographic universe, the quantum error correction codes that modern quantum computers use to compute reliably can be used to model how quantum events might generate our relativistic spacetime. For the physicists, here’s a 2015 talk by Preskill’s team that outlines this amazing and heretical idea. If the convergence of these two models turns out to be valid, it will be a deep lesson that the dream of quantum cosmology is real. Our universe may be constructed in such a way that the efforts of local intelligences to understand and manipulate inner space must tell us deep things about the nature of outer space, about the cosmos as a system that constrains all of us.

According to the cosmologist Lee Smolin, relational thinking in science began with the polymath and philosopher Gottfried Leibniz, co-inventor of the calculus. One of the more noted efforts at constructing a relational, observer-dependent understanding of quantum mechanics is Carlton Caves and Chris Fuchs’s model of Quantum Bayesianism. I have no idea if any of these fascinating ideas will eventually be proven the best general way to account for information, and its relation to such concepts as causality, complexity, and meaning. But I think we’re making progress with these fundamental scientific questions.

From Shannon Information Theory to Meaningful Information Theory: Toward a Science of Foresight

Modern information theory began in the 1940s, when Claude Shannon gave us a new unit of measurement for communicated information, the “bit”. Shannon’s work is a theory of communication, as described in the title of his seminal 1948 paper, A Mathematical Theory of Communication. This was a profoundly useful advance in building our information and communications technology, but it said very little about the meaning of the information being communicated. Any one “bit” of communicated information, in this theory, is as important to the sender and receiver as the next bit.
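
As a mini-illustration of that last point (our toy code, not Shannon’s), notice that his entropy measure depends only on symbol probabilities, never on what the symbols mean to anyone:

```python
import math

def shannon_entropy(probs):
    """Average information per symbol, in bits (Shannon, 1948)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries a full bit per flip...
print(shannon_entropy([0.5, 0.5]))    # 1.0
# ...a heavily biased (more predictable) coin carries far less.
print(shannon_entropy([0.99, 0.01]))  # ~0.08
# Note what is absent: nothing here asks whether the message matters.
```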

In evo devo language, we can call Shannon’s work the start of an evolutionary developmental information theory. It provides a mathematical language for mapping all the evolutionary possibilities that may happen in an existing system, and it tells us how information reduces the uncertainty of what may happen. But it doesn’t give us a framework for understanding how information changes the forward predictions of what we would call intelligent systems. So while Shannon is often called the “father of information theory,” he is better thought of as the founder of a field whose deepest questions remain open.

Gleick (2011)

Shannon was careful not to imply that his theory covered all the relevant dimensions of information. But unfortunately, many who have followed him have downplayed how much we still don’t know about the informational world. A recent example is James Gleick’s The Information: A History, a Theory, a Flood (2011). The Information is well-written, and a great introduction to basic information theory and its central role in our Information Age. But like the views of those who think modern evolutionary theory explains all biological change, it gives a false impression that information theory is more advanced and explanatory today than it actually is. As the late Terrance Paul explained in an insightful Amazon review, Gleick’s book downplays the debate over the need to incorporate meaning into information theory to make it more broadly useful. Paul, an educational software pioneer, often said that the only way anything can truly improve, or become more meaningful to its users, is via feedback. Amen!

The debate over the relation of meaning to information began at the Macy Conferences on Cybernetics (1946-1953). Shannon’s professor at MIT, Norbert Wiener, thought information had to be analyzed as part of a feedback loop in order to understand its meaning. Wiener founded an academic discipline, cybernetics, in 1948 to try to better understand the intersection of physics, information, and meaning. Cybernetics didn’t make that much progress, perhaps because feedback alone is not enough to describe the forward-prediction component of meaning. We need a general theory of intelligence as well.
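
As a minimal sketch of Wiener’s core insight (a toy example of ours, not cybernetics’ formal apparatus), consider how a signal in a feedback loop acquires significance only relative to a goal, because it changes the system’s next action:

```python
def thermostat_step(temp, setpoint, gain=0.3):
    """One cybernetic feedback step: the error signal drives the correction."""
    error = setpoint - temp      # the signal is meaningful relative to a goal...
    return temp + gain * error   # ...because it changes the system's behavior

temp = 15.0
for _ in range(10):
    temp = thermostat_step(temp, setpoint=20.0)
print(round(temp, 2))  # converges toward 20.0
```

What the loop still lacks is any forward model of its world, which is why feedback alone seems insufficient for a theory of meaning.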

As molecular biologist Joe Brisendine in our EDU community notes, in biology, meaning emerges when outcomes make a difference to the future evolution of an (intelligent) system. Meaning occurs when the probabilities of certain outcomes depend on the signals received by the structures themselves. In this sense, the concentration of nutrients or toxins in the surroundings is meaningful to a bacterium, action potentials are meaningful to a neuron, and words are meaningful to human beings, because the preferences, the future predictive models and behaviors of the receivers, are affected by the information received. We could call this a useful, or adaptive, approach to information’s effect on intelligent systems.
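
Brisendine’s definition can be captured in a few lines of toy code (a hypothetical chemotaxis-flavored sketch, with made-up numbers): a signal is meaningful to a receiver exactly to the degree that it shifts the probabilities of the receiver’s future behavior.

```python
def tumble_probability(nutrient_gradient):
    """Toy chemotaxis rule: a rising nutrient signal lowers the chance the
    bacterium tumbles (randomly reorients), so it keeps swimming uphill."""
    baseline = 0.5
    return max(0.05, baseline - 0.4 * nutrient_gradient)

# The signal is 'meaningful' because it changes the receiver's
# future behavior distribution:
print(tumble_probability(0.0))  # 0.5 -> no signal, default behavior
print(tumble_probability(1.0))  # 0.1 -> signal received, behavior shifts
```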

Doya, Ishii & Rao (eds.) (2007)

EDU scholar John Campbell says information is only information if some thing is informed, and that thing must be an (intelligent) probabilistic model. Campbell turns to Bayesian statistics, a broadly useful branch of mathematics that allows us to update future conditional probabilities as information is received. At least part of intelligence must work in this general way. A few intrepid biologists are working on Bayesian approaches to brain function, presuming that our brain tries to do perception, decision making, motor control, and other activities using Bayesian approaches. A leading scholar of computational neuroscience, AI, and brain-computer interfacing who takes this perspective is Rajesh Rao at UW. The book Bayesian Brain (2007), co-edited by Rao and others, offers a technical overview of this work, which is not without its critics, as in John Horgan’s helpful article, Are Brains Bayesian?, SciAm Blogs (2016). Other scholars think that our genetic and proteomic intelligence, the kind we see manifested in single-celled organisms like Paramecium, also takes Bayesian-like approaches. An addition to information theory that tells us how meaningful information updates the preferences of intelligent systems, in this Bayesian fashion, would be a great advance.
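
Here is the Bayesian updating Campbell points to, in a minimal self-contained sketch (our hypothetical numbers, not Campbell’s or Rao’s models): a prior belief becomes a posterior as each signal arrives, which is one precise way to say that a probabilistic model has been “informed.”

```python
def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior probability of a hypothesis after one positive signal (Bayes' rule)."""
    numerator = p_signal_if_true * prior
    evidence = numerator + p_signal_if_false * (1 - prior)
    return numerator / evidence

# A receiver starts 50/50 on 'food is nearby', then gets three positive
# signals, each 80% likely if food is present and 20% likely if not:
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.2)
print(round(belief, 3))  # ~0.985: the signals changed the model's predictions
```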

But we likely also need an information theory that tells us how information constrains and causes certain futures, regardless of our preferences. Sometimes our environment, operating at a larger scale, is funneling us toward an optimum that we cannot perceive or consider using mental probabilities. The great social systems theorist Niklas Luhmann defined systems as self-perpetuating zones of reduced complexity, in which a boundary is dynamically maintained between the system and its environment, and most exterior information is ignored. He called such systems “operationally closed”, and observed that while they rely on resources outside of themselves, those resources are not part of their communication. Luhmann defined meaning as the criteria that constrain which information is selected to enter or exit a system, and how that information is processed within it. This system-based and relational approach to meaning seems necessary if we are to develop a more scientific theory of meaning. See Luhmann’s Introduction to Systems Theory (2012) for more.

As we think about informational constraint, we notice that spatial, temporal, energetic, and material persistence is another useful, developmental way to think about meaning. As Physics, then Chemistry, then Biology, then Society, and now Technology have emerged in universal evolutionary development, each new “substrate” has created new persistent constraining relationships among STEM (space, time, energy, and matter) structures and processes. Most of the relationships each of these systems creates are evolutionary, experimental, and ephemeral. But some, once created, will persist across many physical and informational contexts, or for long periods of time, or both. Thus the persistence of a relationship in any of the STEM domains is another kind of information that is always growing.

One word we could use for this kind of information, at the universal scale, is its truth. Consider a book that contains correct physics equations versus one with incorrect ones. Or consider the way physical laws constrain you, whether you want them to or not. Another kind of persistence has to do with the usefulness (adaptiveness) of the information to a specific set of intelligences within the universe. As a society, we find classic literature more persistent than pulp fiction. Our only photograph of our grandmother will tend to persist much longer (be more meaningful) than any random one in our collection.

Let’s close this section with a guess at a future theory of meaning. Information may have meaning because we use it to create possibilities and to experiment (an evolutionary definition), to reduce uncertainty or constrain us in a STEM persistent way (a developmental definition), or to change our forward predictions and behavior (an evo devo, or adaptive definition). All three of these ways may turn out to be useful to understanding meaning in complex, intelligent systems.

In my view, an effective universal theory of complexity will have to do several things. Here are three to consider.

  1. First, it will need to recognize two fundamental ordering processes: evolution (chaotic, creative, divergent, unpredictable change) and development (causal, conservative, convergent, predictable change). It can’t be just an evolutionary theory; it will have to be an evo devo theory, one that recognizes that both processes, while they often oppose each other, are equally in service to adaptation. Classical Darwinian theory does not recognize that development is an equally fundamental process. But if universal development actually exists, complex systems become ever more constrained into particular developmental futures as they grow more complex.
  2. Second, a theory of order will have to recognize that certain special forms of complexity are not just running up over time, they are accelerating upward, not only on Earth but presumably everywhere else in our universe like Earth, even as entropy increases. Densification and dematerialization happen at faster and faster rates as the universe expands at an accelerating rate. On Earth today, the fastest-growing minds, by far, are no longer biological; they’re technological. We’re running ever faster into physical and virtual inner space, and these universal megatrends must also be accounted for.
  3. Third, a good theory of complexity will have to explain why so many of our universal processes appear to be improbably well-tuned for the ubiquitous emergence and long-term persistence of complexity, life, and intelligence. Concepts like the Fine-Tuned Universe hypothesis in physics, the Fecund Earths hypothesis in astrobiology (the opposite of the Rare Earth hypothesis), and the Gaia hypothesis in planetary homeostasis are all examples of this puzzling open problem. A good theory should also explain the origin and ratios of predictable vs. unpredictable processes that generate complexity, like the 80/20 and 95/5 rules.

Let’s briefly turn to the concept of evolutionary development now, and ask how it helps us to better understand accelerating change.
