Reading time: ca. 15-25 minutes.
By Levien van Zon
Stories are important for how we see the world. In my first article, I argued that stories act as internal models of the world, supercharged by our ability to share them with others. My subsequent post was about complexity and about science. Although much of our knowledge comes from the pursuit of science, its tools have trouble with so-called complex systems. Complexity in this context means that there are many interacting parts. The parts of a complex system are hard to separate because their interactions are significant: they cannot simply be ignored or averaged away.
So far I have been a little vague about a number of things. For instance, what do I mean by “stories”, “internal models” and “knowledge”? And what is the nature of “interactions” between the parts of a complex system? In this article, I want to make some of these things more concrete. I will also show that complexity isn’t just there to make our lives difficult: it can actually have an important function.
Knowledge as explanatory stories
Let’s start with the concept that stories are models of the world. Stories are complicated things that are closely tied up with the workings of our mind. They are hard to define and they serve many functions. So for now I will just focus on a common experience: We generally feel that we understand an event if we can construct a coherent story of how the event was generated.1 Such explanatory stories are often not accurate, of course. Our explanation of an event may be quite different from how the event was actually generated.2 But regardless of their accuracy, the collection of cause-and-effect stories that we believe to be true does constitute our personal knowledge, our internal model of how the world works.
Because a fair number of our causal beliefs are at best inaccurate and at worst plain nonsense, it is useful for a society to have some way to check our shared beliefs. Science can be seen as a collective human effort to figure out whether cause-and-effect stories (usually called theories or hypotheses) actually correspond to processes in the real world. And it turns out that this is not at all easy to establish.
The way to test a causal story is typically through experiment. If we change something in a proposed causal chain, we should see a change in the effect. However, it is not always possible to perform direct experiments. For instance, things can be too small or too big to manipulate, like molecules or societies. Or they may be connected to too many other things, as with cells in the human brain. Luckily, people have come up with many ingenious and elaborate ways to establish which causes lead to which effects.3 Ideally, we are able to figure out a detailed mechanistic process, by which I mean a description of all the steps through which certain causes lead to certain effects. Knowing the true (or at least likely) mechanism through which something happens is what we usually regard as “understanding” in the scientific sense. Having such a mechanistic description is very powerful, as it may allow us to control things and predict what may happen in the future or in other hypothetical situations.4
A simple complex system
As I wrote earlier, the classical scientific approach has difficulty with complex systems, because such systems have many significant interactions between their parts. For instance in the human body, in human society or in a natural ecosystem, it is hard to manipulate and study parts in isolation. This would be necessary to clearly determine cause-and-effect relationships and to understand how such relationships work. Yet if we isolate parts, we change the system we want to study. Moreover, cause and effect may not even be well-defined, because there may be causal loops or “feedbacks”. Let’s look at an example to make clear what I mean.
Imagine a small greenhouse containing a simple ecosystem with three kinds of organisms: plants, slugs and hedgehogs. This is a very simple complex system. It is not hard to describe the interactions here: the slugs eat the plants, and the hedgehogs eat the slugs. These interactions are significant in the sense that they are literally a matter of life or death for the parties involved. Hence this system is complex in at least two aspects: interactions are significant, and the parts thus behave differently if you separate them. Whether specific interactions are positive or negative depends on our perspective. Clearly the slugs negatively affect the plants, but we can also say that the plants positively affect the slugs (and indirectly the hedgehogs). For an outside observer the two ways of seeing the interaction are more or less equivalent; for the plants and slugs they are not.
Stability and circular causes
While the interactions are easily described here, the behaviour of the system is not so easy to describe in terms of causes and effects. What happens for instance if the slugs eat all the plants? There will be a brief explosion of slugs and perhaps hedgehogs, after which all animals will die of hunger. But while such a slug-armageddon is certainly possible, the little ecosystem can also be perfectly stable. The hedgehogs can keep the slugs in check, giving the plants an opportunity to grow, and the limited number of slugs will in turn limit the hedgehogs. As you can see, there are loops of causality here: the slugs affect the hedgehogs, which in turn affect the slugs, which affect both plants and hedgehogs, and so on. This is an example of circular causality, or what are often called “feedback loops”.
Feedback loops can be tricky to describe, because there is no clear separation of cause and effect. Do the slugs limit the hedgehogs, or do the hedgehogs limit the slugs? The answer is yes. They limit each other. The relationship between slugs and hedgehogs is an example of what is called a negative (or balancing, or attenuating) feedback. A familiar example of negative feedback in human engineering is the thermostat, which turns up the heating if it becomes too cold, and turns it down if it becomes too warm. Negative feedback is in many ways positive, because it can prevent things from getting out of hand, so it tends to provide stability.
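For readers who like to see such loops made explicit, here is a minimal sketch (in Python) of a thermostat-style negative feedback loop. The setpoint, gain and heat-loss values are purely illustrative assumptions, not a model of any real heating system.

```python
# A toy negative feedback loop: heating power counteracts deviations from
# a setpoint, so the temperature settles instead of running away.
def thermostat_step(temp, setpoint=20.0, gain=0.3, loss=0.1):
    error = setpoint - temp           # how far we are from the target
    heating = max(0.0, gain * error)  # heat more when too cold, less when warm
    return temp + heating - loss * temp  # some heat always leaks outside

temp = 12.0
for _ in range(30):
    temp = thermostat_step(temp)
print(round(temp, 1))  # converges to a stable value
```

The essential feature is that the response pushes against the deviation, which is exactly how the hedgehogs stabilise the slugs in our greenhouse.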
The other common type of feedback loop is called amplifying, reinforcing or simply positive feedback. As you will probably have guessed, positive feedback isn’t always positive. An example is the “rich get richer” effect: People who have more money have more opportunities to acquire further wealth than people who have less money. Therefore the rich tend to become richer, and inequality in societies will tend to increase unless active measures are taken to prevent the concentration of wealth and power. Positive feedback is very powerful: it drives exponential growth and can rapidly amplify things. But in the absence of stabilising mechanisms it potentially causes instability. We depend on it for our nervous and immune systems to function (among many other things), and it drives biological evolution and economic growth. But it can also lead to explosions, economic depression, ecosystem collapse, runaway climate change and social revolutions.
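The “rich get richer” effect can be sketched just as briefly. In this toy example the rate of return is an arbitrary assumption; the point is only that the output of the loop feeds back into its own growth.

```python
# A toy positive feedback loop: returns are proportional to current wealth,
# so wealth compounds and initial differences are amplified over time.
def compound(wealth, rate=0.05, years=40):
    for _ in range(years):
        wealth += rate * wealth  # the feedback: more wealth, bigger gains
    return wealth

print(round(compound(100.0)))   # roughly 704
print(round(compound(1000.0)))  # roughly 7040: the absolute gap between the
                                # two has grown from 900 to more than 6000
```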
Understanding complex behaviour
Back to our little greenhouse. Because “normal” human language is imprecise and has difficulty with circular causation, there is a limit to how well we can describe what goes on in this small system. This is despite the fact that our greenhouse only contains three species and our story ignores many other things that are also important (like the sunlight, water, nutrients and microbes that are required for the plants to grow). Our description doesn’t provide enough information to predict, for instance, whether a given constellation of slugs and hedgehogs will destabilise the system by eating too much. For this, we will need a more precise description of what is going on. One thing we can do is construct a mathematical model that describes the main interactions between the three species. I will not bother you here with the details; you can see what this looks like in the footnotes if you’re interested.5 Suffice it to say that a mathematical description is much more compact than our descriptive story, and it allows us to make very specific predictions. We can use it to determine how much the animals can eat, or how fast the plants need to grow, for the system to remain stable. In this sense it is a richer description, although to make the mathematics manageable it also leaves out a lot of details. These details may or may not be important for the questions we want the model to answer. Like our descriptive story, a mathematical model is a simplification. In many ways it is a very poor description of what actually goes on. All our models, whether stories, pictures, mathematical formulas or computer simulations, are tools for understanding the things that happen in the world. They are necessarily simplifications, and they are never as rich and subtle as the real thing.
Exploding hairballs
Now imagine trying to understand and describe an actual, natural ecosystem that isn’t an artificial greenhouse with only three species. Puget Sound is an estuary in Washington State, adjacent to the cities of Seattle and Tacoma. This is what it looks like:
And here is a picture of the main species groups and their interactions in the central basin of the Puget Sound ecosystem:
Good luck. Clearly this is a more complex complex system. Even though the diagram shown above is still much simplified, human language is no longer an effective tool to describe all the interactions shown here. The picture above actually represents a computer model. This model contains a mathematical description of relations between groups of species, such as what and how much they eat.6 These numbers in turn are based on observations and measurements of what actual creatures do. You can imagine how much work goes into constructing such a model. And this is not even a very complicated system, as ecosystems go. Moreover, the model mostly describes relatively big organisms. It lumps the huge diversity of micro-organisms (which constitute an ecosystem unto themselves) into a few large groups such as “bacteria” and “phytoplankton”.
Now imagine trying to make a model of what goes on in the human body. The Puget Sound ecosystem model describes 65 species groups. The human body consists of roughly 30 trillion cells of more than 400 different types, plus another 38 trillion or so microbes belonging to more than a thousand different species. Our own cells interact with this personal ecosystem of resident micro-organisms in ways that are essential for our health but are poorly understood. Furthermore, the internal workings of our cells depend on more than eighty thousand different proteins and on a lot of other types of molecules, many of which we don’t know anything about yet. Much of what goes on in our body is decentralised and self-organising, although some of it is coordinated by our nervous system, which has somewhere in the order of 100 trillion neuronal connections.7 The human body is so mind-bogglingly complex that there is no way a single person could understand exactly how it all works. Luckily we don’t need to know all the details in order to operate our body. But for medical science this complexity does present a problem.
Getting lost in details
In fact, we almost always see complexity as a problem, and with good reason. If we want to control things, we need to know how they work (at least more or less). If you want a doctor to treat a serious medical condition, it helps a lot if they know what causes the condition, and what happens if you treat it with a certain pharmaceutical drug. Of course trial-and-error can be a valid approach, but it is much more useful to know causal mechanisms (and as a patient I would much prefer the latter). And luckily we do know quite a few mechanisms, the exact processes by which certain things happen in our body, often down to the level of molecules. But figuring these out has taken a gigantic, time-consuming and very costly effort by millions of researchers collecting data and doing experiments for more than a century. And given how much we still don’t know, the end of this effort is nowhere near in sight.
One of the problems is that merely collecting detailed information isn’t sufficient for understanding how things work. As science writer Philip Ball points out in his book “How Life Works”, it clearly isn’t the case that all details matter. If all details were important, life would be utterly fragile in the face of disturbance and damage. But life is remarkably resilient, so buried somewhere in the endless complex details there must be robust patterns and processes that keep things running, even under difficult conditions.
If we collect a lot of detailed observations and focus mostly on these, we easily lose sight of the big picture. We risk missing the forest for the trees, missing the patterns and processes that actually matter most. Unfortunately it isn’t always clear where we should look for these “aspects that matter”. For several decades it was believed that for living beings, genetics is what matters most. This is an understandable belief: Genes contain much of the information that we pass on between generations, and this gets acted on by evolution. So surely our genome must provide a kind of “blueprint” for how to build an organism?8 If we can just understand how to read and interpret this information, then all the other messy molecular and cellular details may prove less important. Or so we believed.
Complex patterns for robustness
The Human Genome Project started in 1990. It aimed to read all our DNA so that we could catalogue our genes and associate them with their function. But when the first results came in a decade later, they were quite different from what was expected. It wasn’t a neat picture of genes determining certain functions, or even mostly encoding protein molecules with clear functions. Rather, our genetic content seemed a bewildering mess of baroque complexity. Even two decades on, it is still unclear what most genetic elements “do”. In the years after the first genome datasets were published, the messy data did turn out to contain patterns: For instance, some combinations of genes were found to behave in ways similar to electronic circuits. Such biological circuits often have clear functions. They can oscillate or switch between states in various ways.9 These circuits are very common in single-celled organisms, but alas, much less common in humans and other multicellular organisms. The “wiring” of the molecular network encoded by our genes actually resembles the brain more than it does a set of simple circuits. An electronic circuit processes electrical signals. Similarly, our molecular networks also seem to direct and process information, encoded in chemical, electrical and mechanical signals. But the information doesn’t neatly flow in one direction or stay in one part of the system. Instead, like in the brain, the information seems to go everywhere, and it can have effects all over the organism.
How can we make sense of this? Why is our body (and that of other multicellular organisms) organised in such a strange and seemingly chaotic and inefficient way? Is this just the historical legacy of millions of years of random mutation?
Perhaps some of it is an evolutionary legacy, but certainly not all of it. Rather, it probably isn’t a coincidence that chemical interaction networks in our body show similarities to the way our brain is wired. Our neural networks are wired to integrate and process information in ways that are relatively flexible and robust. These networks need to be somewhat insensitive to noise, damage and small details. The same is probably true for the molecular networks encoded by our genome. Individual genes do have an effect on the way information is processed by these networks. But the effect is typically small, and often it’s hard to predict beforehand what the effect will be. The fact that most gene mutations have only minor effects provides the system with some degree of robustness. This also makes the process of evolution a lot easier. And the unpredictability of the effects is an advantage, given the existence of viruses and other pathogens. These parasites constantly try to co-opt our molecular machinery for their own purposes. And this would be a lot easier for them if the workings of our biology were transparent and predictable.
All living beings have to survive in a world that is full of noise, variation and elements that can inflict damage. If our biology were sensitive to all of this, it would never be able to build a well-functioning complex individual from a single cell, and make it last for 80 years.
Details can matter, but context matters more
So what does all of this tell us about how we should deal with complex systems? First, it seems that the role of details in such systems is complicated. If we ignore details, we risk coming up with “mystical” cause-and-effect stories. Such stories have no clear mechanism, no clearly defined process through which causes lead to effects. It may take away our uncertainty to attribute effects to deities, the universe, black boxes or invisible hands. This in itself can be useful. We dislike uncertainty, and even false certainties may aid us in many ways, for instance by binding social groups together.10 But often vague, mystical explanations don’t help us in solving the actual problems we face. For this, we need a mechanistic understanding of the processes through which things happen. And to understand such a process, we need to know about relevant details. If we say “everything is connected”, this may be true but it doesn’t further our understanding. To understand, we need to figure out which connections are actually important, and how they influence the behaviour of a system. But even if we do this, we should be aware that our knowledge will always be incomplete. We will usually oversimplify, and miss connections that are important in a certain context.
Moreover, as written before, if we focus mostly on details, we may lose sight of the bigger picture. Especially when dealing with living systems, we need to zoom out occasionally and wonder what a system is supposed to be doing. What are its goals, which things matter for this, and what is their context?11 In a living system, the way in which details relate to the whole can be compared to the structure of human language: If you focus just on letters or on letter shapes, language seems a bewildering mess of small details. It isn’t until you zoom out to words that things start to make some sense. But to fully comprehend language, you need the words to interact with each other and with grammar in sentences. These eventually interact with our mood, knowledge and experience in paragraphs and stories. Words and letters do matter in all of this, but a sentence can remain comprehensible if we remove or substitute words, and words can remain comprehensible if we change or remove letters. Moreover, entire languages can evolve significantly and rapidly while still remaining functional. Like language, the way life works has to be both adaptable and robust. Details do matter, but it’s hard to say beforehand which details do and which don’t. One has to comprehend the larger-scale patterns to understand the role of details.12
The fragile, the robust and the adaptive
Finally, we need to realise that what we often see as messy, inefficient complexity can actually have a function. We tend to dislike complexity. It makes things harder to understand, manage and optimise. We prefer clear chains of cause and effect, which we can more easily communicate, understand and modify. When confronted with a complex system, we mostly seek to make it simpler. But in evolved, living systems, complexity is a feature rather than a bug.
Of course, good design doesn’t have to be complex, but it needs to be both robust and adaptive if it is to survive in the long term. The essayist and mathematician Nassim Nicholas Taleb has gone a step further and proposed the term antifragile. This describes things that do not just persist under adverse conditions (robustness) and are able to recover (resilience), but that actually require some adversity to function well and get better over time (which requires adaptation).
We humans are quite good at optimising things for a narrow set of functions and conditions. We often design structures that seem efficient and in which causes and effects are easy to understand. But these structures also tend to be fragile, especially when conditions are no longer “optimal”.
If we remove or damage any part of such a fragile system, it no longer works well.
On the other hand, when we design things to be robust, we tend to do it by overdesigning. We make a best guess of worst-case scenarios (say, the maximum strain a bridge has to endure). Subsequently we apply a safety factor: we design the structure so that it can withstand two or three times the worst-case strain. This approach works well, provided of course our calculations are correct, the scenarios don’t become outdated, we didn’t overlook possible causes of failure and the structure we design is well-maintained.13
Naturally evolved systems seem to do something quite different. They tend to have many complex causal loops that are hard to understand. But such loops do have a function in stabilising the system and in making it less sensitive to noise and damage. Such systems tend to be robust. If they weren’t, they wouldn’t last very long in a world that can often be adverse and unpredictable. They are able to recover from damage and adapt relatively quickly to new conditions.14
Having antifragility built into a system nearly always makes it harder to comprehend (in the words of the late James C. Scott, it is less legible). Still, this approach is often more efficient and is certainly more adaptable than overdesigning structures with large safety margins to obtain robustness.15 And a dynamic “antifragile” approach is certainly more robust than designing systems that are so streamlined for “efficiency” that they fail at the least sign of trouble. We overdesign bridges, power plants, cars and aeroplanes, which is good. We streamline our supply chains and our food production system, which is worrisome.
If we seriously want to make our societies and our infrastructure sustainable in the long term (by which I mean: longer than a few decades), we will need to broaden our view to how life does things. We have currently organised our societies in ways that will probably not fare well under big changes of any kind, especially if such changes are sudden and unexpected. Life has survived such challenges many times, and some past societies have done so as well. It would be wise to learn from the design principles and processes through which living systems manage to persist, rather than to assume that we know better, or to ignore such mechanisms altogether. At the very least we should start to accept and appreciate complexity. We should pay more attention to patterns and processes that provide robustness, such as negative feedback loops. But we can do much better: perhaps we can learn to use some principles of adaptive, antifragile complex systems in ways that are beneficial to the long-term wellbeing of humans and non-humans alike.
“Red thread” images by Io Cooman. The Puget Sound ecosystem diagram was adapted from Harvey et al. (2010). Puget Sound pictures are by Buphoff (CC BY 3.0), Vickie J. Anderson (Harbor Seal and Caspian Tern, both CC BY-SA 4.0), Bruce Duncan (Plumose Anemone) and Robert Stearns (Geoduck).
Do you want to be notified when future articles in this series are published? Subscribe to my Substack, or follow me on Facebook, Instagram, Threads, Bluesky or Twitter/X. You can also subscribe to our Atom-feed.
Further reading
Donella H. Meadows. Thinking in Systems: A Primer. Chelsea Green Publishing, 2008.
This book is a classic introduction to “systems thinking”, written by the lead author of The Limits To Growth. If you want to get more of a feel for thinking in terms of interactions, positive and negative feedbacks and dynamic patterns, this is a useful book. Moreover, Dana Meadows makes a number of important points in the book. She argues that we need to think in patterns and mechanisms, rather than in separate events. She highlights the role of nonlinear relationships and delays in producing surprising (and often unpleasant) results, as well as the important effects of limited stocks and limited information. She notes that just searching for statistical correlations doesn’t really help us much in understanding what a system does, and how or why it does it. However, I do have a few issues with the text. For instance, terms like “feedback loop” are used very often and sometimes in a very broad sense. If you call nearly every process a feedback loop, the term starts to become somewhat meaningless. More seriously, some of the terminology, definitions and proposed mechanisms are vague, sloppy and/or inconsistent. Especially when talking about things such as complexity, resilience, goals and self-organisation, Meadows’ explanations sometimes mystify rather than clarify these subjects. Finally, viewing the world in terms of dynamical stock-and-flow systems (as this book does) is certainly useful for understanding systems, but truly complex systems are not always best understood this way.
More of Dana Meadows’ work can be found on the website of The Donella Meadows Project, https://donellameadows.org. The project has also published several videos, including the short animation In A World of Systems. Many others have also published videos based on Thinking in Systems, including the excellent series of short talks by Ashley Hodgson.
Ed Yong. I Contain Multitudes: The Microbes Within Us and a Grander View of Life. Bodley Head, 2016.
https://edyong.me/i-contain-multitudes
Readable, interesting and well-researched, this is by far the best book I have encountered on the complex relationship between animals (including humans) and their resident microbes. Many authors have argued that we should stop talking and thinking about microbes mostly in terms of their elimination. Ed Yong proposes that we replace the outdated “war metaphor” of fighting microbes by a more friendly metaphor of gardening or wildlife management. We should actively promote microbes that are useful and only reduce the occasional “weeds”, microbes that are out of place and cause problems. Indeed, our own body and immune system seem designed to do precisely this: Rather than indiscriminately killing microbes, our immune system actively manages the microbial ecosystems on and inside our bodies. Each of us carries at least as many microbes as we have cells in our body. Collectively, the genes of this microbiome greatly outnumber our own genes. Our body actively communicates and coordinates with our resident microbes, and it depends on them to work well. Our microbes allow us to rapidly adapt to different foods, to ward off pathogens and to calibrate our immune system. Many diseases are associated with a disturbed microbiome, a state called dysbiosis. However, it is currently unclear whether dysbiosis is a cause or an effect of disease. Quite possibly it is both. And also in other ways, our relationship with microbes is complicated and doesn’t fit simple narratives of pure conflict or cooperation.
Yong has also talked about the microbiome at the Royal Institution and at the NPR Fresh Air radio show.
For a more thorough (but much less readable and somewhat exhausting) overview of the microbiome in relation to human health and disease, you can also check out Gut Feelings by Alessio Fasano and Susie Flaherty (published in 2022), or watch one of the accompanying lectures and discussions recorded by Harvard or the Hudson Library.
Philip Ball. How Life Works: A User’s Guide to the New Biology.
University of Chicago Press, 2023.
https://how-life-works.philipball.co.uk
Humans are big users of metaphor. To understand unfamiliar or abstract things we describe them in terms of concepts that we are more familiar with. To understand how living systems work, we often describe them in terms of factories, machines, computers or other structures built by humans. In this very important book, science writer Phil Ball points out that such metaphors are often unhelpful, because the way in which living systems work is very different from how we humans design things. For instance, our genes are not at all comparable to a blueprint or a computer program. Our genome doesn’t fully specify how our body is built or how it operates. The word “mechanism” usually evokes images of machines, in which interactions are neatly defined. But Ball points out that the mechanisms through which biology operates tend to be very “messy”, and often involve interactions between various levels of organisation. Life needs to process information, and it does so in ways that look weird and complicated to us but that have proven to be robust and flexible in the adverse and noisy environments in which life operates.
A lot of what I wrote in the article above is based on the ideas put forth by Ball in this book. Phil Ball also talks about these subjects in the Big Biology podcast, episode 119.
Yuri Lazebnik. ‘Can a Biologist Fix a Radio?—Or, What I Learned While Studying Apoptosis’. Cancer Cell 2, no. 3 (1 September 2002): 179–82.
In this amusing article, researcher Yuri Lazebnik highlights some of the problems with the current approach to studying complex biological systems. He asks whether the methods by which research is conducted in the life sciences would be suitable for figuring out what’s wrong with a broken radio.
Philipp Dettmer. Immune: A Journey into the Mysterious System That Keeps You Alive. Random House Publishing Group, 2021.
https://www.philippdettmer.net/immune
The mammalian immune system is a good example of a system that isn’t merely complex, but that absolutely requires complexity to function. It has to be adaptive, it would not work without positive feedback and it requires a great variety of negative feedback loops to stop immune responses from getting out of hand. At least as important: if it were too simple, pathogens would immediately find ways to defeat or avoid it. Of all complex systems we know about, the (human) immune system is arguably one of the most complex, and written accounts of how it works are generally close to unreadable. This book by Philipp Dettmer is a rare exception. It manages to convey the immense complexity of the immune system while still being readable and amusing (despite its liberal use of war metaphors and violent imagery, which I find a little unfortunate).
Dettmer is also the founder of the popular science YouTube channel Kurzgesagt. You can check out the short video which accompanied the book release, as well as the first two chapters on YouTube. Some of the things discussed in the book are also explained in videos on the immune system playlist, including The Immune System Explained part I and part II, as well as the videos on Fever and What actually happens when you are sick.
Uri Alon. An Introduction to Systems Biology: Design Principles of Biological Circuits. CRC Press, 2019.
https://doi.org/10.1201/9780429283321
This academic book discusses the principles of common biological circuits. For a less academic discussion of these topics with Uri Alon, check out the Big Biology podcast, episode 96: The network motifs that run the world. An alternative and freely available academic resource on biological circuits is the online book Biological Circuit Design by Michael Elowitz, Justin Bois, and John Marken (2022).
Paul Davies. The Demon in the Machine: How Hidden Webs of Information Are Solving the Mystery of Life. University of Chicago Press, 2019.
https://cosmos.asu.edu/publication/demon-machine
See also the Big Biology podcast, episode 33: Magic Puzzle Box, with Paul Davies
Nassim Nicholas Taleb. Antifragile: Things That Gain from Disorder. Random House Publishing Group, 2012.
In this wide-ranging book, Taleb introduces the concept of antifragility. The book has been widely praised for the originality and the importance of the ideas it discusses. However, many have also criticised Taleb’s writing style. For a more technical discussion of the antifragility concept, see Taleb & Douady (2013).
Footnotes
1. Piles of books and articles have been written on what constitutes a story or narrative. Usually, a narrative is considered to be some kind of account of related events or experiences, typically involving people. Such definitions tend to hide the fact that we use stories not just to communicate events or experiences, but also to make sense of them. We don’t just share and consume stories, we constantly construct them as well. This process is known as emplotment: the assembly of events into a narrative with a plot, which establishes cause and effect. As Paul Armstrong suggests in his book Stories and the Brain, we largely do this automatically and unconsciously: our brain links perceived effects to their perceived causes. This way, we transform our experience into familiar narrative patterns. The narratives we construct and recognise are strongly influenced by culture, but our narratives can also end up influencing culture. Collectively, we construct and use narratives to make sense of the world. This is one way in which our mind constructs a “model of the world”.
Apart from consciously present narratives, there are many other ways in which our mind and body contain and create models of the outside world. Unconsciously we learn all kinds of patterns (think of language and motor coordination, but also intuition and the skills to be good at games), and we have many built-in instincts. Some aspects of the world are encoded much deeper into our physiology. For instance, the biological clocks in our body quite accurately reproduce the time that it takes our planet to rotate around its axis. Our body uses this internal model of day-length to organise all kinds of internal housekeeping tasks. All organisms have models of the outside world built into their physiology to some extent, even single-celled microbes. But the neurological models that we animals construct can be much more elaborate, and they have another clear advantage when it comes to adaptability: They can be updated rapidly with new information. Or in normal language: We can learn things, and we can do so relatively quickly.
2. Our explanation of an event may be quite different from how the event was actually generated.
There are many examples of very serious scientific theories that later proved to be incorrect. As recently as 150 years ago, the mainstream view in medical science was that cholera, bubonic plague and many other diseases were caused by miasma, a noxious form of “bad air” emanating from rotting organic matter. And until the beginning of the 20th century, the consensus view in physics was that electromagnetic waves travelled through an invisible space-filling substance called ether or æther. (Among some physicists this idea is actually making a comeback in a new form, to explain dark matter or quantum effects.)
Inaccurate theories are not just an academic problem; they can have real and serious effects. In the 19th century, miasma theory helped promote the construction of sewer systems, as well as Florence Nightingale’s beneficial practice of ventilating hospital wards (which helps get rid of airborne pathogens and replaces them with their more useful cousins). Both of these things were good (although not because of miasma). But the theory, which was already formulated in ancient Greece, also held back the effective prevention of infectious diseases for more than two millennia. The London-based physician John Snow proposed in 1849 that cholera might spread through contaminated drinking water. Despite providing evidence for this that actually stopped a cholera outbreak in 1854, his theory was rejected by officials and most experts. Around the same time, the Hungarian researcher Ignaz Semmelweis proposed that it might be a good idea for doctors to wash their hands and instruments before treating patients. Despite also providing empirical evidence, he was ridiculed by the medical community and eventually committed to a mental institution.
Outside of science, conspiracy theories are probably the most glaring example of explanations that are attractive to many people but are unlikely to be an accurate representation of reality. David Icke’s theory that an inter-dimensional race of blood-drinking, shape-shifting reptilian beings has hijacked our societies would seem to be a harmless science fiction story, if it weren’t for the fact that millions of people earnestly believe it to be true. Icke proposes that we defeat our scaly overlords by filling our hearts with love, to deprive the reptilians of the negative energy that they seek. As solutions go, this could certainly be worse. Other conspiracy theories have had rather more violent consequences, causing millions to be killed in pogroms, the Holocaust, witch-hunts and similar acts of persecution and “punishment”.
3. Luckily, people have come up with many ingenious and elaborate ways to establish which causes lead to which effects.
Techniques to determine cause and effect are known as causal inference or causal analysis. In simple experiments we may be able to observe cause and effect fairly directly. But medical studies involve complex physiological systems that are influenced by many different factors. To be able to say anything about the effect of medical interventions, studies are usually performed in the form of a Randomised Controlled Trial or RCT. An RCT attempts to isolate the effect of a treatment by trying to statistically separate true effects from “confounding influences”. Of course this approach is only possible if we can conduct such trials in the first place, or if we have data on “natural experiments” that involve two situations which differ in some crucial aspect but are otherwise sufficiently comparable. Unfortunately there are many situations in which experimental intervention is impossible. It is considered unethical to test something on a large group of people if we suspect that it will damage their health or wellbeing. And we cannot perform experiments on very large systems, such as countries or the climate system. Nor can we travel back in time to do controlled experiments under past conditions. In such cases, one common approach is to collect evidence for or against some proposed causal relation from as many different sources as possible. We cannot conduct a controlled trial to test whether smoking increases one’s risk of developing lung cancer or chronic disease, because we cannot ask people to smoke for several decades to see what happens. But already in 1964, based on many different lines of evidence and a set of criteria proposed by epidemiologist Bradford Hill, the US Surgeon General concluded that smoking causes lung cancer. Similarly, determining causality in climate science is based on comparing many lines of evidence, including studies of past climates and the outcome of many different kinds of computer simulations. There are also more formal approaches, which allow fairly complicated causal models to be explicitly formulated and tested using statistical data.
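As a small illustration of why randomisation matters, here is a toy simulation (in Python, with entirely made-up numbers) in which age influences both who takes a treatment and the outcome. The observational comparison is biased by this confounder; random assignment recovers the true effect.

```python
# Toy illustration: a confounder (age) biases the naive treated-vs-control
# comparison, while random assignment removes the bias. All numbers invented.
import random
random.seed(1)

def outcome(treated, age):
    return 2.0 * treated - 0.1 * age + random.gauss(0, 1)  # true effect: +2.0

def estimate_effect(assign, n=100_000):
    treated, control = [], []
    for _ in range(n):
        age = random.uniform(20, 80)
        t = assign(age)
        (treated if t else control).append(outcome(t, age))
    return sum(treated) / len(treated) - sum(control) / len(control)

# Observational data: older people seek treatment more often.
print(round(estimate_effect(lambda age: random.random() < age / 100), 2))  # ~0.8, biased
# Randomised trial: treatment is assigned independently of age.
print(round(estimate_effect(lambda age: random.random() < 0.5), 2))        # ~2.0
```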
4. Having such a mechanistic description is very powerful
Using the terms “mechanism” and “mechanistic” can perhaps be a little misleading here, because these terms can mean different things. In this article I use “mechanism” to mean the process by which something is done or functions, in a fairly broad sense. However, the main definition in many dictionaries is something along the lines of “a system of parts working together in a machine”. Worse, some dictionaries define “mechanistic” as “thinking of living things as if they were machines”, which is more or less the opposite of my view. My use of terms such as mechanism or mechanistic explicitly includes causal processes that function very differently from how we generally design our machines. Some of the issues around the meaning of “mechanism” in science are discussed in Skillings (2015). Machine-like mechanisms function in a well-defined and highly ordered system with sharp boundaries, and involve clear sequential steps. They have a simple flow of causation. But especially in complex systems, many mechanisms do not meet all or even any of these criteria, for instance because they are stochastic, distributed and/or have many feedbacks. An example of such a complex mechanism is evolution by natural selection.
5. One possible mathematical model of our little greenhouse ecosystem would be the following:

$$ \left\{ \begin{array}{l} \frac{\mathrm{d} P}{\mathrm{d} t} = aP - bPS\\ \frac{\mathrm{d} S}{\mathrm{d} t} = cPS - dHS\\ \frac{\mathrm{d} H}{\mathrm{d} t} = fHS - gH \end{array} \right. $$

This particular model is known as a three-species Lotka-Volterra predator-prey model; it is a slightly extended version of the “classical” two-species Lotka-Volterra model familiar to most students of biology (and some economists). It belongs to a class known as dynamical population models, which are based on sets of differential equations. In this case, P, S and H are the population numbers of plants, slugs and hedgehogs, a is the plant growth rate, g is the per-capita rate at which hedgehogs die, and b through f are “rate constants” that represent interaction strength and conversion efficiency. To learn more about such models, you can check out the material of the Biological Modeling course at Utrecht University (which I helped teach for several years). This class of models is also known as mass-action models, and these are used among other things to describe chemical reactions. When applied to biological populations, this formulation basically assumes that individual organisms behave as randomly interacting particles in a well-mixed environment. It also assumes that interactions can be averaged: the model doesn’t really describe individuals, only groups. Despite these simplifying (and implicit) assumptions, these models are quite useful for analysing when a system will be stable and when it will not be. The particular formulation of a predator-prey population model given here can however be problematic for this purpose. The formulas above don’t include self-limitation within species due to competition between individuals. Therefore they have a tendency to predict overshoot, resulting in oscillations, in cases where real ecosystems would probably be much more stable. In fact, the Lotka-Volterra model is famous precisely for showing that populations can oscillate. However, if we make the model more realistic by adding additional terms for self-limitation, such oscillations become much less common.
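If you want to play with this model yourself, here is a minimal Python sketch that integrates the equations above numerically with SciPy. The parameter values and initial populations are arbitrary assumptions, chosen only to produce the familiar predator-prey dynamics, not measured from any real greenhouse.

```python
# Numerical integration of the three-species Lotka-Volterra model above.
# Parameters and initial populations are illustrative, not measured values.
from scipy.integrate import solve_ivp

def greenhouse(t, y, a, b, c, d, f, g):
    P, S, H = y                      # plants, slugs, hedgehogs
    dP = a * P - b * P * S           # plant growth minus grazing by slugs
    dS = c * P * S - d * H * S       # slug growth minus predation by hedgehogs
    dH = f * H * S - g * H           # hedgehog growth minus mortality
    return [dP, dS, dH]

params = dict(a=1.0, b=0.1, c=0.05, d=0.1, f=0.02, g=0.4)
sol = solve_ivp(greenhouse, (0.0, 100.0), y0=[40.0, 9.0, 5.0],
                args=tuple(params.values()), max_step=0.1)
print(sol.y[:, -1])  # population sizes at t = 100
```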
6. For a description of how the Puget Sound food web model was constructed, see Harvey et al. (2010), ‘A Mass-Balance Model for Evaluating Food Web Structure and Community-Scale Indicators in the Central Basin of Puget Sound’.
7. our nervous system, which has somewhere in the order of 100 trillion neuronal connections
This is a very conservative estimate and probably too low. Common estimates for the number of synapses in the human brain are 100, 150, 500 or 1000 trillion. Exact numbers are unknown for the brain, let alone for the entire nervous system. In any case, it is clear that our nervous system has lots of connections.
8. So surely our genome must provide a kind of “blueprint” for how to build an organism?
We tend to think of our genes in terms of a blueprint or a computer program, which fully specifies how to build a human. In “How Life Works”, Philip Ball points out that this isn’t a good metaphor. One problem is that it rests upon a black-box concept of “gene action”, an almost mystical mechanism that is supposed to translate the genetic instructions into physiology. But the actual relationship between genes and physiology seems to be much more complex. Actual human development depends a lot on physical interactions between cells, on self-organising patterns and dynamic attractors, and on complex regulatory networks in which separate genes play only a fairly minor role. Some genes do specify building blocks, and individual genes can perhaps influence some parameters of the system. But by itself the genome does not fully specify an organism.
9. biological circuits often have clear functions.
For instance, so-called feed-forward circuits are over-represented in the gene regulation networks of especially single-celled organisms. There exist various types of such circuits with somewhat different properties. For instance, the type-1 coherent feed-forward loop (C1-FFL) acts as a switch with a delay in switching on, but no delay in switching off. Its function is to ignore short pulses of the activation signal and to only switch on when the input is present for a longer period, thus filtering out noise. Other common biological circuits act as pulse generators, amplifiers or oscillators (which, for example, drive our biological clocks). What all biological circuits have in common is that their operation tends to be fairly robust to noise and to some degree of genetic change (although genetic variation can lead to variation in the “tuning” of such circuits).
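To make this behaviour concrete, here is a toy discrete-time sketch (in Python, with made-up rates and thresholds) of a C1-FFL with AND logic: the output switches on only after a sustained input, but switches off immediately.

```python
# Toy C1-FFL: X activates Y; Z needs both X and accumulated Y (AND gate).
# This filters out brief input pulses. Rates and thresholds are illustrative.
def c1_ffl(x_signal, rate=0.2, threshold=0.5):
    y, z_trace = 0.0, []
    for x in x_signal:
        y += rate * (x - y)                    # Y tracks X with a lag
        z_trace.append(1 if x > 0.5 and y > threshold else 0)
    return z_trace

x = [0] * 5 + [1] * 3 + [0] * 5 + [1] * 20     # short pulse, then sustained input
print(c1_ffl(x))
# The 3-step pulse never switches Z on; the sustained input switches Z on
# only after a delay, and Z switches off as soon as X disappears.
```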
10. We dislike uncertainty, and even false certainties may aid us in many ways, for instance by binding social groups together.
The social psychology of uncertainty management is discussed, among other places, in this book chapter by Kees van den Bos (2009, read PDF here). He argues that people commonly experience personal uncertainty to be an alarming situation. When confronted with uncertainty, rather than looking for contemplation and introspection, people generally look for situations or explanatory stories that offer them a quick way out of uncertainty. Often, such explanatory stories are related to ideologies or other worldviews that are shared by many people.
In Sapiens, historian Yuval Noah Harari introduces the concept of “shared fictions”. These include explanatory stories which are not necessarily “true” or accurate, but which we collectively believe in. And because of this, such fictions end up having real power in the world, through our collective actions. Shared fictions reduce personal uncertainty and increase group cohesion. They are very powerful tools to coordinate collective action within human societies, and they play an important role in binding societies together.
11. We need to zoom out occasionally and wonder what a system is supposed to be doing. What are its goals?
Not all systems have explicit goals. For example, an ecosystem isn’t centrally controlled; it emerges from the interactions between separate organisms and species. The individual organisms that make up an ecosystem do have agency and goals, and the dynamics of an ecosystem are at least in part determined by the goals of the individuals in it. Also, an ecosystem can end up having functions, such as managing water and recycling nutrients. These functions can be critically important for the individuals that make up the system. So for an ecosystem we can ask: What are its functions?
12. One has to comprehend the larger-scale patterns to understand the role of details.
This is related to the phenomenon of emergence that I discussed in my previous article. In a simple mechanism, say a machine with cogs, the behaviour of the whole can be fully or mostly traced back to the behaviour of the parts. However, in complex biological and social systems, as well as in language, the whole also strongly influences the behaviour (or the meaning) of the parts. Causality doesn’t just flow upwards, from parts to higher levels of organisation. It also flows downwards, from higher to lower levels. Phil Ball calls this causal spreading: the spreading of causation across levels of organisation. In physiology and molecular biology this is actually a necessity. At the level of molecules, the world is very noisy and structures are unreliable. Other, more stable organisational structures arise out of the interaction of molecules; this is emergence. When these emergent structures, in turn, strongly constrain and influence their building blocks, this is causal emergence. Some of the causation in the system is moved from the noisy world of molecules to levels that are less prone to randomness, and are much less influenced by individual parts. This may be required to make biological systems behave in ways that are reliable. Also, Ball points out that simple causal structures would be very vulnerable to attack by pathogens. Spreading some of the causation to higher organisational levels may partially “hide” it from, say, a virus, which can only “see” molecules.
13. we design the structure so that it can withstand two or three times the worst-case strain
Overdesigned structures mostly prove to be fairly robust, but catastrophic failures do occasionally occur. In 1940, the bridge spanning the Tacoma Narrows strait of Puget Sound collapsed in a spectacular way, just three months after opening. Its design failed to account for oscillations induced by strong winds. More recently, the Fukushima Daiichi nuclear power plant in Japan experienced a meltdown in 2011. While the power plant was designed to withstand heavy earthquakes, a tsunami ended up disabling both the regular cooling system and multiple backup systems. And while serious nuclear accidents are thankfully quite rare, serious dam failures occur fairly regularly. A recent example is the collapse of the Derna dams in Libya in 2023. The failure of these two dams caused the death of 6,000–24,000 people, and was the result of heavy rainfall combined with decades of neglected maintenance. Finally, the ongoing incidents and accidents that plague the Boeing 737 MAX aircraft show that even in the extremely safety-oriented aviation industry, it is difficult to entirely rule out problems with design and construction.
14. Such systems tend to be robust. […] They are able to recover from damage and adapt relatively quickly to new conditions.
Of course, adaptation doesn’t guarantee success. There are many examples of individuals dying, species going extinct, or communities or ecosystems collapsing as a result of external change. This happens when adaptation is too slow, insufficient or simply not possible. But our immune system shows that adaptation, while not infallible, is very powerful. People whose immune system is compromised in some way are much more vulnerable to pathogens and cancer than people with a healthy immune system. Likewise, the rapid evolution of resistance to pesticides and antibiotics, and the speed at which some ecosystems recover from damage, show that adaptation should not be underestimated.
15. this approach often seems more efficient than overdesigning structures with large safety margins to obtain robustness
This doesn’t mean that applying safety factors is a bad thing to do, nor that it’s an “easy way out”. Engineering is hard, and applying a safety margin is a very sensible solution. Also note that I chose to use the term overdesign rather than “overengineering”, as the latter term is used when we design things in ways that are unnecessarily complicated. Often this is done to increase safety and robustness, but it can sometimes end up making things more fragile. Hence it is often said that “less is more”, because simpler designs are easier to maintain and troubleshoot. However, neither simple designs nor overengineered ones are necessarily robust in the long term. In later articles I will elaborate on what alternative approaches could look like.