Many social commentators are keen to remind us that we live in an uncertain world. They are quick to point to our parlous geopolitical situation, with the rising influence of antidemocratic forces in China and Russia; to the threat the Covid pandemic poses to our health and wellbeing; and to the unpredictable outcomes of climate change. Most would have us believe that such uncertainty is caused by human nature acting on an essentially stable, predictable physical universe. But that is wrong. The universe, even without human intervention, is inherently uncertain.
I was first compelled to think about this strange situation as an undergraduate student studying physics. It was there I encountered Heisenberg’s Uncertainty Principle. In simple terms, the principle states that when studying subatomic particles, the more certain we can be of a particle’s position, the less certain we can be about its momentum. There is an inbuilt uncertainty such that the more accurately we measure one parameter, the less we know about the other.
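Stated quantitatively, the principle says that the uncertainties in position and momentum can never both be made arbitrarily small: their product is bounded below by Planck’s reduced constant,

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

so halving the uncertainty in a particle’s position at least doubles the minimum possible uncertainty in its momentum.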
Conventional wisdom would have us believe that we turn to science for certainty, but real scientists understand that such certainty is, and is likely always to be, elusive. Despite this, it is common for people to say that something is “scientifically proven”. In the climate debate, for example, the climate catastrophists are quick to tell us “the science is settled”. But this is almost an oxymoron, because the very foundation of science is to keep the door open to doubt.
Karl Popper was perhaps the most influential philosopher of science in the twentieth century, and he did much to define what science actually is. He argued that scientific theories are distinguished from non-scientific theories because the former make testable claims that future observations might reveal to be false. This is Popper’s criterion of falsifiability. True science therefore emanates from a boldness to risk being wrong. True scientists, consequently, actively seek out data that might prove their theories wrong. Unfortunately, being human, many working in science instead seek out data that confirms what they want to believe, rather than objectively testing the theories they have a vested interest in. (Thomas S Kuhn’s marvellous book The Structure of Scientific Revolutions gives some great historical evidence of this tendency to lock in scientific beliefs in the face of countervailing evidence.)
The most common misunderstanding about science is that scientists seek and find truth. They don’t. By and large they make and test models. Kepler provided a model that described the motion of the planets more accurately than those of his predecessors, and its predictions were quite accurate. But the laws of planetary motion were subsequently improved upon by Newton and later by Einstein.
Building models is very different from proclaiming truths. It is a never ending process of discovery and refinement. Uncertainty is intrinsic to the process of finding out what you don’t know, not a weakness to avoid.
Uncertainty about the world is not only a feature of science; it is also a feature of mathematics.
Twenty years after becoming acquainted with Heisenberg’s famous principle, I happened across a fabulous, imaginative work by the American scholar and polymath Douglas Hofstadter, for which he won the Pulitzer Prize for non-fiction. The book was titled Gödel, Escher, Bach: an Eternal Golden Braid. In it he explored common themes in the lives and works of the logician Kurt Gödel, the artist M. C. Escher and the composer Johann Sebastian Bach.
From this rather esoteric source, I first learnt about Gödel’s Incompleteness Theorem. Hofstadter’s literal translation of Gödel’s mathematically expressed proposition went something like this:
All consistent axiomatic formulations of number theory include undecidable propositions.
Hofstadter explained that, according to Gödel, any consistent formulation of number theory must contain propositions that can be neither proved nor disproved within it. If, on the other hand, we insisted that every proposition be decidable, the theory could not remain consistent. Just as in physics the Uncertainty Principle means we can know the position but not the momentum of a particle, in mathematics we have to choose between completeness and consistency.
Now, when I was younger I always understood that humans would never be able to fully understand the universe. At that time I thought our limitation was one of capacity. That is, how could something that is merely a minuscule part of the universe (a human brain) ever have the capacity to understand the whole shebang!
But the findings I have reported above point to something much deeper. There is something inherent in the fundamental nature of the universe (cosmologist Paul Davies might have attributed it, metaphorically, to The Mind of God) that renders it inscrutable to full human understanding.
We have subsequently come to understand (contrary to conventional wisdom) that uncertainty is a central component of what makes science successful. Being able to quantify uncertainty and incorporate it into scientific models is what makes science quantitative rather than merely qualitative. Indeed, no number, no measurement, no observation in science is exact.
The physicist Lawrence Krauss has written:
Quoting numbers without attaching an uncertainty to them implies that they have, in essence, no meaning.
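Krauss’s point can be made concrete with a short sketch (the readings below are invented for illustration). The conventional uncertainty attached to the mean of repeated independent measurements is the standard error: the sample standard deviation divided by the square root of the number of measurements.

```python
import math

def mean_with_uncertainty(measurements):
    """Return the mean of repeated measurements and its standard error.

    The standard error (sample standard deviation / sqrt(n)) is the
    conventional uncertainty quoted with a mean of independent readings.
    """
    n = len(measurements)
    mean = sum(measurements) / n
    variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return mean, math.sqrt(variance / n)

# Five hypothetical readings of the same length, in millimetres.
readings = [9.8, 10.1, 9.9, 10.2, 10.0]
value, err = mean_with_uncertainty(readings)
print(f"{value:.2f} ± {err:.2f} mm")
```

A result reported as “10.00 ± 0.07 mm” carries Krauss’s meaning; a bare “10” does not.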
The notion of uncertainty is perhaps the least well understood concept in science. Paradoxically, in the public eye, uncertainty is often portrayed as a bad thing implying a lack of rigour and predictability.
In quantum physics uncertainty predominates. Nothing is assumed to be absolutely determinate; instead, outcomes are assigned probabilities.
Making sense of anything means making models that can predict outcomes and accommodate observations. At any level of complexity those models don’t deal with certainties but with probabilities. After some consideration we can’t avoid the conclusion that the world is not determinate but must be described in terms of probabilities, i.e. that it is inherently uncertain.
Alongside our need to understand the world through models come the uncertainties arising from the limitations of the models themselves.
Alfred Korzybski, the founder of the field of knowledge known as general semantics, reminded us of those limitations with his famous dictum:
The map is not the territory.
He reminds us that our perception of reality is not reality itself but our own version of it, or our own “map”.
We know that all such models (“maps”) are limited, and as a result their predictions will always have room for error. In the physical world, the only way to know tomorrow’s weather in detail is to wait twenty-four hours and see. The universe is computing tomorrow’s weather as rapidly and efficiently as possible; any smaller model is inaccurate, and the smallest error can be amplified to large effects.
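The amplification of small errors that makes tomorrow’s weather incomputable can be demonstrated with the logistic map, a standard textbook example of chaos (a toy model, not a weather simulation): two trajectories that start almost identically soon bear no resemblance to one another.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; fully chaotic at r = 4."""
    return r * x * (1 - x)

# Two starting points differing only in the ninth decimal place.
a, b = 0.400000000, 0.400000001
max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The initial discrepancy of one part in 400 million roughly doubles
# at every step, so within a few dozen iterations the two trajectories
# are effectively unrelated.
print(f"largest separation over 50 steps: {max_gap:.3f}")
```

A model of the weather that is off by one part in 400 million is, for this reason, useless a few “steps” out: to shrink the error window you need exponentially better starting data.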
The mathematician Rudy Rucker has written:
It’s a waste of time to chase the pipe dream of a magical tiny theory that allows us to make quick and detailed calculations about the future. We can’t predict and we can’t control. To accept this can be a source of liberation and inner peace. We’re part of the unfolding world, surfing the chaotic waves.
But human beings crave certainty. Our brains revolt at the idea of randomness. We have evolved as a species to become exquisite pattern finders. Our minds automatically try to place data in a framework that allows us to make sense of our observations and use them to predict and understand events.
We resort to various subterfuges to help us explain unexplainable events. Popular sentiment would have us believe that we get “runs of luck”, or that bad things happen in threes. Others turn to Astrology to help explain what are normal, random events.
But many events are not fully predictable or explicable. Paradoxically, when we aggregate large numbers of such events, we unearth a surprising predictability. Charles Seife, science journalist and author of Proofiness: The Dark Arts of Mathematical Deception, writes:
The law of large numbers is a mathematical theorem that dictates that repeated independent random events converge with pinpoint accuracy upon a predictable average behaviour. Another powerful mathematical tool, the central limit theorem, tells you exactly how far off that average a given collection of events is likely to be. With these tools, no matter how chaotic, how strange a random behaviour might be in the short run, we can turn that behaviour into stable, accurate predictions in the long run.
The rules of randomness are so powerful that they have given physics some of its most sacrosanct and immutable laws. Though the atoms in a container full of gas are moving at random, their collective behaviour is described by a simple set of deterministic equations. Even the laws of thermodynamics derive their power from the predictability of large numbers of random events. They are indisputable only because the rules of randomness are so absolute.
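Seife’s claim is easy to verify by simulation (a sketch using Python’s standard library): averages of fair coin flips home in on 0.5 as the number of flips grows, and the typical spread around 0.5 shrinks like 1/√n, just as the central limit theorem says.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def average_of_flips(n):
    """Average of n fair coin flips (heads = 1, tails = 0)."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# Law of large numbers: the average converges on 0.5 as n grows.
# Central limit theorem: the typical deviation from 0.5 shrinks
# like 0.5 / sqrt(n), so each thousandfold increase in n buys
# roughly thirtyfold more precision.
for n in (100, 10_000, 1_000_000):
    print(n, average_of_flips(n))
```

No individual flip is predictable, yet the million-flip average is reliable to three decimal places: randomness in the small, regularity in the large.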
So what is the overall lesson here? The universe is underlaid with uncertainty. As a result, even those scientific theories that provide the best maps of the world for us can never be definitively proven. They may, however, be disproven. Probability theory gives us some underlying support in interpreting this indefinite world. But in essence it would appear that the laws underpinning both physics and mathematics have conspired to create a world that will always be beyond our complete understanding.