SUBHEAD: Americans who can’t afford health care or heating fuel in the winter still have cell phones and internet access.
The attempt to conquer nature—in less metaphorical terms, to render the nonhuman world completely transparent to the human intellect and just as completely subject to the human will—was industrial civilization’s defining project. It’s hard to think of any aspect of culture in the modern industrial West that hasn’t been subordinated to the conquest of nature, and the imminent failure of that project thus marks a watershed in our cultural life as well as our history.
The overwhelming power that science and technology gave to the civil religion of progress, though, was made possible by the fantastic energy surplus provided by cheap and highly concentrated fossil fuels. That’s the unmentioned reality behind all that pompous drivel about humanity’s dominion over nature: we figured out how to break into planetary reserves of fossil sunlight laid down over half a billion years of geological time, burnt through most of it in three centuries of thoughtless extravagance, and credited the resulting boom to our own supposed greatness.
Lacking that treasure of concentrated energy, which humanity did nothing to create, the dream of conquering nature might never have gotten traction at all; as the modern western world’s age of reason dawned, there were other ideologies and nascent civil religions in the running to replace Christianity, and it was only the immense economic and military payoffs made possible by a fossil-fueled industrial revolution that allowed the civil religion of progress to elbow aside the competition and rise to its present dominance.
As fossil fuel reserves deplete at an ever more rapid pace, and have to be replaced by more costly and less abundant substitutes, the most basic precondition for faith in progress is going away. These days, ongoing development in a handful of fields has to be balanced against stagnation in most others and, more crucially still, against an accelerating curve of economic decline that is making the products of science and technology increasingly inaccessible to those outside the narrowing circle of the well-to-do.
It’s indicative that while the media babbles about the latest strides in space tourism for the very rich, rural counties across the United States are letting their roads revert to gravel because the price of asphalt has soared so high that the funds to pay for paving simply aren’t there any more.
In that contrast, the shape of our future comes into sight. As the torrents of cheap energy that powered industrial society’s heyday slow to a trickle, the arrangements that once put the products of science and technology in ordinary households are coming apart.
That’s not a fast process, or a straightforward one; different technologies are being affected at different rates, so that (for example) plenty of Americans who can’t afford health care or heating fuel in the winter still have cell phones and internet access. Still, as the struggle to maintain fossil fuel production consumes a growing fraction of the industrial world’s resources and capital, more and more of what used to count as a normal lifestyle is becoming less and less accessible to more and more people.
In the process, the collective consensus that once directed prestige and funds to scientific research is slowly trickling away.
That will almost certainly mean the end of institutional science as it presently exists. It need not mean the end of science, and a weighty volume published to much fanfare and even more incomprehension a little more than a decade ago may just point to a way ahead.
I’m not sure how many of my readers were paying attention when archetypal computer geek Stephen Wolfram published his 1,264-page opus A New Kind of Science back in 2002. In the 1980s, Wolfram published a series of papers about the behavior of cellular automata—computer programs that produce visual patterns based on a set of very simple rules.
Then the papers stopped appearing, but rumors spread through odd corners of the computer science world that he was working on some vast project along the same lines.
The rumors proved to be true; the vast project, the book just named, appeared on bookstore shelves all over the country; reviews covered the entire spectrum from rapturous praise to condemnation, though most of them also gave the distinct impression that their authors really didn’t quite understand what Wolfram was talking about.
Shortly thereafter, the entire affair was elbowed out of the headlines by something else, and Wolfram’s book sank back out of public view—though I understand that it’s still much read in those rarefied academic circles in which cellular automata are objects of high importance.
Wolfram’s book, though, was not aimed at rarefied academic circles. It was trying to communicate a discovery that, so Wolfram believed, has the potential to revolutionize a great many fields of science, philosophy, and culture.
Whether he was right is a complex issue—I tend to think he’s on to something of huge importance, for reasons I’ll explain in a bit—but it’s actually less important than the method that he used to get there. With a clarity unfortunately rare in the sciences these days, he spelled out the key to his method early on in his book:
In our everyday experience with computers, the programs that we encounter are normally set up to perform very definite tasks. But the key idea I had nearly twenty years ago—and that eventually led to the whole new kind of science in this book—was to ask what happens if one instead just looks at simple arbitrarily chosen programs, created without any specific task in mind. How do such programs typically behave? (Wolfram 2002, p. 23)
Notice the distinction here. Ordinarily, computer programs are designed to obey some human desire, whether that desire involves editing a document, sending an email, viewing pictures of people with their clothes off, snooping on people who are viewing pictures of people with their clothes off, or what have you.
That’s the heritage of science as a quest for power over nature: like all other machines, computers are there to do what human beings tell them to do, and so computer science tends to focus on finding ways to make computers do more things that human beings want them to do.
That same logic pervades many fields of contemporary science. The central role of experiment in scientific practice reinforces it, by directing attention away from what whole systems do when they’re left alone and toward what they do when experimenters tinker with them.
Too often, the result is that scientists end up studying the effects of their own manipulations to the exclusion of anything else. Consider Skinnerian behaviorism, an immensely detailed theory that can successfully predict the behavior of rats in the wholly arbitrary setting of a Skinner box and essentially nothing else!
The alternative is to observe whole systems on their own terms—to study what they do, not in response to a controlled experimental stimulus, but in response to the normal interplay between their internal dynamics and the environment around them. That’s what Wolfram did. He ran cellular automata, not to try to make them do this thing or that, but to understand the internal logic that determines what they do when left to themselves.
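For readers who want to see the procedure for themselves, it’s easy to reproduce. The sketch below is mine, not Wolfram’s; his own work was done in Mathematica, while this is a minimal Python rendering of one of the elementary one-dimensional cellular automata featured in his book, rule 30, started from a single live cell and run with no goal in mind beyond watching what it does:

```python
# A minimal sketch of Wolfram's procedure: take a simple, arbitrarily
# chosen program and just watch what it does. This is an elementary
# one-dimensional cellular automaton; rule 30 is one of the examples
# featured in A New Kind of Science.

RULE = 30          # any value 0-255 picks a different rule
WIDTH = 79         # number of cells in the row
STEPS = 40         # number of generations to display

# Decode the rule number into a lookup table: each three-cell
# neighborhood (left, center, right) maps to the new center cell.
table = {
    (l, c, r): (RULE >> (l * 4 + c * 2 + r)) & 1
    for l in (0, 1) for c in (0, 1) for r in (0, 1)
}

# Start from a single live cell in the middle of the row.
row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    # Each cell's next state depends only on itself and its two
    # neighbors; the edges wrap around.
    row = [
        table[(row[i - 1], row[i], row[(i + 1) % WIDTH])]
        for i in range(WIDTH)
    ]
```

A rule that fits in a single line of arithmetic unfolds into a cascade of nested triangles, orderly along one edge and so irregular toward the middle that the center column passes standard statistical tests of randomness; nothing in the rule itself gives any hint of that.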
What he discovered, to summarize well over a thousand pages of text in a brief phrase, is that cellular automata with extremely simple operating rules are capable of generating patterns as complex, richly textured, and blended of apparent order and apparent randomness, as the world of nature itself. Wolfram explains the relevance of that discovery:
Three centuries ago science was transformed by the dramatic new idea that rules based on mathematical equations could be used to describe the natural world. My purpose in this book is to initiate another such transformation, and to introduce a new kind of science that is based on the much more general types of rules that can be embodied in simple computer programs. (Wolfram 2002, p. 1)
One crucial point here, to my mind, is the recognition that mathematical equations in science are simply models used to approximate natural processes. There’s been an enormous amount of confusion around that point, going all the way back to the ancient Pythagoreans, whose discoveries of the mathematical structures within musical tones, the movement of the planets, and the like led them to postulate that numbers constituted the arche, the enduring reality of which the changing world of our experience is but a transitory reflection.
This confusion between the model and the thing modeled, between the symbol and the symbolized, is pandemic in modern thinking. Consider all the handwaving around the way that light seems to behave like a particle when subjected to one set of experiments, and like a wave when put through a different set. Plenty of people who should know better treat this as a paradox, when it’s nothing of the kind.
Light isn’t a wave or a particle, any more than the elephant investigated by the blind men in the famous story is a wall, a pillar, a rope, or what have you; “particle” and “wave” are models derived from human sensory experience that we apply to fit our minds around some aspects of the way that light behaves, and that’s all they are. They’re useful, in other words, rather than true.
Thus mathematical equations provide one set of models that can be used to fit our minds around some of the ways the universe behaves. Wolfram’s discovery is that another set of models can be derived from very simple rule-based processes of the kind that make cellular automata work.
This additional set of models makes sense of features of the universe that mathematical models don’t handle well—for example, the generation of complexity from very simple initial rules and conditions. The effectiveness of Wolfram’s models doesn’t show that the universe is composed of cellular automata, any more than the effectiveness of mathematical models shows that the Pythagoreans were right and the cosmos is actually made out of numbers.
Rather, cellular automata and mathematical equations relate to nature the way that particles and waves relate to light: two sets of mental models that allow the brains of some far from omniscient social primates to make sense of the behavior of different aspects of a phenomenon complex enough to transcend all models.
It requires an unfashionable degree of intellectual modesty to accept that the map is not the territory, that the scientific model is merely a representation of some aspects of the reality it tries to describe.
It takes even more of the same unpopular quality to back off a bit from trying to understand nature by trying to force it to jump through hoops, in the manner of too much contemporary experimentation, and turn more attention instead to the systematic observation of what whole systems do on their own terms, in their own normal environments, along the lines of Wolfram’s work.
Still, I’d like to suggest that both those steps are crucial to any attempt to keep science going as a living tradition in a future when the attempt to conquer nature will have ended in nature’s unconditional victory.
A huge proportion of the failures of our age, after all, unfold precisely from the inability of most modern thinkers to pay attention to what actually happens when that conflicts with cherished fantasies of human entitlement and importance.
It’s because so much modern economic thought fixates on what people would like to believe about money and the exchange of wealth, rather than paying attention to what happens in the real world that includes these things, that predictions by economists generally amount to bad jokes at society’s expense; it’s because next to nobody thinks through the implications of the laws of thermodynamics, the power laws that apply to fossil fuel deposits, and the energy cost of extracting energy from any source, that so much meretricious twaddle about “limitless new energy resources” gets splashed around so freely by people who ought to know better.
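To see why the energy cost of extracting energy matters so much, it’s worth running the arithmetic. The sketch below uses made-up round numbers, not data about any actual energy source; the point is the shape of the curve, not the particular figures:

```python
# Back-of-the-envelope arithmetic on net energy. EROEI is energy
# returned on energy invested; what society actually gets to use is
# the energy returned minus the energy that had to be spent to get
# it. The EROEI values below are illustrative, not measurements.

def net_fraction(eroei):
    """Fraction of gross energy output left over after paying
    the energy cost of production itself."""
    return 1.0 - 1.0 / eroei

for eroei in (100, 30, 10, 5, 2, 1.2):
    print(f"EROEI {eroei:>5}: {net_fraction(eroei):6.1%} of gross output is net")
```

Between an EROEI of 100 and one of 10, society scarcely notices the difference; somewhere below 5, the net yield falls off a cliff, and at 1.2 more than four-fifths of every unit produced goes right back into producing it. That nonlinearity is precisely what the rhetoric of limitless new energy resources leaves out of the equation.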