SUBHEAD: It's vital for humans to understand the story in which we play our small but significant part.
By John Michael Greer on 21 September 2017 in Resilience -
(http://www.resilience.org/stories/2017-09-21/terror-deep-time/)
Image above: The Andromeda galaxy behind a silhouette of mountains. From original article.
Back in the 1950s, sociologist C. Wright Mills wrote cogently about what he called “crackpot realism”—the use of rational, scientific, utilitarian means to pursue irrational, unscientific, or floridly delusional goals. It was a massive feature of American life in Mills’ time, and if anything, it’s become more common since then.
Since it plays a central role in the corner of contemporary culture I want to discuss this week, I want to put a few moments into exploring where crackpot realism comes from, and how it wriggles its way into the apple barrel of modern life and rots the apples from skin to core.
Let’s start with the concept of the division of labor.
One of the great distinctions between a modern industrial society and other modes of human social organization is that in the former, very few activities are carried through from beginning to end by the same person.
A woman in a hunter-gatherer community, as she is getting ready for the autumn tuber-digging season, chooses a piece of wood, cuts it, shapes it into a digging stick, carefully hardens the business end in hot coals, and then puts it to work getting tubers out of the ground.
Once she carries the tubers back to camp, what’s more, she’s far more likely than not to take part in cleaning them, roasting them, and sharing them out to the members of the band.
A woman in a modern industrial society who wants to have potatoes for dinner, by contrast, may do no more of the total labor involved in that process than sticking a package in the microwave.
Even if she has potatoes growing in a container garden out back, say, and serves up potatoes she grew, harvested, and cooked herself, odds are she didn’t make the gardening tools, the cookware, or the stove she uses.
That’s division of labor: the social process by which most members of an industrial society specialize in one or another narrow economic niche, and use the money they earn from their work in that niche to buy the products of other economic niches.
Let’s say it up front: there are huge advantages to the division of labor. It’s more efficient in almost every sense, whether you’re measuring efficiency in terms of output per person per hour, skill level per dollar invested in education, or what have you.
What’s more, when it’s combined with a social structure that isn’t too rigidly deterministic, it’s at least possible for people to find their way to occupational specialties for which they’re actually suited, and in which they will be more productive than otherwise.
Yet it bears recalling that every good thing has its downsides, especially when it’s pushed to extremes, and the division of labor is no exception.
Crackpot realism is one of the downsides of the division of labor. It emerges reliably whenever two conditions are in effect.
The first condition is that the task of choosing goals for an activity is assigned to one group of people and the task of finding means to achieve those goals is left to a different group of people.
The second condition is that the first group stands enough higher in social status than the second that its members need pay no attention to the concerns of the second group.
Consider, as an example, the plight of a team of engineers tasked with designing a flying car. People have been trying to do this for more than a century now, and the results are in: it’s a really dumb idea.
It so happens that a great many of the engineering features that make a good car make a bad aircraft, and vice versa; for instance, an auto engine needs to be optimized for torque rather than speed, while an aircraft engine needs to be optimized for speed rather than torque.
Thus every flying car ever built—and there have been plenty of them—performed just as poorly as a car as it did as a plane, and cost so much that for the same price you could buy a good car, a good airplane, and enough fuel to keep both of them running for a good long time.
Engineers know this.
Still, if you’re an engineer and you’ve been hired by some clueless tech-industry godzillionaire who wants a flying car, you probably don’t have the option of telling your employer the truth about his pet project—that is, that no matter how much of his money he plows into the project, he’s going to get a clunker of a vehicle that won’t be any good at either of its two incompatible roles—because he’ll simply fire you and hire someone who will tell him what he wants to hear.
Nor do you have the option of sitting him down and getting him to face what’s behind his own unexamined desires and expectations, so that he might notice that his fixation on having a flying car is an emotionally charged hangover from age eight, when he daydreamed about having one to help him cope with the miserable, bully-ridden public school system in which he was trapped for so many wretched years.
So you devote your working hours to finding the most rational, scientific, and utilitarian means to accomplish a pointless, useless, and self-defeating end. That’s crackpot realism.
You can make a great party game out of identifying crackpot realism—try it sometime—but I’ll leave that to my more enterprising readers.
What I want to talk about right now is one of the most glaring examples of crackpot realism in contemporary industrial society. Yes, we’re going to talk about space travel again.
No question, a fantastic amount of scientific, technological, and engineering brilliance went into the quest to insert a handful of human beings for a little while into the lethal environment of deep space and bring them back alive.
Visit one of the handful of places on the planet where you can get a sense of the sheer scale of a Saturn V rocket, and the raw immensity of the effort that put a small number of human bootprints on the Moon is hard to miss. What’s much easier to miss is the whopping irrationality of the project itself.
(I probably need to insert a parenthetical note here. Every time I blog about the space program, I can count on fielding at least one comment from some troll who insists that the Moon landings never happened.
It so happens that I’ve known quite a few people who worked on the Apollo project; some of them have told me their stories and shown me memorabilia from what was one of the proudest times of their lives; and given a choice between believing them, and believing some troll who uses a pseudonym to hide his identity but can’t hide his ignorance of basic historical and scientific facts, well, let’s just say the troll isn’t going to come in first place. Nor is his comment going to go anywhere but the trash. ‘Nuf said.)
Outer space simply isn’t an environment where human beings can survive for long.
It’s a near-perfect vacuum at a temperature a few degrees above absolute zero; it’s full of hard radiation streaming out from the huge unshielded fusion reactor at the center of our solar system; it’s also got chunks of rock, lots of them, whizzing through it at better than rifle-bullet speeds; and the human body is the product of two billion years of evolutionary adaptation to environments that have the gravity, atmospheric pressure, temperature ranges, and other features that are found on the Earth’s surface and, as far as we know, nowhere else in the universe.
A simple thought experiment will show how irrational the dream of human expansion into space really is.
Consider the harshest natural environments on this planet—the stark summits of the Himalayas; the middle of the East Antarctic ice sheet in winter; the bleak Takla Makan desert of central Asia, the place caravans go to die; the bottom of the Marianas Trench, where the water pressure will reduce a human body to paste in seconds.
Nowhere in the solar system, or on any of the exoplanets yet discovered by astronomers, is there a place that’s even as well suited to human life as the places I’ve just named.
Logically speaking, before we try to settle the distant, airless, radiation-blasted deserts of Mars or the Moon, wouldn’t it make sense first to build cities on the Antarctic ice or in the lightless depths of the ocean?
With one exception, in fact, every one of the arguments that has been trotted out to try to justify the settlement of Mars can be applied with even more force to the project of settling Antarctica.
In both cases, you’ve got a great deal of empty real estate amply stocked with mineral wealth, right? Antarctica, though, has a much more comfortable climate than Mars, not to mention abundant supplies of water and a breathable atmosphere, both of which Mars lacks.
Furthermore, it costs a lot less to get your colonists to Antarctica, they won’t face lethal irradiation on the way there, and there’s at least a chance that you can rescue them if things go very wrong.
If in fact it made any kind of sense to settle Mars, the case for settling Antarctica would be far stronger.
So where are the grand plans, lavishly funded by clueless tech-industry godzillionaires, to settle Antarctica? Their absence shows the one hard fact about settling outer space that next to nobody is willing to think about: it simply doesn’t make sense.
The immense financial and emotional investments we’ve made in the notion of settling human beings on other planets or in outer space itself would be Exhibit A in a museum of crackpot realism.
This is where the one exception I mentioned above comes in—the one argument for settling Mars that can’t also be made for settling Antarctica: the claim that a Martian colony is an insurance policy for our species.
If something goes really wrong on Earth, the claim goes, and human beings die out here, having a settlement on Mars gives our species a shot at survival.
Inevitably, given the present tenor of popular culture, you can expect to hear this sort of logic backed up by embarrassingly bad arguments. I’m thinking, for example, of a rant by science promoter Neil deGrasse Tyson, who likes to insist that dinosaurs are extinct today because they didn’t have a space program.
We’ll assume charitably that Tyson spent long nights stargazing in his teen years, and so tended to doze off during his high school biology classes; no doubt that’s why he missed three very obvious facts about dinosaurs.
The first is that they were the dominant life forms on land for well over a hundred million years, which is a good bit longer than our species shows any likelihood of being able to hang on. The second is that the vast majority of dinosaur species went extinct for ordinary reasons—there were only a very modest number of dinosaur species around when the Chicxulub meteorite came screaming down out of space to end the Cretaceous Period. The third is that dinosaurs aren’t extinct—we call them birds nowadays, and in terms of number of species, rates of speciation, and other standard measures of evolutionary vigor, they’re doing quite a bit better than mammals just now.
Set aside the bad logic and the sloppy paleontology, though, and the argument just named casts a ruthlessly clear light on certain otherwise obscure factors in our contemporary mindset.
The notion that space travel gets its value as a way to avoid human extinction goes back a long ways. I recall a book by Italian journalist Oriana Fallaci, compiling her interviews with leading figures in the space program during its glory days; she titled it If the Sun Dies, after a passionate comment along these lines by one of her interviewees.
Behind this, in turn, lies one of the profound and usually unmentioned fears that shapes the modern mind: the terror of deep time.
There’s a profound irony in the fact that the geologists who first began to figure out the true age of the Earth lived in western Europe in the early nineteenth century, when most people believed that the world was only some six thousand years old.
There have been plenty of cultures in recorded history that had a vision of time expansive enough to fit the facts of geological history, but the cultures of western Europe and its diaspora in the Americas and Australasia were not among them.
Wedded to literalist interpretations of the Book of Genesis, and more broadly to a set of beliefs that assigned unique importance to human beings, the people who faced the first dim adumbrations of the vastness of Earth’s long history were utterly unprepared for the shock, and even less ready to have the first unnerving guesses that the Earth might be millions of years old replaced by increasingly precise measurements that gave its age in the billions of years, and that of the universe at nearly fourteen billion.
The brutal nature of the shock that resulted shouldn’t be underestimated.
A society that had come to think of humanity as creation’s darlings, dwelling in a universe with a human timescale, found itself slammed facefirst into an unwanted encounter with the vast immensities of past and future time. That encounter had a great many awkward moments.
The self-defeating fixation of evangelical Christians on young-Earth creationism can be seen in part as an attempt to back away from the unwelcome vista of deep time; so is the insistence, as common outside Christian churches as within them, that the world really will end sometime very soon and spare us the stress of having to deal with the immensity of the future.
For that matter, I’m not sure how many of my readers know how stunningly unwelcome the concept of extinction was when it was first proposed: if the universe was created for the benefit of human beings, as a great many people seriously argued in those days, how could there have been so many thousands of species that lived and died long ages before the first human being walked the planet?
Worse, the suspicion began to spread that the future waiting for humanity might not be an endless progression toward bigger and better things, as believers in progress insisted, or the end of the world followed by an eternity of bliss for the winning team, as believers in Christianity insisted, but extinction: the same fate as all those vanished species whose bones kept surfacing in geological deposits.
It’s in the nineteenth century that the first stories of human extinction appear on the far end of late Romanticism, just as the same era saw the first tales that imagined the history of modern civilization ending in decline and fall.
People read The Last Man and After London for the same rush of fascinated horror that they got from Frankenstein and Dracula, and with the same comfortable disbelief once the last page turned—but the same scientific advances that made the two latter books increasingly less believable made tales of humanity’s twilight increasingly more so.
It became fashionable in many circles to dismiss such ideas as mere misanthropy, and that charge still gets flung at anyone who questions current notions of humanity’s supposed future in space. Curiously, just now I tend to field such comments from science fiction writers more than from anyone else.
A few years ago, when I sketched out a fictive history of the next ten billion years that included human extinction millions of years from now, SF writer David Brin took time out of his busy schedule to denounce it as “an infuriating paean to despair.” Last month’s post on the worlds that never were, similarly, fielded a spluttering denunciation by S.M. Stirling.
It was mostly a forgettable rehash of the standard arguments for an interstellar future—arguments, by the way, that could be used equally well to justify continued faith in perpetual motion—but the point I want to raise here is that Stirling’s sole reaction to Aurora, Kim Stanley Robinson’s brilliant fictional critique of the interstellar-travel mythos, was to claim dismissively that Robinson must have suffered an attack of misanthropy.
Some of my readers may remember Veruca Salt, the archetypal spoiled brat in Willy Wonka and the Chocolate Factory.
When her father didn’t give her whatever she happened to want, her typical response was to shriek, “You don’t love me!” I think of that whenever somebody trots out the accusation of misanthropy in response to any realistic treatment of the limits that will shape the human future.
It’s not misanthropy to point out that humanity isn’t going to outlast the sun or leap breezily from star to star; it’s simple realism, just as reminding someone that they will inevitably die is an expression not of hatred but of common sense.
You, dear reader, will die someday. So will I, and so will every other human being.
That fact doesn’t make our lives meaningless; quite the contrary, it’s when we come to grips with the fact of our own mortality that we have our best shot at achieving not only basic maturity, but that condition of reflective attention to meaning that goes by the name of wisdom.
In exactly the same way, recognizing that humanity will not last forever—that the same Earth that existed and flourished long before our species came on the scene will exist and flourish long after our species is gone—might just provide enough of a boost of wisdom to help us back away from at least some of the more obviously pigheaded ways we’re damaging the biosphere of the only planet on which we can actually live.
There’s something else to be found in the acceptance of our collective mortality, though, and I’m considering exploring it in detail over the months ahead.
Grasp the fact that our species is a temporary yet integral part of the whole system we call the biosphere of the Earth, and it becomes a good deal easier to see that we are part of a story that didn’t begin with us, won’t end with us, and doesn’t happen to assign us an overwhelmingly important role.
Traumatic though this may be for the Veruca Saltish end of humanity, with its distinctly overinflated sense of importance, there’s much to be gained by ditching the tantrums, coming to terms with our decidedly modest place in the cosmos, and coming to understand the story in which we play our small but significant part.