
The Future is a Foreign Country

SUBHEAD: A mobile phone signal in Somalia does not equal a sustainable technological economy.

By John Michael Greer on 30 October 2013 for the Archdruid Report -
(http://thearchdruidreport.blogspot.com/2013/10/the-future-is-foreign-country.html)

[IB Publisher's note: As an architect I am familiar with the term "vomitorium" as the entrance/exit passageway in an arena. They vomit people. I cannot find historical references to them as rooms in which the overindulgent relieved themselves. This point was picked up in comments on the original post and acknowledged by the author subsequently.]


Image above: In Mogadishu, Somalia, the mobile phone business is booming amongst poverty. From (http://www.somalinet.com/forums/viewtopic.php?f=18&t=229630).

I’m pleased to note that the conversation about ephemeralization and catabolic collapse launched a few weeks back by futurist Kevin Carson, and continued in several blogs since then, has taken a promising new turn. The vehicle for that sudden swerve was an essay by Lakis Polycarpou, titled Catabolic Ephemeralization: Carson versus Greer, which set out to find common ground between Carson’s standpoint and mine. In the process, to his credit, Polycarpou touched on a crucial point that has been too often neglected in recent discussions about the future.

That’s not to say that his take on the future couldn’t use some serious second thoughts. I noted in my original response to Carson’s post the nearly visceral inability to think in terms of whole systems that pervades today’s geek culture, and that curious blindness is well represented in Polycarpou’s essay.

He argues, for example, that since a small part of Somalia has cell phone service, and cell phone service is more widely available today than grid electricity or clean drinking water, cutting-edge technology ought to be viable in a postpetroleum world. “If Greer is right that modern telecommunications is full of hidden embodied energy and capital costs,” he wonders aloud, “how is this possible?”

As it happens, that’s an easy question to answer. Somalia, even in its present turbulent condition, is part of a global economy fueled by the recklessly rapid extraction of half a billion years of fossil sunlight and equally unsustainable amounts of other irreplaceable natural resources.

It speaks well of the resourcefulness of the Somalian people that they’ve been able to tap into some of those resource flows, in the teeth of a global economy that’s so heavily tilted against them; that said, it bears remembering that the cell phone towers in Somalia are not being manufactured in Somalian factories from Somalian resources using Somalian energy sources.

A sprawling global industrial network of immensely complex manufacturing facilities and world-spanning supply chains forms the whole system that lies behind those towers, and without that network or some equivalent capable of mobilizing equivalent resources and maintaining comparable facilities, those towers would not exist.

It’s easy to make dubious generalizations based on cell phone service, mind you, because all that’s being measured by that metric is whether a given group of people are within range of a bit of stray microwave radiation—not whether they have access to cell phones, or whether the infrastructure could handle the traffic if they did. That’s the kind of blindness to whole systems that pervades so much contemporary thinking.

A microwave signal fluttering through the air above an impoverished Somalian neighborhood does not equal a sustainable technological economy; only when you can account for every requirement of the whole system that produces that signal can you begin to talk about whether that system can be preserved in working order through a harsh era of global economic contraction and political turmoil.

Polycarpou dodges this, and several other awkward points of this nature. He insists, for example, that nobody actually knows whether the early 19th century technology needed to lay and operate undersea cables is really less complex than a space program capable of building, orbiting, and operating communications satellites. Since the technologies in question are a matter of public knowledge, and a few minutes of online research is all that’s needed to put them side by side, this is breathtakingly disingenuous.

Still, I’d encourage my readers to keep reading past this bit, and also past the ad hominem handwaving about the energy costs of the internet that follows it. It’s in the last part of Polycarpou’s essay, where he begins to talk about alternatives and the broader shape of the future, that he begins to speak in a language familiar to regular readers of The Archdruid Report.

What he’s suggesting in this final part of his essay, if I’m reading it correctly, is that the infrastructure of the modern industrial world is unsustainable, and will have to be replaced by local production of essential goods and services on a scale that will seem impoverished by modern standards. With this claim I have no disagreements at all, and indeed it’s what I’ve been suggesting here on The Archdruid Report for the last seven and a half years.

The points at issue between my view of the future and Polycarpou’s are what technologies will be best suited to the deindustrial world, and just how much more impoverished things are going to be by the time we finish the transition. These are questions of detail, not of substance.

Furthermore, they’re not questions that can be settled conclusively in advance. Mind you, it’s almost certainly a safe assumption that the kind of computer hardware we use today will no longer be manufactured once today’s industrial infrastructure stops being a paying proposition economically; current integrated-circuit technology requires a suite of extraordinarily complex technologies and a dizzying assortment of raw materials from the far corners of the globe, which will not be available to village-scale workshops dependent on local economies.

The point that too rarely gets noticed is that the kind of information processing technology we have now isn’t necessarily the only way that the same principles can be put to work. I’ve fielded claims here several times that mechanical computers capable of tolerably complex calculations can be made of such simple materials as plywood disks; I have yet to see a working example, but I’m open to the possibility that something of the sort could be done.

Polycarpou comments, along the same lines, that people in a variety of countries these days are setting up parallel internets using rooftop wifi antennas, and he suggests that this is one direction in which a future internet might run, at least in the short term. He’s almost certainly right, provided that those last six words are kept in mind.

It’s vanishingly unlikely that anybody will be able to keep manufacturing the necessary hardware for wifi systems through the twilight years of the industrial age, but while the hardware exists, it will certainly be used, and it might buy enough time for something else, something that can be locally manufactured from local resources, to be invented and deployed. My guess is that it’ll look much more like a ham radio message net than the internet as we currently know it, but that’s a question the future will have to settle.

The same point can be made—and has been made here more than once—about solar photovoltaic technology. Lose track of whole systems and it’s easy to claim, as Polycarpou does, that because solar cells have become less expensive recently, vast acreages of solar photovoltaic cells will surely bail us out of the consequences of fossil fuel depletion.

All you have to do is forget that the drop in PV cell costs has much less to do with the production and resource costs of the technology than with China’s familiar practice of undercutting its competitors to seize control of export markets, and pay no attention at all to the complex and finicky technical basis for modern PV cell manufacture or the sheer scale of the supply chains needed to keep chip plants stocked with raw materials, spare parts, solvents, and all the other requirements of the manufacturing process.

Does this mean that solar PV power is useless? Not at all. Buy and install PV panels now, while Chinese trade policy and an inflated dollar make them cheap, and you’ll still have electricity coming out of them decades from now, when they will be hugely expensive if they can be purchased at all.

Anyone who’s actually lived with a homescale PV system can tell you that the trickle of electricity you can get that way is no substitute for 120 volts of grid power from big central power plants, but once expectations nurtured by the grid get replaced by a less extravagant sense of how electricity ought to be used, that trickle of electricity can be put to many good uses.
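To put that “trickle” in rough perspective, here is a back-of-the-envelope sketch in Python. Every figure in it (panel wattage, peak sun-hours, system losses, grid household consumption) is an illustrative assumption of mine, not a number from the essay:

```python
# Rough energy-budget sketch comparing a small homescale PV array
# with typical grid-supplied household consumption.
# All figures below are assumptions for illustration only.

PANEL_WATTS = 400          # assumed: e.g. four 100 W panels
SUN_HOURS = 4.5            # assumed: average peak-sun-hours per day
SYSTEM_EFFICIENCY = 0.75   # assumed: battery, wiring, and inverter losses

# Daily energy actually delivered by the homescale system, in watt-hours
daily_wh = PANEL_WATTS * SUN_HOURS * SYSTEM_EFFICIENCY

# Assumed average grid-supplied household use, ~29 kWh/day
grid_household_wh = 29_000

print(f"Homescale PV: {daily_wh:.0f} Wh/day")
print(f"Typical grid household: {grid_household_wh} Wh/day")
print(f"The PV trickle covers ~{100 * daily_wh / grid_household_wh:.0f}% of grid-scale habits")
```

Under these assumed figures the panels deliver on the order of a twentieth of what a grid-dependent household burns through, which is the gap between “no substitute for grid power” and “many good uses” once expectations shrink to fit.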

Meanwhile, in the window of opportunity opened up by those solar panels, other ways of producing modest amounts of electricity by way of sunlight, wind, and other renewable sources can be tested and deployed. My guess is that thermoelectric generators heated by parabolic mirrors will turn out to be the wave of the future, keeping the shortwave radios, refrigerators, and closed-loop solar water heaters of the ecotechnic future supplied with power; still, that’s just a guess.

There are many ways to produce modest amounts of direct-current electricity with very simple technologies, and highly useful electrical and electronic equipment can readily be made with locally available materials and hand tools. The result won’t be anything you would expect to see in a high-tech geek future, granted, but it’s equally a far cry from the Middle Ages.

This last detail is the crucial point that Polycarpou grasps at the end of his essay, and his comment is important enough that it deserves quotation in full:

“Putting these and other elements together – hi-tech, distributed communications, distributed energy and manufacturing, local sustainable food systems, appropriate technology and tactical urbanism among others – sets the stage for a future that looks quite a bit different than the present one. One might describe it as a kind of postmodern pastiche that looks neither like the antiquated futurisms we once imagined nor an idyllic return to preindustrial peasant society.”

The future, in other words, is not going to be a linear extrapolation from the present—that’s the source of the “antiquated futurisms” he rightly criticizes—or a simple rehash of the past. The future is a foreign country, and things are different there.

That realization is the specter that haunts contemporary industrial society. For all our civilization’s vaunted openness to change, the only changes most people nowadays are willing to contemplate are those that take us further in the direction we’re already going.

We’ve got fast transportation today, so there has to be something even faster tomorrow—that’s basically the justification Elon Musk gave for the Hyperloop, his own venture into antiquated futurism; we’ve got the internet today, so we’ve got to have some kind of uber-internet tomorrow.

It’s a peculiar sort of blindness, and one that civilizations of the past don’t seem to have shared; as far as I know, for example, the designers of ancient Roman vomitoriums didn’t insist that their technology was the wave of the future, and claim that future societies would inevitably build bigger and better places to throw up after banquets. (Those of my readers who find this comparison questionable might want to take a closer look at internet content.)

The future is a foreign country, and people do things differently there. It’s hard to think of anything that flies so comprehensively in the face of today’s conventional wisdom, or contradicts so many of the unquestioned assumptions of our time; thus it’s not surprising that Polycarpou, in suggesting it, seems to think that he’s disagreeing with me.

Quite the contrary; there’s a reason why my most popular peak oil book is titled The Ecotechnic Future, rather than The Idyllic Peasant Future or some such twaddle. For that matter, I’m not at all sure that he realizes I would agree with his characterization of the near- to mid-range future as a “postmodern pastiche;” I’d suggest that the distributed communication will likely be much less high-tech than he thinks, and that hand tools and simple machinery will play a much larger role in the distributed manufacturing than 3D printers, but again, those are matters of detail.

It’s in the longer run, I suspect, that our visions of the future diverge most sharply. Technological pastiche and bricolage, the piecing together of jerry-rigged systems out of scraps of surviving equipment and lore, are common features of ages of decline; it’s as the decline nears bottom that the first steps get taken toward a new synthesis, one that inevitably rejects many of the legacy technologies of the past and begins working on its own distinct projects.

Vomitoriums weren’t the only familiar technology to land in history’s compost heap in the post-Roman dark ages; chariots dropped out of use, too, along with a great many more elements of everyday Roman life.

New values and new ideologies directed collective effort toward goals no Roman would have understood, and the harsh limits on resource availability in the radically relocalized post-Roman world also left their mark.

What often gets forgotten in reviewing the dark ages of the past is that they were not lapses into the past but gropings forward into an unknown future.

There was a dark age before the Roman world and a dark age after it; the two had plenty of parallels, some of them remarkably exact, but the technologies were not the same, and Greek and Roman innovations in information processing and storage—classical logic and philosophy, widespread literacy, and the use of parchment as a readily available and reusable writing medium—were preserved and transmitted in various forms, opening up possibilities in the post-Roman dark ages that were absent in the centuries that followed the fall of Mycenae.

In the same way, the deindustrial future ahead of us will not be a rehash of the past, any more than it will be a linear extrapolation of the present. I’ve suggested, for reasons I’ve covered in a good many previous posts here, that we face a Long Descent of one to three centuries followed by a dark age very broadly parallel to the ones that followed Rome, Mycenae, and so many other dead civilizations of the past.

That’s the normal result when catabolic collapse hits a society dependent on nonrenewable resources, but the way the process unfolds is powerfully shaped by contextual and historical factors, and no two passes through that process are identical.

That’s common enough in the universe of human experience. For example, it’s tolerably likely that you, dear reader, will have the experience of growing old, if you haven’t done so already. It’s likely that at least some of your grandparents did exactly the same thing—but the parallel doesn’t mean that growing old will somehow transport you back to their era, much less to their lifestyles.

Nor, I trust, would you be likely to believe somebody who claimed that getting old was by definition a matter of going back in time to your grandparents’ day and trading in your hybrid car for a Model T.

Some dimensions of growing old are hardwired into the experience itself—the wrinkles, the graying hair, and the slow buildup of physical dysfunctions with their inevitable end are among them. Other dimensions are up to you. In the same way, some of what happens when a civilization tips over in decline are reliable consequences of the mechanisms of catabolic collapse, or of the way those mechanisms interact with the ordinary workings of human collective psychology.

The stairstep rhythm of crisis, stabilization, partial recovery, and renewed crisis, the spiral of conflict between centralizing and decentralizing forces, which eventually ends in the latter’s triumph; the rise of warband culture in the no man’s land outside the increasingly baroque and ineffective frontier defenses—you could set your watch by these, if its hands tracked decades, centuries and millennia.

Other aspects of the process of decline and fall are far less predictable. The radical relocalization that’s standard in eras of contraction and collapse means, among other things, that dark ages aren’t evenly distributed in space or time, and the disintegration of large-scale systems means, among other things, that minor twists of fate and individual decisions very often have much more dramatic consequences in dark ages than they do when the settled habits of a mature civilization constrain the impact of single events.

Furthermore, the cultural, historical, and technological legacies of the former civilization always have a massive impact—it’s entirely possible, for example, that the dark age societies of deindustrial America will have such things as radio communication, solar water heaters, offroad bicycles, and ultralight aircraft—and so do the values and belief systems that reliably emerge as a civilization crashes slowly into ruin, and those who witness each stage of the process try to understand the experience and learn the lessons of its fall.

This is why I’ve spent most of the last year exploring the civil religion of progress, the core ideology of contemporary industrial society, and sketching out some of the ways it distorts our view of history and the future. There’s a pleasant irony in the way that Polycarpou ends his essay with the standard ritual invocation of progress, insisting that even though the future will be impoverished by our standards, it will still be better according to some other measure.

That sort of apologetic rhetoric will no doubt see plenty of use in the years ahead: as progress fails to happen on schedule, it’ll be tempting to keep on moving the goalposts so that the failure is a little less visible and the faithful can continue to believe.

Eventually, though, such exercises will be recognized as the charades they are. As the worship of progress loses its grip on the imagination of our age, we’ll see sweeping changes in what people value, what they want to accomplish, and thus inevitably what technologies they’ll find worth preserving or developing.

The court of Charlemagne could certainly have had vomitoriums if anyone had wanted them; the technical ability was there, but the values of the age had shifted away from anything that made vomitoriums make sense. In the same way, even if our descendants have the technical ability to produce something like today’s internet, it’s entirely possible that they’ll find other uses for those technologies, or simply shake their heads, wonder why anybody would ever have wanted something like that, and put resources available to them into some completely different project.

How that unfolds is a matter for the far future, and thus nothing we need to worry about now. As I wind up this sequence of posts, I want to talk instead about the roles that religion is likely to play in the near and middle future as the next round of catabolic collapse begins to bite. We’ll discuss that next week.

See also:
Ea O Ka Aina: Reinventing Square Wheels 10/23/13


What Peak Oil Looks Like

SUBHEAD: With oil above $100 a barrel, world economies are mired in a paper “recovery” worse than most recessions.

By John Michael Greer on 7 December 2011 for the Archdruid Report -
(http://thearchdruidreport.blogspot.com/2011/12/what-peak-oil-looks-like.html)

Image above: The Virgin Mary cradles a couple of gallons of Shell gasoline. Still from video below.

There are times when the unraveling of a civilization stands out in sharp relief, but more often that process makes itself seen only in the sort of scattered facts and figures that take a sharp eye to notice and assemble into a meaningful picture. How often, I wonder, did the prefects of imperial Rome look up from the daily business of mustering legions and collecting tribute to notice the crumbling of the foundations on which their whole society rested?

Nowadays, certainly, that broader vision is hard to find. It’s symptomatic that in the last few weeks I’ve fielded a fair number of emails insisting that the peak oil theory—of course it’s not a theory at all; it’s a hard fact that the extraction of a finite oil supply in the ground will sooner or later reach a peak and begin to decline—has been rendered obsolete by the latest flurry of enthusiastic claims about shale oil and the like.

Enthusiastic claims about the latest hot new oil prospect are hardly new, and indeed they’ve been central to cornucopian rhetoric since M. King Hubbert’s time. A decade ago, it was the Caspian Sea oilfields that were being invoked as supposedly conclusive evidence that a peak in global conventional petroleum production wouldn’t arrive in our lifetimes.

Compare the grand claims made for the Caspian fields back then, and the trickle of production that actually resulted from those fields, and you get a useful reality check on the equally sweeping claims now being made for the Bakken shale, but that’s not a comparison many people want to make just now.
On the other side of the energy spectrum, those who insist that we can power some equivalent of our present industrial system on sun, wind, and other diffuse renewable sources have been equally vocal, and those of us who raise reasonable doubts about that insistence can count on being castigated as “doomers.” It’s probably not accidental that this particular chorus seems to go up in volume with every ethanol refinery or solar panel manufacturer that goes broke and every study showing that the numbers put forth to back some renewable energy scheme simply don’t add up.

It’s no more likely to be accidental that the rhetoric surrounding the latest fashionable fossil fuel play heats up steadily as production at the world’s supergiant fields slides remorselessly down the curve of depletion. The point of such rhetoric, as I suggested in a post a while back, isn’t to deal with the realities of our situation; it’s to pretend that those realities don’t exist, so that the party can go on and the hard choices can be postponed just a little longer.

Thus our civilization has entered what John Kenneth Galbraith called “the twilight of illusion,” the point at which the end of a historical process would be clearly visible if everybody wasn’t so busy finding reasons to look somewhere else.

A decade ago, those few of us who were paying attention to peak oil were pointing out that if the peak of global conventional petroleum production arrived before any meaningful steps were taken, the price of oil would rise to previously unimagined heights, crippling the global economy and pushing political systems across the industrial world into a rising spiral of dysfunction and internal conflict.
With most grades of oil above $100 a barrel, economies around the world mired in a paper “recovery” worse than most recessions, and the United States and European Union both frozen in political stalemates between regional and cultural blocs with radically irreconcilable agendas, that prophecy has turned out to be pretty much square on the money, but you won’t hear many people mention that these days.

The point that has to be grasped just now, it seems to me, is that this is what peak oil looks like. Get past the fantasies of sudden collapse on the one hand, and the fantasies of limitless progress on the other, and what you get is what we’re getting—a long ragged slope of rising energy prices, economic contraction, and political failure, punctuated with a crisis here, a local or regional catastrophe there, a war somewhere else—all against a backdrop of disintegrating infrastructure, declining living standards, decreasing access to health care and similar services, and the like, which of course has been happening here in the United States for some years already.

A detached observer with an Olympian view of the country would be able to watch things unravel, as such an observer could have done up to now, but none of us have been or will be detached observers; at each point on the downward trajectory, those of us who still have jobs will be struggling to hang onto them, those who have lost their jobs will be struggling to stay fed and clothed and housed, and those crises and catastrophes and wars, not to mention the human cost of the broader background of decline, will throw enough smoke in the air to make a clear view of the situation uncommonly difficult to obtain.

Meanwhile those who do have the opportunity to get something approaching a clear view of the situation will by and large have every reason not to say a word about what they see.
Politicians and the talking heads of the media will have nothing to gain from admitting the reality and pace of our national decline, and there will be a certain wry amusement to be had in watching them scramble for reasons to insist that things are actually getting better and a little patience or a change of government will bring good times back again.

There will doubtless be plenty of the sort of overt statistical dishonesty that insists, for example, that people who no longer get unemployment benefits are no longer unemployed—that’s been standard practice in the United States for decades now, you know. It’s standard for governments that can no longer shape the course of events to fixate on appearances, and try to prop up the imagery of the power and prosperity they once had, long after the substance has slipped away.

It’s no longer necessary to speculate, then, about what kind of future the end of the age of cheap abundant energy will bring to the industrial world. That package has already been delivered, and the economic rigor mortis and political gridlock that have tightened its grip on this and so many other countries in the industrial world are, depending on your choice of metaphor, either part of the package or part of the packing material, scattered across the landscape like so much bubble wrap.

Now that the future is here, abstract considerations and daydreaming about might-have-beens need to take a back seat to the quest to understand what’s happening, and work out coping strategies to deal with the Long Descent now that it’s upon us.

Here again, those scattered facts and figures I mentioned back at the beginning of this week’s post are a better guide than any number of comforting assurances, and the facts I have in mind just at the moment were brought into focus by an intriguing essay by ecological economist Herman Daly. In the murky firmament of today’s economics, Daly is one of the few genuinely bright stars.
A former World Bank official as well as a tenured academic, Daly has earned a reputation as one of the very few economic thinkers to challenge the dogma of perpetual growth, arguing forcefully for a steady state economic system as the only kind capable of functioning sustainably on a finite planet.

The essay of his that I cited above, which I understand is scheduled to be published in an expanded form in the journal Ecological Economics, covers quite a bit of ground, but the detail I want to use here as the starting point for an unwelcome glimpse at the constraints bearing down on our future appears in the first few paragraphs.

In his training as an economist, Daly was taught, as most budding economists are still taught today, that inadequate capital is the most common barrier to the development of the so-called "developing" (that is, nonindustrial, and never-going-to-develop) nations. His experience in the World Bank, though, taught him that this was almost universally incorrect. The World Bank had plenty of capital to lend; the problem was a shortage of "bankable projects"—that is, projects that, when funded by a World Bank loan, would produce the returns of ten per cent a year or so that would be needed to pay off the loan and also contribute to the accumulation of capital within the country.

It takes a familiarity with the last half dozen decades of economic literature to grasp just how sharply Daly’s experience flies in the face of the conventional thinking of our time. Theories of economic development by and large assume that every nonindustrial nation will naturally follow the same trajectory of development as today’s industrial nations did in the past, building the factories, hiring the workers, providing the services, and in the process generating the same ample profits that made the industrialization of Britain, America, and other nations a self-sustaining process.
Now of course Britain, America, and other nations that succeeded in industrializing each did so behind a wall of protective tariffs and predatory trade policies that sheltered industries at home against competition, a detail that gets discussed next to nowhere in the literature on development and was ignored in the World Bank’s purblind enthusiasm for free trade. Still, there’s more going on here.

In The Power of the Machine, Alf Hornborg has pointed out trenchantly that the industrial economy is at least as much a means of wealth concentration as it is one of wealth production. In the early days of the Industrial Revolution, when the hundreds of thousands of independent spinners and weavers who had been the backbone of Britain’s textile industry were driven out of business by the mills of the English Midlands, the income that used to be spread among the latter went to a few mill owners and investors instead, with a tiny fraction reserved for the mill workers who tended the new machines at starvation wages.

That same pattern expanded past a continental scale as spinners and weavers across much of the world were forced out of work by Britain’s immense cloth export industry, and money that might have stayed in circulation in countries around the globe went instead into the pockets of English magnates.

Throughout the history of the industrial age, that was the pattern that drove industrialism: from 18th century Britain to post-World War II Japan, a body of wealthy men in a country with a technological edge and ample supplies of cheap labor could build factories, export products, tilt the world’s economy in their favor, and make immense profits. In the language of Daly’s essay, industrial development in such a context was a bankable project, capable of producing much more than ten per cent returns.

What has tended to be misplaced in current thinking about industrial development, though, is that at least two conditions had to be met for that to happen.
The first of them, as already mentioned, is exactly the sort of protective trade policies that the World Bank and the current economic consensus generally are unwilling to contemplate, or even to mention. The second, however, cuts far closer to the heart of our current predicament.

The industrial economy as it evolved from the 18th century onward depended utterly on the ability to replace relatively expensive human labor with cheap fossil fuel energy. The mills of the English Midlands mentioned above were able to destroy the livelihoods of hundreds of thousands of independent spinners and weavers because, all things considered, it was far cheaper to build a spinning jenny or a power loom and fuel it with coal than it was to pay for the skilled craftsmen and craftswomen who did the same work in an earlier day. In economic terms, in other words, industrialism is a system of arbitrage.

Those of my readers who aren’t fluent in economic jargon deserve a quick definition of that last term. Arbitrage is the fine art of profiting off the difference in price between the same good in two or more markets. The carry trade, one of the foundations of the global economic system that came apart at the seams in 2008, was a classic example of arbitrage. In the carry trade, financiers borrowed money in Japan, where they could get it at an interest rate of one or two per cent per year, and then lent it at some higher interest rate elsewhere in the world. The difference between interest paid and interest received was pure profit.

What sets industrialism apart from other arbitrage schemes was that it arbitraged the price difference between different forms of energy. Concentrated heat energy, in the form of burning fossil fuel, was cheap; mechanical energy, in the form of complex movements performed by the hands of spinners and weavers, was expensive.
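The arithmetic of the carry trade can be sketched in a few lines of Python. The principal and rates below are illustrative assumptions chosen to match the essay's description of borrowing at one or two per cent in Japan and lending at a higher rate elsewhere; the essay itself gives no specific figures:

```python
# Minimal sketch of interest-rate arbitrage (the carry trade),
# ignoring currency risk, fees, and everything else that made
# the real version blow up in 2008. Figures are illustrative.

def carry_trade_profit(principal, borrow_rate, lend_rate, years=1):
    """Profit from borrowing cheaply in one market and lending
    dearly in another: interest received minus interest paid."""
    interest_paid = principal * borrow_rate * years
    interest_received = principal * lend_rate * years
    return interest_received - interest_paid

# Assumed example: borrow 10 million at 1% in Japan, lend at 5% elsewhere
profit = carry_trade_profit(10_000_000, 0.01, 0.05)
print(f"Annual arbitrage profit: {profit:,.0f}")  # the 4% spread, pure profit
```

Industrialism, in Greer's framing, ran the same calculation with energy instead of interest rates: cheap concentrated heat on the "borrow" side, expensive skilled human labor on the "lend" side.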
The steam engine and the machines it powered, such as the spinning jenny and power loom, turned concentrated heat into mechanical energy, and opened the door to what must have been the most profitable arbitrage operation of all time. The gargantuan profits yielded by this scheme provided the startup capital for further rounds of industrialization and thus made possible the immense economic transformations of the industrial age.

That arbitrage, however, depended—as all arbitrage schemes do—on the price difference between the markets in question. In the case of industrialism, the difference was always fated to be temporary, because the low price of concentrated heat was purely a function of the existence of vast, unexploited reserves of fossil fuels that could easily be accessed by human beings. For obvious reasons, the most readily accessible reserves were mined or drilled first, and so as time passed, production costs for fossil fuels—not to mention the many other natural materials needed for industrial projects, and thus necessary for the arbitrage operation to continue—went up, slowly at first, and more dramatically in the last decade or so.

I suspect that the shortage of bankable projects in the nonindustrial world that Herman Daly noted was an early symptom of that last process. Since nonindustrial nations in the 1990s were held (where necessary, at gunpoint) to the free trade dogma fashionable just then, the first condition for successful industrialization—a protected domestic market in which new industries could be sheltered from competition—was nowhere to be seen. At the same time, the systemic imbalances between rich and poor countries—themselves partly a function of industrial systems in the rich countries, which pumped wealth out of the poor countries and into corner offices in Wall Street and elsewhere—meant that human labor simply wasn’t that much more expensive than fossil fuel energy.
That was what drove the "globalization" fad of the 1990s, after all: another round of arbitrage, in which huge profits were reaped off the difference between labor costs in industrial and nonindustrial countries. Very few people seem to have noticed that globalization involved a radical reversal of the movement toward greater automation—that is, the use of fossil fuel energy to replace human labor. When the cost of hiring a sweatshop laborer became less than the cost of paying for an equivalent amount of productive capacity in mechanical form, the arbitrage shifted into reverse; only the steep differentials in wage costs between the Third World and the industrial nations, and a vast amount of very cheap transport fuel, made it possible for the arbitrage to continue.

Still, at this point the same lack of bankable projects has come home to roost. A series of lavish Fed money printing operations (the euphemism du jour is "quantitative easing") flooded the banking system in the United States with immense amounts of cheap cash, in an attempt to make up for the equally immense losses the banking system suffered in the aftermath of the 2005-2008 real estate bubble. Pundits insisted, at least at first, that the result would be a flood of new loans to buoy the economy out of its doldrums, but nothing of the kind happened. There are plenty of reasons why it didn’t happen, but a core reason was simply that there aren’t that many business propositions in the industrial world just now that are in a position to earn enough money to pay back loans.

Among the few businesses that do promise a decent return on investment are the ones involved in fossil fuel extraction, and so companies drilling for oil and natural gas in shale deposits—the latest fad in the fossil fuel field—have more capital than they know what to do with. The oil boomtowns in North Dakota and the fracking projects stirring up controversy in various corners of the Northeast are among the results.
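The reversal described above boils down to a single comparison that can flip either way as prices move. The sketch below uses purely hypothetical unit costs to show the two historical directions of the arbitrage:

```python
def cheaper_option(labor_cost_per_unit, machine_cost_per_unit):
    """Which way the arbitrage runs: mechanize when fossil-fueled
    capacity undercuts wages, hire workers when wages undercut it."""
    if machine_cost_per_unit < labor_cost_per_unit:
        return "mechanize"
    return "hire labor"

# Early industrial Britain: coal-fired looms undercut skilled weavers.
print(cheaper_option(labor_cost_per_unit=5.0, machine_cost_per_unit=1.0))  # mechanize
# 1990s globalization: sweatshop wages undercut machines plus fuel.
print(cheaper_option(labor_cost_per_unit=0.5, machine_cost_per_unit=1.0))  # hire labor
```

The essay's argument is that rising fuel costs push the machine side of this comparison up while impoverishment pushes the labor side down, until the profit margin that justified industrialism disappears.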
Elsewhere in the American economy, however, good investments are increasingly scarce. For decades now, profits from the financial industry and speculation have eclipsed profits from the manufacture of goods—before the 2008 crash, it bears remembering, General Motors made far more profit from its financing arm than it did from building cars—and that reshaping of the economy seems to be approaching its logical endpoint, the point at which it’s no longer profitable for the industrial economy to manufacture anything at all.

I have begun to suspect that this will turn out to be one of the most crucial downsides of the arrival of peak oil. If the industrial economy, as I’ve suggested, was basically an arbitrage scheme profiting off the difference in cost between energy from fossil fuels and energy from human laborers, the rising cost of fossil fuels and other inputs needed to run an industrial economy will sooner or later collide with the declining cost of labor in an impoverished and overcrowded society. As we get closer to that point, it seems to me that we may begin to see the entire industrial project unravel, as the profits needed to make industrialism make sense dry up. If that’s the unspoken subtext behind the widening spiral of economic dysfunction that seems to be gripping so much of the industrial world today, then what we’ve seen so far of what peak oil looks like may be a prologue to a series of wrenching economic transformations that will leave few lives untouched.

Video above: Punk band the Newtown Neurotics sing "When the Oil Runs Out" in 1980. From (http://youtu.be/fnsBtUx75-w).

Salvaging Health

SUBHEAD: A deindustrial future will have to evolve ways that combine the best of mainstream and alternative healing.  

 By John Michael Greer on 10 August 2011 for the Archdruid Report - (http://thearchdruidreport.blogspot.com/2011/08/salvaging-health.html)
 
Image above: Medicine as practiced in ancient Egypt. From (http://dodd.cmcvellore.ac.in/hom/01%20-%20Medicine%20in%20Ancient%20Egypt.html).

The old chestnut about living in interesting times may not actually be a Chinese curse, as today’s urban folklore claims, but it certainly comes to mind when glancing back over the smoldering wreckage of the past week. In the wake of a political crisis here in America that left both sides looking more than ever like cranky six-year-olds, a long-overdue downgrade of America’s unpayable debt, and yet another round of fiscal crisis in the Eurozone, stock and commodity markets around the globe roared into a power dive from which, as I write this, they show no sign of recovering any time soon.

In England, meanwhile, one of those incidents Americans learned to dread in the long hot summers of the Sixties—a traffic stop in a poor minority neighborhood, a black man shot dead by police under dubious circumstances—has triggered four nights of looting and rioting, as mobs in London and elsewhere organized via text messages and social media, brushed aside an ineffectual police presence, plundered shops and torched police stations, and ripped gaping holes in their nation’s already shredding social fabric.

It seems that “Tottenham” is how the English pronounce “Watts,” except that the fire this time is being spread rather more efficiently with the aid of Blackberries and flashmobs. Government officials denounced the riots as “mindless thuggery,” but it’s considerably more than that. As one looter cited in the media said, “this is my banker’s bonus”—the response of the bottom of the social pyramid, that is, to a culture of nearly limitless corruption further up.

 It bears remembering that the risings earlier this year in Tunisia, Egypt, and elsewhere began with exactly this sort of inchoate explosion of rage against governments that responded to economic crisis by tightening the screws on the poor; it was only when the riots showed the weakness of the existing order that more organized and ambitious movements took shape amid the chaos.

It’s thus not outside the bounds of possibility, if the British government keeps on managing the situation as hamhandedly as it’s done so far, that the much-ballyhooed Arab Spring may be followed by an English Summer—and just possibly thereafter by a European Autumn. One way or another, this is what history looks like as it’s happening.

Those of my readers who have been following along for a year or two, and have made at least a decent fraction of the preparations I’ve suggested, are probably as well prepared for the unfolding mess as anyone is likely to be. Those who have just joined the conversation, or were putting aside preparations for some later date—well, once the rubble stops bouncing and the smoke clears, you’ll have the chance to assess what possibilities are still open and what you have the resources to accomplish.

In the meantime, I want to continue the sequence of posts already under way, and discuss another of the things that’s going to have to be salvaged as the current system grinds awkwardly to a halt. The theme of this week’s discussion, I’m sorry to say, is another issue split down the middle by the nearly Gnostic dualisms that bedevil contemporary American society. Just as Democrats and Republicans denounce each other in incandescent fury, and fundamentalist atheists compete with fundamentalist Christians in some sort of Olympics of ideological intolerance, the issues surrounding health care in America these days have morphed unhelpfully into a bitter opposition between the partisans of mainstream medicine and the proponents of alternative healing.

The radicals on both sides dismiss the other side as a bunch of murderous quacks, while even those with more moderate views tend to regard the other end of the spectrum through a haze of suspicion tinged with bad experiences and limited knowledge. I stay out of such debates as often as I can, but this one hasn’t given me that choice. Ironically, that’s because I’ve experienced both sides of the issue. On the one hand, I’m alive today because of modern medicine. At the age of seven, I came down with a serious case of scarlet fever.

That’s a disease that used to kill children quite regularly, and in a premodern setting, it almost certainly would have killed me. As it was, I spent two weeks flat on my back, and pulled through mostly because of horse doctor’s doses of penicillin, administered first with syringes that to my seven-year-old eyes looked better suited for young elephants, and thereafter in oral form, made palatable with an imitation banana flavoring I can still call instantly to mind. Then there’s the other side of the balance.

My wife has lifelong birth defects in her legs and feet, because her mother’s obstetrician prescribed a drug that was contraindicated for pregnant women because it causes abnormalities in fetal limb development.

My only child died at birth because my wife’s obstetrician did exactly the same thing, this time with a drug that was well known to cause fatal lung abnormalities. Several years later we found out by way of a media exposé that the latter doctor had done the same thing to quite a few other women, leaving a string of dead babies in his wake. The response of the medical board, once the media exposure forced them to do something, was quite standard; they administered a mild reprimand. If this reminds you of the Vatican’s handling of pedophile priests, well, let’s just say the comparison has occurred to me as well. Deaths directly caused by American health care are appallingly common.

A widely cited 2000 study by public health specialist Dr. Barbara Starfield presented evidence that bad medical care kills more Americans every year than anything but heart disease and cancer, with adverse drug effects and nosocomial (hospital- and clinic-acquired) infections the most common culprits.

 A more comprehensive study prepared outside the medical mainstream, but based entirely on data from peer-reviewed medical journals, argued that the actual rate was much higher—higher, in fact, than any other single cause. That’s part of what makes the controversies over American health care so challenging; mainstream medical care saves a lot of lives in America, but because of the pressures of the profit motive, and the extent to which institutional barriers protect incompetent practitioners and dangerous and ineffective remedies, it also costs a lot of lives as well. Even so, if I could find a competent, affordable general practitioner to give me annual checkups and help me deal with the ordinary health issues middle-aged men tend to encounter, I’d be happy to do so.

The catch here is that little word "affordable." Along with those birth defects, my wife has celiac disease, a couple of food allergies, and a family history with some chronic health problems in it; for that matter, my family history is by no means squeaky clean; we’re both self-employed, and so health insurance would cost us substantially more than our mortgage. That’s money we simply don’t have.

Like a large and growing fraction of Americans, therefore, we’ve turned to alternative medicine for our health care. The more dogmatic end of the mainstream medical industry tends to dismiss all alternative healing methods as ineffective by definition. That’s self-serving nonsense; the core alternative healing modalities, after all, are precisely the methods of health care that were known and practiced in the late 19th century, before today’s chemical and surgical medicine came on the scene, and they embody decades or centuries of careful study of health and illness. There are things that alternative health methods can’t treat as effectively as the current mainstream, of course, but the reverse is also true. Still, behind the rhetoric of the medical industry lies a fact worth noting: alternative medical methods are almost all much less intensive than today’s chemical and surgical medicine.

The best way to grasp the difference is to compare it to other differences between life in the late 19th century and life today—say, the difference between walking and driving a car. Like alternative medicine, walking is much slower, it requires more personal effort, and there are destinations that, realistically speaking, are out of its reach; on the other hand, it has fewer negative side effects, costs a lot less, and dramatically cuts your risk of ending up buttered across the grill of a semi because somebody else made a mistake. Those differences mean that you can’t use alternative medicine the way you use the mainstream kind. If I neglect a winter cold, for example, I tend to end up with bacterial bronchitis.

A physician nowadays can treat that with a simple prescription of antibiotics, and unless the bacterium happens to be resistant—an issue I’ll be discussing in more detail in a bit—that’s all there is to it. If you’re using herbs, on the other hand, handling bacterial bronchitis is a more complex matter. There are very effective herbal treatments, and if you know them, you know exactly what you’re getting and what the effects will be.

 On the other hand, you can’t simply pop a pill and go on with your day; you have to combine the herbal infusions with rest and steam inhalation, and pay attention to your symptoms so you can treat for fever or other complications if they arise. You very quickly learn, also, that if you don’t want the bronchitis at all, you can’t simply ignore the first signs of an oncoming cold; you have to notice it and treat it.

Here’s another example. I practice t’ai chi, and one of the reasons is that it’s been documented via controlled studies to be effective preventive medicine for many of the chronic health problems Americans tend to get as they get old. You can treat those same problems with drugs, to be sure, if you’re willing to risk the side effects, but again, you can’t just pop a t’ai chi pill and plop yourself back down on the sofa. You’ve got to put in at least fifteen minutes of practice a day, every day, to get any serious health benefits out of it. (I do more like forty-five minutes a day, but then I’m not just practicing it for health.) It takes time and effort, and if you’ve spent a lifetime damaging your health and turn to t’ai chi when you’re already seriously ill, it’s unlikely to do the trick.

All these points are relevant to the core project of this blog, in turn, because there’s another difference between alternative health care and the medical mainstream. The core alternative modalities were all developed before the age of cheap abundant fossil fuel energy, and require very little in the way of energy and raw material inputs.

Conventional chemical and surgical medicine is another thing entirely. It’s wholly a creation of the age of petroleum; without modern transport and communications networks, gargantuan supply chains for everything from bandages through exotic pharmaceuticals to spare parts for lab equipment, a robust electrical supply, and many other products derived from or powered by cheap fossil fuels, the modern American medical system would grind to a halt. In the age of peak oil, that level of dependency is not a survival trait, and it’s made worse by two other trends.

The first, mentioned earlier in this post, is the accelerating spread of antibiotic resistance in microbes. The penicillin that saved my life in 1969 almost certainly wouldn’t cure a case of scarlet fever today; decades of antibiotic overuse created a textbook case of evolution in action, putting ferocious selection pressure on microbes in the direction of resistance. The resulting chemical arms race is one that the microbes are winning, as efforts by the pharmaceutical industry to find new antibiotics faster than microbes can adapt to them fall further and further behind.

Epidemiologists are seriously discussing the possibility that within a few decades, mortality rates from bacterial diseases may return to 19th-century levels, when they were the leading cause of death. The second trend is economic. The United States has built an extraordinarily costly and elaborate health care system, far and away the most expensive in the world, on the twin pillars of government subsidies and employer-paid health benefits.

As we lurch further into what Paul Kennedy called "imperial overstretch"—the terminal phase of hegemony, when the costs of empire outweigh the benefits but the hegemonic power can’t or won’t draw back from its foreign entanglements—the government subsidies are going away, while health benefits on the job are being gutted by rising unemployment rates and the frantic efforts of the nation’s rentier class to maintain its standards of living at the expense of the middle classes and the poor. Requiring people who can’t afford health insurance at today’s exorbitant rates to pay for it anyway under penalty of law—the centerpiece of Obama’s health care "reform"—was a desperation move in this latter struggle, and one that risks a prodigious political backlash.

If Obama’s legislation takes effect as written in 2014, and millions of struggling American families find themselves facing a Hobson’s choice between paying a couple of thousand a month or more for health insurance they can’t afford, or paying heavy fines they can’t afford either, it’s probably a safe bet that the US will elect a Tea Party president in 2016 and repeal that—along with much else.

Whether that happens or not, it’s clear at this point that the United States can no longer afford the extraordinarily costly health care system it’s got, and the question at this point is simply what will replace it. In the best of all possible worlds, the existing medical system would come to terms with the bleak limits closing in around it, and begin building a framework that could provide basic health care at a reasonable price to the poor and working classes. It actually wouldn’t be that difficult, but it would require the medical industry to remove at least some of the barriers that restrict medical practice to a small number of very highly paid professionals, and to accept significant declines in quarterly profits, doctors’ salaries, and the like.

Maybe that could happen, but so far there doesn’t seem to be any sign of a movement in that direction. Instead, health care costs continue to rise as the economy stalls, moving us deeper into a situation where elaborate and expensive health care is available to a steadily narrowing circle of the well-to-do, while everyone outside the circle has to make do with what they can afford—which, more and more often, amounts to the 19th-century medicine provided by alternative health care. Thus I’m not especially worried about the survival of alternative healing.

Despite the fulminations of authority figures and the occasional FDA witch hunt, the alternative healing scene is alive and well, and its reliance on medicines and techniques that were viable before the age of cheap abundant fossil fuels means that it will be well equipped to deal with conditions after cheap energy of any kind is a thing of the past.

No, what concerns me is the legacy of today’s mainstream medicine—the medicine that saved my life at age seven, and continues, despite its difficulties and dysfunctions, to heal cases that the best doctors in the world a century and a quarter ago had to give up as hopeless. Even if a movement of the sort I’ve suggested above were to take place, a great deal of that would be lost or, at best, filed away for better times. The most advanced medical procedures at present require inputs that a deindustrial society simply isn’t going to be able to provide.

Still, there’s quite a bit that could be saved, if those who have access to the techniques in question were to grasp the necessity of saving them.

 As it stands, the only people who can salvage those things are the physicians who are legally authorized to use them; the rest of us can at best get a working grasp of sanitation and sterile procedure, the sort of wilderness-centered first aid training that assumes that a paramedic won’t be there in ten minutes, and the sort of home nursing skills that the Red Cross used to teach in the 1950s and 1960s—you can still find the Red Cross Home Nursing Manual in the used book market, and it’s well worth getting a copy and studying it. Other than that, it’s up to the physicians and the various institutions they staff and advise. If they step up to the plate, the deindustrial future will have the raw materials from which to evolve ways of healing that combine the best of mainstream and alternative methods. If they don’t, well, maybe enough written material will survive to enable the healers of the future to laboriously rediscover and reinvent some of today’s medical knowledge a few centuries down the road.

While the decision is being made, those of us who don’t have a voice in it have our own decisions to make: if we have the money and are willing to accept one set of risks, to make use of today’s chemical and surgical medicine while it’s still around; if we have the interest and are willing to accept another set of risks, to make use of one or more methods of alternative medicine; or if neither option seems workable or desirable, to come to terms with a reality that all of us are eventually going to have to accept anyway, which is that life and health are fragile transitory things, and that despite drugs and surgeries on the one hand, or herbs and healing practices on the other, the guy with the scythe is going to settle the matter sooner or later with the one answer every human being gets at last.

 See also:
Ea O Ka Aina: Salvaging Science 8/5/11
Ea O Ka Aina: Salvaging Energy 7/6/11
Ea O Ka Aina: Post Collapse Health Care 4/29/11

Salvaging Science

SUBHEAD: The time of specialization for professional scientists is about over. Once again it will be up to the amateur observer to carry on the scientific tradition.

 [IB Editor's note: This is the latter portion of Greer's long current article. The first six paragraphs can be found at his website that is linked below.]  

By John Michael Greer on 5 August 2011 for the ArchDruid Report - (http://thearchdruidreport.blogspot.com/2011/08/salvaging-science.html)
   
Image above: A collection of 19th century amateur science apparatuses. From (http://www.sas.org/index.html).

It’s rarely remembered these days that until quite recently, scientific research was mostly carried on by amateurs. The word “scientist” wasn’t even coined until 1833; before then, and for some time after, the research programs that set modern science on its way were carried out by university professors in other disciplines, middle class individuals with spare time on their hands, and wealthy dilettantes for whom science was a more interesting hobby than horse racing or politics.

Isaac Newton, for example, taught mathematics at Cambridge; Gilbert White founded the science of ecology with his Natural History of Selborne in his spare time as a clergyman; Charles Darwin came from a family with a share of the Wedgwood pottery fortune, had a clergyman’s education, and paid his own way around the world on the H.M.S. Beagle.

It took a long time for science as a profession to catch on, because—pace a myth very widespread these days—science contributed next to nothing to the technological revolutions that swept the western world in the eighteenth and nineteenth centuries. Until late in the nineteenth century, in fact, things generally worked the other way around: engineers and basement tinkerers discovered some exotic new effect, and then scientists scrambled to figure out what made it happen.

James Clerk Maxwell, whose 1873 Treatise on Electricity and Magnetism finally got out ahead of the engineers to postulate the effects that would become the basis for radio, began the process by which science took the lead in technological innovation, but it wasn’t until the Second World War that science had matured enough to become the engine of discovery it then became. It was then that government and business investment in basic research took off, creating the institutionalized science of the present day.

Throughout the twentieth century, investment in scientific research proved to be a winning bet on the grand scale; it won wars, made fortunes, and laid the groundwork for today’s high-tech world. It’s a common belief these days that more of the same will yield more of the same—that more scientific research will make it possible to fix the world’s energy problems and, just maybe, its other problems as well. Popular as that view is, there’s good reason to doubt it. The core problem is that scientific research was necessary, but not sufficient, to create today’s industrial societies.

Cheap abundant energy was also necessary, and was arguably the key factor. In a very real sense, the role of science from the middle years of the nineteenth century on was basically figuring out new ways to use the torrents of energy that came surging out of wells and mines to power history’s most extravagant boom.

Lacking all that energy, the technological revolutions of the last few centuries very likely wouldn’t have happened at all; the steam turbine, remember, was known to the Romans, who did nothing with it because all the fuel they knew about was committed to other uses. Since the sources of fuel we’ll have after fossil fuels finish depleting are pretty much the same as the ones the Romans had, and we can also expect plenty of pressing needs for the energy sources that remain, it takes an essentially religious faith in the inevitability of progress to believe that another wave of technological innovation is right around the corner.

The end of the age of cheap abundant energy is thus also likely to be the end of the age in which science functions as a force for economic expansion. There are at least two other factors pointing in the same direction, though, and they need to be grasped to make sense of the predicament we’re in. First, science itself is well into the territory of diminishing returns, and most of the way through the normal life cycle of a human method of investigation.

What last week’s post described as abstraction, the form of intellectual activity that seeks to reduce the complexity of experience into a set of precisely formulated generalizations, always depends on such a method. Classical logic is another example, and it’s particularly useful here because it completed its life cycle long ago and so can be studied along its whole trajectory through time. Logic, like the scientific method, was originally the creation of a movement of urban intellectuals in a society emerging from a long and troubled medieval period.

Around the eighth century BCE, ancient Greece had finally worked out a stable human ecology that enabled it to finish recovering from the collapse of Mycenean society some six centuries before; olive and grapevine cultivation stabilized what was left of the fragile Greek soil and produced cash crops eagerly sought by markets around the eastern Mediterranean, bringing in a flood of wealth; the parallel with rapidly expanding European economies during the years when modern science first took shape is probably not coincidental.

Initial ventures in the direction of what would become Greek logic explored various options, some more successful than others; by the fifth century BCE, what we may as well call the logical revolution was under way, and the supreme triumphs of logical method occupied the century that followed. Arithmetic, geometry, music theory, and astronomy underwent revolutionary developments.

That’s roughly where the logical revolution ground to a halt, too, and the next dozen centuries or so saw little further progress. There were social factors at work, to be sure, but the most important factor was inherent in the method: using the principles of logic as the Greeks understood them, there’s only so far you can go.

Logical methods that had proved overwhelmingly successful against longstanding problems in mathematics worked far less well on questions about the natural world, and efforts to solve the problems of human life as though they were logical syllogisms tended to flop messily. Once the belief in the omnipotence of logic was punctured, on the other hand, it became possible to sort out what it could and couldn’t do, and—not coincidentally—to assign it a core place in the educational curriculum, a place it kept right up until the dawn of the modern world. I know it’s utter heresy even to hint at this, but I’d like to suggest that science, like logic before it, has gotten pretty close to its natural limits as a method of knowledge.

 In Darwin’s time, a century and a half ago, it was still possible to make worldshaking scientific discoveries with equipment that would be considered hopelessly inadequate for a middle school classroom nowadays; there was still a lot of low hanging fruit to be picked off the tree of knowledge. At this point, by contrast, the next round of experimental advances in particle physics depends on the Large Hadron Collider, a European project with an estimated total price tag around $5.5 billion. Many other branches of science have reached the point at which very small advances in knowledge are being made with very large investments of money, labor, and computing power.

Doubtless there will still be surprises in store, but revolutionary discoveries are very few and far between these days. Yet there’s another factor pressing against the potential advancement of science, and it’s one that very few scientists like to talk about. When science was drawn up into the heady realms of politics and business, it became vulnerable to the standard vices of those realms, and one of the consequences has been a great deal of overt scientific fraud. A study last year published in the Journal of Medical Ethics surveyed papers formally retracted between 2000 and 2010 in the health sciences.

About a quarter of them were retracted for scientific fraud, and half of these had a first author who had had another paper previously retracted for scientific fraud. Coauthors of these repeat offenders had, on average, three other papers each that had been retracted. Americans, it may be worth noting, far more often had papers retracted for fraud, and were repeat offenders, than their overseas colleagues. I don’t know how many of my readers were taught, as I was, that science is inherently self-policing and that any researcher who stooped to faking data would inevitably doom his career.

Claims like these are difficult to defend in the face of numbers of the sort just cited. Logic went through the same sort of moral collapse in its time; the English word "sophistry" commemorates the expert debaters of fourth-century Greece who could and did argue with sparkling logic for anyone who would pay them. To be fair, scientists as a class would have needed superhuman virtue to overcome the temptations of wealth, status, and influence proffered them in the post-Second World War environment, and it’s also arguably true that the average morality of scientists well exceeds that of businesspeople or politicians.

That still leaves room for a good deal of duplicity, and it’s worth noting that this has not escaped the attention of the general public. It’s an item of common knowledge these days that the court testimony or the political endorsement of a qualified scientist, supporting any view you care to name, can be had for the cost of a research grant or two.

 I’m convinced that this is the hidden subtext in the spreading popular distrust of science that is such a significant feature in our public life: a great many Americans, in particular, have come to see scientific claims as simply one more rhetorical weapon brandished by competing factions in the social and political struggles of our day.

This is unfortunate, because—like logic—the scientific method is a powerful resource; like logic, again, there are things it can do better than any other creation of the human mind, and some of those things will be needed badly in the years ahead of us. Between the dumping of excess specializations in a contracting economy, the diminishing returns of scientific research itself, and the spreading popular distrust of science as currently practiced, the likelihood that any significant fraction of today’s institutional science will squeeze through the hard times ahead is minimal at best.

What that leaves, it seems to me, is a return to the original roots of science as an amateur pursuit. There are still some corners of the sciences—typically those where there isn’t much money in play—that are open to participation by amateurs.

There are also quite a few branches of scientific work that are scarcely being done at all these days—again, because there isn’t much money in play—and their number is likely to increase as funding cuts continue.

To my mind, one of the places where these trends intersect with the needs of the future is in local natural history and ecology, the kind of close study of nature’s patterns that launched the environmental sciences, back in the day. To cite an example very nearly at random, it would take little more than a microscope, a notebook, and a camera to do some very precise studies of the effect of organic gardening methods on soil microorganisms, beneficial and harmful insects, and crop yields, or to settle once and for all the much-debated question of whether adding biochar to garden soil has any benefits in temperate climates. These are things the green wizards of the future are going to need to be able to figure out.

 With much scientific research in America moving in what looks uncomfortably like a death spiral, the only way those skills are likely to make it across the crisis ahead of us is if individuals and local groups pick them up and pass them on to others. Now is probably not too soon to get started, either.


Salvaging Quality

SUBHEAD: A handsaw manufactured two generations ago has better steel than the one in Home Depot today.

By John Michael Greer on 13 July 2011 for ArchDruid Report -
(http://thearchdruidreport.blogspot.com/2011/07/salvaging-quality.html)

 
Image above: An old handsaw kept in good shape. From (http://www.woodworkforums.com/f152/identify-old-hand-saw-117740/).

It’s been a busy week for those of us who keep watch over the industrial world’s deepening tailspin, as politicians in the United States and Europe play a game of chicken using sovereign debt in the role traditionally filled by fast cars.

The issue in the United States is simple enough: the most that either side is able to offer, given its political commitments, is less than the least either side can afford to accept, and the occasional turns toward demagoguery on both sides haven’t exactly helped. It’s still possible that some last minute compromise may be hammered out, but the odds against that are starting to lengthen, and if that doesn’t happen, the financial end of the federal government will start seizing up in about three weeks.

It should be an interesting spectacle.

Europe is a more complex situation. Greece, the current poster child for sovereign debt dysfunction, did what poor countries so often do, borrowed in foreign markets far beyond its ability to repay, and now can’t meet its bills. Unfortunately the normal way to resolve such problems – defaulting on the debt – would bankrupt quite a few large banks in other EU nations, and these latter have put pressure on their national governments to stave off a Greek default.

The problem here is that Greece is going to have to default sooner or later; the question is purely a matter of when. The Greek government is in hock far beyond its ability to repay, and the austerity measures pushed on it by the cluelessly doctrinaire economists at the IMF have worsened the matter considerably by putting the Greek economy into a tailspin.

So it’s simply a matter of waiting for the inevitable to happen, and the credit markets to go into spasm accordingly. Mind you, the horrified utterances currently being splashed around the global media, claiming that default is unthinkable and unprecedented, are nonsense of the most blatant sort. Nations default on their debts all the time.

Russia did it in 1998, Argentina did it in 2002, and both nations survived; most European nations, for that matter, have defaulted on their debts more than once over the course of their history, and bankrupted plenty of banks in the process – that’s where we get the word bankrupt, you know. Defaults have always been one of the inescapable risks of lending to governments. EU governments could get realistic about this, let Greece do what countries with too much debt normally do, and spend their time more usefully writing letters of condolence to the bank executives who will be out of a job shortly thereafter.

 Come to think of it, it’s just possible that this is what EU governments are actually doing. The current flurry of handwaving and emergency meetings may be no more than a source of plausible deniability – we’re sorry, we did all we could, it was the fault of (fill in the blank to conform to local prejudices) that Greece crashed and took half Europe’s banks with it, blah blah blah.

For that matter, it’s not completely beyond the bounds of possibility that politicians on this side of the Atlantic are playing a similar game. The US is up to its eyeballs in unpayable debts, loaded down with entitlements and international commitments that it can’t afford but that no elected official dares to touch, and lurching toward a default as inevitable as Greece’s but on an almost unimaginably vaster scale. Nearly the only way to get out of the resulting trap with some chance of national survival would be to trigger a run on Treasury bills, now, that will force a default on the national debt in the near future, when both sides can conveniently blame it on the intransigence of the other party and the perfidiousness of foreign lenders.

It does seem unlikely that this level of public-spiritedness is at work in Congress and the White House, but I’d like to believe that it’s possible.

These latest consternations, in turn, lend all the more relevance to the theme I’ve been discussing in the last couple of posts here, the possibility of shifting over here and now to the salvage economy that’s already beginning to emerge outside the narrowing circle of scarcity industrialism. This week I’d like to bring up another dimension of that shift, and talk about one of the unspoken and unspeakable realities of life in a declining industrial society: the pervasive phenomenon of stealth inflation.

By this I don’t mean inflation in the sense in which economists use the word, the decrease in the value of money driven by the expansion of the money supply relative to the supply of goods and services. That kind of inflation deserves much better press than it gets; though it’s denounced by all right-thinking people these days, it’s one of the safety valves by which a capitalist economy’s tendency to produce excess paper wealth gets brought back into step with the actual wealth in circulation, the nonfiscal goods and nonfinancial services that meet actual human needs. It thus serves exactly the same role, in a much more subtle and flexible way, as the negative-interest currencies being proposed by would-be financial reformers these days.

Stealth inflation is a good deal less laudable. It’s the process by which the price of goods and services remains the same, while the value of what’s provided for that price diminishes.

It’s sometimes done by decreasing quantities – most Americans over forty, for example, will remember the days when cans of soup and candy bars were a good deal larger than they are now – but far more often done by cutting quality. Sometimes this is a minor, even a subtle, factor; in other cases, it’s neither, and can quite easily become lethal in its effects. A good example of the first kind came my way a while back when a friend, knowing I like to cook with cast iron pans, found an elderly example in a secondhand store for some absurdly small price and gave it to me.

Because my wife has celiac disease – a severe enough case that relatively small traces of gluten can have unwelcome effects – I had to strip off the natural coating that cast iron cookware gets when it’s well treated, and reseason the pan, just as though it had been bought new. Even with this rough treatment, though, the old pan proved to be a much better piece of cookware than any of the more recently manufactured cast iron pans I’d been using for a decade or so previously. Its inner surface has a much smoother finish and its metal conducts heat more evenly; this evening’s fried zucchini (fresh from the garden) was cooked in it, because no other pan I have does as good a job.

 This isn’t simply a matter of chance or a personal quirk. Ask any cast iron aficionado and dollars will get you doughnuts – perhaps these days I should say “credit swaps will get you crullers” or something like that – you’ll hear a similar story; the cast iron cookware you can buy in your local hardware store simply isn’t as good as the same products made a quarter century ago, and the difference is no small thing. I’ve heard the same thing in the very different context of craftspeople who work with old tools; the quality of the metal, they say, as well as the workmanship tends to be dramatically better in tools that are at least a quarter century old. In some cases the differences are enough to kill.

One of the nasty little secrets behind the rising toll from food poisoning in the United States and elsewhere is that a great deal of it could easily have been prevented by common sense sanitary procedures that used to be standard, but have been cut for the sake of lower per unit costs and higher quarterly profits. What makes this all the more embarrassing is that this is America’s second encounter with what happens to the safety and quality of processed food in a capitalist system under economic stress; Upton Sinclair’s The Jungle probably ought to be required reading for the pundits, and there are many of them just now, who fatuously insist that government regulation is always and everywhere a bad idea.

 The same purblind mania for gutting sensible regulation that freed the banking industry from the Glass-Steagall Act and its equivalents in other industrial nations, and at a stroke brought back the devastating bubble-and-bust economics that dominated the industrial world before the Glass-Steagall Act was originally passed, has had equivalent effects in many other sectors of economic life.

An acceleration in stealth inflation through declining quality is among the results.

Still, there’s a deeper force pressing in the same direction, and it comes from the relentless mathematics of fossil fuel depletion and its impact on an economy founded on the expectation of constant growth. There has been a great deal of talk recently on the leftward side of the economic spectrum about the need to “decouple” economic growth from increases in the supply of energy. Still, as Zen masters are wont to say, talk does not cook the rice; insisting that economic growth can continue while energy supplies are stuck in a bumpy plateau does not make it so.

 The production of real, nonfiscal goods and services requires inputs of energy, as well as raw materials (which must be extracted by using energy) and labor (which in America, at least, usually uses a fair amount of energy, too). The only goods and services that can grow unchecked as energy supplies flatline are financial goods and services – that is, “goods” that consist of the essentially arbitrary tokens our society uses to allocate real wealth, on the one hand, and “services” that consist of shuffling and exchanging these tokens in more or less intricate ways, on the other.

As the cheap abundant energy that provided the basis for three centuries of industrial civilization stops being cheap and abundant, then, one of the consequences is a widening disconnection between the production of nonfiscal goods and services and the production of money in all its various forms.

Left to itself, the natural result would be a rising spiral of inflation in which the value of money declined steadily, to stay more or less in step with the amount of real goods and services available to buy. This natural result, though, is utterly unacceptable to the political classes – the people who take an active role in the political process – anywhere in the industrial world.

This has imposed any number of distortions on the global economy, but one of them is a constant push to keep the nominal rate of inflation as low as possible, thus sparing politicians the hard task of explaining to their constituents a reality that neither the politicians nor the constituents have yet begun to understand.

That push drives the widespread juggling of economic statistics across the industrial world, but I’ve come to believe that it also provides an important motive force behind stealth inflation. Large corporations have plenty of interfaces with governments, and governments have plenty of levers by which to influence corporate behavior for political ends; if the politicians in Washington DC, let’s say, decided that it would be really helpful if businesses increased their profit margins relative to their costs by some means other than raising prices, it doesn’t seem at all unlikely that this preference would be heard in corporate boardrooms, and play at least some role in shaping their decisions.

What this means for the individual green wizard, in turn, is that there’s every reason to think that a good many of the goods and services sold to consumers are going to continue to decrease in quality in the years ahead. That in turn implies at least two things. The first is that the strategy of salvaging energy discussed in last week’s post has an additional advantage, because what’s being salvaged in a good many cases is not simply an equivalent of what’s on the market today, but a better product, one that tolerably often will work better and last longer than a new product of the same type.

As we approach an age in which many goods may stop being available at all for extended periods, this is not an opportunity to ignore. The second implication is that those who learn the skills needed to take older products that are no longer working, or no longer working well, and recondition them so that they can return to usefulness, may find themselves with a job skill of no small importance in the emerging salvage economy. It’s not too hard, for example, to find old handsaws for sale very cheaply at flea markets and estate sales. Fairly often, after being handed down through a couple of generations, these have rusted blades, teeth that are dull and bent out of their proper set, cracked and damaged handles, and the like.

The steel of the blade, however, is very often of much higher quality than the equivalent new product in a hardware store today, and it doesn’t actually take that much in the way of skills and tools to remove the rust, polish the blade, reset and file the teeth, make a new handle out of hardwood and attach it to the old blade, and so on. The result is a saw that can be handed down for several more generations, and do a great deal of useful work in the meantime; it’s also a product that can be sold or bartered to craftspeople at a premium price. In at least a few cases, it’s also possible to go one step further and figure out how to manufacture products on a small scale to old specifications.

I don’t have anything like the metallurgical knowledge to figure out what makes the difference between my old cast iron pan and my newer pans, but the information’s surely out there, and could be tracked down by someone with the necessary background. Whether or not there would be enough of a market to make this a paying proposition anytime soon is another matter; there are odd little niche markets that might at least pay the bills.

Myself, having more facility with words than with metals, I’m contemplating tracking down a basic letterpress and exploring the honorable profession of Benjamin Franklin. The printing press with movable type was invented in the Middle Ages, after all, and very likely can remain a viable technology no matter how far down the slope of decline we end up sliding. Under current conditions, it can help pay its own bills via handprinted wedding invitations and the like; as conditions change and the complex supply chains that keep computer printers and copiers functioning become more problematic, a printing press powered by human muscle and capable of running on supplies no more complex than paper and homebrewed ink may turn into a serious asset.

 Your mileage will unquestionably vary, and a second income refurbishing old items or using some outdated but sustainable technology will be the right choice for some people and the wrong choice for others.

I mention it here partly because a good many readers of these posts have asked about potential businesses and income sources in a deindustrializing world, and partly because a fair number of people out there in the peak oil blogosphere don’t yet seem to have thought through the fact that they’ll need to earn a living in one way or another during the long slow unraveling of the industrial economy.

That unraveling may have its sudden jolts, to be sure. If the politicians in Washington DC and an assortment of European capitals fumble the current situation spectacularly enough, this autumn could see an economic crisis on the grand scale, with markets seizing up, banks shutting down, and governments facing abrupt replacement by legal means or otherwise.

 Still, we’ll come out the other side of it, no doubt poorer but still faced with the ordinary challenges of the human condition; if learning how to recondition old tools allows someone to barter for necessities during the years ahead, that’s a positive step, and such positive steps on the individual scale are the raw materials from which the deindustrial future will gradually emerge.

See also:
Ea O Ka Aina: Salvaging Energy 7/6/11
Island Breath: Salvage Societies 10/28/07

Salvaging Energy

SUBHEAD: Nearly all used-goods stores, by contrast, are locally owned and circulate their earnings back into the community, where they generate jobs by way of the multiplier effect.

By John Michael Greer on 6 July 2011 for ArchDruid Report -
(http://thearchdruidreport.blogspot.com/2011/07/salvaging-energy.html)

Image above: Interior of new Nissan Leaf electric car. From (http://electriccarphotos.com/nissan-leaf-ev-2010.html).

The peak oil blogosphere this year seems to have decided that the Fourth of July needed a Grinch of its own to compete with Christmas, and a fair number of blogs duly went up denouncing the day and its entertainments. I can’t say that these added much to the peak oil dialogue or, really, to much of anything. It’s hardly a secret, for instance, that intellectuals on the two coasts like to belittle working class people who live in between, nor that it’s still quite fashionable on both ends of the political spectrum to characterize our system of government in terms that would get those who do so dragged away by a death squad if their rhetoric had any relationship to reality.

Me, I enjoyed the Fourth; I usually do. My wife and I spent a quiet day gardening, dined on chicken fried tofu – hey, don’t knock it if you haven’t tried it – and walked down to the city park that runs along the old Chesapeake and Ohio Canal, where we met friends, munched watermelon, and watched the annual fireworks display. The Fourth of July is one of the high points of any small town American summer, and I’m also sufficiently old-fashioned to celebrate the ideals that sent this country along its historical trajectory all those years ago.
Since the United States is a country inhabited and governed by people rather than abstract ideological mannequins, those ideals got put into practice no more consistently than any other country’s, but they’ve lost none of their relevance, and I’d be a good deal less worried about the future of my country if I saw more people paying attention to them and fewer people waving them aside as obstacles to the pursuit of some allegedly glorious future.

The theme of this week’s post isn’t primarily about the future, glorious or otherwise. Strictly speaking, it’s a response to circumstances that will almost certainly go away within the lifetimes of people now living, and only exist today in those few nations that can afford the overblown consumer economy that budded, bloomed, and went to seed in the second half of the twentieth century. That response, curiously enough, has more than a little to do with the theme of independence central to the holiday just past, but the easiest way to make sense of it is to start with the nearly complete state of dependence expressed by another fashionable topic under discussion in the peak oil scene.

That topic is the return of electric cars to American roads. A dozen large and small automakers are in the process of bringing out battery-powered cars of various kinds, ranging from generic compacts like the Ford Focus and Nissan Leaf to more exotic items like the Aptera and the GEM. Most of them are pricey, and all of them have their share of drawbacks, mostly in terms of range and reliability, but a significant fraction of people on the green end of things are hailing their appearance as a great step forward.

As things stand, that’s a bit of an oversimplification, since almost all the electricity these vehicles use will be generated by burning coal and natural gas, and the easy insistence that the grid can easily be switched over to solar and wind power has already been critiqued at some length in this blog.
Still, there are a couple of other points that would be well worth making here. First of all, of course, the best way to reduce your ecological footprint isn’t to replace a petroleum-powered car with an electric car, it’s to replace it with a bicycle, a public transit ticket, or a good pair of walking shoes.

This isn’t the first time I’ve mentioned that option, and I know I can expect to be belabored by commenters who are bursting with reasons why they can’t possibly do without a car, or even leave the car parked in the driveway most of the time. Granted, the built geography of much of rural and suburban North America makes it a little challenging to do without a car, but something close to a hundred million people in the United States live in places where a car is a luxury most or all of the time, and a significant fraction of the others choose to live in places where that’s not the case.

Still, let’s set aside for the moment the fact that the one energy-related bumper sticker that might actually make a difference these days would belong on the back of a bicycle, and would say MY OTHER CAR IS A PAIR OF SHOES. For those Americans who actually do find themselves in need of a car, how about the new electric vehicles? Will they really decrease your carbon footprint and your fossil fuel use, as so much current verbiage claims?

The answer is unfortunately no. First of all, as already mentioned, the vast majority of electricity in America and elsewhere comes from coal and natural gas, and so choosing an electric car simply means that the carbon dioxide you generate comes out of a smokestack at a power plant rather than the tailpipe of your car.
The internal combustion engine is an inefficient way of turning fuel into motion – around 3/4 of the energy in a gallon of gas becomes low-grade heat dumped into the atmosphere via the radiator, leaving only a quarter to keep you rolling down the road – but the process of turning fossil fuel into heat and heat into electricity, storing the electricity in a battery and extracting it again, and then turning the electricity into motion is less efficient still, so you’re getting less of the original fossil fuel energy turned into distance traveled than you would in an ordinary car. This means that you’d be burning more fossil fuel to power your car even if the power plant was burning petroleum, and since it isn’t – and coal and natural gas contain much less energy per unit of volume than petroleum distillates do – you’re burning quite a bit more fossil fuel, and dumping quite a bit more carbon in the atmosphere, than a petroleum-powered car would do.

This isn’t something you’ll see discussed very often in e-car websites and sales flyers. It’s even less likely that you’ll find any mention there of the second factor that needs to be discussed, which is the energy cost of manufacture.

An automobile, petroleum-powered or electric, is a very complicated piece of hardware, and every part of it comes into being through a process of manufacture that starts at an assortment of mines, oil wells, and the like, and proceeds through refineries, factories, warehouses, and assembly plants, linked together by long supply chains via train, truck or ship. All this costs energy. Working out the exact energy cost per car would be a huge project, since it would involve tracking the energy used to produce and distribute every last screw, drop of solvent, etc., but it’s probably safe to say that a large fraction of the total energy used in a car’s lifespan is used up before the car reaches the dealer. Electric cars are as subject to this rule as petroleum-powered ones.
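The fuel-to-wheels argument here amounts to multiplying the efficiency of each conversion stage along the chain. A minimal sketch of that arithmetic, in which every stage figure is an illustrative assumption chosen for the sake of the comparison rather than a measured value:

```python
# Rough well-to-wheel comparison: a gasoline car versus an electric car
# charged from a fossil-fueled grid. All stage efficiencies below are
# illustrative assumptions, not measured figures.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies into one overall figure."""
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# Gasoline car: the engine turns roughly a quarter of the fuel's
# energy into motion; the rest leaves as waste heat.
gasoline_car = chain_efficiency([0.25])

# Electric car fed by a thermal power plant (assumed stage values):
electric_car = chain_efficiency([
    0.35,  # power plant: fuel energy -> electricity
    0.92,  # transmission and distribution
    0.80,  # battery charge/discharge round trip
    0.85,  # motor and drivetrain: electricity -> motion
])

print(f"gasoline chain: {gasoline_car:.0%}")
print(f"electric chain: {electric_car:.0%}")
```

Under these assumed numbers the electric chain delivers a bit over a fifth of the original fuel energy as motion, slightly below the gasoline engine’s quarter; different assumptions about plant, grid, or drivetrain efficiency would shift the comparison, which is exactly why the stage values matter.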
The energy cost of manufacture has generally been downplayed in discussions of energy issues, where it hasn’t been banished altogether to whichever corner of the outer darkness it is that provides a home for unwanted facts. (I’ve long suspected that this is not too far from “Away,” the place where pollution goes in the parallel universe that cornucopians apparently inhabit.) Promoters of the more grandiose end of alternative-energy projects – the solar power satellites and Nevada-sized algae farms that crop up so regularly when people are trying to ignore the reality of ecological limits – are particularly prone to brush aside the energy cost of manufacture with high-grade handwaving, but the same sort of evasion pervades nearly all thinking about energy these days. I’ve mentioned before that three centuries of cheap abundant fossil fuel energy have imposed lasting distortions on the modern mind; this is an example.

Still, factor in the energy cost of manufacture, and there actually is an answer to the question we’ve just been considering. If you really feel you have to have a car, what kind involves the smallest carbon footprint and the least overall energy use? A used one.

I suppose it’s just possible that one or two of the readers of this blog will remember a strange and politically edgy comic strip from the Sixties named Odd Bodkins. The rest of you will just have to forgive a bit of relevant reminiscence here. Somewhere between an encounter with the dreaded Were-Chicken of Petaluma and a journey to Mars with Five Dollar Bill, I think it was, the Norton-riding main character, Hugh, and his sidekick Fred the Bird had a run-in with General Injuns – the resemblance to the name of a certain large American automotive corporation was not accidental.
I forget what it was that inspired Fred the Bird to shout “Buy a used car!” but the General’s response – “BLASPHEMY!!!” – was memorably rendered, and will probably be duplicated in a good many of the responses to this week’s blog. Most people in the industrial world nowadays are so used to thinking of the best option as new and shiny by definition, that the homely option of picking up a cheap used car as a way of saving energy is likely to offend them at a cellular level.

Still, the energy cost of manufacture needs to be taken into account. If you buy a used car – let’s say, for the sake of argument, a ten-year-old compact with decent gas mileage – instead of a new electric car, you’ve just salvaged the energy cost of manufacture that went into the used car, most of which would otherwise have been wasted, and saved all the energy that would have been spent to produce, ship, and assemble every part of the new car. Since it’s a ten-year-old compact rather than a brand new e-car, furthermore, you’re not going to be tempted to drive it all over the place to show everyone how ecologically conscious you are; in fact, you may just be embarrassed enough to leave it in your driveway when you don’t actually need it, thus saving another good-sized chunk of energy. Finally, of course, the price difference between a brand new Nissan Leaf and a ten-year-old compact will buy you a solar water heating system, installation included, with enough left over to completely weatherize an average American home. It’s a win-win situation for everything but your ego.

The same principle can be applied much more broadly. Very few people, for example, actually need a new computer. I’ve never owned one; I need a computer to make my living – publishers these days require book manuscripts to be submitted electronically – but I get my computers used, free or at a tiny fraction of their original price, and run them until they drop.
One consequence is that I’ve salvaged the energy used in manufacturing the old computer, rather than burdening the planet with the energy cost of manufacturing a new one; another is that I’m keeping a small but measurable amount of toxic e-waste out of the waste stream; still another, of course, is that I save quite a bit of money that can then be directed to other purposes, such as insulation and garden tools.

Most Americans buy most of the things they use new, and dump a great many perfectly useful items into the trash; the more conscientious package them up and donate them to thrift stores, which is at least a step in the right direction. As a society, we have been able to afford this fixation and its attendant costs – new houses, new cars, new computers, new everything – because we’ve been surfing a tidal wave of cheap abundant fossil fuel energy. As we get further into the territory on the far side of peak oil, and as peak coal and peak natural gas come within sight, that state of affairs is rapidly coming to an end.

One option, as I suggested in last week’s post, is to plunge into the emerging reality of scarcity industrialism, which centers on an increasingly savage competition for access to a shrinking pool of new and shiny things produced by what’s left of the world’s fossil fuel stocks. A saner alternative, though, is to move directly into the stage that will follow scarcity industrialism – the stage of salvage economics.

That’s what I’ve been discussing here, under a less threatening label. Right now, while the tidal wave of cheap energy has not yet receded very far, the beachscape of industrial society is still littered with the kind of high-quality salvage our descendants will dream of finding, and the only thing that has to be overcome in order to access most of it is the bit of currently fashionable arrogance that relegates used goods to the poor. Now of course that’s not a small thing.
One of the reasons that Thoreau’s concept of voluntary poverty got rebranded “voluntary simplicity,” and repackaged as a set of fashionable lifestyle choices that imitate authentic simplicity at a premium price, is the stark panic felt by so many middle class Americans at the thought of being mistaken for someone who’s actually poor. Those of my readers who decide that the advantages of voluntary poverty are worth pursuing are going to have to confront that panic, if they haven’t done so already.

Like all supposedly classless societies, America makes up for its lack of formal caste barriers by raising caste prejudice to a fine art; the cheap shots at small town America mentioned toward the beginning of this blog are an expression of that, of course, and so is the peer pressure that keeps most Americans from doing the sensible thing, and buying cheap and sturdy used products in place of increasingly overpriced and slipshod new ones.

We are all going to be poor in the decades and centuries to come. Yes, I’m including today’s rich in that; the stark folly that leads today’s privileged classes to think they can prosper while gutting the society that alone guarantees them their wealth and status is nothing new, and will bring about the usual consequences in due time.

Voluntarily embracing poverty in advance may seem like a strange tactic to take, at a time when a great many people will be clinging to every scrap of claim to the fading wealth of the industrial age, but it has certain important advantages. First, it offers a chance to get competent at getting by on less before sheer necessity forces the issue; second, it sidesteps the rising spiral of struggle that’s waiting for all those who commit themselves to holding on to an industrial-age standard of living; third, as I’ve already pointed out, buying cheap used items frees up money that can then be applied to something more useful.
It’s probably going to be necessary here to insert a response to what used to be the standard objection to the piece of advice I’ve just offered. No, buying used goods instead of new ones isn’t going to put any significant number of Americans out of work. Very little is actually manufactured in America these days, and most of what is, is produced and sold by conglomerates that pump money out of American communities and into the black hole of the financial economy. Nearly all used-goods stores, by contrast, are locally owned and circulate their earnings back into the community, where they generate jobs by way of the multiplier effect.

The calculations would be fiendishly difficult, and you won’t find a mainstream economist willing to touch the project with a ten-foot pole, but I suspect that when the differences just listed are taken into account, buying used goods actually yields a larger number of jobs than buying new ones – and while thrift store clerks don’t make as much as corporate office fauna, to be sure, I have to admit to a suspicion that the former contribute a good deal more to the world as a whole than the latter.

For the time being, at least, the office fauna and their corporate sponsors are likely to continue to thrive after a fashion, lumbering through the jungles of deindustrializing America like so many dinosaurs, and the thrift store clerks and their customers will play the part of smart little mammals scurrying around in the underbrush. Still, like the mammals, those who opt out of scarcity industrialism to embrace the first stirrings of the salvage economies of the future will have certain advantages not available to their brontosaurian neighbors. One of them, as already suggested, will be a certain amount of spare room in the household budget, which can then be turned to other projects, or used to free up a family member to work in the household economy, or both.
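The multiplier effect invoked above has a simple textbook form: if a share r of each dollar spent at a business is re-spent within the same community, a dollar of initial spending generates 1/(1 − r) dollars of total local activity, since the re-spent fraction is itself partly re-spent, and so on down a geometric series. The sketch below shows the arithmetic; the two recirculation rates in it are purely hypothetical, chosen only to illustrate the contrast between a locally owned store and a conglomerate.

```python
def local_multiplier(recirculated_share):
    """Total local economic activity per dollar of initial spending,
    summing the geometric series 1 + r + r**2 + ... = 1 / (1 - r)."""
    if not 0 <= recirculated_share < 1:
        raise ValueError("share must be in [0, 1)")
    return 1.0 / (1.0 - recirculated_share)

# Hypothetical recirculation rates, for illustration only:
thrift = local_multiplier(0.50)  # assumed: local store re-spends half locally
chain = local_multiplier(0.15)   # assumed: conglomerate exports most earnings

print(f"thrift store: ${thrift:.2f} of local activity per dollar")
print(f"chain store:  ${chain:.2f} of local activity per dollar")
```

Under those assumed rates a dollar at the thrift store supports about twice the local activity of a dollar at the chain – which is the mechanism, not a measurement; as the paragraph above says, the real calculation would be fiendishly difficult.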
Another will be the chance to learn skills that could well become income sources in the not too distant future; as I’ve suggested more than once here, salvage trades – that is, anything that involves taking the leftovers of industrial civilization and turning them into something that people need or want – will likely be among the major growth industries of the next century or two, and the ground floor is open for business right now.

Still, the advantage that comes to mind just at the moment is the one suggested by the holiday fireworks I mentioned toward the beginning of this post. Not uncommonly in history, people face a choice between being comfortable and dependent, on the one hand, and poor and free on the other. It’s been a particularly important theme in American history, driving phenomena as different as the settling of the Appalachians and the counterculture of the Sixties, and I’ve come to think that it’s going to become a live issue again in the decades ahead of us. In time to come, those who cling to the narrowing circle of scarcity industrialism will likely discover that most of the freedoms that remain to them are going to have to be handed over as part of the cost of admission; those who choose otherwise – and there will be a range of other options, though you won’t learn that from the mainstream media – will have to give up a great many expectations and privileges that are standard issue in the industrial world just now in order to preserve some degree of autonomy and individual choice.

That’s the way the future looks to me, at least; if I’m right, the simple act of salvaging energy by buying used goods instead of new ones – a step that Ben Franklin would have appreciated, interestingly enough – might just turn out to be a useful step in the direction of the ideals that some of us, at least, were celebrating a few nights ago. We’ll talk more about this in the weeks ahead.

Video above: Nissan Leaf ad "What if Gas Powered Everything?" From (http://www.youtube.com/watch?v=j0sCCJFkEbE)

Island Breath: Salvage Societies 10/28/07