Wednesday, June 24, 2015

The Delusion of Control

I'm sure most of my readers have heard at least a little of the hullaballoo surrounding the release of Pope Francis’ encyclical on the environment, Laudato Si. It’s been entertaining to watch, not least because so many politicians in the United States who like to use Vatican pronouncements as window dressing for their own agendas have been left scrambling for cover now that the wind from Rome is blowing out of a noticeably different quarter.

Take Rick Santorum, a loudly Catholic Republican who used to be in the US Senate and now spends his time entertaining a variety of faux-conservative venues with his signature flavor of hate speech. Santorum loves to denounce fellow Catholics who disagree with Vatican edicts as “cafeteria Catholics,” and announced a while back that John F. Kennedy’s famous defense of the separation of church and state made him sick to his stomach. In the wake of Laudato Si, care to guess who’s elbowing his way to the head of the cafeteria line? Yes, that would be Santorum, who’s been insisting since the encyclical came out that the Pope is wrong and American Catholics shouldn’t be obliged to listen to him.

What makes all the yelling about Laudato Si a source of wry amusement to me is that it’s not actually a radical document at all. It’s a statement of plain common sense. It should have been obvious all along that treating the air as a gaseous sewer was a really dumb idea, and in particular, that dumping billions upon billions of tons of infrared-absorbing gases into the atmosphere would change its capacity for heat retention in unwelcome ways. It should have been just as obvious that all the other ways we maltreat the only habitable planet we’ve got were guaranteed to end just as badly. That this wasn’t obvious—that huge numbers of people find it impossible to realize that you can only wet your bed so many times before you have to sleep in a damp spot—deserves much more attention than it’s received so far.

It’s really a curious blindness, when you think about it. Since our distant ancestors climbed unsteadily down from the trees of late Pliocene Africa, the capacity to anticipate threats and do something about them has been central to the success of our species. A rustle in the grass might indicate the approach of a leopard, a series of unusually dry seasons might turn the local water hole into undrinkable mud: those of our ancestors who paid attention to such things, and took constructive action in response to them, were more likely to survive and leave offspring than those who shrugged and went on with business as usual. That’s why traditional societies around the world are hedged about with a dizzying assortment of taboos and customs meant to guard against every conceivable source of danger.

Somehow, though, we got from that to our present situation, where substantial majorities across the world’s industrial nations seem unable to notice that something bad can actually happen to them, where thoughtstoppers of the “I’m sure they’ll think of something” variety take the place of thinking about the future, and where, when something bad does happen to someone, the immediate response is to find some way to blame the victim for what happened, so that everyone else can continue to believe that the same thing can’t happen to them. A world where Laudato Si is controversial, not to mention necessary, is a world that’s become dangerously detached from the most basic requirements of collective survival.

For quite some time now, I’ve been wondering just what lies behind the bizarre paralogic with which most people these days turn blank and uncomprehending eyes on their onrushing fate. The process of writing last week’s blog post on the astonishing stupidity of US foreign policy, though, seems to have helped me push through to clarity on the subject. I may be wrong, but I think I’ve figured it out.

Let’s begin with the issue at the center of last week’s post, the really remarkable cluelessness with which US policy toward Russia and China has convinced both nations they have nothing to gain from cooperating with a US-led global order, and are better off allying with each other and opposing the US instead. US politicians and diplomats made that happen, and the way they did it was set out in detail in a recent and thoughtful article by Paul R. Pillar in the online edition of The National Interest.

Pillar’s article pointed out that the United States has evolved a uniquely counterproductive notion of how negotiation works. Elsewhere on the planet, people understand that when you negotiate, you’re seeking a compromise where you get whatever you most need out of the situation, while the other side gets enough of its own agenda met to be willing to cooperate. To the US, by contrast, negotiation means that the other side complies with US demands, and that’s the end of it. The idea that other countries might have their own interests, and might expect to receive some substantive benefit in exchange for cooperation with the US, has apparently never entered the heads of official Washington—and the absence of that idea has resulted in the cascading failures of US foreign policy in recent years.

It’s only fair to point out that the United States isn’t the only practitioner of this kind of self-defeating behavior. A first-rate example has been unfolding in Europe in recent months—yes, that would be the ongoing non-negotiations between the Greek government and the so-called troika, the coalition of unelected bureaucrats who are trying to force Greece to keep pursuing a failed economic policy at all costs. The attitude of the troika is simple: the only outcome they’re willing to accept is capitulation on the part of the Greek government, and they’re not willing to give anything in return. Every time the Greek government has tried to point out to the troika that negotiation usually involves some degree of give and take, the bureaucrats simply give them a blank look and reiterate their previous demands.

That attitude has had drastic political consequences. It’s already convinced Greeks to elect a radical leftist government in place of the compliant centrists who ruled the country in the recent past. If the leftists fold, the neofascist Golden Dawn party is waiting in the wings. The problem with the troika’s stance is simple: the policies they’re insisting that Greece must accept have never—not once in the history of market economies—produced anything but mass impoverishment and national bankruptcy. The Greeks, among many other people, know this; they know that Greece will not return to prosperity until it defaults on its foreign debts the way Russia did in 1998, and scores of other countries have done as well.

If the troika won’t settle for a negotiated debt-relief program, and the current Greek government won’t default, the Greeks will elect someone else who will, no matter who that someone else happens to be; it’s that, after all, or continue along a course that’s already caused the Greek economy to lose a quarter of its precrisis GDP, and shows no sign of stopping anywhere this side of failed-state status. That this could quite easily hand Greece over to a fascist despot is just one of the potential problems with the troika’s strategy. It’s astonishing that so few people in Europe seem to be able to remember what happened the last time an international political establishment committed itself to the preservation of a failed economic orthodoxy no matter what; those of my readers who don’t know what I’m talking about may want to pick up any good book on the rise of fascism in Europe between the wars.

Let’s step back from specifics, though, and notice the thinking that underlies the dysfunctional behavior in Washington and Brussels alike. In both cases, the people who think they’re in charge have lost track of the fact that Russia, China, and Greece have needs, concerns, and interests of their own, and aren’t simply dolls that the US or EU can pose at will. These other nations can, perhaps, be bullied by threats over the short term, but that’s a strategy with a short shelf life.  Successful diplomacy depends on giving the other guy reasons to want to cooperate with you, while demanding cooperation at gunpoint guarantees that the other guy is going to look for ways to shoot back.

The same sort of thinking in a different context underlies the brutal stupidity of American drone attacks in the Middle East. Some wag in the media pointed out a while back that the US went to war against an enemy 5,000 strong, we’ve killed 10,000 of them, and now there are only 20,000 left. That’s a good summary of the situation; the US drone campaign has been a total failure by every objective measure, having worked out consistently to the benefit of the Muslim extremist groups against which it’s aimed, and yet nobody in official Washington seems capable of noticing this fact.

It’s hard to miss the conclusion, in fact, that the Obama administration thinks that in pursuing its drone-strike program, it’s playing some kind of video game, which the United States can win if it can rack up enough points. Notice the way that every report that a drone has taken out some al-Qaeda leader gets hailed in the media: hey, we nailed a commander, doesn’t that boost our score by five hundred? In the real world, meanwhile, the indiscriminate slaughter of civilians by US drone strikes has become a core factor convincing Muslims around the world that the United States is just as evil as the jihadis claim, and thus sending young men by the thousands to join the jihadi ranks. Has anyone in the Obama administration caught on to this straightforward arithmetic of failure? Surely you jest.

For that matter, I wonder how many of my readers recall the much-ballyhooed “surge” in Afghanistan several years back.  The “surge” was discussed at great length in the US media before it was enacted on Afghan soil; talking heads of every persuasion babbled learnedly about how many troops would be sent, how long they’d stay, and so on. It apparently never occurred to anybody in the Pentagon or the White House that the Taliban could visit websites and read newspapers, and get a pretty good idea of what the US forces in Afghanistan were about to do. That’s exactly what happened, too; the Taliban simply hunkered down for the duration, and popped back up the moment the extra troops went home.

Both these examples of US military failure are driven by the same problem discussed earlier in the context of diplomacy: an inability to recognize that the other side will reliably respond to US actions in ways that further its own agenda, rather than playing along with the US. More broadly, it’s the same failure of thought that leads so many people to assume that the biosphere is somehow obligated to give us all the resources we want and take all the abuse we choose to dump on it, without ever responding in ways that might inconvenience us.

We can sum up all these forms of acquired stupidity in a single sentence: most people these days seem to have lost the ability to grasp that the other side can learn.

The entire concept of learning has been so poisoned by certain bad habits of contemporary thought that it’s probably necessary to pause here. Learning, in particular, isn’t the same thing as rote imitation. If you memorize a set of phrases in a foreign language, for example, that doesn’t mean you’ve learned that language. To learn the language means to grasp the underlying structure, so that you can come up with your own phrases and say whatever you want, not just what you’ve been taught to say.

In the same way, if you memorize a set of disconnected factoids about history, you haven’t learned history. This is something of a loaded topic right now in the US, because recent “reforms” in the American public school system have replaced learning with rote memorization of disconnected factoids that are then regurgitated for multiple-choice tests. This way of handling education penalizes those children who figure out how to learn, since they might well come up with answers that differ from the ones the test expects. That’s one of many ways that US education these days actively discourages learning—but that’s a subject for another post.

To learn is to grasp the underlying structure of a given subject of knowledge, so that the learner can come up with original responses to it. That’s what Russia and China did; they grasped the underlying structure of US diplomacy, figured out that they had nothing to gain by cooperating with that structure, and came up with a creative response, which was to ally against the United States. That’s what Greece is doing, too.  Bit by bit, the Greeks seem to be figuring out the underlying structure of troika policy, which amounts to the systematic looting of southern Europe for the benefit of Germany and a few of its allies, and are trying to come up with a response that doesn’t simply amount to unilateral submission.

That’s also what the jihadis and the Taliban are doing in the face of US military activity. If life hands you lemons, as the saying goes, make lemonade; if the US hands you drone strikes that routinely slaughter noncombatants, you can make very successful propaganda out of it—and if the US hands you a surge, you roll your eyes, hole up in your mountain fastnesses, and wait for the Americans to get bored or distracted, knowing that this won’t take long. That’s how learning works, but that’s something that US planners seem congenitally unable to take into account.

The same analysis, interestingly enough, makes just as much sense when applied to nonhuman nature. As Ervin Laszlo pointed out a long time ago in Introduction to Systems Philosophy, any sufficiently complex system behaves in ways that approximate intelligence.  Consider the way that bacteria respond to antibiotics. Individually, bacteria are as dumb as politicians, but their behavior on the species level shows an eerie similarity to learning; faced with antibiotics, a species of bacteria “tries out” different biochemical approaches until it finds one that sidesteps the antibiotic. In the same way, insects and weeds “try out” different responses to pesticides and herbicides until they find whatever allows them to munch on crops or flourish in the fields no matter how much poison the farmer sprays on them.

We can even apply the same logic to the environmental crisis as a whole. Complex systems tend to seek equilibrium, and will respond to anything that pushes them away from equilibrium by pushing back the other way. Any field biologist can show you plenty of examples: if conditions allow more rabbits to be born in a season, for instance, the population of hawks and foxes rises accordingly, reducing the rabbit surplus to a level the ecosystem can support. As humanity has put increasing pressure on the biosphere, the biosphere has begun to push back with increasing force, in an increasing number of ways; is it too much to think of this as a kind of learning, in which the biosphere “tries out” different ways to balance out the abusive behavior of humanity, until it finds one or more that work?
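The rabbit-and-fox feedback described above can be made concrete with a toy simulation. The sketch below is a minimal discrete-time model in the Lotka-Volterra family of predator-prey equations; every coefficient and starting population is an arbitrary illustrative value chosen for this example, not field data. The only point is to show the corrective push-back in action: a surplus of prey automatically feeds a rising predator population, with no central planner required.

```python
# A toy discrete-time predator-prey model (Lotka-Volterra style).
# All coefficients are arbitrary illustrative values, not field data;
# the point is only the feedback loop: a rabbit surplus feeds a growing
# fox population, which pushes rabbit numbers back toward a level the
# system can support.

def step(rabbits, foxes, growth=0.1, predation=0.002,
         conversion=0.001, fox_death=0.05):
    """Advance both populations by one season."""
    next_rabbits = rabbits + growth * rabbits - predation * rabbits * foxes
    next_foxes = foxes + conversion * rabbits * foxes - fox_death * foxes
    return next_rabbits, next_foxes

# Begin with a rabbit surplus and relatively few foxes.
rabbits, foxes = 200.0, 20.0
for season in range(10):
    rabbits, foxes = step(rabbits, foxes)

# After only a few seasons, the surplus has already triggered the
# corrective response: the fox population has grown in answer to the
# extra rabbits, and will in turn pull rabbit numbers back down.
```

Run over enough seasons, the two populations roughly chase each other around the model’s equilibrium point, which with these made-up coefficients sits at fifty rabbits and fifty foxes (fox_death/conversion and growth/predation respectively)—the system pushing back, in other words, against whatever displaces it.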

Now of course it’s long been a commonplace of modern thought that natural systems can’t possibly learn. The notion that nature is static, timeless, and unresponsive, a passive stage on which human beings alone play active roles, is welded into modern thought, unshaken even by the realities of biological evolution or the rising tide of evidence that natural systems are in fact quite able to adapt their way around human meddling. There’s a long and complex history to the notion of passive nature, but that’s a subject for another day; what interests me just now is that since 1990 or so, the governing classes of the United States, and some other Western nations as well, have applied the same frankly delusional logic to everything in the world other than themselves.

“We’re an empire now, and when we act, we create our own reality,” neoconservative guru Karl Rove is credited with saying to reporter Ron Suskind. “We’re history’s actors, and you, all of you, will be left to just study what we do.” That seems to be the thinking that governs the US government these days, on both sides of the supposed partisan divide. Obama says we’re in a recovery, and if the economy fails to act accordingly, why, rooms full of industrious flacks churn out elaborately fudged statistics to erase that unwelcome reality. That history’s self-proclaimed actors might turn out to be just one more set of flotsam awash on history’s vast tides has never entered their darkest dream.

Let’s step back from specifics again, though. What’s the source of this bizarre paralogic—the delusion that leads politicians to think that they create reality, and that everyone and everything else can only fill the roles they’ve been assigned by history’s actors?  I think I know. I think it comes from a simple but remarkably powerful fact, which is that the people in question, along with most people in the privileged classes of the industrial world, spend most of their time, from childhood on, dealing with machines.

We can define a machine as a subset of the universe that’s been deprived of the capacity to learn. The whole point of building a machine is that it does what you want, when you want it, and nothing else. Flip the switch on, and it turns on and goes through whatever rigidly defined set of behaviors it’s been designed to do; flip the switch off, and it stops. It may be fitted with controls, so you can manipulate its behavior in various tightly limited ways; nowadays, especially when computer technology is involved, the set of behaviors assigned to it may be complex enough that an outside observer may be fooled into thinking that there’s learning going on. There’s no inner life behind the facade, though.  It can’t learn, and to the extent that it pretends to learn, what happens is the product of the sort of rote memorization described above as the antithesis of learning.

A machine that learned would be capable of making its own decisions and coming up with a creative response to your actions—and that’s the opposite of what machines are meant to do, because that response might well involve frustrating your intentions so the machine can get what it wants instead. That’s why the trope of machines going to war against human beings has so large a presence in popular culture: it’s exactly because we expect machines not to act like people, not to pursue their own needs and interests, that the thought of machines acting the way we do gets so reliable a frisson of horror.

The habit of thought that treats the rest of the cosmos as a collection of machines, existing only to fulfill whatever purpose they might be assigned by their operators, is another matter entirely. Its origins can be traced back to the dawning of the scientific revolution in the seventeenth century, when a handful of thinkers first began to suggest that the universe might not be a vast organism—as everybody in the western world had presupposed for millennia before then—but might instead be a vast machine. It’s indicative that one immediate and popular response to this idea was to insist that other living things were simply “meat machines” who didn’t actually suffer pain under the vivisector’s knife, but had been designed by God to imitate sounds of pain in order to inspire feelings of pity in human beings.

The delusion of control—the conviction, apparently immune to correction by mere facts, that the world is a machine incapable of doing anything but the things we want it to do—pervades contemporary life in the world’s industrial societies. People in those societies spend so much more time dealing with machines than they do interacting with other people and other living things without a machine interface getting in the way that it’s no wonder this delusion is so widespread. As long as it retains its grip, though, we can expect the industrial world, and especially its privileged classes, to stumble onward from one preventable disaster to another. That’s the inner secret of the delusion of control, after all: those who insist on seeing the world in mechanical terms end up behaving mechanically themselves. Those who deny all other things the ability to learn lose the ability to learn from their own mistakes, and lurch robotically onward along a trajectory that leads straight to the scrapheap of the future.

Wednesday, June 17, 2015

An Affirming Flame

According to an assortment of recent news stories, this Thursday, June 18, is the make-or-break date by which a compromise has to be reached between Greece and the EU if a Greek default, with the ensuing risk of a potential Greek exit from the Eurozone, is to be avoided. If that’s more than just media hype, there’s a tremendous historical irony in the fact. June 18 is, after all, the 200th anniversary of the Battle of Waterloo, where a previous attempt at European political and economic integration came to grief.

Now of course there are plenty of differences between the two events. In 1815 the preferred instrument of integration was raw military force; in 2015, for a variety of reasons, a variety of less overt forms of political and economic pressure have taken the place of Napoleon’s Grande Armée. The events of 1815 were also much further along the curve of defeat than those of 2015.  Waterloo was the end of the road for France’s dream of pan-European empire, while the current struggles over the Greek debt are taking place at a noticeably earlier milepost along the same road. The faceless EU bureaucrats who are filling Napoleon’s role this time around thus won’t be on their way to Elba for some time yet.

“What discords will drive Europe into that artificial unity—only dry or drying sticks can be tied into a bundle—which is the decadence of every civilization?” William Butler Yeats wrote that in 1936. It was a poignant question but also a highly relevant one, since the discords in question were moving rapidly toward explosion as he penned the last pages of A Vision, where those words appear.  Like most of those who see history in cyclical terms, Yeats recognized that the patterns that recur from age to age  are trends and motifs rather than exact narratives.  The part played by a conqueror in one era can end up in the hands of a heroic failure in the next, for circumstances can define a historical role but not the irreducibly human strengths and foibles of the person who happens to fill it.

Thus it’s not too hard to look at the rising spiral of stresses in the European Union just now and foresee the eventual descent of the continent into a mix of domestic insurgency and authoritarian nationalism, with the oncoming tide of mass migration from Africa and the Middle East adding further pressure to an already explosive mix. Exactly how that will play out over the next century, though, is a very tough question to answer. A century from now, due to raw demography, many countries in Europe will be majority-Muslim nations that look to Mecca for the roots of their faith and culture—but which ones, and how brutal or otherwise will the transition be? That’s impossible to know in advance.

There are plenty of similar examples just now; for the student of historical cycles, 2015 practically defines the phrase “target-rich environment.” Still, I want to focus on something a little different here. Partly, this is because the example I have in mind makes a good opportunity to point out the way that what philosophers call the contingent nature of events—in less highflown language, the sheer cussedness of things—keeps history’s dice constantly rolling. Partly, though, it’s because this particular example is likely to have a substantial impact on the future of everyone reading this blog.

Last year saw a great deal of talk in the media about possible parallels between the current international situation and that of the world precisely a century ago, in the weeks leading up to the outbreak of the First World War.  Mind you, since I contributed to that discussion, I’m hardly in a position to reject the parallels out of hand. Still, the more I’ve observed the current situation, the more I’ve come to think that a different date makes a considerably better match to present conditions. To be precise, instead of a replay of 1914, I think we’re about to see an equivalent of 1939—but not quite the 1939 we know.

Two entirely contingent factors, added to all the other pressures driving toward that conflict, made the Second World War what it was. The first, of course, was the personality of Adolf Hitler. It was probably a safe bet that somebody in Weimar Germany would figure out how to build a bridge between the politically active but fragmented nationalist Right and the massive but politically inert German middle classes, restore Germany to great-power status, and gear up for a second attempt to elbow aside the British Empire. That the man who happened to do these things was an eccentric anti-Semitic ideologue who combined shrewd political instincts, utter military incompetence, and a frankly psychotic faith in his own supposed infallibility, though, was in no way required by the logic of history.

Had Corporal Hitler taken an extra lungful of gas on the Western Front, someone else would likely have filled the same role in the politics of the time. We don’t even have to consider what might have happened if the nation that birthed Frederick the Great and Otto von Bismarck had come up with a third statesman of the same caliber. If the German head of state in 1939 had been merely a capable pragmatist with adequate government and military experience, and guided Germany’s actions by a logic less topsy-turvy than Hitler’s, the trajectory of those years would have been far different.

The second contingent factor that defined the outcome of the great wars of the twentieth century is broader in focus than the quirks of a single personality, but it was just as subject to those vagaries that make hash out of attempts at precise historical prediction. As discussed in an earlier post on this blog, it was by no means certain that America would be Britain’s ally when war finally came. From the Revolution onward, Britain was in many Americans’ eyes the national enemy; as late as the 1930s, when the US Army held its summer exercises, the standard scenario involved a British invasion of US territory.

All along, there was an Anglophile party in American cultural life, and its ascendancy in the years after 1900 played a major role in bringing the United States into two world wars on Britain’s side. Still, there was a considerably more important factor in play, which was a systematic British policy of conciliating the United States. From the American Civil War on, Britain allowed the United States liberties it would never have given any other power. When the United States expanded its influence in Latin America and the Caribbean, Britain allowed itself to be upstaged there; when the United States shook off its isolationism and built a massive blue-water navy, the British even allowed US naval vessels to refuel at British coaling stations during the global voyage of the “Great White Fleet” in 1907-9.

This was partly a reflection of the common cultural heritage that made many British politicians think of the United States as a sort of boisterous younger brother of theirs, and partly a cold-eyed recognition, in the wake of the Civil War, that war between Britain and the United States would almost certainly lead to a US invasion of Canada that Britain was very poorly positioned to counter. Still, there was another issue of major importance. To an extent few people realized at the time, the architecture of European peace after Waterloo depended on political arrangements that kept the German-speaking lands of the European core splintered into a diffuse cloud of statelets too small to threaten any of the major powers.

The great geopolitical fact of the 1860s was the collapse of that cloud into the nation of Germany, under the leadership of the dour northeastern kingdom of Prussia. In 1866, the Prussians pounded the stuffing out of Austria and brought the rest of the German states into a federation; in 1870-1871, the Prussians and their allies did the same thing to France, which was a considerably tougher proposition—this was the same French nation, remember, which brought Europe to its knees in Napoleon’s day—and the federation became the German Empire. The Austrian Empire was widely considered the third great power in Europe until 1866; until 1870, France was the second; everybody knew that sooner or later the Germans were going to take on great power number one.

British policy toward the United States from 1871 onward was thus tempered by the harsh awareness that Britain could not afford to alienate a rising power who might become an ally, or at least a friendly neutral, when the inevitable war with Germany arrived. Above all, an alliance between Germany and the United States would have been Britain’s death warrant, and everyone in the Foreign Office and the Admiralty in London had to know that. The thought of German submarines operating out of US ports, German and American fleets combining to take on the Royal Navy, and American armies surging into Canada and depriving Britain of a critical source of raw materials and recruits while the British Army was pinned down elsewhere, must have given British planners many sleepless nights.

After 1918, that recognition must have been even more sharply pointed, because US loans and munitions shipments played a massive role in saving the western Allies from collapse in the face of the final German offensive in the spring of 1918, and turned the tide in a war that, until then, had largely gone Germany’s way. During the two decades leading up to 1939, as Germany recovered and rearmed, British governments did everything they could to keep the United States on their side, with results that paid off handsomely when the Second World War finally came.

Let’s imagine, though, an alternative timeline in which the Foreign Office and the Admiralty from 1918 on are staffed by idiots. Let’s further imagine that Parliament is packed with clueless ideologues whose sole conception of foreign policy is that everyone, everywhere, ought to be bludgeoned into compliance with Britain’s edicts, no matter how moronic those happen to be. Let’s say, in particular, that one British government after another conducts its policy toward the United States on the basis of smug self-centered arrogance, and any move the US makes to assert itself on the international stage can count on an angry response from London. The United States launches an aircraft carrier? A threat to world peace, the London Times roars. The United States exerts diplomatic pressure on Mexico, and builds military bases in Panama? British diplomats head for the Caribbean and Latin America to stir up as much opposition to America’s agenda as possible.

Let’s say, furthermore, that in this alternative timeline, Adolf Hitler did indeed take one too many deep breaths on the Western Front, and lies in a military cemetery, one more forgotten casualty of the Great War. In his absence, the German Workers Party remains a fringe group, and the alliance between the nationalist Right and the middle classes is built instead by the Deutschvölkische Freiheitspartei (DVFP), which seizes power in 1934. Ulrich von Hassenstein, the new Chancellor, is a competent insider who knows how to listen to his diplomats and General Staff, and German foreign and military policy under his leadership pursues the goal of restoring Germany to world-power status using considerably less erratic means than those used by von Hassenstein’s equivalent in our timeline.

Come 1939, finally, as rising tensions between Germany and the Anglo-French alliance over Poland’s status move toward war, Chancellor von Hassenstein welcomes US President Charles Lindbergh to Berlin, where the two heads of state sign a galaxy of treaties and trade agreements and talk earnestly to the media about the need to establish a multipolar world order to replace Britain’s global hegemony. A second world war is in the offing, but the shape of that war will be very different from the one that broke out in our version of 1939, and while the United States almost certainly will be among the victors, Britain almost certainly will not.

Does all this sound absurd? Let’s change the names around and see.

Just as the great rivalry of the first half of the twentieth century was fought out between Britain and Germany, the great rivalry of the century’s second half was between the United States and Russia. If nuclear weapons hadn’t been invented, it’s probably a safe bet that at some point the rivalry would have ended in another global war. As it was, the threat of mutual assured destruction meant that the struggle for global power had to be fought out less directly, in a flurry of proxy wars, sponsored insurgencies, economic warfare, subversion, sabotage, and bare-knuckle diplomacy. In that war, the United States came out on top, and Soviet Russia went the way of Imperial Germany, plunging into the same sort of political and economic chaos that beset the Weimar Republic in its day.

The supreme strategic imperative of the United States in that war was finding ways to drive as deep a wedge as possible between Russia and China, in order to keep them from taking concerted action against the US. That wasn’t all that difficult a task, since the two nations have very little in common and many conflicting interests. Nixon’s 1972 trip to China was arguably the defining moment in the Cold War, the point at which China’s separation from the Soviet bloc became total and Chinese integration with the American economic order began. From that point on, for Russia, it was basically all downhill.

In the aftermath of Russia’s defeat, the same strategic imperative remained, but the conditions of the post-Cold War world made it almost absurdly easy to carry out. All that would have been needed were American policies that gave Russia and China meaningful, concrete reasons to think that their national interests and aspirations would be easier to achieve in cooperation with a US-led global order than in opposition to it. Granting Russia and China the same position of regional influence that the US accords to Germany and Japan as a matter of course probably would have been enough. A little forbearance, a little foreign aid, a little adroit diplomacy, and the United States would have been in the catbird’s seat, with Russia and China glaring suspiciously at each other across their long and problematic mutual border, and bidding against each other for US support in their various disagreements.

But that’s not what happened, of course.

What happened instead was that the US embraced a foreign policy so astonishingly stupid that I’m honestly not sure the English language has adequate resources to describe it. Since 1990, one US administration after another, with the enthusiastic bipartisan support of Congress and the capable assistance of bureaucrats across official Washington from the Pentagon and the State Department on down, has pursued policies guaranteed to force Russia and China to set aside their serious mutual differences and make common cause against us. Every time the US faced a choice between competing policies, it’s consistently chosen the option most likely to convince Russia, China, or both nations at once that they had nothing to gain from further cooperation with American agendas.

What’s more, the US has more recently managed the really quite impressive feat of bringing Iran into rapprochement with the emerging Russo-Chinese alliance. It’s hard to think of another nation on Earth that has fewer grounds for constructive engagement with Russia or China than the Islamic Republic of Iran, but several decades of cluelessly hamfisted American blundering and bullying finally did the job. My American readers can now take pride in the state-of-the-art Russian air defense systems around Tehran, the bustling highways carrying Russian and Iranian products to each other’s markets, and the Russian and Chinese intelligence officers who are doubtless settling into comfortable digs on the north shore of the Persian Gulf, where they can snoop on the daisy chain of US bases along the south shore. After all, a quarter century of US foreign policy made those things happen.

It’s one thing to engage in this kind of serene disregard for reality when you’ve got the political unity, the economic abundance, and the military superiority to back it up. The United States today, like the British Empire in 1939, no longer has those. We’ve got an impressive fleet of aircraft carriers, sure, but Britain had an equally impressive fleet of battleships in 1939, and you’ll notice how much good those did them. Like Britain in 1939, the United States today is perfectly prepared for a kind of war that nobody fights any more, while rival nations less constrained by the psychology of previous investment and less riddled with institutionalized graft are fielding novel weapons systems designed to do end runs around our strengths and focus with surgical precision on our weaknesses.

Meanwhile, inside the baroque carapace of carriers, drones, and all the other high-tech claptrap of an obsolete way of war, the United States is a society in freefall, far worse off than Britain was during its comparatively mild 1930s downturn. Its leaders have forfeited the respect of a growing majority of its citizens; its economy has morphed into a Potemkin-village capitalism in which the manipulation of unpayable IOUs in absurd and rising amounts has all but replaced the actual production of goods and services; its infrastructure is so far fallen into decay that many US counties no longer pave their roads; most Americans these days think of their country’s political institutions as the enemy and its loudly proclaimed ideals as some kind of sick joke—and in both cases, not without reason. The national unity that made victory in two world wars and the Cold War possible went by the boards a long time ago, drowned in a tub by Tea Party conservatives who thought they were getting rid of government and limousine liberals who were going through the motions of sticking it to the Man.

I could go on tracing parallels for some time—in particular, despite a common rhetorical trope of US Russophobes, Vladimir Putin is not an Adolf Hitler but a fair equivalent of the Ulrich von Hassenstein of my alternate-history narrative—but here again, my readers can do the math themselves. The point I want to make is that all the signs suggest we are entering an era of international conflict in which the United States has thrown away nearly all its potential strengths, and handed its enemies advantages they would never have had if our leaders had the brains the gods gave geese. Since nuclear weapons still foreclose the option of major wars between the great powers, the conflict in question will doubtless be fought using the same indirect methods as the Cold War; in fact, it’s already being fought by those means, as the victims of proxy wars in Ukraine, Syria, and Yemen already know. The question in my mind is simply how soon those same methods get applied on American soil.

We thus stand at the beginning of a long, brutal epoch, as unforgiving as the one that dawned in 1939. Those who pin Utopian hopes on the end of American hegemony will get to add disappointment to that already bitter mix, since hegemony remains the same no matter who happens to be perched temporarily in the saddle. (I also wonder how many of the people who think they’ll rejoice at the end of American hegemony have thought through the impact on their hopes of collective betterment, not to mention their own lifestyles, once the 5% of the world’s population who live in the US can no longer claim a quarter or so of the world’s resources and wealth.) If there’s any hope possible at such a time, to my mind, it’s the one W.H. Auden proposed as the conclusion of his bleak and brilliant poem “September 1, 1939”:

Defenceless under the night,
Our world in stupor lies;
Yet, dotted everywhere,
Ironic points of light
Flash out wherever the just
Exchange their messages:
May I, composed like them
Of Eros and of dust,
Beleaguered by the same
Negation and despair,
Show an affirming flame.

Wednesday, June 10, 2015

The Era of Dissolution

The last of the five phases of the collapse process we’ve been discussing here in recent posts is the era of dissolution. (For those who haven’t been keeping track, the first four are the eras of pretense, impact, response, and breakdown.) I suppose you could call the era of dissolution the Rodney Dangerfield of collapse, though it’s not so much that it gets no respect; it generally doesn’t even get discussed.

To some extent, of course, that’s because a great many of the people who talk about collapse don’t actually believe that it’s going to happen. That lack of belief stands out most clearly in the rhetorical roles assigned to collapse in so much of modern thinking. People who actually believe that a disaster is imminent generally put a lot of time and effort into getting out of its way in one way or another; it’s those who treat it as a scarecrow to elicit predictable emotional reactions from other people, or from themselves, who never quite manage to walk their talk.

Interestingly, the factor that determines the target of scarecrow-tactics of this sort seems to be political in nature. Groups that think they have a chance of manipulating the public into following their notion of good behavior tend to use the scarecrow of collapse to affect other people; for them, collapse is the horrible fate that’s sure to gobble us up if we don’t do whatever it is they want us to do. Those who’ve given up any hope of getting a response from the public, by contrast, turn the scarecrow around and use it on themselves; for them, collapse is a combination of Dante’s Inferno and the Big Rock Candy Mountain, the fantasy setting where the wicked get the walloping they deserve while they themselves get whatever goodies they’ve been unsuccessful at getting in the here and now.

Then, of course, you get the people for whom collapse is less scarecrow than teddy bear, the thing that allows them to drift off comfortably to sleep in the face of an unwelcome future. It’s been my repeated observation that many of those who insist that humanity will become totally extinct in the very near future fall into this category. Most people, faced with a serious threat to their own lives, will take drastic action to save themselves; faced with a threat to the survival of their family or community, a good many people will take actions so drastic as to put their own lives at risk in an effort to save others they care about. The fact that so many people who insist that the human race is doomed go on to claim that the proper reaction is to sit around feeling very, very sad about it all does not inspire confidence in the seriousness of that prediction—especially when feeling very, very sad seems mostly to function as an excuse to keep enjoying privileged lifestyles for just a little bit longer.

So we have the people for whom collapse is a means of claiming unearned power, the people for whom it’s a blank screen on which to project an assortment of self-regarding fantasies, and the people for whom it’s an excuse to do nothing in the face of a challenging future. All three of those are popular gimmicks with an extremely long track record, and they’ll doubtless all see plenty of use millennia after industrial civilization has taken its place in the list of failed civilizations. The only problem with them is that they don’t happen to provide any useful guidance for those of us who have noticed that collapse isn’t merely a rhetorical gimmick meant to get emotional reactions—that it’s something that actually happens, to actual civilizations, and that it’s already happening to ours.

From the three perspectives already discussed, after all, realistic questions about what will come after the rubble stops bouncing are entirely irrelevant. If you’re trying to use collapse as a boogeyman to scare other people into doing what you tell them, your best option is to combine a vague sense of dread with an assortment of cherrypicked factoids intended to make even a worst-case scenario look not nearly bad enough; if you’re trying to use collapse as a source of revenge fantasies where you get what you want and the people you don’t like get what’s coming to them, daydreams of various levels and modes of dampness are far more useful to you than sober assessments; while if you’re trying to use collapse as an excuse to maintain an unsustainable and planet-damaging SUV lifestyle, your best bet is to insist that everyone and everything dies all at once, so nothing will ever matter again to anybody.

On the other hand, there are also those who recognize that collapse happens, that we’re heading toward one, and that it might be useful to talk about what the world might look like on the far side of that long and difficult process. I’ve tried to sketch out a portrait of the postcollapse world in last year’s series of posts here on Dark Age America, and I haven’t yet seen any reason to abandon that portrait of a harsh but livable future, in which a sharply reduced global population returns to agrarian or nomadic lives in those areas of the planet not poisoned by nuclear or chemical wastes or rendered uninhabitable by prolonged drought or the other impacts of climate change, and in which much or most of today’s scientific and technological knowledge is irretrievably lost.

The five phases of collapse discussed in this latest sequence of posts are simply a description of how we get there—or, more precisely, of one of the steps by which we get there. That latter point’s a detail that a good many of my readers, and an even larger fraction of my critics, seem to have misplaced. The five-stage model is a map of how human societies shake off an unsustainable version of business as usual and replace it with something better suited to the realities of the time. It applies to a very wide range of social transformations, reaching in scale from the local to the global and in intensity from the relatively modest to the cataclysmic. To insist that it’s irrelevant because the current example of the species covers more geographical area than any previous example, or has further to fall than most, is like insisting that a law of physics that governs the behavior of marbles and billiard balls must somehow stop working just because you’re trying to do the same thing with bowling balls.

A difference of scale is not a difference of kind. Differences of scale have their own implications, which we’ll discuss a little later on in this post, but the complex curve of decline is recognizably the same in small things as in big ones, in the most as in the least catastrophic examples. That’s why I’ve used a relatively modest example—the collapse of the economic system of 1920s America and the resulting Great Depression—and an example from the midrange—the collapse of the French monarchy and the descent of 18th-century Europe into the maelstrom of the Napoleonic Wars—to provide convenient outlines for something toward the upper end of the scale—the decline and fall of modern industrial civilization and the coming of a deindustrial dark age. Let’s return to those examples, and see how the thread of collapse winds to its end.

As we saw in last week’s thrilling episode, the breakdown stage of the Great Depression came when the newly inaugurated Roosevelt administration completely redefined the US currency system. Up to that time, US dollar bills were in effect receipts for gold held in banks; after that time, those receipts could no longer be exchanged for gold, and the gold held by the US government became little more than a public relations gimmick. That action succeeded in stopping the ghastly credit crunch that shuttered every bank and most businesses in the US in the spring of 1933.

Roosevelt’s policies didn’t get rid of the broader economic dysfunction the 1929 crash had kickstarted. That was inherent in the industrial system itself, and remains a massive issue today, though its effects were papered over for a while by a series of temporary military, political, and economic factors that briefly enabled the United States to prosper at the expense of the rest of the world. The basic issue is simply that replacing human labor with machines powered by fossil fuel results in unemployment, and no law of nature or economics requires that new jobs can be found or created to replace the ones that are eliminated by mechanization. The history of the industrial age has been powerfully shaped by a whole series of attempts to ignore, evade, or paper over that relentless arithmetic.

Until 1940, the Roosevelt administration had no more luck with that project than the governments of most other nations. It wasn’t until the Second World War made the lesson inescapable that anyone realized that the only way to provide full employment in an industrial society was to produce far more goods than consumers could consume, and let the military and a variety of other gimmicks take up the slack. That was a temporary expedient, due to stark limitations in the resource base needed to support the mass production of useless goods, but in 1940, and even more so in 1950, few people recognized that and fewer cared. It’s our bad luck to be living at the time when that particular bill is coming due.

The first lesson to learn from the history of collapse, then, is that the breakdown phase doesn’t necessarily solve all the problems that brought it about. It doesn’t even necessarily take away every dysfunctional feature of the status quo. What it does with fair reliability is eliminate enough of the existing order of things that the problems being caused by that order decline to a manageable level. The more deeply rooted the problematic features of the status quo are in the structure of society and daily life, the harder it will be to change them, and the more likely other features are to be changed: in the example just given, it was much easier to break the effective link between the US currency and gold, and expand the money supply enough to get the economy out of cardiac arrest, than it was to break a link between mechanization and unemployment that’s hardwired into the basic logic of industrialism.

What this implies in turn is that it’s entirely possible for one collapse to cycle through the five stages we’ve explored, and then to have the era of dissolution morph straight into a new era of pretense in which the fact that all society’s problems haven’t been solved is one of the central things nobody in any relation to the centers of power wants to discuss. If the Second World War, the massive expansion of the petroleum economy, the invention of suburbia, the Cold War, and a flurry of other events hadn’t ushered in the immensely wasteful but temporarily prosperous boomtime of late 20th century America, there might well have been another vast speculative bubble in the mid- to late 1940s, resulting in another crash, another depression, and so on. This is after all what we’ve seen over the last twenty years: the tech stock bubble and bust, the housing bubble and bust, the fracking bubble and bust, each one hammering the economy further down the slope of decline.

With that in mind, let’s turn to our second example, the French Revolution. This is particularly fascinating since the aftermath of that era of breakdown saw a nominal return to the conditions of the era of pretense. After Napoleon’s final defeat in 1815, the Allied powers found the Bourbon heir and plopped him onto the French throne as Louis XVIII to well-coached shouts of “Vive le Roi!” On paper, nothing had changed.

In reality, everything had changed, and the monarchy of post-Napoleonic France had roots about as deep and sturdy as the democracy of post-Saddam Iraq. Louis XVIII was clever enough to recognize this, and so managed to end his reign in the traditional fashion, feet first from natural causes. His heir Charles X was nothing like so clever, and got chucked off the throne after six years on it by another revolution in 1830. King Louis-Philippe went the same way in 1848—the French people were getting very good at revolution by that point. There followed a Republic, an Empire headed by Napoleon’s nephew, and finally another Republic which lasted out the century. All in all, French politics in the 19th century was the sort of thing you’d expect to see in an unusually excitable banana republic.

The lesson to learn from this example is that it’s very easy, and very common, for a society in the dissolution phase of collapse to insist that nothing has changed and pretend to turn back the clock. Depending on just how traumatic the collapse has been, everybody involved may play along with the charade, the way everyone in Rome nodded and smiled when Augustus Caesar pretended to uphold the legal forms of the defunct Roman Republic, and their descendants did exactly the same thing centuries later when Theodoric the Ostrogoth pretended to uphold the legal forms of the defunct Roman Empire. Those who recognize the charade as charade and play along without losing track of the realities, like Louis XVIII, can quite often navigate such times successfully; those who mistake charade for reality, like Charles X, are cruising for a bruising and normally get it in short order.

Combine these two lessons and you’ll get what I suspect will turn out to be a tolerably good sketch of the American future. Whatever comes out of the impact, response, and breakdown phases of the crisis looming ahead of the United States just now—whether it’s a fragmentary mess of successor states, a redefined nation beginning to recover from a period of personal rule by some successful demagogue, or, just possibly, a battered and weary republic facing a long trudge back to its foundational principles—it seems very likely that everyone involved will do their level best to insist that nothing has really changed. If the current constitution has been abolished, it may be officially reinstated with much fanfare; there may be new elections, and some shuffling semblance of the two-party system may well come lurching out of the crypts for one or two more turns on the stage.

None of that will matter. The nation will have changed decisively in ways we can only begin to envision at this point, and the forms of twentieth-century American politics will cover a reality that has undergone drastic transformations, just as the forms of nineteenth-century French monarchy did. In due time, by some combination of legal and extralegal means, the forms will be changed to reflect the new realities, and the territory we now call the United States of America—which will almost certainly have a different name, and may well be divided into several different and competing nations by then—will be as prepared to face the next round of turmoil as it’s going to get.

Yes, there will be a next round of turmoil. That’s the thing that most people miss when thinking about the decline and fall of a civilization: it’s not a single event, or even a single linear process. It’s a whole series of cascading events that vary drastically in their importance, geographical scope, and body count. That’s true of every process of historic change.

It was true even of so simple an event as the 1929 crash and Great Depression: 1929 saw the crash, 1930 the suckers’ rally, 1931 the first wave of European bank failures, 1932 the unraveling of the US banking system, and so on until bombs falling on Pearl Harbor ushered in a different era. It was even more true of the French Revolution: between 1789 and 1815 France basically didn’t have a single year without dramatic events and drastic changes of one kind or another, and the echoes of the Revolution kept things stirred up for decades to come. Check out the fall of civilizations and you’ll see the same thing unfolding on a truly vast scale, with crisis after crisis along an arc centuries in length.

The process that’s going on around us is the decline and fall of industrial civilization. Everything we think of as normal and natural, modern and progressive, solid and inescapable is going to melt away into nothingness in the years, decades, and centuries ahead, to be replaced first by the very different but predictable institutions of a dark age, and then by the new and wholly unfamiliar forms of the successor societies of the far future. There’s nothing inevitable about the way we do things in today’s industrial world; our political arrangements, our economic practices, our social institutions, our cultural habits, our sciences and our technologies all unfold from industrial civilization’s distinctive and profoundly idiosyncratic worldview. So does the central flaw in the entire baroque edifice, our lethally muddleheaded inability to understand our inescapable dependence on the biosphere that supports our lives. All that is going away in the time before us—but it won’t go away suddenly, or all at once.

Here in the United States, we’re facing one of the larger downward jolts in that prolonged process, the end of American global empire and of the robust economic benefits that the machinery of empire pumps from the periphery to the imperial center. Until recently, the five per cent of us who lived here got to enjoy a quarter of the world’s energy supply and raw materials and a third of its manufactured products. Those figures have already decreased noticeably, with consequences that are ringing through every corner of our society; in the years to come they’re going to decrease much further still, most likely to something like a five per cent share of the world’s wealth or even a little less. That’s going to impact every aspect of our lives in ways that very few Americans have even begun to think about.

All of that is taking place in a broader context, to be sure. Other countries will have their own trajectories through the arc of industrial civilization’s decline and fall, and some of those trajectories will be considerably less harsh in the short term than ours. In the long run, the human population of the globe is going to decline sharply; the population bubble that’s causing so many destructive effects just now will be followed in due time by a population bust, in which those four guys on horseback will doubtless play their usual roles. In the long run, furthermore, the vast majority of today’s technologies are going to go away as the resource base needed to support them gets used up, or stops being available due to other bottlenecks. Those are givens—but the long run is not the only scale that matters.

It’s not at all surprising that the foreshocks of that immense change are driving the kind of flight to fantasy criticized in the opening paragraphs of this essay. That’s business as usual when empires go down; pick up a good cultural history of the decline and fall of any empire in the last two millennia or so and you’ll find plenty of colorful prophecies of universal destruction. I’d like to encourage my readers, though, to step back from those fantasies—entertaining as they are—and try to orient themselves instead to the actual shape of the future ahead of us. That shape’s not only a good deal less gaseous than the current offerings of the Apocalypse of the Month Club (internet edition), it also offers an opportunity to do something about the future—a point we’ll be discussing further in posts to come.

Wednesday, June 03, 2015

The Era of Breakdown

The fourth of the stages in the sequence of collapse we’ve been discussing is the era of breakdown. (For those who haven’t been keeping track, the first three phases are the eras of pretense, impact, and response; the final phase, which we’ll be discussing next week, is the era of dissolution.) The era of breakdown is the phase that gets most of the press, and thus inevitably no other stage has attracted anything like the crop of misperceptions, misunderstandings, and flat-out hokum that this one has.

The era of breakdown is the point along the curve of collapse at which business as usual finally comes to an end. That’s where the confusion comes in. It’s one of the central articles of faith in pretty much every human society that business as usual functions as a bulwark against chaos, a defense against whatever problems the society might face. That’s exactly where the difficulty slips in, because in pretty much every human society, what counts as business as usual—the established institutions and familiar activities on which everyone relies day by day—is the most important cause of the problems the society faces, and the primary cause of collapse is thus quite simply that societies inevitably attempt to solve their problems by doing all the things that make their problems worse.

The phase of breakdown is the point at which this exercise in futility finally grinds to a halt. The three previous phases are all attempts to avoid breakdown: in the phase of pretense, by making believe that the problems don’t exist; in the phase of impact, by making believe that the problems will go away if only everyone doubles down on whatever’s causing them; and in the phase of response, by making believe that changing something other than the things that are causing the problems will fix the problems. Finally, after everything else has been tried, the institutions and activities that define business as usual either fall apart or are forcibly torn down, and then—and only then—it becomes possible for a society to do something about its problems.

It’s important not to mistake the possibility of constructive action for the inevitability of a solution. The collapse of business as usual in the breakdown phase doesn’t solve a society’s problems; it doesn’t even prevent those problems from being made worse by bad choices. It merely removes the primary obstacle to a solution, which is the wholly fictitious aura of inevitability that surrounds the core institutions and activities that are responsible for the problems. Once people in a society realize that no law of God or nature requires them to maintain a failed status quo, they can then choose to dismantle whatever fragments of business as usual haven’t yet fallen down of their own weight.

That’s a more important action than it might seem at first glance. It doesn’t just put an end to the principal cause of the society’s problems. It also frees up resources that have been locked up in the struggle to keep business as usual going at all costs, and those newly freed resources very often make it possible for a society in crisis to transform itself drastically in a remarkably short period of time. Whether those transformations are for good or ill, or as usually happens, a mixture of the two, is another matter, and one I’ll address a little further on.

Stories in the media, some recent, some recently reprinted, happen to have brought up a couple of first-rate examples of the way that resources get locked up in unproductive activities during the twilight years of a failing society. A California newspaper, for example, recently mentioned that Elon Musk’s large and much-ballyhooed fortune is almost entirely a product of government subsidies. Musk is a smart guy; he obviously realized a good long time ago that federal and state subsidies for technology were where the money was, and he’s constructed an industrial empire funded by US taxpayers to the tune of many billions of dollars. None of his publicly traded firms has ever made a profit, and as long as the subsidies keep flowing, none of them ever has to; between an overflowing feed trough of government largesse and the longstanding eagerness of fools to be parted from their money by way of the stock market, he’s pretty much set for life.

This is business as usual in today’s America. An article from 2013 pointed out, along the same lines, that the profits made by the five largest US banks were almost exactly equal to the amount of taxpayer money those same five banks got from the government. Like Elon Musk, the banks in question have figured out where the money is, and have gone after it with their usual verve; the revolving door that allows men in suits to shuttle back and forth between those same banks and the financial end of the US government doesn’t exactly hinder that process. It’s lucrative, it’s legal, and the mere fact that it’s bankrupting the real economy of goods and services in order to further enrich an already glutted minority of kleptocrats is nothing anyone in the citadels of power worries about.

A useful light on a different side of the same process comes from an editorial (in PDF) which claims that something like half of all current scientific papers are unreliable junk. Is this the utterance of an archdruid, or some other wild-eyed critic of science? No, it comes from the editor of Lancet, one of the two or three most reputable medical journals on the planet. The managing editor of The New England Journal of Medicine, which has a comparable ranking to Lancet, expressed much the same opinion of the shoddy experimental design, dubious analysis, and blatant conflicts of interest that pervade contemporary scientific research.

Notice that what’s happening here affects the flow of information in the same way that misplaced government subsidies affect the flow of investment. The functioning of the scientific process, like that of the market, depends on the presupposition that everyone who takes part abides by certain rules. When those rules are flouted, individual actors profit, but they do so at the expense of the whole system: the results of scientific research are distorted so that (for example) pharmaceutical firms can profit from drugs that don’t actually have the benefits claimed for them, just as the behavior of the market is distorted so that (for example) banks that would otherwise struggle for survival, and would certainly not be able to pay their CEOs gargantuan bonuses, can continue on their merry way.

The costs imposed by these actions are real, and they fall on all other participants in science and the economy respectively. Scientists these days, especially but not only in such blatantly corrupt fields as pharmaceutical research, face a lose-lose choice between basing their own investigations on invalid studies, on the one hand, or having to distrust any experimental results they don’t replicate themselves, on the other. Meanwhile the consumers of the products of scientific research—yes, that would be all of us—have to contend with the fact that we have no way of knowing whether any given claim about the result of research is the product of valid science or not. Similarly, the federal subsidies that direct investment toward politically savvy entrepreneurs like Elon Musk, and politically well-connected banks such as Goldman Sachs, and away from less parasitic and more productive options distort the entire economic system by preventing the normal workings of the market from weeding out nonviable projects and firms, and rewarding the more viable ones.

Turn to the historical examples we’ve been following for the last three weeks, and distortions of the same kind are impossible to miss. In the US economy before and during the stock market crash of 1929 and its long and brutal aftermath, a legal and financial system dominated by a handful of very rich men saw to it that the bulk of the nation’s wealth flowed uphill, out of productive economic activities and into speculative ventures increasingly detached from the productive economy. When the markets imploded, in turn, the same people did their level best to see to it that their lifestyles weren’t affected even though everyone else’s was. The resulting collapse in consumer expenditures played a huge role in driving the cascading collapse of the US economy that, by the spring of 1933, had shuttered every consumer bank in the nation and driven joblessness and impoverishment to record highs.

That’s what Franklin Roosevelt fixed. It’s always amused me that the people who criticize FDR—and of course there’s plenty to criticize in a figure who, aside from his far greater success as a wartime head of state, can best be characterized as America’s answer to Mussolini—always talk about the very mixed record of the economic policies of his second term. They rarely bother to mention the Hundred Days, in which FDR stopped a massive credit collapse in its tracks. The Hundred Days and their aftermath are the part of FDR’s presidency that mattered most; it was in that brief period that he slapped shock paddles on an economy in cardiac arrest and got a pulse going, by violating most of the rules that had guided the economy up to that time. That casual attitude toward economic dogma is one of the two things his critics have never been able to forgive; the other is that it worked.

In the same way, France before, during, and immediately after the Revolution was for all practical purposes a medieval state that had somehow staggered its way to the brink of the nineteenth century. The various revolutionary governments that succeeded one another in quick succession after 1789 made some badly needed changes, but it was left to Napoléon Bonaparte to drag France by the scruff of its collective neck out of the late Middle Ages. Napoléon has plenty of critics—and of course there’s plenty to criticize in a figure who was basically what Mussolini wanted to be when he grew up—but the man’s domestic policies were by and large inspired. To name only two of his most important changes, he replaced the sprawling provinces of medieval France with a system of smaller and geographically meaningful départements, and abolished the entire body of existing French law in favor of a newly created legal system, the Code Napoléon. When he was overthrown, those stayed; in fact, a great many other countries in Europe and elsewhere proceeded to adopt the Code Napoléon in place of their existing legal systems. There were several reasons for this, but one of the most important was that the new Code simply made that much more sense.

Both men were able to accomplish what they did, in turn, because abolishing the political, economic, and cultural distortions imposed on their respective countries by a fossilized status quo freed up all the resources that had been locked up in maintaining those distortions. Slapping a range of legal barriers and taxes on the more egregious forms of speculative excess—another of the major achievements of the Roosevelt era—drove enough wealth back into the productive economy to lay the foundations of America’s postwar boom; in the same way, tipping a galaxy of feudal customs into history’s compost bin transformed France from the economic basket case it was in 1789 to the conqueror of Europe twenty years later, and the successful and innovative economic and cultural powerhouse it became during most of the nineteenth century thereafter.

That’s one of the advantages of revolutionary change. By breaking down existing institutions and the encrusted layers of economic parasitism that inevitably build up around them over time, it reliably breaks loose an abundance of resources that were not available in the prerevolutionary period. Here again, it’s crucial to remember that the availability of resources doesn’t guarantee that they’ll be used wisely; they may be thrown away on absurdities of one kind or another. Nor, even more critically, does it mean that the same abundance of resources will be available indefinitely. The surge of additional resources made available by catabolizing old and corrupt systems is a temporary jackpot, not a permanent state of affairs. That said, when you combine the collapse of fossilized institutions that stand in the way of change, and a sudden rush of previously unavailable resources of various kinds, quite a range of possibilities previously closed to a society suddenly come open.

Applying this same pattern to the crisis of modern industrial civilization, though, requires attention to certain inescapable but highly unwelcome realities. In 1789, the problem faced by France was the need to get rid of a thousand years of fossilized political, economic, and social institutions at a time when the coming of the industrial age had made them hopelessly dysfunctional. In 1929, the problem faced by the United States was the need to pry the dead hand of an equally dysfunctional economic orthodoxy off the throat of the nation so that its economy would actually function again. In both cases, the era of breakdown was catalyzed by a talented despot, and was followed, after an interval of chaos and war, by a period of relative prosperity.

We may well get the despot this time around, too, not to mention the chaos and war, but the period of prosperity is probably quite another matter. The problem we face today, in the United States and more broadly throughout the world’s industrial societies, is that all the institutions of industrial civilization presuppose limitless economic growth, but the conditions that provided the basis for continued economic growth simply aren’t there any more. The 300-year joyride of industrialism was made possible by vast and cheaply extractable reserves of highly concentrated fossil fuels and other natural resources, on the one hand, and a biosphere sufficiently undamaged that it could soak up the wastes of human industry without imposing burdens on the economy, on the other. We no longer have either of those requirements.

With every passing year, more and more of the world’s total economic output has to be diverted from other activities to keep fossil fuels and other resources flowing into the industrial world’s power plants, factories, and fuel tanks; with every passing year, in turn, more and more of the world’s total economic output has to be diverted from other activities to deal with the rising costs of climate change and other ecological disruptions. These are the two jaws of the trap sketched out more than forty years ago in the pages of The Limits to Growth, still the most accurate (and thus inevitably the most savagely denounced) map of the predicament we face. The consequences of that trap can be summed up neatly: on a finite planet, after a certain point—the point of diminishing returns, which we’ve already passed—the costs of growth rise faster than the benefits, and finally force the global economy to its knees.

The task ahead of us is thus in some ways the opposite of the one that France faced in the aftermath of 1789. Instead of replacing a sclerotic and failing medieval economy with one better suited to a new era of industrial expansion, we need to replace a sclerotic and failing industrial economy with one better suited to a new era of deindustrial contraction. That’s a tall order, no question, and it’s not something that can be achieved easily, or in a single leap. In all probability, the industrial world will have to pass through the whole sequence of phases we’ve been discussing several times before things finally bottom out in the deindustrial dark ages to come.

Still, I’m going to shock my fans and critics alike here by pointing out that there’s actually some reason to think that positive change on more than an individual level will be possible as the industrial world slams facefirst into the limits to growth. Two things give me that measured sense of hope. The first is the sheer scale of the resources locked up in today’s spectacularly dysfunctional political, economic, and social institutions, which will become available for other uses when those institutions come apart. The $83 billion a year currently being poured down the oversized rathole of the five biggest US banks, just for starters, could pay for a lot of solar water heaters, training programs for organic farmers, and other things that could actually do some good.

Throw in the resources currently being chucked into all of the other attempts currently under way to prop up a failing system, and you’ve got quite the jackpot that could, in an era of breakdown, be put to work doing things worth while. It’s by no means certain, as already noted, that these resources will go to the best possible use, but it’s all but certain that they’ll go to something less stunningly pointless than, say, handing Elon Musk his next billion dollars.

The second thing that gives me a measured sense of hope is at once subtler and far more profound. These days, despite a practically endless barrage of rhetoric to the contrary, the great majority of Americans are getting fewer and fewer benefits from the industrial system, and are being forced to pay more and more of its costs, so that a relatively small fraction of the population can monopolize an ever-increasing fraction of the national wealth and contribute less and less in exchange. What’s more, a growing number of Americans are aware of this fact. The traditional schism of a collapsing society into a dominant minority and an internal proletariat, to use Arnold Toynbee’s terms, is a massive and accelerating social reality in the United States today.

As that schism widens, and more and more Americans are forced into the Third World poverty that’s among the unmentionable realities of public life in today’s United States, several changes of great importance are taking place. The first, of course, is precisely that a great many Americans are perforce learning to live with less—not in the playacting style popular just now on the faux-green end of the privileged classes, but really, seriously living with much less, because that’s all there is. That’s a huge shift and a necessary one, since the absurd extravagance many Americans consider to be a normal lifestyle is among the most important things that will be landing in history’s compost heap in the not too distant future.

At the same time, the collective consensus that keeps the hopelessly dysfunctional institutions of today’s status quo glued in place is already coming apart, and can be expected to dissolve completely in the years ahead. What sort of consensus will replace it, after the inevitable interval of chaos and struggle, is anybody’s guess at this point—though it’s vanishingly unlikely to have anything to do with the current political fantasies of left and right. It’s just possible, given luck and a great deal of hard work, that whatever new system gets cobbled together during the breakdown phase of our present crisis will embody at least some of the values that will be needed to get our species back into some kind of balance with the biosphere on which our lives depend. A future post will discuss how that might be accomplished—after, that is, we explore the last phase of the collapse process: the era of dissolution, which will be the theme of next week’s post.

Wednesday, May 27, 2015

The Era of Response

The third stage of the process of collapse, following what I’ve called the eras of pretense and impact, is the era of response. It’s easy to misunderstand what this involves, because both of the previous eras have their own kinds of response to whatever is driving the collapse; it’s just that those kinds of response are more precisely nonresponses, attempts to make the crisis go away without addressing any of the things that are making it happen.

If you want a first-rate example of the standard nonresponse of the era of pretense, you’ll find one in the sunny streets of Miami, Florida right now. As a result of global climate change, sea level has gone up and the Gulf Stream has slowed down. One consequence is that these days, whenever Miami gets a high tide combined with a stiff onshore wind, salt water comes boiling up through the storm sewers of the city all over the low-lying parts of town. The response of the Florida state government has been to issue an order to all state employees that they’re not allowed to utter the phrase “climate change.”

That sort of thing is standard practice in an astonishing range of subjects in America these days. Consider the roles that the essentially nonexistent recovery from the housing-bubble crash of 2008-9 has played in political rhetoric since that time. The current inmate of the White House has been insisting through most of two terms that happy days are here again, and the usual reams of doctored statistics have been churned out in an effort to convince people who know better that they’re just imagining that something is wrong with the economy. We can expect to hear that same claim made in increasingly loud and confident tones right up until the day the bottom finally drops out.

With the end of the era of pretense and the arrival of the era of impact comes a distinct shift in the standard mode of nonresponse, which can be used quite neatly to time the transition from one era to another. Where the nonresponses of the era of pretense insist that there’s nothing wrong and nobody has to do anything outside the realm of business as usual, the nonresponses of the era of impact claim just as forcefully that whatever’s gone wrong is a temporary difficulty and everything will be fine if we all unite to do even more of whatever activity defines business as usual. That this normally amounts to doing more of whatever made the crisis happen in the first place, and thus reliably makes things worse, is just one of the little ironies history has to offer.

What unites the era of pretense with the era of impact is the unshaken belief that in the final analysis, there’s nothing essentially wrong with the existing order of things. Whatever little difficulties may show up from time to time may be ignored as irrelevant or talked out of existence, or they may have to be shoved aside by some concerted effort, but it’s inconceivable to most people in these two eras that the existing order of things is itself the source of society’s problems, and has to be changed in some way that goes beyond the cosmetic dimension. When the inconceivable becomes inescapable, in turn, the second phase gives way to the third, and the era of response has arrived.

This doesn’t mean that everyone comes to grips with the real issues, and buckles down to the hard work that will be needed to rebuild society on a sounder footing. Winston Churchill once noted with his customary wry humor that the American people can be counted on to do the right thing, once they have exhausted every other possibility. He was of course quite correct, but the same rule can be applied with equal validity to every other nation this side of Utopia, too. The era of response, in practice, generally consists of a desperate attempt to find something that will solve the crisis du jour, other than the one thing that everyone knows will solve the crisis du jour but nobody wants to do.

Let’s return to the two examples we’ve been following so far, the outbreak of the Great Depression and the coming of the French Revolution. In the aftermath of the 1929 stock market crash, once the initial impact was over and the “sucker’s rally” of early 1930 had come and gone, the federal government and the various power centers and pressure groups that struggled for influence within its capacious frame were united in pursuit of a single goal: finding a way to restore prosperity without doing either of the things that had to be done in order to restore prosperity. That task occupied the best minds in the US elite from the summer of 1930 straight through until April of 1933, and the mere fact that their attempts to accomplish this impossibility proved to be a wretched failure shouldn’t blind anyone to the Herculean efforts that were involved in the attempt.

The first of the two things that had to be tackled in order to restore prosperity was to do something about the drastic imbalance in the distribution of income in the United States. As noted in previous posts, an economy dependent on consumer expenditures can’t thrive unless consumers have plenty of money to spend, and in the United States in the late 1920s, they didn’t—well, except for the very modest number of those who belonged to the narrow circles of the well-to-do. It’s not often recalled these days just how ghastly the slums of urban America were in 1929, or how many rural Americans lived in squalid one-room shacks of the sort you pretty much have to travel to the Third World to see these days. Labor unions and strikes were illegal in 1920s America; concepts such as a minimum wage, sick pay, and health benefits didn’t exist, and the legal system was slanted savagely against the poor.

You can’t build prosperity in a consumer society when a good half of your citizenry can’t afford more than the basic necessities of life. That’s the predicament that America found clamped to the tender parts of its economic anatomy at the end of the 1920s. In that decade, as in our time, the temporary solution was to inflate a vast speculative bubble, under the endearing delusion that this would flood the economy with enough unearned cash to make the lack of earned income moot. That worked over the short term and then blew up spectacularly, since a speculative bubble is simply a Ponzi scheme that the legal authorities refuse to prosecute as such, and inevitably ends the same way.

There were, of course, effective solutions to the problem of inadequate consumer income. They were exactly those measures that were taken once the era of response gave way to the era of breakdown; everyone knew what they were, and nobody with access to political or economic power was willing to see them put into effect, because those measures would require a modest decline in the relative wealth and political dominance of the rich as compared to everyone else. Thus, as usually happens, they were postponed until the arrival of the era of breakdown made it impossible to avoid them any longer.

The second thing that had to be changed in order to restore prosperity was even more explosive, and I’m quite certain that some of my readers will screech like banshees the moment I mention it. The United States in 1929 had a precious-metal-backed currency in the most literal sense of the term. Paper bills in those days were quite literally receipts for a certain quantity of gold—1.5 grams, for much of the time the US spent on the gold standard. That sort of arrangement was standard in most of the world’s industrial nations; it was backed by a dogmatic orthodoxy all but universal among respectable economists; and it was strangling the US economy.

It’s fashionable among certain sects on the economic fringes these days to look back on the era of the gold standard as a kind of economic Utopia in which there were no booms and busts, just a warm sunny landscape of stability and prosperity until the wicked witches of the Federal Reserve came along and spoiled it all. That claim flies in the face of economic history. During the entire period that the United States was on the gold standard, from 1873 to 1933, the US economy was a moonscape cratered by more than a dozen significant depressions. There’s a reason for that, and it’s relevant to our current situation—in a backhanded manner, admittedly.

Money, let us please remember, is not wealth. It’s a system of arbitrary tokens that represent real wealth—that is, actual, nonfinancial goods and services. Every society produces a certain amount of real wealth each year, and those societies that use money thus need to have enough money in circulation to more or less correspond to the annual supply of real wealth. That sounds simple; in practice, though, it’s anything but. Nowadays, for example, the amount of real wealth being produced in the United States each year is contracting steadily as more and more of the nation’s economic output has to be diverted into the task of keeping it supplied with fossil fuels. That’s happening, in turn, because of the limits to growth—the awkward but inescapable reality that you can’t extract infinite resources, or dump limitless wastes, on a finite planet.

The gimmick currently being used to keep fossil fuel extraction funded and cover the costs of the rising impact of environmental disruptions, without cutting into a culture of extravagance that only cheap abundant fossil fuel and a mostly intact biosphere can support, is to increase the money supply ad infinitum. That’s become the bedrock of US economic policy since the 2008-9 crash. It’s not a gimmick with a long shelf life; as the mismatch between real wealth and the money supply balloons, distortions and discontinuities are surging out through the crawlspaces of our economic life, and crisis is the most likely outcome.

In the United States in the first half or so of the twentieth century, by contrast, the amount of real wealth being produced each year soared, largely because of the steady increases in fossil fuel energy being applied to every sphere of life. While the nation was on the gold standard, though, the total supply of money could only grow as fast as gold could be mined out of the ground, which wasn’t even close to fast enough. So you had more goods and services being produced than there was money to pay for them; people who wanted goods and services couldn’t buy them because there wasn’t enough money to go around; businesses that wanted to expand and hire workers were unable to do so for the same reason. The result was that moonscape of economic disasters I mentioned a moment ago.

The necessary response at that time was to go off the gold standard. Nobody in power wanted to do this, partly because of the dogmatic economic orthodoxy noted earlier, and partly because a money shortage paid substantial benefits to those who had guaranteed access to money. The rentier class—those people who lived off income from their investments—could count on stable or falling prices as long as the gold standard stayed in place, and the mere fact that the same stable or falling prices meant low wages, massive unemployment, and widespread destitution troubled them not at all. Since the rentier class included the vast majority of the US economic and political elite, in turn, going off the gold standard was unthinkable until it became unavoidable.

The period of the French revolution from the fall of the Bastille in 1789 to the election of the National Convention in 1792 was a period of the same kind, though driven by different forces. Here the great problem was how to replace the Old Regime—not just the French monarchy, but the entire lumbering mass of political, economic, and social laws, customs, forms, and institutions that France had inherited from the Middle Ages and never quite gotten around to adapting to drastically changed conditions—with something that would actually work. It’s among the more interesting features of the resulting era of response that nearly every detail differed from the American example just outlined, and yet the results were remarkably similar.

Thus the leaders of the National Assembly who suddenly became the new rulers of France in the summer of 1789 had no desire whatsoever to retain the traditional economic arrangements that gave France’s former elites their stranglehold on an oversized share of the nation’s wealth. The abolition of manorial rights that summer, together with the explosive rural uprisings against feudal landlords and their chateaux in the wake of the Bastille’s fall, gutted the feudal system and left most of its former beneficiaries the choice between fleeing into exile and trying to find some way to make ends meet in a society that had no particular market for used aristocrats. The problem faced by the National Assembly wasn’t that of prying the dead fingers of a failed system off the nation’s throat; it was that of trying to find some other basis for national unity and effective government.

It’s a surprisingly difficult challenge. Those of my readers who know their way around current events will already have guessed that an attempt was made to establish a copy of whatever system was most fashionable among liberals at the time, and that this attempt turned out to be an abject failure. What’s more, they’ll have been quite correct. The National Assembly moved to establish a constitutional monarchy along British lines, bring in British economic institutions, and the like; it was all very popular among liberal circles in France and, naturally, in Britain as well, and it flopped. Those who recall the outcome of the attempt to turn Iraq into a nice pseudo-American democracy in the wake of the US invasion will have a tolerably good sense of how the project unraveled.

One of the unwelcome but reliable facts of history is that democracy doesn’t transplant well. It thrives only where it grows up naturally, out of the civil institutions and social habits of a people; when liberal intellectuals try to impose it on a nation that hasn’t evolved the necessary foundations for it, the results are pretty much always a disaster. That latter was the situation in France at the time of the Revolution. What happened thereafter is what almost always happens to a failed democratic experiment: a period of chaos, followed by the rise of a talented despot who’s smart and ruthless enough to impose order on a chaotic situation and allow new, pragmatic institutions to emerge to replace those destroyed by clueless democratic idealists. In many cases, though by no means all, those pragmatic institutions have ended up providing a bridge to a future democracy, but that’s another matter.

Here again, those of my readers who have been paying attention to current events already know this; the collapse of the Soviet Union was followed in classic form by a failed democracy, a period of chaos, and the rise of a talented despot. It’s a curious detail of history that the despots in question are often rather short. Russia has had the great good fortune to find, as its despot du jour, a canny realist who has successfully brought it back from the brink of collapse and reestablished it as a major power with a body count considerably smaller than usual. France was rather less fortunate; the despot it found, Napoleon Bonaparte, turned out to be a megalomaniac with an Alexander the Great complex who proceeded to plunge Europe into a quarter century of cataclysmic war. Mind you, things could have been even worse; when Germany ended up in a similar situation, what it got was Adolf Hitler.

Charismatic strongmen are a standard endpoint for the era of response, but they properly belong to the era that follows, the era of breakdown, which will be discussed next week. What I want to explore here is how an era of response might work out in the future immediately before us, as the United States topples from its increasingly unsteady imperial perch and industrial civilization as a whole slams facefirst into the limits to growth. The examples just cited outline the two most common patterns by which the era of response works itself out. In the first pattern, the old elite retains its grip on power, and fumbles around with increasing desperation for a response to the crisis. In the second, the old elite is shoved aside, and the new holders of power are left floundering in a political vacuum.

We could see either pattern in the United States. For what it’s worth, I suspect the latter is the more likely option; the spreading crisis of legitimacy that grips the country these days is exactly the sort of thing you saw in France before the Revolution, and in any number of other countries in the few decades just prior to revolutionary political and social change. Every time a government tries to cope with a crisis by claiming that it doesn’t exist, every time some member of the well-to-do tries to dismiss the collective burdens its culture of executive kleptocracy imposes on the country by flinging abuse at critics, every time institutions that claim to uphold the rule of law defend the rule of entrenched privilege instead, the United States takes another step closer to the revolutionary abyss.

I use that last word advisedly. It’s a common superstition in every troubled age that any change must be for the better—that the overthrow of a bad system must by definition lead to the establishment of a better one. This simply isn’t true. The vast majority of revolutions have established governments that were far more abusive than the ones they replaced. The exceptions have generally been those that brought about a social upheaval without wrecking the political system: where, for example, an election rather than a coup d’etat or a mass rising put the revolutionaries in power, and the political institutions of an earlier time remained in place with only such reshaping as new necessities required.

We could still see that sort of transformation as the United States sees the end of its age of empire and has to find its way back to a less arrogant and extravagant way of functioning in the world. I don’t think it’s likely, but I think it’s possible, and it would probably be a good deal less destructive than the other alternative. It’s worth remembering, though, that history is under no obligation to give us the future we think we want.