Wednesday, February 15, 2017

The World as Will

It's impressively easy to misunderstand the point made in last week’s post here on The Archdruid Report. To say that the world we experience is made up of representations of reality, constructed in our minds by taking the trickle of data we get from the senses and fitting those into patterns that are there already, doesn’t mean that nothing exists outside of our minds. Quite the contrary, in fact; there are two very good reasons to think that there really is something “out there,” a reality outside our minds that produces the trickle of data we’ve discussed.

The first of those reasons seems almost absurdly simple at first glance: the world doesn’t always make sense to us. Consider, as one example out of godzillions, the way that light seems to behave like a particle on some occasions and like a wave on others. That’s been described, inaccurately, as a paradox, but it’s actually a reflection of the limitations of the human mind.

What, after all, does it mean to call something a particle? Poke around the concept for a while and you’ll find that at root, this concept “particle” is an abstract metaphor, extracted from the common human experience of dealing with little round objects such as pebbles and marbles. What, in turn, is a wave? Another abstract metaphor, extracted from the common human experience of watching water in motion. When a physicist says that light sometimes acts like a particle and sometimes like a wave, what she’s saying is that neither of these two metaphors fits more than a part of the way that light behaves, and we don’t have any better metaphor available.

If the world were nothing but a hallucination projected by our minds, then it would contain nothing that wasn’t already present in our minds—for what other source could there be? That implies in turn that there would be a perfect match between the contents of the world and the contents of our minds, and we wouldn’t get the kind of mismatch between mind and world that leaves physicists flailing. More generally, the fact that the world so often baffles us offers good evidence that behind the world we experience, the world as representation, there’s some “thing in itself” that’s the source of the sense data we assemble into representations.

The other reason to think that there’s a reality distinct from our representations is that, in a certain sense, we experience such a reality at every moment.

Raise one of your hands to a position where you can see it, and wiggle the fingers. You see the fingers wiggling—or, more precisely, you see a representation of the wiggling fingers, and that representation is constructed in your mind out of bits of visual data, a great deal of memory, and certain patterns that seem to be hardwired into your mind. You also feel the fingers wiggling—or, here again, you feel a representation of the wiggling fingers, which is constructed in your mind out of bits of tactile and kinesthetic data, plus the usual inputs from memory and hardwired patterns. Pay close attention and you might be able to sense the way your mind assembles the visual representation and the tactile one into a single pattern; that happens close enough to the surface of consciousness that a good many people can catch themselves doing it.

So you’ve got a representation of wiggling fingers, part of the world as representation we experience. Now ask yourself this: the action of the will that makes the fingers wiggle—is that a representation?

This is where things get interesting, because the only reasonable answer is no, it’s not. You don’t experience the action of the will as a representation; you don’t experience it at all. You simply wiggle your fingers. Sure, you experience the results of the will’s action in the form of representations—the visual and tactile experiences we’ve just been considering—but not the will itself. If you could see or hear or feel or smell or taste the impulse of the will rolling down your arm to the fingers, say, it would be reasonable to treat the will as just one more representation. Since that isn’t the case, it’s worth exploring the possibility that in the will, we encounter something that isn’t just a representation of reality—it’s a reality we encounter directly.

That’s the insight at the foundation of Arthur Schopenhauer’s philosophy. Schopenhauer’s one of the two principal guides who are going to show us around the giddy funhouse that philosophy has turned into of late, and guide us to the well-marked exits, so you’ll want to know a little about him. He lived in the ramshackle assortment of little countries that later became the nation of Germany; he was born in 1788 and died in 1860; he got his doctorate in philosophy in 1813; he wrote his most important work, The World as Will and Representation, before he turned thirty; and he spent all but the last ten years of his life in complete obscurity, ignored by the universities and almost everyone else. A small inheritance, carefully managed, kept him from having to work for a living, and so he spent his time reading, writing, playing the flute for an hour a day before dinner, and grumbling under his breath as philosophy went its merry way into metaphysical fantasy. He grumbled a lot, and not always under his breath. Fans of Sesame Street can think of him as philosophy’s answer to Oscar the Grouch.

Schopenhauer came of age intellectually in the wake of Immanuel Kant, whose work we discussed briefly last week, and so the question he faced was how philosophy could respond to the immense challenge Kant threw at the discipline’s feet. Before you go back to chattering about what’s true and what’s real, Kant said in effect, show me that these labels mean something and relate to something, and that you’re not just chasing phantoms manufactured by your own minds.

Most of the philosophers who followed in Kant’s footsteps responded to his challenge by ignoring it, or using various modes of handwaving to pretend that it didn’t matter. One common gambit at the time was to claim that the human mind has a special superpower of intellectual intuition that enables it to leap tall representations in a single bound, and get to a direct experience of reality that way. What that meant in practice, of course, is that a philosopher could simply treat whatever abstractions he fancied as truths that didn’t have to be proved, and then build a great tottering system on top of them; after all, he’d intellectually intuited them—prove that he hadn’t!

There were other such gimmicks. What set Schopenhauer apart was that he took Kant’s challenge seriously enough to go looking for something that wasn’t simply a representation. What he found—why, that brings us back to the wiggling fingers.

As discussed in last week’s post, every one of the world’s great philosophical traditions has ended up having to face the same challenge Kant flung in the face of the philosophers of his time. Schopenhauer knew this, since a fair amount of philosophy from India had been translated into European languages by his time, and he read extensively on the subject. This was helpful because Indian philosophy hit its own epistemological crisis around the tenth century BCE, a good twenty-nine centuries before Western philosophy got there, and so had a pretty impressive head start. There’s a rich diversity of responses to that crisis in the classical Indian philosophical schools, but most of them came to see consciousness as a (or the) thing-in-itself, as reality rather than representation.

It’s a plausible claim. Look at your hand again, with or without wiggling fingers. Now be aware of yourself looking at the hand—many people find this difficult, so be willing to work at it, and remember to feel as well as see. There’s your hand; there’s the space between your hand and your eyes; there’s whatever of your face you can see, with or without eyeglasses attached; pay close attention and you can also feel your face and your eyes from within; and then there’s—

There’s the thing we call consciousness, the whatever-it-is that watches through your eyes. Like the act of will that wiggled your fingers, it’s not a representation; you don’t experience it. In fact, it’s very like the act of will that wiggled your fingers, and that’s where Schopenhauer went his own way.

What, after all, does it mean to be conscious of something? Some simple examples will help clarify this. Move your hand until it bumps into something; it’s when something stops the movement that you feel it. Look at anything; you can see it if and only if you can’t see through it. You are conscious of something when, and only when, it resists your will.

That suggested to Schopenhauer that consciousness derives from will, not the other way around. There are other lines of reasoning that point in the same direction, and all of them derive from common human experiences. For example, each of us stops being conscious for some hours out of every day, whenever we go to sleep. During part of the time we’re sleeping, we experience nothing at all; during another part, we experience the weirdly disconnected representations we call “dreams.” Even in dreamless sleep, though, it’s common for a sleeper to shift a limb away from an unpleasant stimulus. Thus the will is active even when consciousness is absent.

Schopenhauer proposed that there are different forms or, as he put it, grades of the will. Consciousness, which we can define for present purposes as the ability to experience representations, is one grade of the will—one way that the will can adapt to existence in a world that often resists it. Life is another, more basic grade. Consider the way that plants orient themselves toward sunlight, bending and twisting like snakes in slow motion, and seek out concentrations of nutrients with probing, hungry roots. As far as anyone knows, plants aren’t conscious—that is, they don’t experience a world of representations the way that animals do—but they display the kind of goal-seeking behavior that shows the action of will.

Animals also show goal-seeking behavior, and they do it in a much more complex and flexible way than plants do. There’s good reason to think that many animals are conscious, and experience a world of representations in something of the same way we do; certainly students of animal behavior have found that animals let incidents from the past shape their actions in the present, mistake one person for another, and otherwise behave in ways that suggest that their actions are guided, as ours are, by representations rather than direct reaction to stimuli. In animals, the will has developed the ability to represent its environment to itself.

Animals, at least the more complex ones, also have that distinctive mode of consciousness we call emotion. They can be happy, sad, lonely, furious, and so on; they feel affection for some beings and aversion toward others. Pay attention to your own emotions and you’ll soon notice how closely they relate to the will. Some emotions—love and hate are among them—are motives for action, and thus expressions of will; others—happiness and sadness are among them—are responses to the success or failure of the will to achieve its goals. While emotions are tangled up with representations in our minds, and presumably in those of animals as well, they stand apart; they’re best understood as conditions of the will, expressions of its state as it copes with the world through its own representations.

And humans? We’ve got another grade of the will, which we can call intellect:  the ability to add up representations into abstract concepts, which we do, ahem, at will. Here’s one representation, which is brown and furry and barks; here’s another like it; here’s a whole kennel of them—and we lump them all together in a single abstract category, to which we assign a sound such as “dog.” We can then add these categories together, creating broader categories such as “quadruped” and “pet;” we can subdivide the categories to create narrower ones such as “puppy” and “Corgi;” we can extract qualities from the whole and treat them as separate concepts, such as “furry” and “loud;” we can take certain very general qualities and conjure up the entire realm of abstract number, by noticing how many paws most dogs have and using that, and a great many other things, to come up with the concept of “four.”
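For readers who think more easily in code than in kennels, here is a minimal sketch of that lumping-together process, written in Python purely as an illustration; the particular animals, qualities, and names are my own inventions for the sketch, not anything Schopenhauer wrote, and nothing in the argument depends on them.

```python
# Illustrative sketch only: treat each concrete representation as a bundle of
# observed qualities, and an abstract concept as whatever qualities all of its
# instances share. The examples and quality names are invented for this sketch.

rex   = {"furry", "barks", "four-legged", "lives-with-humans", "small"}
bella = {"furry", "barks", "four-legged", "lives-with-humans", "large"}
fido  = {"furry", "barks", "four-legged", "lives-with-humans", "loud"}

# Lumping a whole kennel of representations together yields the concept "dog":
dog = rex & bella & fido
print(dog)  # {'furry', 'barks', 'four-legged', 'lives-with-humans'} (order varies)

# Broader concepts keep fewer defining qualities; narrower ones add more.
quadruped = {"four-legged"}                    # broader category than "dog"
corgi = dog | {"short-legged", "herding"}      # narrower category than "dog"
print(quadruped <= dog <= corgi)               # True: the abstractions nest

# Pulling out a single quality, or counting paws across the whole kennel, is
# the move that yields free-standing abstractions such as "furry" or "four".
paws = {"rex": 4, "bella": 4, "fido": 4}
print(set(paws.values()))                      # {4}
```

The point of the toy is simply that the narrower the concept, the more qualities it carries, and the broader the concept, the fewer; the intellect trades detail for generality at every step.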

So life, consciousness, and intellect are three grades of the will. One interesting thing about them is that the more basic ones are more enduring and stable than the more complex ones. Humans, again, are good examples. Humans remain alive all the way from birth to death; they’re conscious only when awake; they’re intelligent only when actively engaged in thinking—which is a lot less often than we generally like to admit. A certain degree of tiredness, a strong emotion, or a good stiff drink is usually enough to shut off the intellect and leave us dealing with the world on the same mental basis as an ordinarily bright dog; it takes quite a bit more to reduce us to the vegetative level, and serious physical trauma to go one more level down.

Let’s take a look at that final level, though. The conventional wisdom of our age holds that everything that exists is made up of something called “matter,” which is configured in various ways; further, that matter is what really exists, and everything else is somehow a function of matter if it exists at all. For most of us, this is the default setting, the philosophical opinion we start from and come back to, and anyone who tries to question it can count on massive pushback.

The difficulty here is that philosophers and scientists have both proved, in their own ways, that the usual conception of matter is quite simply nonsense. Any physical scientist worth his or her sodium chloride, to begin with, will tell you that what we habitually call “solid matter” is nearly as empty as the vacuum of deep space—a bit of four-dimensional curved spacetime that happens to have certain tiny probability waves spinning dizzily in it, and it’s the interaction between those probability waves and those composing that other patch of curved spacetime we each call “my body” that creates the illusions of solidity, color, and the other properties we attribute to matter.

The philosophers got to the same destination a couple of centuries earlier, and by a different route. The epistemologists I mentioned in last week’s post—Locke, Berkeley, and Hume—took the common conception of matter apart layer by layer and showed, to use the formulation we’ve already discussed, that all the things we attribute to matter are simply representations in the mind. Is there something out there that causes those representations? As already mentioned, yes, there’s very good reason to think so—but that doesn’t mean that the “something out there” has to consist of matter in any sense of the word that means anything.

That’s where Schopenhauer got to work, and once again, he proceeded by calling attention to certain very basic and common human experiences. Each of us has direct access, in a certain sense, to one portion of the “something out there,” the portion each of us calls “my body.” When we experience our bodies, we experience them as representations, just like anything else—but we also act with them, and as the experiment with the wiggling fingers demonstrated, the will that acts isn’t a representation.

Thus there’s a boundary between the part of the universe we encounter as will and representation, and the part we encounter only as representation. The exact location of that boundary is more complex than it seems at first sight. It’s a commonplace in the martial arts, for example, that a capable martial artist can learn to feel with a weapon as though it were a part of the body. Many kinds of swordsmanship rely on what fencers call sentiment de fer, the “sense of the steel;” the competent fencer can feel the lightest touch of the other blade against his own, just as though it brushed his hand.

There are also certain circumstances—lovemaking, dancing, ecstatic religious experience, and mob violence are among them—in which under certain hard-to-replicate conditions, two or more people seem to become, at least briefly, a single entity that moves and acts with a will of its own. All of those involve a shift from the intellect to a more basic grade of the will, and they lead in directions that will deserve a good deal more examination later on; for now, the point at issue is that the boundary line between self and other can be a little more fluid than we normally tend to assume.

For our present purposes, though, we can set that aside and focus on the body as the part of the world each of us encounters in a twofold way: as a representation among representations, and as a means of expression for the will. Everything we perceive about our bodies is a representation, but by noticing these representations, we observe the action of something that isn’t a representation, something we call the will, manifesting in its various grades. That’s all there is. Go looking as long as you want, says Schopenhauer, and you won’t find anything but will and representations. What if that’s all there is—if the thing we call “matter” is simply the most basic grade of the will, so that everything in the world amounts to will on the one hand and representations experienced by that mode of will we call consciousness on the other, and what the representations represent is nothing but various expressions of this one energy that, by way of its distinctive manifestations in our own experience, we call the will?

That’s Schopenhauer’s vision. The remarkable thing is how close it is to the vision that comes out of modern science. A century before quantum mechanics, he’d already grasped that behind the facade of sensory representations that you and I call matter lies an incomprehensible and insubstantial reality, a realm of complex forces dancing in the void. Follow his arguments out to their logical conclusion and you get a close enough equivalent of the universe of modern physics that it’s not at all implausible that they’re one and the same. Of course plausibility isn’t proof—but given the fragile, dependent, and derivative nature of the human intellect, it may be as close as we can get.

And of course that latter point is a core reason why Arthur Schopenhauer spent most of his life in complete obscurity and why, after a brief period of mostly posthumous superstardom in the late nineteenth century, his work dropped out of sight and has rarely been noticed since. (To be precise, it’s one of two core reasons; we’ll get to the other one later.) If he’s right, then the universe is not rational. Reason—the disciplined use of the grade of will I’ve called the intellect—isn’t a key to the truth of things.  It’s simply the systematic exploitation of a set of habits of mind that turned out to be convenient for our ancestors as they struggled with the hard but intellectually undemanding tasks of staying fed, attracting mates, chasing off predators, and the like, and later on got pulled out of context and put to work coming up with complicated stories about what causes the representations we experience.

To suggest that, much less to back it up with a great deal of argument and evidence, is to collide head on with one of the most pervasive presuppositions of our culture. We’ll survey the wreckage left behind by that collision in next week’s post.

Wednesday, February 08, 2017

The World as Representation

It can be hard to remember these days that not much more than half a century ago, philosophy was something you read about in general-interest magazines and the better grade of newspapers. Existentialist philosopher Jean-Paul Sartre was an international celebrity; the posthumous publication of Pierre Teilhard de Chardin’s Le Phénomène humain (the English translation, predictably, was titled The Phenomenon of Man) got significant flurries of media coverage; Random House’s Vintage Books label brought out cheap mass-market paperback editions of major philosophical writings from Plato straight through to Nietzsche and beyond, and made money off them.

Though philosophy was never really part of the cultural mainstream, it had the same kind of following as avant-garde jazz, say, or science fiction.  At any reasonably large cocktail party you had a pretty fair chance of meeting someone who was into it, and if you knew where to look in any big city—or any college town with pretensions to intellectual culture, for that matter—you could find at least one bar or bookstore or all-night coffee joint where the philosophy geeks hung out, and talked earnestly into the small hours about Kant or Kierkegaard. What’s more, that level of interest in the subject had been pretty standard in the Western world for a very long time.

We’ve come a long way since then, and not in a particularly useful direction. These days, if you hear somebody talk about philosophy in the media, it’s probably a scientific materialist like Neil deGrasse Tyson ranting about how all philosophy is nonsense. The occasional work of philosophical exegesis still gets a page or two in the New York Review of Books now and then, but popular interest in the subject has vanished, and more than vanished: the sort of truculent ignorance about philosophy displayed by Tyson and his many equivalents has become just as common among the chattering classes as a feigned interest in the subject was half a century ago.

Like most human events, the decline of philosophy in modern times was overdetermined; like the victim in the murder-mystery paperback who was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe, and then shoved off a bridge to drown, there were more causes of death than the situation actually required. Part of the problem, certainly, was the explosive expansion of the academic industry in the US and elsewhere in the second half of the twentieth century.  In an era when every state teacher’s college aspired to become a university and every state university dreamed of rivaling the Ivy League, a philosophy department was an essential status symbol. The resulting expansion of the field was not necessarily matched by an equivalent increase in genuine philosophers, but it was certainly followed by the transformation of university-employed philosophy professors into a professional caste which, as such castes generally do, defended its status by adopting an impenetrable jargon and ignoring or rebuffing attempts at participation from outside its increasingly airtight circle.

Another factor was the rise of the sort of belligerent scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson. Scientific inquiry itself is philosophically neutral—it’s possible to practice science from just about any philosophical standpoint you care to name—but the claim at the heart of scientific materialism, the dogmatic insistence that those things that can be investigated using scientific methods and explained by current scientific theory are the only things that can possibly exist, depends on arbitrary metaphysical postulates that were comprehensively disproved by philosophers more than two centuries ago. (We’ll get to those postulates and their problems later on.) Thus the ascendancy of scientific materialism in educated culture pretty much mandated the dismissal of philosophy.

There were plenty of other factors as well, most of them having no more to do with philosophy as such than the ones just cited. Philosophy itself, though, bears some of the responsibility for its own decline. Starting in the seventeenth century and reaching a crisis point in the nineteenth, western philosophy came to a parting of the ways—one that the philosophical traditions of other cultures reached long before it, with similar consequences—and by and large, philosophers and their audiences alike chose a route that led to its present eclipse. That choice isn’t irreparable, and there’s much to be gained by reversing it, but it’s going to take a fair amount of hard intellectual effort and a willingness to abandon some highly popular shibboleths to work back to the mistake that was made, and undo it.

To help make sense of what follows, a concrete metaphor might be useful. If you’re in a place where there are windows nearby, especially if the windows aren’t particularly clean, go look out through a window at the view beyond it. Then, after you’ve done this for a minute or so, change your focus and look at the window rather than through it, so that you see the slight color of the glass and whatever dust or dirt is clinging to it. Repeat the process a few times, until you’re clear on the shift I mean: looking through the window, you see the world; looking at the window, you see the medium through which you see the world—and you might just discover that some of what you thought at first glance was out there in the world was actually on the window glass the whole time.

That, in effect, was the great change that shook western philosophy to its foundations beginning in the seventeenth century. Up to that point, most philosophers in the western world started from a set of unexamined presuppositions about what was true, and used the tools of reasoning and evidence to proceed from those presuppositions to a more or less complete account of the world. They were into what philosophers call metaphysics: reasoned inquiry into the basic principles of existence. That’s the focus of every philosophical tradition in its early years, before the confusing results of metaphysical inquiry refocus attention from “What exists?” to “How do we know what exists?” Metaphysics then gives way to epistemology: reasoned inquiry into what human beings are capable of knowing.

That refocusing happened in Greek philosophy around the fourth century BCE, in Indian philosophy around the tenth century BCE, and in Chinese philosophy a little earlier than in Greece. In each case, philosophers who had been busy constructing elegant explanations of the world on the basis of some set of unexamined cultural assumptions found themselves face to face with hard questions about the validity of those assumptions. In terms of the metaphor suggested above, they were making all kinds of statements about what they saw through the window, and then suddenly realized that the colors they’d attributed to the world were being contributed in part by the window glass and the dust on it, the vast dark shape that seemed to be moving purposefully across the sky was actually a beetle walking on the outside of the window, and so on.

The same refocusing began in the modern world with Rene Descartes, who famously attempted to start his philosophical explorations by doubting everything. That’s a good deal easier said than done, as it happens, and to a modern eye, Descartes’ writings are riddled with unexamined assumptions, but the first attempt had been made and others followed. A trio of epistemologists from the British Isles—John Locke, George Berkeley, and David Hume—rushed in where Descartes feared to tread, demonstrating that the view from the window had much more to do with the window glass than it did with the world outside. The final step in the process was taken by the German philosopher Immanuel Kant, who subjected human sensory and rational knowledge to relentless scrutiny and showed that most of what we think of as “out there,” including such apparently hard realities as space and time, are actually  artifacts of the processes by which we perceive things.

Look at an object nearby: a coffee cup, let’s say. You experience the cup as something solid and real, outside yourself: seeing it, you know you can reach for it and pick it up; and to the extent that you notice the processes by which you perceive it, you experience these as wholly passive, a transparent window on an objective external reality. That’s normal, and there are good practical reasons why we usually experience the world that way, but it’s not actually what’s going on.

What’s going on is that a thin stream of visual information is flowing into your mind in the form of brief fragmentary glimpses of color and shape. Your mind then assembles these together into the mental image of the coffee cup, using your memories of that and other coffee cups, and a range of other things as well, as a template onto which the glimpses can be arranged. Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed, gave the process we’re discussing the useful label of “representation;” when you look at the coffee cup, you’re not passively seeing the cup as it exists, you’re actively representing—literally re-presenting—an image of the cup in your mind.

There are certain special situations in which you can watch representation at work. If you’ve ever woken up in an unfamiliar room at night, and had a few seconds pass before the dark unknown shapes around you finally turned into ordinary furniture, you’ve had one of those experiences. Another is provided by the kind of optical illusion that can be seen as two different things. With a little practice, you can flip from one way of seeing the illusion to another, and watch the process of representation as it happens.

What makes the realization just described so challenging is that it’s fairly easy to prove that the cup as we represent it has very little in common with the cup as it exists “out there.” You can prove this by means of science: the cup “out there,” according to the evidence collected painstakingly by physicists, consists of an intricate matrix of quantum probability fields and ripples in space-time, which our senses systematically misperceive as a solid object with a certain color, surface texture, and so on. You can also prove this, as it happens, by sheer sustained introspection—that’s how Indian philosophers got there in the age of the Upanishads—and you can prove it just as well by a sufficiently rigorous logical analysis of the basis of human knowledge, which is what Kant did.

The difficulty here, of course, is that once you’ve figured this out, you’ve basically scuttled any chance at pursuing the kind of metaphysics that’s traditional in the formative period of your philosophical tradition. Kant got this, which is why he titled the most relentless of his analyses Prolegomena to Any Future Metaphysics; what he meant by this was that anybody who wanted to try to talk about what actually exists had better be prepared to answer some extremely difficult questions first.  When philosophical traditions hit their epistemological crises, accordingly, some philosophers accept the hard limits on human knowledge, ditch the metaphysics, and look for something more useful to do—a quest that typically leads to ethics, mysticism, or both. Other philosophers double down on the metaphysics and either try to find some way around the epistemological barrier, or simply ignore it, and this latter option is the one that most Western philosophers after Kant ended up choosing.  Where that leads—well, we’ll get to that later on.

For the moment, I want to focus a little more closely on the epistemological crisis itself, because there are certain very common ways to misunderstand it. One of them I remember with a certain amount of discomfort, because I made it myself in my first published book, Paths of Wisdom. This is the sort of argument that sees the sensory organs and the nervous system as the reason for the gap between the reality out there—the “thing in itself” (Ding an sich), as Kant called it—and the representation as we experience it. It’s superficially very convincing: the eye receives light in certain patterns and turns those into a cascade of electrochemical bursts running up the optic nerve, and the visual centers in the brain then fold, spindle, and mutilate the results into the image we see.

The difficulty? When we look at light, an eye, an optic nerve, a brain, we’re not seeing things in themselves, we’re seeing another set of representations, constructed just as arbitrarily in our minds as any other representation. Nietzsche had fun with this one: “What? and others even go so far as to say that the external world is the work of our organs? But then our body, as a piece of this external world, would be the work of our organs! But then our organs themselves would be—the work of our organs!” That is to say, the body is also a representation—or, more precisely, the body as we perceive it is a representation. It has another aspect, but we’ll get to that in a future post.

Another common misunderstanding of the epistemological crisis is to think that it’s saying that your conscious mind assembles the world, and can do so in whatever way it wishes. Not so. Look at the coffee cup again. Can you, by any act of consciousness, make that coffee cup suddenly sprout wings and fly chirping around your computer desk? Of course not. (Those who disagree should be prepared to show their work.) The crucial point here is that representation is neither a conscious activity nor an arbitrary one. Much of it seems to be hardwired, and most of the rest is learned very early in life—each of us spent our first few years learning how to do it, and scientists such as Jean Piaget have chronicled in detail the processes by which children gradually learn how to assemble the world into the specific meaningful shape their culture expects them to get. 

By the time you’re an adult, you do that instantly, with no more conscious effort than you’re using right now to extract meaning from the little squiggles on your computer screen we call “letters.” Much of the learning process, in turn, involves finding meaningful correlations between the bits of sensory data and weaving those into your representations—thus you’ve learned that when you get the bits of visual data that normally assemble into a coffee cup, you can reach for it and get the bits of tactile data that normally assemble into the feeling of picking up the cup, followed by certain sensations of movement, followed by certain sensations of taste, temperature, etc. corresponding to drinking the coffee.

That’s why Kant included the “thing in itself” in his account: there really does seem to be something out there that gives rise to the data we assemble into our representations. It’s just that the window we’re looking through might as well be a funhouse mirror:  it imposes so much of itself on the data that trickles through it that it’s almost impossible to draw firm conclusions about what’s “out there” from our representations.  The most we can do, most of the time, is to see what representations do the best job of allowing us to predict what the next series of fragmentary sensory images will include. That’s what science does, when its practitioners are honest with themselves about its limitations—and it’s possible to do perfectly good science on that basis, by the way.
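To make that last point a little more concrete, here is a deliberately crude sketch, in Python and entirely my own invention rather than anything drawn from Kant or Schopenhauer, of what judging representations by their predictions looks like in practice: two toy models compete to guess the next reading in a made-up series of measurements, and the one with the smaller average error earns the status of useful approximation.

```python
# Toy illustration only: the data and both "models" are invented for this sketch.
observations = [14.0, 15.5, 15.0, 16.5, 17.0, 16.0, 17.5]  # made-up readings

def persistence(history):
    """Predict that the next reading will repeat the most recent one."""
    return history[-1]

def trend(history):
    """Predict by extending the most recent change one step further."""
    return history[-1] + (history[-1] - history[-2])

def mean_error(model, data):
    """Average absolute error when the model predicts each successive reading."""
    errors = [abs(model(data[:i]) - data[i]) for i in range(2, len(data))]
    return sum(errors) / len(errors)

for model in (persistence, trend):
    print(model.__name__, round(mean_error(model, observations), 2))
# Neither model claims to reveal the thing in itself; the better predictor is
# simply kept as a working approximation until something better comes along.
```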

It’s possible to do quite a lot intellectually on that basis, in fact. From the golden age of ancient Greece straight through to the end of the Renaissance, a field of scholarship that’s almost completely forgotten today—topics—was an important part of a general education, the kind of thing you studied as a matter of course once you got past grammar school. Topics is the study of those things that can’t be proved logically, but are broadly accepted as more or less true, and so can be used as “places” (in Greek, topoi) on which you can ground a line of argument. The most important of these are the commonplaces (literally, the common places or topoi) that we all use all the time as a basis for our thinking and speaking; in modern terms, we can think of them as “things on which a general consensus exists.” They aren’t truths; they’re useful approximations of truths, things that have been found to work most of the time, things to be set aside only if you have good reason to do so.

Science could have been seen as a way to expand the range of useful topoi. That’s what a scientific experiment does, after all: it answers the question, “If I do this, what happens?” As the results of experiments add up, you end up with a consensus—usually an approximate consensus, because it’s all but unheard of for repetitions of any experiment to get exactly the same result every time, but a consensus nonetheless—that’s accepted by the scientific community as a useful approximation of the truth, and can be set aside only if you have good reason to do so. To a significant extent, that’s the way science is actually practiced—well, when it hasn’t been hopelessly corrupted for economic or political gain—but that’s not the social role that science has come to fill in modern industrial society.

I’ve written here several times already about the trap into which institutional science has backed itself in recent decades, with the enthusiastic assistance of the belligerent scientific materialists mentioned earlier in this post. Public figures in the scientific community routinely like to insist that the current consensus among scientists on any topic must be accepted by the lay public without question, even when scientific opinion has swung around like a weathercock in living memory, and even when unpleasantly detailed evidence of the deliberate falsification of scientific data is tolerably easy to find, especially but not only in the medical and pharmaceutical fields. That insistence isn’t wearing well; nor does it help when scientific materialists insist—as they very often do—that something can’t exist or something else can’t happen, simply because current theory doesn’t happen to provide a mechanism for it.

Too obsessive a fixation on that claim to authority, and the political and financial baggage that comes with it, could very possibly result in the widespread rejection of science across the industrial world in the decades ahead. That’s not yet set in stone, and it’s still possible that scientists who aren’t too deeply enmeshed in the existing order of things could provide a balancing voice, and help see to it that a less doctrinaire understanding of science gets a voice and a public presence.

Doing that, though, would require an attitude we might as well call epistemic modesty: the recognition that the human capacity to know has hard limits, and the unqualified absolute truth about most things is out of our reach. Socrates was called the wisest of the Greeks because he accepted the need for epistemic modesty, and recognized that he didn’t actually know much of anything for certain. That recognition didn’t keep him from being able to get up in the morning and go to work at his day job as a stonecutter, and it needn’t keep the rest of us from doing what we have to do as industrial civilization lurches down the trajectory toward a difficult future.

Taken seriously, though, epistemic modesty requires some serious second thoughts about certain very deeply ingrained presuppositions of the cultures of the West. Some of those second thoughts are fairly easy to reach, but one of the most challenging starts with a seemingly simple question: is there anything we experience that isn’t a representation? In the weeks ahead we’ll track that question all the way to its deeply troubling destination.

Wednesday, February 01, 2017

Perched on the Wheel of Time

There's a curious predictability in the comments I field in response to posts here that talk about the likely shape of the future. The conventional wisdom of our era insists that modern industrial society can’t possibly undergo the same life cycle of rise and fall as every other civilization in history; no, no, there’s got to be some unique future awaiting us—uniquely splendid or uniquely horrible, it doesn’t even seem to matter that much, so long as it’s unique. Since I reject that conventional wisdom, my dissent routinely fields pushback from those of my readers who embrace it.

That’s not surprising in the least, of course. What’s surprising is that the pushback doesn’t surface when the conventional wisdom seems to be producing accurate predictions, as it does now and then. Rather, it shows up like clockwork whenever the conventional wisdom fails.

The present situation is as good an example as any. The basis of my dissident views is the theory of cyclical history—the theory, first proposed in the early 18th century by the Italian historian Giambattista Vico and later refined and developed by such scholars as Oswald Spengler and Arnold Toynbee, that civilizations rise and fall in a predictable life cycle, regardless of scale or technological level. That theory’s not just a vague generalization, either; each of the major writers on the subject set out specific stages that appear in order, showed that these have occurred in all past civilizations, and made detailed, falsifiable predictions about how those stages can be expected to occur in our civilization. Have those panned out? So far, a good deal more often than not.

In the final chapters of his second volume, for example, Spengler noted that civilizations in the stage ours was about to reach always end up racked by conflicts that pit established hierarchies against upstart demagogues who rally the disaffected and transform them into a power base. Looking at the trends visible in his own time, he sketched out the most likely form those conflicts would take in the Winter phase of our civilization. Modern representative democracy, he pointed out, has no effective defenses against corruption by wealth, and so could be expected to evolve into corporate-bureaucratic plutocracies that benefit the affluent at the expense of everyone else. Those left out in the cold by these transformations, in turn, end up backing what Spengler called Caesarism—the rise of charismatic demagogues who challenge and eventually overturn the corporate-bureaucratic order.

These demagogues needn’t come from within the excluded classes, by the way. Julius Caesar, the obvious example, came from an old upper-class Roman family and parlayed his family connections into a successful political career. Watchers of the current political scene may be interested to know that Caesar during his lifetime wasn’t the imposing figure he became in retrospect; he had a high shrill voice, his morals were remarkably flexible even by Roman standards—the scurrilous gossip of his time called him “every man’s wife and every woman’s husband”—and he spent much of his career piling up huge debts and then wriggling out from under them. Yet he became the political standard-bearer for the plebeian classes, and his assassination by a conspiracy of rich Senators launched the era of civil wars that ended the rule of the old elite once and for all.

Thus those people watching the political scene last year who knew their way around Spengler, and noticed that a rich guy had suddenly broken with the corporate-bureaucratic consensus and called for changes that would benefit the excluded classes at the expense of the affluent, wouldn’t have had to wonder what was happening, or what the likely outcome would be. It was those who insisted on linear models of history—for example, the claim that the recent ascendancy of modern liberalism counted as the onward march of progress, and therefore was by definition irreversible—who found themselves flailing wildly as history took a turn they considered unthinkable.

The rise of Caesarism, by the way, has other features I haven’t mentioned. As Spengler sketches out the process, it also represents the exhaustion of ideology and its replacement by personality. Those of my readers who watched the political scene over the last few years may have noticed the way that the issues have been sidelined by sweeping claims about the supposed personal qualities of candidates. The practically content-free campaign that swept Barack Obama into the presidency in 2008—“Hope,” “Change,” and “Yes We Can” aren’t statements about issues, you know—was typical of this stage, as was the emergence of competing personality cults around the candidates in the 2016 election.  In the ordinary way of things, we can expect even more of this in elections to come, with messianic hopes clustering around competing politicians until the point of absurdity is well past. These will then implode, and the political process collapse into a raw scramble for power at any cost.

There’s plenty more in Spengler’s characterization of the politics of the Winter phase, and all of it’s well represented in today’s headlines, but the rest can be left to those of my readers interested enough to turn the pages of The Decline of the West for themselves. What I’d like to discuss here is the nature of the pushback I tend to field when I point out that yet again, predictions offered by Spengler and other students of cyclic history turned out to be correct and those who dismissed them turned out to be smoking their shorts. The responses I field are as predictable as—well, the arrival of charismatic demagogues at a certain point in the Winter phase, for example—and they reveal some useful glimpses into the value, or lack of it, of our society’s thinking about the future in this turn of the wheel.

Probably the most common response I get can best be characterized as simple incantation: that is to say, the repetition of some brief summary of the conventional wisdom, usually without a shred of evidence or argument backing it up, as though the mere utterance is enough to disprove all other ideas.   It’s a rare week when I don’t get at least one comment along these lines, and they divide up roughly evenly between those that insist that progress will inevitably triumph over all its obstacles, on the one hand, and those that insist that modern industrial civilization will inevitably crash to ruin in a sudden cataclysmic downfall on the other. I tend to think of this as a sort of futurological fundamentalism along the lines of “pop culture said it, I believe it, that settles it,” and it’s no more useful, or for that matter interesting, than fundamentalism of any other sort.

A little less common and a little more interesting are a second class of arguments, which insist that I can’t dismiss the possibility that something might pop up out of the blue to make things different this time around. As I pointed out very early on in the history of this blog, these are examples of the classic logical fallacy of argumentum ad ignorantiam, the argument from ignorance. They bring in some factor whose existence and relevance is unknown, and use that claim to insist that since the conventional wisdom can’t be disproved, it must be true.

Arguments from ignorance are astonishingly common these days. My readers may have noticed, for example, that every few years some new version of nuclear power gets trotted out as the answer to our species’ energy needs. From thorium fission plants to Bussard fusion reactors to helium-3 from the Moon, they all have one thing in common: nobody’s actually built a working example, and so it’s possible for their proponents to insist that their pet technology will lack the galaxy of technical and economic problems that have made every existing form of nuclear power uneconomical without gargantuan government subsidies. That’s an argument from ignorance: since we haven’t built one yet, it’s impossible to be absolutely certain that they’ll have the usual cascading cost overruns and the rest of it, and therefore their proponents can insist that those won’t happen this time. Prove them wrong!

More generally, it’s impressive how many people can look at the landscape of dysfunctional technology and failed promises that surrounds us today and still insist that the future won’t be like that. Most of us have learned already that upgrades on average have fewer benefits and more bugs than the programs they replace, and that products labeled “new and improved” may be new but they’re rarely improved; it’s starting to sink in that most new technologies are simply more complicated and less satisfactory ways of doing things that older technologies did at least as well at a lower cost.  Try suggesting this as a general principle, though, and I promise you that plenty of people will twist themselves mentally into pretzel shapes trying to avoid the implication that progress has passed its pull date.

Even so, there’s a very simple answer to all such arguments, though in the nature of such things it’s an answer that only speaks to those who aren’t too obsessively wedded to the conventional wisdom. None of the arguments from ignorance I’ve mentioned are new; all of them have been tested repeatedly by events, and they’ve failed. I’ve lost track of the number of times I’ve been told, for example, that the economic crisis du jour could lead to the sudden collapse of the global economy, or that the fashionable energy technology du jour could lead to a new era of abundant energy. No doubt they could, at least in theory, but the fact remains that they don’t. 

It so happens that there are good reasons why they don’t, varying from case to case, but that’s actually beside the point I want to make here. This particular version of the argument from ignorance is also an example of the fallacy the old logicians called petitio principii, better known as “begging the question.” Imagine, by way of counterexample, that someone were to post a comment saying, “Nobody knows what the future will be like, so the future you’ve predicted is as likely as any other.” That would be open to debate, since there’s some reason to think we can in fact predict some things about the future, but at least it would follow logically from the premise.  Still, I don’t think I’ve ever seen anyone make that claim. Nor have I ever seen anybody claim that since nobody knows what the future will be like, say, we can’t assume that progress is going to continue.

In practice, rather, the argument from ignorance is applied to discussions of the future in a distinctly one-sided manner. Predictions based on any point of view other than the conventional wisdom of modern popular culture are dismissed with claims that it might possibly be different this time, while predictions based on the conventional wisdom of modern popular culture are spared that treatment. That’s begging the question: covertly assuming that one side of an argument must be true unless it’s disproved, and that the other side can’t be true unless it’s proved.

Now in fact, a case can be made that we can in fact know quite a bit about the shape of the future, at least in its broad outlines. The heart of that case, as already noted, is the fact that certain theories about the future do in fact make accurate predictions, while others don’t. This in itself shows that history isn’t random—that there’s some structure to the flow of historical events that can be figured out by learning from the past, and that similar causes at work in similar situations will have similar outcomes. Apply that reasoning to any other set of phenomena, and you’ve got the ordinary, uncontroversial basis for the sciences. It’s only when it’s applied to the future that people balk, because it doesn’t promise them the kind of future they want.

The argument by incantation and the argument from ignorance make up most of the pushback I get. I’m pleased to say, though, that every so often I get an argument that’s considerably more original than these. One of those came in last week—tip of the archdruidical hat to DoubtingThomas—and it’s interesting enough that it deserves a detailed discussion.

DoubtingThomas began with the standard argument from ignorance, claiming that it’s always possible that something might possibly happen to disrupt the cyclic patterns of history in any given case, and therefore the cyclic theory should be dismissed no matter how many accurate predictions it scored. As we’ve already seen, this is handwaving, but let’s move on.  He went on from there to argue that much of the shape of history is defined by the actions of unique individuals such as Isaac Newton, whose work sends the world careening along entirely new and unpredicted paths. Such individuals have appeared over and over again in history, he pointed out, and was kind enough to suggest that my activities here on The Archdruid Report were, in a small way, another example of the influence of an individual on history. Given that reality, he insisted, a theory of history that didn’t take the actions of unique individuals into account was invalid.

Fair enough; let’s consider that argument. Does the cyclic theory of history fail to take the actions of unique individuals into account?

Here again, Oswald Spengler’s The Decline of the West is the go-to source, because he’s dealt with the sciences and arts to a much greater extent than other researchers into historical cycles. What he shows, with a wealth of examples drawn from the rise and fall of many different civilizations, is that the phenomenon DoubtingThomas describes is a predictable part of the cycles of history. In every generation, in effect, a certain number of geniuses will be born, but their upbringing, the problems that confront them, and the resources they will have available to solve those problems, are not theirs to choose. All these things are produced by the labors of other creative minds of the past and present, and are profoundly influenced by the cycles of history.

Let’s take Isaac Newton as an example. He happened to be born just as the scientific revolution was beginning to hit its stride, but before it had found its paradigm, the set of accomplishments on which all future scientific efforts would be directly or indirectly modeled. His impressive mathematical and scientific gifts thus fastened onto the biggest unsolved problem of the time—the relationship between the physics of moving bodies sketched out by Galileo and the laws of planetary motion discovered by Kepler—and resulted in the Principia Mathematica, which became the paradigm for the next three hundred years or so of scientific endeavor.

Had he been born a hundred years earlier, none of those preparations would have been in place, and the Principia Mathematica wouldn’t have been possible. Given the different cultural attitudes of the century before Newton’s time, in fact, he would almost certainly have become a theologian rather than a mathematician and physicist—as it was, he spent much of his career engaged in theology, a detail usually left out by the more hagiographical of his biographers—and he would be remembered today only by students of theological history. Had he been born a century later, equally, some other great scientific achievement would have provided the paradigm for emerging science—my guess is that it would have been Edmund Halley’s successful prediction of the return of the comet that bears his name—and Newton would have had the same sort of reputation that Carl Friedrich Gauss has today: famous in his field, sure, but a household name? Not a chance.

What makes the point even more precise is that every other civilization from which adequate records survive had its own paradigmatic thinker, the figure whose achievements provided a model for the dawning age of reason and for whatever form of rational thought became that age’s principal cultural expression. In the classical world, for example, it was Pythagoras, who invented the word “philosophy” and whose mathematical discoveries gave classical rationalism its central theme, the idea of an ideal mathematical order to which the hurly-burly of the world of appearances must somehow be reduced. (Like Newton, by the way, Pythagoras was more than half a theologian; it’s a common feature of figures who fill that role.)

To take the same argument to a far more modest level, what about DoubtingThomas’ claim that The Archdruid Report represents the act of a unique individual influencing the course of history? Here again, a glance at history shows otherwise. I’m a figure of an easily recognizable type, which shows up reliably as each civilization’s Age of Reason wanes and it begins moving toward what Spengler called the Second Religiosity, the resurgence of religion that inevitably happens in the wake of rationalism’s failure to deliver on its promises. At such times you get intellectuals who can communicate fluently on both sides of the chasm between rationalism and religion, and who put together syntheses of various kinds that reframe the legacies of the Age of Reason so that they can be taken up by emergent religious movements and preserved for the future.

In the classical world, for example, you got Iamblichus of Chalcis, who stepped into the gap between Greek philosophical rationalism and the burgeoning Second Religiosity of late classical times, and figured out how to make philosophy, logic, and mathematics appealing to the increasingly religious temper of his time. He was one of many such figures, and it was largely because of their efforts that the religious traditions that ended up taking over the classical world—Christianity to the north of the Mediterranean, and Islam to the south—got over their early anti-intellectual streak so readily and ended up preserving so much of the intellectual heritage of the past.

That sort of thing is a worthwhile task, and if I can contribute to it I’ll consider this life well spent. That said, there’s nothing unique about it. What’s more, it’s only possible and meaningful because I happen to be perched on this particular arc of the wheel of time, when our civilization’s Age of Reason is visibly crumbling and the Second Religiosity is only beginning to build up a head of steam. A century earlier or a century later, I’d have faced some different tasks.

All of this presupposes a relationship between the individual and human society that fits very poorly with the unthinking prejudices of our time. That’s something that Spengler grappled with in his book, too;  it’s going to take a long sojourn in some very unfamiliar realms of thought to make sense of what he had to say, but that can’t be helped.

We really are going to have to talk about philosophy, aren’t we? We’ll begin that stunningly unfashionable discussion next week.

Wednesday, January 25, 2017

How Great the Fall Can Be

While I type these words, an old Supertramp CD is playing in the next room. Those of my readers who belong to the same slice of an American generation I do will likely remember the words Roger Hodgson is singing just now, the opening line from “Fool’s Overture”:

“History recalls how great the fall can be...”

It’s an apposite quote for a troubled time.

Over the last year or so, in and among the other issues I’ve tried to discuss in this blog, the US presidential campaign has gotten a certain amount of air time. Some of the conversations that resulted generated a good deal more heat than light, but then that’s been true across the board since Donald Trump overturned the established certainties of American political life and launched himself and the nation on an improbable trajectory toward our current situation. Though the diatribes I fielded from various sides were more than occasionally tiresome, I don’t regret making the election a theme for discussion here, as it offered a close-up view of issues I’ve been covering for years now.

A while back on this blog, for example, I spent more than a year sketching out the process by which civilizations fall and dark ages begin, with an eye toward the next five centuries of North American history—a conversation that turned into my book Dark Age America. Among the historical constants I discussed in the posts and the book was the way that governing elites and their affluent supporters stop adapting their policies to changing political and economic conditions, and demand instead that political and economic conditions should conform to their preferred policies. That’s all over today’s headlines, as the governing elites of the industrial world cower before the furious backlash sparked by their rigid commitment to the failed neoliberal nostrums of global trade and open borders.

Another theme I discussed in the same posts and book was the way that science and culture in a civilization in decline become so closely identified with the interests of the governing elite that the backlash against the failed policies of the elite inevitably becomes a backlash against science and culture as well. We’ve got plenty of that in the headlines as well. According to recent news stories, for example, the Trump administration plans to scrap the National Endowment for the Arts, the National Endowment for the Humanities, and the Corporation for Public Broadcasting, and get rid of all the federal offices that study anthropogenic climate change.

Their termination with extreme prejudice isn’t simply a matter of pruning the federal bureaucracy, though that’s a factor. All these organizations display various forms of the identification of science and culture with elite values just discussed, and their dismantling will be greeted by cheers from a great many people outside the circles of the affluent, who have had more than their fill of patronizing lectures from their self-proclaimed betters in recent years. Will many worthwhile programs be lost, along with a great deal that’s less than worthwhile?  Of course.  That’s a normal feature of the twilight years of a civilization.

A couple of years before the sequence of posts on dark age America, for that matter, I did another series on the end of US global hegemony and the rough road down from empire. That sequence also turned into a book, Decline and Fall. In the posts and the book, I pointed out that one of the constants of the history of democratic societies—actual democracies, warts and all, as distinct from the imaginary “real democracy” that exists solely in rhetoric—is a regular cycle of concentration and diffusion of power. The ancient Greek historian Polybius, who worked it out in detail, called it anacyclosis.

A lot can be said about anacyclosis, but the detail that’s relevant just now is the crisis phase, when power has become so gridlocked among competing power centers that it becomes impossible for the system to break out of even the most hopelessly counterproductive policies. That ends, according to Polybius, when a charismatic demagogue gets into power, overturns the existing political order, and sets in motion a general free-for-all in which old alliances shatter and improbable new ones take shape. Does that sound familiar? In a week when union leaders emerged beaming from a meeting with the new president, while Democrats are still stoutly defending the integrity of the CIA, it should.

For that matter, one of the central themes of the sequence of posts and the book was the necessity of stepping back from global commitments that the United States can no longer afford to maintain. That’s happening, too, though it’s being covered up just now by a great deal of Trumped-up bluster about a massive naval expansion. (If we do get a 350-ship navy in the next decade, I’d be willing to bet that a lot of those ships will turn out to be inexpensive corvettes, like the ones the Russians have been using so effectively as cruise missile platforms on the Caspian Sea.)  European politicians are squawking at top volume about the importance of NATO, which means in practice the continuation of a scheme that allows most European countries to push most of the costs of their own defense onto the United States, but the new administration doesn’t seem to be buying it.

Mind you, I’m far from enthusiastic about the remilitarization of Europe. Outside the brief interval of enforced peace following the Second World War, Europe has been a boiling cauldron of warfare since its modern cultures began to emerge out of the chaos of the post-Roman dark ages. Most of the world’s most devastating wars have been European in origin, and of course it escapes no one’s attention in the rest of the world that it was from Europe that hordes of invaders and colonizers swept over the entire planet from the sixteenth through the nineteenth centuries, as often as not leaving total devastation in their wake. In histories written a thousand years from now, Europeans will have the same sort of reputation that Huns and Mongols have today—and it’s only in the fond fantasies of those who think history has a direction that those days are definitely over.

It can’t be helped, though, for the fact of the matter is that the United States can no longer afford to foot the bill for the defense of other countries. Behind a facade of hallucinatory paper wealth, our nation is effectively bankrupt. The only thing that enables us to pay our debts now is the status of the dollar as the world’s reserve currency—this allows the Treasury to issue debt at a breakneck pace and never have to worry about the cost—and that status is trickling away as one country after another signs bilateral deals to facilitate trading in other currencies. Sooner or later, probably in the next two decades, the United States will be forced to default on its national debt, the way Russia did in 1998.  Before that happens, a great many currently overvalued corporations that support themselves by way of frantic borrowing will have done the same thing by way of the bankruptcy courts, and of course the vast majority of America’s immense consumer debt will have to be discharged the same way.

That means, among other things, that the extravagant lifestyles available to affluent Americans in recent decades will be going away forever in the not too distant future. That’s another point I made in Decline and Fall and the series of posts that became raw material for it. During the era of US global hegemony, the five per cent of our species who lived in the United States disposed of a third of the world’s raw materials and manufactured products and a quarter of its total energy production. That disproportionate share came to us via unbalanced patterns of exchange hardwired into the global economy, and enforced at gunpoint by the military garrisons we keep in more than a hundred countries worldwide. The ballooning US government, corporate, and consumer debt load of recent years was an attempt to keep those imbalances in place even as their basis in geopolitics trickled away. Now the dance is ending and the piper has to be paid.

There’s a certain bleak amusement to be had from the fact that one of the central themes of this blog not that many years back—“Collapse Now and Avoid the Rush”—has already passed its pull date. The rush, in case you haven’t noticed, is already under way. The fraction of US adults of working age who are permanently outside the work force is at an all-time high; so is the fraction of young adults who are living with their parents because they can’t afford to start households of their own. There’s good reason to think that the new administration’s trade and immigration policies may succeed in driving both those figures down, at least for a while, but of course there’ll be a price to be paid for that—and those industries and social classes that have profited most from the policies of the last thirty years, and threw their political and financial weight behind the Clinton campaign, will be first in line to pay it. Vae victis!*

More generally, the broader landscape of ideas this blog has tried to explore since its early days remains what it is. The Earth’s economically accessible reserves of fossil carbon dwindle day by day; with each year that passes, on average, the amount of coal, oil, and natural gas burnt exceeds the amount that’s discovered by a wider margin; the current temporary glut in the oil markets is waning so fast that analysts are predicting the next price spike as soon as 2018. Talk of transitioning away from fossil fuels to renewable energy, on the one hand, or nuclear power on the other, remains talk—I encourage anyone who doubts this to look up the amount of fossil fuels burnt each year over the last two decades and see if they can find a noticeable decrease in global fossil fuel consumption to match the much-ballyhooed buildout of solar and wind power.

The industrial world remains shackled to fossil fuels for most of its energy and all of its transportation fuel, for the simple reason that no other energy source in this end of the known universe provides the abundant, concentrated, and fungible energy supply that’s needed to keep our current lifestyles going. There was always an alternative—deliberately downshifting out of the embarrassing extravagance that passes for normal lifestyles in the industrial world these days, accepting more restricted ways of living in order to leave a better world for our descendants—but not enough people were willing to accept that alternative to make a difference while there was still a chance.

Meanwhile the other jaw of the vise that’s tightening around the future is becoming increasingly visible just now. In the Arctic, freak weather systems have sucked warm air up from lower latitudes and brought the normal process of winter ice formation to a standstill. In the Antarctic, the Larsen C ice shelf, until a few years ago considered immovable by most glaciologists, is in the process of loosing an iceberg the size of Delaware into the Antarctic Ocean. I look out my window and see warm rain falling; here in the north central Appalachians, in January, it’s been most of a month since the thermometer last dipped below freezing. The new administration has committed itself to do nothing about anthropogenic climate change, but then, despite plenty of talk, the Obama administration didn’t do anything about it either.

There’s good reason for that, too. The only way to stop anthropogenic climate change in its tracks is to stop putting greenhouse gases into the atmosphere, and doing that would require the world to ground its airlines, turn its highways over to bicycles and oxcarts, and shut down every other technology that won’t be economically viable if it has to depend on the diffuse intermittent energy available from renewable sources. Does the political will to embrace such changes exist? Since I know of precisely three climate change scientists, out of thousands, who take their own data seriously enough to cut their carbon footprint by giving up air travel, it’s safe to say that the answer is “no.”

So, basically, we’re in for it.

The thing that fascinates me is that this is something I’ve been saying for the whole time this blog has been appearing. The window of opportunity for making a smooth transition to a renewable future slammed shut in the early 1980s, when majorities across the industrial world turned their backs on the previous decade’s promising initiatives toward sustainability, and bought into the triumphalist rhetoric of the Reagan-Thatcher counterrevolution instead. Since then, year after weary year, most of the green movement—with noble exceptions—has been long on talk and short on action.  Excuses for doing nothing and justifications for clinging to lifestyles the planet cannot support have proliferated like rabbits on Viagra, and most of the people who talked about sustainability at all took it for granted that the time to change course was still somewhere conveniently off in the future. That guaranteed that the chance to change course would slide steadily further back into the past.

There was another detail of the post-Seventies sustainability scene that deserves discussion, though, because it’s been displayed with an almost pornographic degree of nakedness in the weeks just past. From the early days of the peak oil movement in the late 1990s on, a remarkably large number of the people who talked eagerly about the looming crisis of our age seemed to think that its consequences would leave them and the people and things they cared about more or less intact. That wasn’t universal by any means; there were always some people who grappled with the hard realities that the end of the fossil fuel age was going to impose on their own lives; but all things considered, there weren’t that many, in comparison to all those who chattered amiably about how comfortable they’d be in their rural doomsteads, lifeboat communities, Transition Towns, et al.

Now, as discussed earlier in this post, we’ve gotten a very modest helping of decline and fall, and people who were enthusiastically discussing the end of the industrial age not that long ago are freaking out six ways from Sunday. If a relatively tame event like the election of an unpopular president can send people into this kind of tailspin, what are they going to do the day their paychecks suddenly turn out to be worth only half as much in terms of goods and services as before—a kind of event that’s already become tolerably common elsewhere, and could quite easily happen in this country as the dollar loses its reserve currency status?

What kinds of meltdowns are we going to get when internet service or modern health care get priced out of reach, or become unavailable at any price?  How are they going to cope if the accelerating crisis of legitimacy in this country causes the federal government to implode, the way the government of the Soviet Union did, and suddenly they’re living under cobbled-together regional governments that don’t have the money to pay for basic services? What sort of reaction are we going to see if the US blunders into a sustained domestic insurgency—suicide bombs going off in public places, firefights between insurgent forces and government troops, death squads from both sides rounding up potential opponents and leaving them in unmarked mass graves—or, heaven help us, all-out civil war?

This is what the decline and fall of a civilization looks like. It’s not about sitting in a cozy earth-sheltered home under a roof loaded with solar panels, living some close approximation of a modern industrial lifestyle, while the rest of the world slides meekly down the chute toward history’s compost bin, leaving you and yours untouched. It’s about political chaos—meaning that you won’t get the leaders you want, and you may not be able to count on the rule of law or even the most basic civil liberties. It’s about economic implosion—meaning that your salary will probably go away, your savings almost certainly won’t keep their value, and if you have gold bars hidden in your home, you’d better hope to Hannah that nobody ever finds out, or it’ll be a race between the local government and the local bandits to see which one gets to tie your family up and torture them to death, starting with the children, until somebody breaks and tells them where your stash is located.

It’s about environmental chaos—meaning that you and the people you care about may have many hungry days ahead as crazy weather messes with the harvests, and it’s by no means certain you won’t die early from some tropical microbe that’s been jarred loose from its native habitat to find a new and tasty home in you. It’s about rapid demographic contraction—meaning that you get to have the experience a lot of people in the Rust Belt have already, of walking past one abandoned house after another and remembering the people who used to live there, until they didn’t any more.

More than anything else, it’s about loss. Things that you value—things you think of as important, meaningful, even necessary—are going to go away forever in the years immediately ahead of us, and there will be nothing you can do about it.  It really is as simple as that. People who live in an age of decline and fall can’t afford to cultivate a sense of entitlement. Unfortunately, for reasons discussed at some length in one of last month’s posts, the notion that the universe is somehow obliged to give people what they think they deserve is very deeply engrained in American popular culture these days. That’s a very unwise notion to believe right now, and as we slide further down the slope, it could very readily become fatal—and no, by the way, I don’t mean that last adjective in a metaphorical sense.

History recalls how great the fall can be, Roger Hodgson sang. In our case, it’s shaping up to be one for the record books—and those of my readers who have worked themselves up to the screaming point about the comparatively mild events we’ve seen so far may want to save some of their breath for the times ahead when it’s going to get much, much worse.
_________________
*In colloquial English: “It sucks to lose.”

Wednesday, January 18, 2017

The Hate that Dare Not Speak its Name

As the United States stumbles toward the last act of its electoral process two days from now, and the new administration prepares to take over the reins of power from its feckless predecessor, the obligatory caterwauling of the losing side has taken on an unfamiliar shrillness. Granted, the behavior of both sides in the last few decades of American elections can be neatly summed up in the words “sore loser”; the Republicans in 1992 and 2008 behaved not one whit better than the Democrats in 1980 and 2000.  I think it’s fair, though, to say that the current example has plunged well past the low-water mark set by those dismal occasions. The question I’d like to discuss here is why that should be.

I think we can all admit that there are plenty of reasons why Americans might reasonably object to the policies and appointments of the incoming president, but the same thing has been true of every other president we’ve had since George Washington’s day. Equally, both of our major parties have long been enthusiastic practitioners of the fine art of shrieking in horror at the other side’s behavior, while blithely excusing the identical behavior on their side.  Had the election last November gone the other way, for example, we can be quite certain that all the people who are ranting about Donald Trump’s appointment of Goldman Sachs employees to various federal offices would be busy explaining how reasonable it was for Hillary Clinton to do exactly the same thing—as of course she would have.

That said, I don’t think reasonable differences of opinion on the one hand, and the ordinary hypocrisy of partisan politics on the other, explain the extraordinary stridency, the venom, and the hatred being flung at the incoming administration by its enemies. There may be many factors involved, to be sure, but I’d like to suggest that one factor in particular plays a massive role here.

To be precise, I think a lot of what we’re seeing is the product of class bigotry.

Some definitions are probably necessary here. We can define bigotry as the act of believing hateful things about all the members of a given category of people, just because they belong to that category. Thus racial bigots believe hateful things about everyone who belongs to races they don’t like, religious bigots do the same thing to every member of the religions they don’t like, and so on through the dismal chronicle of humanity’s collective nastiness.

Defining social class is a little more difficult to do in the abstract, as different societies draw up and enforce their class barriers in different ways. In the United States, though, the matter is made a good deal easier by the lack of a fully elaborated feudal system in our nation’s past, on the one hand, and on the other, the tolerably precise way that how much privilege you have in modern American society depends on how much money you make. Thus we can describe class bigotry in the United States, without too much inaccuracy, as bigotry directed against people who make either significantly more money than the bigot does, or significantly less. (Of course that’s not all there is to social class, not by a long shot, but for our present purposes, as an ostensive definition, it will do.)

Are the poor bigoted against the well-to-do? You bet. Bigotry directed up the social ladder, though, is far more than matched, in volume and nastiness, by bigotry directed down. It’s a source of repeated amusement to me that rich people in this country so often inveigh against the horrors of class warfare. Class warfare is their bread and butter. The ongoing warfare of the rich against the poor, and of the affluent middle and upper middle classes against the working class, creates and maintains the vast disparities of wealth and privilege in contemporary American society. What upsets the rich and the merely affluent about class warfare, of course, is the thought that they might someday be treated the way they treat everyone else.

Until last year, if you wanted to experience the class bigotry that’s so common among the affluent classes in today’s America, you pretty much had to be a member of those affluent classes, or at least good enough at passing to be present at the social events where their bigotry saw free play. Since Donald Trump broke out of the Republican pack early last year, though, that hindrance has gone by the boards. Those who want to observe American class bigotry at its choicest need only listen to what a great many of the public voices of the well-to-do are saying about the people whose votes and enthusiasm have sent Trump to the White House.

You see, that’s a massive part of the reason a Trump presidency is so unacceptable to so many affluent Americans:  his candidacy, unlike those of all his rivals, was primarily backed by “those people.”

It’s probably necessary to clarify just who “those people” are. During the election, and even more so afterwards, the mainstream media here in the United States have seemingly been unable to utter the words “working class” without sticking the labels “white” in front and “men” behind. The resulting rhetoric seems to be claiming that the relatively small fraction of the American voting public that’s white, male, and working class somehow managed to hand the election to Donald Trump all by themselves, despite the united efforts of everyone else.

Of course that’s not what happened. A huge majority of white working class women also voted for Trump, for example.  So, according to exit polls, did about a third of Hispanic men and about a quarter of Hispanic women; so did varying fractions of other American minority voting blocs, with African-American voters (the least likely to vote for Trump) still putting something like fourteen per cent in his column. Add it all up, and you’ll find that the majority of people who voted for Trump weren’t white working class men at all—and we don’t even need to talk about the huge number of registered voters of all races and genders who usually turn out for Democratic candidates, but stayed home in disgust this year, and thus deprived Clinton of the turnout that could have given her the victory.

Somehow, though, pundits and activists who fly to their keyboards at a moment’s notice to denounce the erasure of women and people of color in any other context are eagerly cooperating in the erasure of women and people of color in this one case. What’s more, that same erasure went on continuously all through the campaign. Those of my readers who followed the media coverage of the race last year will recall confident proclamations that women wouldn’t vote for Trump because his words and actions had given offense to feminists, that Hispanics (or people of color in general) wouldn’t vote for Trump because social-justice activists denounced his attitudes toward illegal immigrants from Mexico as racist, and so on. The media took these proclamations as simple statements of fact—and of course that was one of the reasons media pundits were blindsided by Trump’s victory.

The facts of the matter are that a great many American women don’t happen to agree with feminists, nor do all people of color agree with the social-justice activists who claim to speak in their name. For that matter, may I point out to my fellow inhabitants of Gringostan that the terms “Hispanic” and “Mexican-American” are not synonyms? Americans of Hispanic descent trace their ancestry to many different nations of origin, each of which has its own distinctive culture and history, and they don’t form a single monolithic electoral bloc. (The Cuban-American community in Florida, to cite only one of the more obvious examples, very often votes Republican and played a significant role in giving that electoral vote-rich state to Trump.)

Behind the media-manufactured facade of white working class men as the cackling villains who gave the country to Donald Trump, in other words, lies a reality far more in keeping with the complexities of American electoral politics: a ramshackle coalition of many different voting blocs and interest groups, each with its own assortment of reasons for voting for a candidate feared and despised by the US political establishment and the mainstream media.  That coalition included a very large majority of the US working class in general, and while white working class voters of both genders were disproportionately more likely to have voted for Trump than their nonwhite equivalents, it wasn’t simply a matter of whiteness, or for that matter maleness.

It was, however, to a very great extent a matter of social class. This isn’t just because so large a fraction of working class voters generally backed Trump; it’s also because Trump saw this from the beginning, and aimed his campaign squarely at the working class vote. His signature red ball cap was part of that—can you imagine Hillary Clinton wearing so proletarian a garment without absurdity?—but, as I pointed out a year ago, so was his deliberate strategy of saying (and tweeting) things that would get the liberal punditocracy to denounce him. The tones of sneering contempt and condescension they directed at him were all too familiar to his working class audiences, who have been treated to the same tones unceasingly by their soi-disant betters for decades now.

Much of the pushback against Trump’s impending presidency, in turn, is heavily larded with that same sneering contempt and condescension—the unending claims, for example, that the only reason people could possibly have chosen to vote for Trump was because they were racist misogynistic morons, and the like. (These days, terms such as “racist” and “misogynistic,” in the mouths of the affluent, are as often as not class-based insults rather than objective descriptions of attitudes.) The question I’d like to raise at this point, though, is why the affluent don’t seem to be able to bring themselves to come right out and denounce Trump as the candidate of the filthy rabble. Why must they borrow the rhetoric of identity politics and twist it (and themselves) into pretzel shapes instead?

There, dear reader, hangs a tale.

In the aftermath of the social convulsions of the 1960s, the wealthy elite occupying the core positions of power in the United States offered a tacit bargain to a variety of movements for social change.  Those individuals and groups who were willing to give up the struggle to change the system, and settled instead for a slightly improved place within it, suddenly started to receive corporate and government funding, and carefully vetted leaders from within the movements in question were brought into elite circles as junior partners. Those individuals and groups who refused these blandishments were marginalized, generally with the help of their more compliant peers.

If you ever wondered, for example, why environmental groups such as the Sierra Club and Friends of the Earth changed so quickly from scruffy fire-breathing activists to slickly groomed and well-funded corporate enablers, well, now you know. Equally, that’s why mainstream feminist organizations by and large stopped worrying about the concerns of the majority of women and fixated instead on “breaking the glass ceiling”—that is to say, giving women who already belong to the privileged classes access to more privilege than they have already. The core demand placed on former radicals who wanted to cash in on the offer, though, was that they drop their demands for economic justice—and American society being what it is, that meant that they had to stop talking about class issues.

The interesting thing is that a good many American radicals were already willing to meet them halfway on that. The New Left of the 1960s, like the old Left of the between-the-wars era, was mostly Marxist in its theoretical underpinnings, and so was hamstrung by the mismatch between Marxist theory and one of the enduring realities of American politics. According to Marxist theory, socialist revolution is led by the radicalized intelligentsia, but it gets the muscle it needs to overthrow the capitalist system from the working classes. This is the rock on which wave after wave of Marxist activism has broken and gone streaming back out to sea, because the American working classes are serenely uninterested in taking up the world-historical role that Marxist theory assigns to them. All they want is plenty of full time jobs at a living wage.  Give them that, and revolutionary activists can bellow themselves hoarse without getting the least flicker of interest out of them.

Every so often, the affluent classes lose track of this, and try to force the working classes to put up with extensive joblessness and low pay, so that affluent Americans can pocket the proceeds. This never ends well.  After an interval, the working classes pick up whatever implement is handy—Andrew Jackson, the Grange, the Populist movement, the New Deal, Donald Trump—and beat the affluent classes about the head and shoulders with it until the latter finally get a clue. This might seem promising for Marxist revolutionaries, but it isn’t, because the Marxist revolutionaries inevitably rush in saying, in effect, “No, no, you shouldn’t settle for plenty of full time jobs at a living wage, you should die by the tens of thousands in an orgy of revolutionary violence so that we can seize power in your name.” My readers are welcome to imagine the response of the American working class to this sort of rhetoric.

The New Left, like the other American Marxist movements before its time, thus had a bruising face-first collision with cognitive dissonance: its supposedly infallible theory said one thing, but the facts refused to play along and said something very different. For much of the Sixties and Seventies, New Left theoreticians tried to cope with this by coming up with increasingly Byzantine redefinitions of “working class” that excluded the actual working class, so that they could continue to believe in the inevitability and imminence of the proletarian revolution Marx promised them. Around the time that this effort finally petered out into absurdity, it was replaced by the core concept of the identity politics currently central to the American left: the conviction that the only divisions in American society that matter are those that have some basis in biology.

Skin color, gender, ethnicity, sexual orientation, disability—these are the divisions that the American left likes to talk about these days, to the exclusion of all other social divisions, and especially to the exclusion of social class.  Since the left has dominated public discourse in the United States for many decades now, those have become the divisions that the American right talks about, too. (Please note, by the way, the last four words in the paragraph above: “some basis in biology.” I’m not saying that these categories are purely biological in nature; every one of them is defined in practice by a galaxy of cultural constructs and presuppositions, and the link to biology is an ostensive category marker rather than a definition. I insert this caveat because I’ve noticed that a great many people go out of their way to misunderstand the point I’m trying to make here.)

Are the divisions listed above important when it comes to discriminatory treatment in America today? Of course they are—but social class is also important. It’s by way of the erasure of social class as a major factor in American injustice that we wind up in the absurd situation in which a woman of color who makes a quarter million dollars a year plus benefits as a New York stockbroker can claim to be oppressed by a white guy in Indiana who’s working three part time jobs at minimum wage with no benefits in a desperate effort to keep his kids fed, when the political candidates that she supports and the economic policies from which she profits are largely responsible for his plight.

In politics as in physics, every action produces an equal and opposite reaction, and so absurdities of the sort just described have kindled the inevitable blowback. The Alt-Right scene that’s attracted so much belated attention from politicians and pundits over the last year is in large part a straightforward reaction to the identity politics of the left. Without too much inaccuracy, the Alt-Right can be seen as a network of young white men who’ve noticed that every other identity group in the country is being encouraged to band together to further its own interests at their expense, and responded by saying, “Okay, we can play that game too.” So far, you’ve got to admit, they’ve played it with verve.

That said, on the off chance that any devout worshippers of the great god Kek happen to be within earshot, I have a bit of advice that I hope will prove helpful. The next time you want to goad affluent American liberals into an all-out, fist-pounding, saliva-spraying Donald Duck meltdown, you don’t need the Jew-baiting, the misogyny, the racial slurs, and the rest of it.  All you have to do is call them on their class privilege. You’ll want to have the popcorn popped, buttered, and salted first, though, because if my experience is anything to go by, you’ll be enjoying a world-class hissy fit in seconds.

I’d also like to offer the rest of my readers another bit of advice that, again, I hope will prove helpful. As Donald Trump becomes the forty-fifth president of the United States and begins to push the agenda that got him into the White House, it may be useful to have a convenient way to sort through the mix of signals and noise from the opposition. When you hear people raising reasoned objections to Trump’s policies and appointments, odds are that you’re listening to the sort of thoughtful dissent that’s essential to any semblance of democracy, and it may be worth taking seriously. When you hear people criticizing Trump and his appointees for doing the same thing his rivals would have done, or his predecessors did, odds are that you’re getting the normal hypocrisy of partisan politics, and you can roll your eyes and stroll on.

But when you hear people shrieking that Donald Trump is the illegitimate result of a one-night stand between Ming the Merciless and Cruella de Vil, that he cackles in Russian while barbecuing babies on a bonfire, that everyone who voted for him must be a card-carrying Nazi who hates the human race, or whatever other bit of over-the-top hate speech happens to be fashionable among the chattering classes at the moment—why, then, dear reader, you’re hearing a phenomenon as omnipresent and unmentionable in today’s America as sex was in Victorian England. You’re hearing the voice of class bigotry: the hate that dare not speak its name.