Control

Blue Heron

The word traces to the medieval Latin verb contrarotulare, a combination of contra (against) and rotulus (a roll of parchment, the diminutive of rota, wheel). In ancient times, official contracts, laws, regulations, treaties, and decrees were reified through written documentation stored as cylinders of rolled parchment. Supporting a legal argument or verifying the accuracy of a claim involved comparing statements or assertions “against the rolls.” To control something originally meant that you had access to the documentary means of justifying the exercise of power over that thing: the bill of sale for a slave, for example.

Control is an entirely civilized word. Literacy is a prerequisite—the very idea of control is derived from the magical power civilization assigns to things that have been written down.

The word has grown much since its earliest days. Its application is no longer limited to the realm of bureaucratized power; its broad metaphorical application has made it a word that the civilized simply cannot do without. Control as metaphor is so widespread and commonplace that it is no longer possible to recognize it as metaphor.

We are asked to control our temper, for example. We are told the control of fire was a decisive event in our species’ evolution. But this is metaphor. No one really has power over fire—whether we are talking about an emotional flame or one that burns skin, not the kind of power that commands the labor of another person, not the kind of power that seals a man in prison, not the kind of power that empties the ocean of fish and the forest of trees.

Control—real control, not its metaphorical cousin—still requires documentary means of justification. An Achilles heel here, I think: remove the words, and you remove the power. Burn the rolls, and the magic becomes mere ashes.  

Two unrelated events

A warm afternoon in early March, and the dog and I are sitting in the sunshine on the deck. There is a birdfeeder hanging just on the other side of the deck rail in front of me, perhaps only six feet away. A single small round bird is feeding, apparently also enjoying the sun’s munificence. I am watching the bird eat—seed husks spilling out of both sides of its beak—while I ride a slowly snaking train of thought on its way to nowhere in particular.

Then the air shatters suddenly someplace just to my left, and a hawk bullets past the birdfeeder in a laser-straight line, a speckled football that plucks the bird off its perch like a street magician making a dollar bill disappear. The dog jumps to her feet and breathes a reflexive growl, but it is over and done by the time either of us knows what happened. My eyes linger on the motionless birdfeeder while I shake off the impulse toward horror and replace it with the ragged beginnings of a joke: “it’s called a birdfeeder for a reason.”

A half hour later a student emails me, inquiring about her grade and apologizing for several missing assignments. She says that things have not been good. She says she has been struggling and unable to keep up with her classes because several members of her family have recently been killed, victims of ethnic cleansing, slaughtered as part of the genocide ripping its way through the Tigray region of northern Ethiopia.  

She asks about the possibility for extra credit to bring her grade up.

My eyes linger on the motionless words on the screen, and I am unable to shake off the impulse toward horror because there is nothing to replace it with.  

Been caught stealing, once when I was five: some thoughts on instinctual morality

Just like the first line in the Jane’s Addiction song. In fact, during that same summer, the summer of 1966, I became a criminal of the lowest caliber, not just a thief, but an arsonist as well as a kind of pimp. The arson is the only thing that I felt truly guilty about. And that’s the thing that fascinates me now, that I felt rock-in-the-pit-of-my-stomach horrible for weeks after starting a fire, but had no misgivings about the other two; with the theft in particular, I felt absolutely no remorse whatsoever.

I got caught stealing a single peanut from the grocery store. While mom was distracted by my little sister, I pocketed a peanut from a heaping pile of roasted peanuts in a bulk bin in the produce section. When we got back to the car, I pulled it out, started to crack it open, and mom suddenly realized what I had done.

She was livid. She was angrier with me than she had ever been before. She said she was going to take me back into the store and find the store manager and have me give the peanut back to him—in person—and apologize for stealing. I resisted frantically and started bawling. The thought of confessing my crime to the store manager, a complete stranger, was terrifying. I distinctly remember trying to reason with her at the time, arguing that there were so many peanuts that taking a single one of them simply didn’t make a difference. And, besides, unlike the other boxed and packaged products for sale in the store, the peanuts were unpackaged and out in the open, clearly available for anyone who wanted one. It was like the always-full bowl of tiny pillow-shaped pastel mints that my grandmother had sitting on the coffee table, free for anyone who had a hankering.

OK, so my five-year-old arguments were nowhere near as articulate as all that. And even if they were, I’m pretty sure my mother wouldn’t have been persuaded. We sat like that in the car for a long time, with her demanding that I take the peanut back, and me pleading desperately through tears. I’m not sure why, but she eventually gave in to my sobbing pleas—I suspect it probably had something to do with my one-year-old sister, who, not wanting to be left out, had started crying herself. Whatever the case, she started the car and drove us home.

The arson occurred sometime after the peanut incident, on a weekend when my dad was home and burning some yard waste just outside the backyard fence. I grabbed a stick and plunged it into the heart of the fire until it ignited, and then took it across the alley and tossed it in the tall dry grass alongside the neighbor’s red-stained woven-cedar fence. By the time the fire department put it out, an entire section of the fence was turned to charcoal, along with the lower branches of a crabapple tree and a sizeable portion of the vacant field next door.

But I never got caught. When asked if I knew how the fire started, I lied and blamed it on a couple of older kids I said that I saw in the alley earlier that day. I only sort of made the story up, and my description of the kids was based entirely on the bullies who had tossed my bike in the field and tried to beat me up a few weeks earlier, so my story was coherent and consistent—and the cops seemed to recognize who I was talking about. I was entirely in the clear. No one suspected me. And when I finally came clean several weeks after the incident, my parents didn’t believe me at first. I had to come clean, though. My immature conscience simply couldn’t bear the weight.

My time as a pimp was pretty limited, restricted to one afternoon. It involved Becky, a girl my age who, for a short time, lived two houses down. I convinced Becky to pee in front of me and two of my friends in the backyard sandbox. She felt embarrassed and went home right afterwards, but my friends and I talked about the incident for months.

So, here’s my question. I was completely wracked with guilt about the fire. And although I don’t remember experiencing any actual guilt about Becky, I no longer wanted to play with her because things between us felt a little awkward, something perhaps approaching guilt—at the very least I had the sense that I probably shouldn’t have talked her into the dirty deed. But the shoplifting never bothered me. I was bothered by the fact that my mother was so upset, but not by the act of theft itself. Why? What is it about the peanut that is different from the other two?

The easy answer is that it was just a single peanut. But that doesn’t feel right to me. I suspect that my mother’s reaction wouldn’t have been any different if it had been a more expensive item. And, besides, my five-year-old mind was not developed enough to appreciate the relative worth of things in economic terms. There is something else about this specific act of theft itself, something that makes it qualitatively different from the other two crimes.

Qualitatively different from other kinds of theft, as well. I have stolen a few things since the peanut incident, but never just for the thrill of it like the kleptomaniac couple in the Jane’s Addiction song. Stealing seems morally wrong to me on some level, but, as offenses go, stealing food barely registers, something on par with jaywalking or growing cannabis in your backyard garden in Indiana in terms of its triviality. Looking back, I wonder whether I would have felt guilt had I stolen something that wasn’t edible, a plastic toy for example, instead of a peanut.

And the guilt I felt about the fire wasn’t really about the fire itself. Although arson is a property crime, I was too young to fully appreciate the destructive impact of the fire I started. My guilt was not directed at the material damage I caused—which was, all things considered, pretty negligible. And even from my adult perspective, I can’t imagine ever feeling remorse solely for causing property damage. Especially since I now know how insurance companies work.

What I remember most about my experience of the incident at the time was feeling shock and horror at all the commotion the fire caused: neighbors ran over with lengths of garden hose; two fire trucks came with their sirens blasting; the police were there. From my five-year-old point of view, it was a massive spectacle. All of these people were mobilized over something that I did and lied about doing.

And I think that was it. It was the lie part; it was the not-taking-responsibility for the results of my actions, in combination with the trouble I had caused other people, that played most heavily on my conscience. Once I told my parents what I had done, the guilt dissipated quickly. Thinking back on the incident now, from the nostalgic vantage of five and a half decades, only makes me smile—no trace of anything close to guilt or remorse remains.

So, back to my question: why feelings of guilt over the fire, even if the guilt was really more about causing other people trouble and then lying about it than the actual fire, but no feelings of anything approaching guilt for stealing the peanut? Again, to say that it is a matter of scope of impact or degree of result—pocketing one tiny peanut versus mobilizing the fire department—doesn’t feel like the whole story. The critical difference seems to me to be one of social impact: the fire affected numerous other people; the peanut was only a problem for my mom.

Perhaps the difference has something to do with the fact that I am a social primate, that I have psychological expectations—call them instincts—that are a result of millions of years of natural selection, most of which occurred in small egalitarian communities where having good relationships with others was a requisite for mutual survival. And despite the fact that civilization has veneered over my egalitarian hunter-gatherer psychology with a thin layer supporting obligatory acquiescence to authority—a layer entirely unrelated to my evolutionary heritage but necessary to keep the consumption machine running smoothly and efficiently—my base morality is, like yours and everyone else’s, Paleolithic in its design.

Think about my feelings of guilt (or lack thereof) in terms of what might be expected of an instinctual morality stitched from a social cloth woven across a quarter million years of life in the context of hunter-gatherer band society. There are distinct features of band society that would seem to be consistent both with my feelings of remorse about the fire and my lack of those feelings for stealing the peanut.  

First, modern-day hunter-gatherers sometimes have very elaborate social leveling mechanisms to guard against the undue influence of specific individuals. Any action by an individual that disturbs the relative harmony of the group is a potential threat to group coherence. And any threat to group coherence is a potential threat to survival. By starting the fire, I caused a spectacle that disrupted the normal calm activity of the people in my community. And by lying about it, I further violated my instinctual sense of obligation to the larger group. And, although this might be stretching things a bit, perhaps coming clean and telling my parents about my crime was a way of seeking to repair a psychological rift in my feelings of community connection.

The peanut is much easier to explain in terms of my instinctual hunter-gatherer sensibilities. Band society—like the cultures of most other higher primates—operates under powerful norms of reciprocity. This is especially true with respect to food sharing. Some days the hunting or foraging goes well for you, and when it does, you share all the results with everyone else, making sure to divvy things up as equitably as possible. On other days, you might come up empty, and on those days one or more of your neighbors will have your back, so you never really have to worry about going hungry. Food is something that is readily available, always shared, and never something that you need to ask for. Seen through the instinctual lens of reciprocity, the massive pile of peanuts in the grocery store obviously meant that I was expected to help myself (and the fact that people have to pay other people for food flies in the face of a quarter million years of our species’ experience).

OK, so my feelings of guilt about the fire can be linked (perhaps) to my instincts regarding my group obligations and my lack of remorse for stealing a peanut can be linked to my evolved expectations regarding reciprocity norms. But what about my brief career as a pimp? How might instinctual morality explain my relative lack of remorse about Becky?

Now that I think about it, probably no need to tease that one out. I suspect the evolutionary connections there are, unfortunately, pretty obvious. 

A response to a “Q” about ridiculous conspiracy theories

I have been asked numerous times recently some version of the following question:

How can people like these QAnon folks actually believe their completely ridiculous conspiracy theories?

The easy answer comes right out of social psychology: it’s a simple case of social-media-facilitated group polarization. Group polarization is the phenomenon in which interacting with a group of people who share your beliefs and opinions makes those beliefs and opinions more extreme versions of what they were before. With frequent interaction, beliefs can evolve into some pretty bizarre forms.

There are at least two mechanisms for this. First, adopting an extreme version of what everyone in your group already believes is a way of gaining notice and notoriety in the group.

Second, you have reasons and justifications for believing what you do, and the people you interact with also have reasons and justifications for believing what they do—some of which might be different from your reasons and justifications. So, when you interact with a group of like-minded people, you are likely to acquire additional ways of justifying your shared beliefs—which makes your beliefs appear even more reasonable to you than they were before.

The more difficult answer—and one that I suspect some folks won’t want to hear—is as much philosophical as it is psychological, and hinges on the fact that reality is something that is socially constructed. The bat-shit crazy conspiracy theories that QAnon and their ilk promote are not actually all that unusual in terms of their bat-shit craziness. The thing that makes them seem unusual has more to do with the (relatively) small number of folks who believe them, than with anything about their content. Right now, a large proportion of my friends and acquaintances believe medieval fairytales about the magical exploits of the son of a Bronze Age sky-dwelling war god. In terms of craziness of content, the only thing that separates QAnon conspiracy theories and mainstream Christianity is the mainstream part.  

A response to a student question about the mentality of pro-Trump senators

[…] The reactance theory was clearly at play in Senate. The majority of Republicans did not want to have their attitudes changed so they were not changed. It is interesting that in the lecture you mentioned that people with higher levels of intelligence are more immune to attitude change. If we were in class I would ask how this relates to learning and neuroplasticity. I always thought that children were smarter than adults because of their willingness to change their beliefs when presented with new information.

Your question would make for a very good class discussion. I would be tempted to approach this from two levels, the neural level and the cognitive level.

In terms of brain development, a child’s brain is much more plastic than an adult’s is. Early brain development proceeds by a process of proliferation and pruning in which an explosion of neural connections is followed by a winnowing away of those that aren’t used. At two years old, you have more neural connections than you will have the rest of your life. Learning from that point is as much about eliminating connections as it is about growing new ones—you might think of it as reducing the noise in the system.

From a cognitive perspective, what is happening is schema development. Recall the distinction between assimilation and accommodation: what is happening with the young child is all about accommodation, creating entirely new schemas and actively modifying old ones. As we get older, and our schemas become more plentiful and more elaborate, we become far more prone to assimilation—to the point where we are quite likely to assimilate—deal with new information by using existing schemas—even when we should accommodate. This tendency is sometimes called “the assimilation bias.” This bias occurs because at the neurological level accommodation is “expensive” in that it requires the formation of new connections and/or alterations to existing ones (this is also why taking a class in an area that you aren’t familiar with can be so taxing: creating new schemas is hard work).

Intelligence comes into play here. Higher intelligence means more sophisticated and elaborate schemas, which means more ways to assimilate new information into your existing knowledge base, which means that it is going to take more sophisticated persuasion to get you to change your attitudes. But I don’t think the Senate Republicans’ refusal to change their minds had much to do with intelligence: the Republicans who voted to convict seem to me to be at least as intelligent, as a group, as the ones who voted to acquit.

As we get older, we become increasingly likely to rely on existing schemas and incorporate new information into what we already think we know (aka increasingly dogmatic). Ignoring the likely influence of factors such as party loyalty and fear of blowback from constituents, and strictly from a cognitive processing perspective, assimilation bias is how I would explain the pro-Trump Republicans’ apparent inability to be moved by the overwhelming weight of the facts presented.

Broken state part 2: the elephant in the room

“The elephant in the room” is a metaphorical idiom that can be traced back to an early 19th century fable by a Russian poet. It is invoked in situations where attention is being diverted from something that is or should be glaringly obvious because it would be too uncomfortable, embarrassing, dangerous, or difficult to address head-on. It is a favorite of addiction counselors and family therapists. It also frequently finds its way into political rhetoric, as a tool for intimating that “the real issue” is being ignored.

I want to borrow this particular room-inhabiting elephant for a moment, but I want to squeeze him into a slightly altered form in order to use him as an allegory for our current political circumstances.

Imagine a room that is occupied by an elephant, but everyone in this room openly acknowledges the elephant’s presence. Imagine further that the people in the room are desperately trying to limit the elephant’s ability to damage the room’s furnishings but everyone has a different idea about how to handle the situation; no one can agree on what needs to be done to keep the elephant calm, where in the room it should be standing, which direction its trunk should be pointing, how each of its feet should be positioned, what to do about the rapidly accumulating volume of elephant dung, etc.

As with the original Russian fable version, this altered version is also meant to convey the underlying moral that something obvious is being ignored. And, as with the original, the obvious thing has something to do with the elephant. But in this case, the reason the obvious thing is being ignored has nothing to do with avoiding discomfort or embarrassment or danger or difficulty. Instead, the real issue is being ignored because the way the problem has been framed renders it invisible. The real issue is not apparent to the people in the room because they have identified the problem as the need to limit the elephant’s destructive potential; they have framed the problem in terms of the question “How do we best control the elephant?” rather than in terms of the most glaringly obvious question: “Why the hell is there an elephant in this room?”

In part 1, I suggested that a common cognitive bias might help explain some of the differences between folks on the extreme left and the extreme right regions of the political spectrum. Specifically, folks on the far right tend to make internal attributions when rationalizing their policy agenda, and those on the far left are more likely to frame things in terms of external, situational factors. The right sees individuals as ultimately responsible for their good or bad fortune in life: poverty is a sign of personal failing; wealth is a sign of personal merit. The left, on the other hand (the left hand?) is more sensitive to the influence of external, contextual forces: poverty comes from lack of education and opportunity, and is exacerbated by systemic racism and an economic system designed to increase income disparity—a system that actually requires income disparity in order to function properly, a system that intentionally leverages the desperation that attends the threat of poverty.

I went on to say that, because of the focus placed on the role of systemic forces, I see the “radical” left as the lesser of evils, and then went on to clarify that the left reflects the lesser of evils, but is still very much an evil. And I ended with: “The truth is that everyone on both the far left and far right—and all points in between—is making a fundamental and critical mistake in their judgment of things.” Here I want to suggest that the fundamental and critical mistake in judgment is a result of how the problem is being framed. As with the people in my altered elephant-in-the-room account, the real issue is being ignored by everyone on both the left and right and all points in between because the problem is being framed in a way that precludes the asking of the most glaringly obvious question.

Before I go on, let me quickly address my use of the singular problem in the above sentence. Despite an expanding number of specific issues and concerns, there is really only one main problem that is being addressed: the system needs to change.[1] Everyone differs in terms of which particular parts of the system they think need to change, and in which specific direction, and by how much. But the problem—the only problem—is with the current state of the system. Poverty, climate change, pandemic response, healthcare, most if not all mainstream political policy disagreements ultimately come down to whether or how or which direction or to what degree the system needs to be changed. Once the problem has been framed in this way, all proposed solutions will naturally involve making—or not making or reversing—changes, tweaks, alterations, and adjustments to the system.

The system, of course, is the elephant in my allegory. And despite all the massive damage and destruction and pain and suffering it is causing, everyone continues to act as if its presence in the room is a natural thing, as if it belongs in the room. Everyone on all sides of every debate wants the system to be different than it is, but they all fully embrace the system itself as a necessary feature of human social life. The idea that there needs to be a system to begin with is never questioned. It is, in fact, unquestionable.

“Why the hell does there need to be a system in the first place?” should be the most glaringly obvious question. And the fact that humans have existed as a species for a long, long time before their lives were made systematic, the fact that people flourished for hundreds of thousands of years in the complete absence of a system of any kind, provides pretty clear evidence that a system isn’t an essential feature of human life.   

Yes, there are all kinds of problems with our current system. But none of these problems are the real issue. The real issue isn’t with the nature of the system, with the details about how it is currently structured or organized or with how its operative rules are or are not being applied in any specific situation. The real issue is that our lives are being structured according to the operative rules of a system. Our very thoughts have been systematized—and even our emotional responses are being structured according to systematic patterns.

We need to flush the system out of our hearts and minds. But before we can do that, we need to acknowledge that the real issue is the system itself. The real issue is that we are daily (hourly, every second) obliged to superimpose a mechanical, systematic overlay atop all of our organically human activity, framing our experience of the world in terms of civilization’s mechanistic thought-forms. Until we acknowledge this particularly gargantuan elephant in the room, any revolt against the current system is doomed to fail.

Consider what Robert Pirsig had to say about this in Zen and the Art of Motorcycle Maintenance:

But to tear down a factory or to revolt against a government […] because it is a system is to attack effects rather than causes; and as long as the attack is upon effects only, no change is possible. The true system, the real system, is our present construction of systematic thought itself […], and if a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a systematic government,[2] but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves in the succeeding government.

It isn’t a matter of deciding whether internal motives or situational influences are more or less important. They both emerge from within a systematized way of living that is rapidly destroying the living planet. Instead of arguing about how best to fix the system, we need to stop living in systematic ways and reembrace organic and authentically human modes of life.


[1] By “system” I mean the collection of laws and policies and procedures and technologies and bureaucratically organized conduits of power that control and structure and direct and otherwise impact the activity of people.  

[2] “Systematic government” is redundant.

Broken state part 1: attributional bias and the political divide

After we memorized the names of all 50 state capitals, we had to choose one state to become a quasi-expert on. It was fifth grade, and I chose Utah because it was the location of Dinosaur National Monument, a massive fossilized sandbar packed to the brim with Jurassic period bones, and dinosaurs occupied most of the free storage area in my eleven-year-old brain. I added Utah to the blank spaces in a form letter request for information that we all copied from the chalk board, and mailed it off to Utah’s chamber of commerce. I didn’t hear anything back after several weeks of waiting, so the information for my project came from an out-of-date edition of The Encyclopedia Britannica.

The culmination of the months-long state assignment was a mock-up state float, a desktop depiction of important things about our chosen state, rendered in some kind of artistic fashion. My float consisted of an upside-down double-wide shoebox wrapped in orange crepe paper (Utah was orange on the US map hanging in the classroom) topped with a grotesque honeybee made out of knotted yellow and black yarn suspended at the end of a piece of wire inserted into a clay beehive (Utah is “The Beehive State”). The beehive was flanked by the state’s name in block construction paper letters.

Elona sat in the desk behind me. She won the lottery that was held for the dozen or so students who wanted Washington as their state, and her mother had helped her construct a truly amazing Washington-shaped float made out of some kind of homemade plaster, elaborately painted and labeled, and adorned with tiny trees, a toothpick model of the Space Needle, and creative representations of other iconic structures, all resting delicately on a large wooden kitchen cutting board, the kind that slides out from under the counter. The board was too big to fit entirely on the top of her desk, and it stuck out three or four inches to the front and back. When I pulled my chair out to sit down, the back of my wrist knocked the board onto the floor, where her float shattered into tiny Humpty-Dumpty pieces.

The room fell silent. Elona wailed and burst into tears. What happened after that is kind of foggy. I remember the teacher grabbing me and yelling in my face. I remember saying I was sorry and pleading that it was an accident. The teacher eventually convened the class into a kind of classroom tribunal, and asked the other students what should be done about the situation. It seemed obvious to everyone that I needed to be punished. It was suggested that I get an F on my project. It was also suggested that Elona should be allowed to destroy my float—and this suggestion quickly gained consensus.

I tried to plead my case. It was an accident, and, although I was truly sorry it happened, it clearly wasn’t my fault. Eventually, after the teacher was satisfied that I had displayed the appropriate level of contrition, she agreed that I was probably not entirely to blame. Or at the very least, that the act wasn’t intentional. But “negligent float-slaughter” is still a serious crime, and to satisfy the students’ bloodlust, Elona would get an automatic A for her float and I would not be able to receive a grade higher than B-.

Here, I want to spend a moment on the students’ bloodlust. I was traumatized by this at the time. How could these people possibly think that I would purposely wreck Elona’s float? These kids were not strangers. Most of us had known each other since first or second grade. I wasn’t a bully. I had no history of violent behavior. I was a rather friendly sort of guy, if a little on the shy side. And, to make things even worse, I had a bit of a crush on Elona. How could they judge me so harshly?

It was several decades before I found an answer to that question in the form of a ubiquitous social-cognitive bias called the fundamental attribution error, now a standard topic in introductory psychology textbooks. Briefly, the fundamental attribution error refers to the fact that when we try to explain the actions of other people, we have a powerful tendency to overestimate the influence of internal, dispositional factors (personal traits), and underestimate—or completely ignore—the potential influence of contextual factors (the situation). When someone does something, we automatically see their behavior as a reflection of something about them as a person, rather than as a response to their larger circumstances. Interestingly, this bias is completely reversed when we attempt to explain our own behavior.

The explanation for this bias is twofold. First, there is the matter of knowledge access. When I observe another person acting, I have only their action itself to work with. I see them. I see what they do. What I don’t see is their situation, the larger context in which their action is embedded. I don’t know anything about the thought processes that led them to act in such a way. I don’t know what happened to them earlier that day. I don’t know what their physical or emotional state is. And I often don’t have a clear perspective on the details of their immediate environment. When I go to explain my own actions, however, I have extensive situational knowledge to draw upon. Second, when it comes to our own behavior, self-esteem preservation comes into play. This is especially true for actions that might be viewed negatively by others. So, we are motivated to rationalize our own negative actions in terms of external causes and influences, but we have no such motivation to preserve the self-esteem of other people.

Given their still-developing moral sensibilities and the limited information they had to work with, the other students’ reflexive desire to see me punished was entirely understandable. Elona’s float was in pieces as a result of a direct physical act on my part. The critical detail that the cutting board protruded into the space needed for the back of my chair was something only known to me—and even then, only after I sent her float crashing to the floor.  

The other day, while I was listening to a Sunday morning news show where political talking heads were babbling their inane party-line rhetoric, it suddenly occurred to me that the fundamental attribution error might be used as a way of interpreting many of the major policy differences between the political right and the political left. Part of the reason the two sides of the political divide are talking past each other might be because they are framing things according to a different attributional default. For conservatives on the right, human behavior is driven primarily if not entirely by personal values and internal motives. For progressives on the left, human behavior is largely a response to external systemic forces.

As an example, consider how the two sides of the political divide talk about the sources of poverty. For those on the right, poverty is a personal failing, a sign of laziness and lack of intelligence and initiative. The right’s focus on internal factors here is highlighted by the absurd idea that poor people need to “pull themselves up by their own bootstraps” (something literally impossible given the physical laws of our universe!). For those on the left, by contrast, poverty is a result of lack of access to education and opportunity—often exacerbated by systemic racism—and a side effect of an economic system that is designed specifically to promote the accumulation of wealth among the already wealthy.

Other policy differences seem to follow a similar pattern, with the right seemingly more concerned about individual responsibility and intra-personal factors (“We can’t extend unemployment benefits because then people would have no incentive to go back to work”) and the left more focused on external constraints and systemic forces (“If we don’t extend unemployment benefits, people will lose their homes”).

Even the insane controversy over wearing facemasks fits roughly with this internal-external distinction, with whack-jobs on the far right claiming their personal freedoms are being violated if they have to cover their mouth and nose in public, entirely ignoring that the purpose of wearing a mask is to protect other people by limiting the spread of the virus. These folks see wearing facemasks as an internally driven personal choice, and consider the external context—a once-in-a-century global pandemic—as irrelevant (or worse: as a government deep-state plot to turn everyone into a communist).

Demographic differences between the progressive left and the far right might explain some of this. For one thing, progressives tend to be more highly educated than those on the far right, and thus more likely to look beyond immediate surface details when attempting to explain things. Recall that a major source of the fundamental attribution error is the lack of access to situational knowledge. More education means a more nuanced ability to understand the broader situation.

In addition, the progressive end of the spectrum is the ideological home of more LGBTQ people and people of color, people who have personal, firsthand experience with systemic discrimination, and thus an increased sensitivity to the subtle (and not-so-subtle) ways the system can assert its influence. Meanwhile, on the right you have a higher concentration of conservative religious beliefs that preach the importance of internal factors: a person’s experience in the afterlife depends critically on their actions and personal transgressions while alive, and salvation requires an intimate, personal commitment to god.

My purpose here is merely to draw attention to the possible relationship between a common cognitive bias and a person’s political leanings. Differences between the political left and right are obviously too complex to be explained by a simple attributional error. There are no doubt other psychological biases at play as well, along with the potential differences in moral reasoning that I have discussed before.

And finally, I don’t want to give the impression that in casting the folks on the MAGA far right as uneducated virus-spreading religious cucks I am purposely trying to make them look bad. They do that quite well enough on their own. But I also don’t want to give the false impression that I wholeheartedly embrace the progressive view of things either. Because of their attention to the role of systemic forces, I see the “radical” left as the lesser of evils—lesser, but still evil. And I especially don’t want to suggest that I am in any way sympathetic with independents or libertarians or any other wimpy fence-riding faction.

The truth is that everyone on both the far left and far right—and all points in between—is making a fundamental and critical mistake in their judgment of things, a mistake that goes far beyond the fundamental attribution error. A mistake that I will be exploring in detail in part 2.

More on the fundamental delusion of civilization

Despite rampant anthropomorphizing—or, partially because of it—civilized humans are convinced that they are a species apart, that they are superior to the other animals, that they are somehow the apex of evolution and possess qualities that are unique in the animal kingdom. The most frequently cited of these qualities is intelligence. Human intellect is superior to that of other animals by orders of magnitude.

There is a basic logical problem with drawing such a conclusion, however, considering that humans are the ones who have set the criteria for intelligence to begin with.

For example, a knee-jerk argument for human intellectual superiority is what could be called the argument from technological prowess. Human technology clearly stands out as something far, far beyond what any other creature is capable of. Human technology has cured pandemic diseases and delivered people to and from the moon. Some other animals make and use tools, but nothing any other animal does can come close to the simplest human appliance.

A major problem with this argument is that it assumes that the creation and use of advanced technology is evidence of intelligence rather than evidence of its opposite.

A simple survey of the negative consequences that have followed from a lifestyle based around advanced industrial technology should be enough to show that it is not a very intelligent way to live on the planet. Also, how is it, exactly, that helpless dependence on external devices demonstrates intelligence to begin with? The ability to fashion a crutch does not make you walk better than someone who has two strong legs, and the fact that a crutch is needed is direct evidence that something isn’t working right. Other animals are able to figure out how to do things for themselves. Other animals are able to function in the world just fine without external mechanical aids. To offer technology as evidence of human intellectual superiority reflects a narrow human-centric—check that: civilization-centric—definition of intelligence that simply can’t be generalized to the rest of the animal kingdom.

It is unlikely that any set of criteria for what counts as intelligence could be applied across species because what counts as intelligence is relative to the opportunities and demands of life as it is experienced. What might count as intelligence for a bee is something altogether different from what might count as intelligence for a dog, for example. The claim that humans are more intelligent than other creatures demonstrates a lack of understanding of other creatures—when it’s not simple chauvinism.

And it’s simple chauvinism almost always.

A further nail in the coffin of the technological prowess argument is the fact that humans themselves lived technologically “primitive” lifestyles for millions of years, combined with the fact that there are no meaningful brain differences between modern humans and those who were around 50,000 years ago—with the exception that human brains were on average a bit larger in the distant past due to a cooler average global climate.

It turns out that there is strong continuity among all vertebrates, and humans are not so distant from other mammals in terms of any major aptitude. Even language is not a defining human capacity. Other primates demonstrate all of the various characteristics of linguistic communication to one degree or another. And it is likely that the human ancestral tree is speckled throughout with creatures in possession of variations on the language capacities of Homo sapiens—perhaps even superior variations. The fact that modern humans are the last surviving hominid that can talk doesn’t make them somehow superior to the ones no longer around; going extinct is an inevitability in a world with dynamic climate and geography, not a sign of general inferiority. Besides, humans show every sign that they are going to have a comparatively short run on the planet—and language might in the end prove to be an important reason for that.

Language and the ability to conjure fictional symbolic worlds have likely been an important part of the human condition from the very start. But there is something about life in civilization that subverts these adaptive human capacities, reappropriates them, and directs them in ways that make civilized humans think that they are distinctly different from all other forms of life on Earth.

This is delusion, of course. But it is a particularly pernicious delusion. It is a delusion that is possible only in a mind that can conjure symbolic worlds, but the ability to conjure symbolic worlds did not create this delusion. Humans existed as non-delusional symbolic-world conjurers for a long time before civilization came along.

Combating the psychological inertia of the status quo

The naturalistic fallacy is a very common error in logic that was perhaps first articulated by Hume. It refers to the tendency for people to confuse what exists with what is good—the assumption that because something is, it is something that should be.

Psychologically, the naturalistic fallacy is supported by two closely related cognitive biases: existence bias and status quo bias.

Existence bias is a ubiquitous judgment heuristic in which people assign goodness or value to a situation, event, or potential future outcome based on the belief that the situation, event, or outcome represents an existing state of affairs. All things being equal, if you tell someone that something is common or prevalent, the person will judge it more favorably than if you tell them that it is uncommon.

The status quo bias refers to our lopsided valuation of existing circumstances. We have a strong tendency to prefer maintaining the status quo even if there are clearly superior alternatives. People are willing to invest far more energy and expense in maintaining the status quo than they would have been willing to invest to bring those conditions about in the first place.

We have a natural distaste for change, as if our present situation—whatever that situation happens to be—carries a potent psychological inertia. It’s easier to keep doing the same thing than it is to try something different.

Even if what we are doing isn’t working out so well for us.

Even if our present circumstances are dreadful.

Even if something clearly better is right there in front of us.

So, we put up with a job that is not entirely satisfying, or a marriage that is not entirely fulfilling, or a political system that immiserates the majority while expanding the power of those already in power, or a technology-dependent lifestyle that is burning the planet down to its base granite.

The naturalistic fallacy and its heuristic underpinnings can help explain the passive, unquestioning acceptance of the civilized status quo. Existence bias can lead to a devaluation of past conditions simply because they are no longer present. The mere fact that civilization exists is seen as clear evidence of civilization’s superiority over the life-ways it has displaced. And status quo bias produces a strong reluctance to consider doing anything different.

So, are there ways to make anti-civ/green anarchy/anarcho-primitivist ideals and ideas more prevalent? Or even just appear to be more prevalent (when it comes to biases, belief has far more power than actual fact)? Are there ways to sneak the future primitive into the status quo?  

Pathologically unnatural

Barclay Lake, North Cascades

Tracing the source of the tension between what I know and what I can articulate about what it is that separates the natural from the pathologically unnatural, I invariably end up with an oversimplification.

To express it in words is already to drain experience of all its flesh and fluids, to offer up a brittle skeleton as a stand-in, hoping the other person can gather enough dusty bone fragments to work with. I want it to be simple and direct. It certainly feels simple and direct. It feels as obvious as a gunshot, as obvious as a child’s scream in the dead of the night.  

The problem starts and ends with power. The difference between the natural and the pathologically unnatural is a function of power.

Nature knows no power. Power is a reified abstraction, a superadded feature of the world that appears concrete only because relations among human beings have been forced into a technological template.

Power is a function of technology, a feature of mechanical systems, a characteristic of machines. Power finds its way into the human experience only after humans have been made into servomechanisms, forged into component parts fitted against other parts—other people—and aggregated into the consumptive drivetrain of civilization.

It is not technology itself; an obsidian ax, a forest clearing, a collection of small dwellings made from gathered materials, people sharing food around an evening fire: these things are natural.

It is when human interactions have been made technological—systematic; a shopping mall, a crowded highway, a school classroom, a man with a badge and a gun guiding a bullet through the skull of a young boy: these things are not natural. And the distinction between them, the natural and the pathological, is not a simple binary. There is a third variable in the equation: power.