Scott Alexander writes “Why I Am Not A Conflict Theorist”. I recommend reading the whole article before this one, as I’ll largely be commenting on it.
Reading Scott’s article, I found myself concerned about messy definitions, non sequiturs, and bad epistemology, as well as the lack of attention paid to the scientific study of the article’s subject matter. I have a manuscript textbook on this subject here, on which my comments are largely based.
The Introduction
We begin with Scott’s introductory section, where he defines conflict theory.
Conflict theory is the belief that political disagreements come from material conflict. So for example, if rich people support capitalism, and poor people support socialism, this isn’t because one side doesn’t understand economics. It’s because rich people correctly believe capitalism is good for the rich …
Specifically, he says, conflict theorists think convincing others to change their politics is a waste of time:
Some people comment on my more political posts claiming that they’re useless. You can’t (they say) produce change by teaching people Economics 101 or the equivalent. Conflict theorists understand that nobody ever disagreed about Economics 101. Instead you should try to organize and galvanize your side, so they can win the conflict.
Conflict, however, is a loaded term. What does the effect of information on political behavior necessarily have to do with conflict, or with mistakes (mistake theory stands in dualistic contrast to conflict theory in Scott’s writing)? To make more sense of Scott’s definitions, I will define a Theory X and a Theory Y, using the letters X and Y to strip the names of connotative baggage.
Theory X ("conflict") is the theory that political behavior is memetically resistant. This seems to most closely match what Scott says about conflict theorists — they think people aren’t convinced to change their minds in politics.
For Theory Y, we need two sub-theories, because Scott draws a further distinction between being fooled and believing the “truth.” Theory Y is the theory that people do respond to memes and ideas in politics. Theory Y.1 adds that political actors believe no meme M such that T(M) = 0, where T is a truth function. We use an abstract “truth function” to sidestep the question of what truth actually is — Scott never defines it, but treating it abstractly lets us avoid that discussion without making any difference for this article, and it discards improper connotations. Theory Y.2, conversely, is the theory that actors do believe some meme M such that T(M) = 0.
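Stated a bit more formally (the notation below is my reconstruction, not Scott’s):

```latex
\begin{aligned}
\textbf{X:}\;& \text{exposure to memes does not change political behavior} \\
\textbf{Y:}\;& \text{exposure to memes can change political behavior} \\
\textbf{Y.1:}\;& \forall M:\ \mathrm{believes}(M) \implies T(M) = 1
  \quad\text{(political actors are never fooled)} \\
\textbf{Y.2:}\;& \exists M:\ \mathrm{believes}(M) \wedge T(M) = 0
  \quad\text{(some actors believe a false meme)}
\end{aligned}
```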
Responding to “Conflict Theory Has A Free Rider Problem”
Scott starts the next section by claiming that theory X or Y.1 (he refers to both as “conflict theory,” even though they are entirely different) probably can’t work in theory.
Before demonstrating that conflict theory doesn’t explain politics, let’s first notice that there are good theoretical reasons why it can’t work.
If you read the section, you will see that he essentially claims that the expected financial reward from political behavior is always slim. He closes the section with this claim:
For everyone else, including the merely very-rich, there must be some motivation beyond [financial] self-interest.
Here he identifies “conflict theory” with financial motivation, but that is neither theory X nor theory Y.1. I suppose he reasoned that it relates to theory X like this: if people are memetically resistant, what is left to motivate their behavior? Economics. Self-interest turns out to be empirically weak, so, by modus tollens, theory X is false. No scientific evidence supports the hidden premise that memetic resistance implies economic motivation, though — quite the opposite; reading any literature on this topic at all will show that the proposition is false. So the argument falls through entirely.
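Written out as a syllogism (again my reconstruction, not Scott’s wording; let M = “memes influence political behavior” and E = “economic self-interest motivates it”):

```latex
\begin{aligned}
(1)\;& \neg M \implies E \quad\text{(if not memes, only economics is left)} \\
(2)\;& \neg E \quad\text{(empirically, financial self-interest is weak)} \\
(3)\;& \therefore\ M \quad\text{(modus tollens; hence not theory X)}
\end{aligned}
```

The empirical objection is to premise (1): nothing forces a memetically resistant agent to be economically motivated.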
Responding to “The Salt Cap”
Next, Scott brings up a tax deduction that Trump capped, causing mostly coastal elites to pay more federal income tax. He concludes,
But why wasn’t it a bigger deal? The PMC ie coastal elites run the media, and more or less shape politics in their own image. This cap costs them 5% of their salary per year. If they cared at all about their own self-interest, or material conditions, it ought to be 1000x more important to them than wokeness or Ukraine or anything else. It should completely dominate the airwaves and Intertubes. Instead, crickets.
Now, this could look like an argument for Y.2 — but only if the elites were deluded and could be persuaded to act differently. Scott is specific that the elites understood the “truth”:
In 2020, the Democrats - party of coastal elites! - came back in power. They considered undoing Trump’s SALT cap. But they thought it would look bad to cut taxes on themselves at the same time they were expanding government, so they decided against. … I know about this mostly because I noticed my taxes going up. I’ve seen a few articles about it here and there. But it’s not a big national issue.
So what is he arguing for? His stated point is that “empirically, we find self-interest is surprisingly weak.” But what he actually provides is weak evidence for Theory X, one of the “conflict” theories. Stated logically: let M = “the SALT cap costs you 5% of your salary per year.” Scott observes that T(M) = 1 and that the elites believe M, yet the meme has no effect on their political behavior. That is weak evidence for theory X, the theory that politics is resistant to ideas and to persuasion in general.
Responding to “The Vaccines”
In this section, Scott points out that somebody must be wrong about the vaccines. Either they work, or they don’t:
Unlike the SALT cap, there is no conflict here.
If the vaccines are good, nobody benefits from pretending that they’re bad. Anti-vaxxers aren’t protecting their material self-interest. They’re putting their kids at risk of deadly diseases for no reason.
And if the vaccines are bad, maybe a few pharma companies benefit from shilling them. But no real people do. None of the hundred million or so pro-vaxx Americans love pharma companies enough that they’d risk their kids’ health to help them out.
Here there’s no plausible explanation except that one side or the other - the hundred million people who really want themselves and their kids to be vaccinated, or the hundred million people who really don’t - is making a terrible, tragic mistake.
This would be weak evidence for Y.2 under the assumption that there is an unknown fixed effect of vaccines on children. What if, however, the effect of the vaccine on a given child has an intrinsic random component? Then we have to examine people’s differing risk-tolerance profiles. It’s plausible that the average effect of a vaccine is to reduce the risk of sickness: perhaps, for every 100 people vaccinated, only 20 get sick with the virus, compared with 30 among the unvaccinated — but, say, 5 of the vaccinated get very sick or nearly die.
Perhaps liberals — through some combination of wanting kids less, having few enough that making another one is less of a hassle than it would be for conservatives, and perhaps a genetically higher tolerance for death risk (this seems to be the case in the literature) — find it acceptable to take on the extra death risk if it lowers the expected total harm to their child. Maybe they also don’t want their kid bringing home germs to them. And so on. Meanwhile, conservatives are willing to accept a higher rate of moderate sickness if it means avoiding their feared outcomes of death and needles.
In this case, vaccines are objectively both harmful and beneficial, depending on your intrinsic judging mechanism. Neither side is making a mistake — instead, both sides are obtaining their ideal risk profile under the constraints of reality.
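To make the reversal concrete, here is a minimal sketch using the made-up numbers above (it assumes the 5 severe outcomes fall on the vaccinated side, and the harm weights are hypothetical):

```python
# Hypothetical illustration of the risk-profile argument. Numbers are the
# article's invented figures, not real vaccine data.
# Per 100 people: vaccinate  -> 20 mild cases + 5 severe outcomes
#                 decline    -> 30 mild cases + 0 severe outcomes

def expected_harm(mild, severe, severe_weight):
    """Subjective total harm: mild cases count 1, severe cases count severe_weight."""
    return mild * 1 + severe * severe_weight

outcomes = {"vaccinate": (20, 5), "decline": (30, 0)}

# Two agents see identical information but weigh severe outcomes differently.
for label, weight in [("tolerant of severe risk", 1.5), ("averse to severe risk", 10)]:
    harms = {choice: expected_harm(m, s, weight) for choice, (m, s) in outcomes.items()}
    best = min(harms, key=harms.get)
    print(f"{label}: {harms} -> chooses {best}")
```

With a low severe-outcome weight the vaccinating choice minimizes expected harm; with a high weight the ranking flips. Neither agent is misinformed — they differ only in the weight, i.e., the intrinsic judging mechanism.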
The rest of the article: “psychological factors”?
After this, Scott shifts to part 2 of the article and proposes that “psychological factors” drive politics.
If you’ve read The Psychopolitics Of Trauma, you already know my answer to this: it’s all psychological. People support political positions which make them feel good. …
This, however, is closest to theory X and contradicts the first half of the article, where he attempted to claim that people are motivated by memes and ideas. “Psychological factors” map much more onto instincts than onto memes. In fact, supporting whatever makes you feel good requires no memetic variance at all — everyone can be exposed to exactly the same information and come away with what they intrinsically desire from it, just as in my explanation of the vaccine controversy. This is not theory Y.2.
He gives an example of Traumatic Psychopolitics:
The usual story is that the socialists wanted some extra money for social services, came up with the idea of taxing the rich, and - in order to defuse opposition to this idea - started talking about how the rich were parasites who didn’t deserve their money. This may have been true in some sort of original-position-state-of-nature that basically never happened. But if it was, it insulted (if you’ve read Psychopolitics Of Trauma, feel free to substitute “traumatized”) the rich, who then naturally reacted by lashing out and saying “No, you poors are the real parasites!” And this naturally insulted/traumatized the poor, who then redoubled their attacks on the rich to psychologically compensate. By the nth round of this cycle - ie all human history other than the original-state-of-nature - the mutual animus / self-defense / trauma-enactment was driving the cycle more than the original desire for money.
Or maybe it’s useful to think of this as happening in parallel rather than serially. In the old days, when lines of communication were few, this process only had a chance to go a couple of rounds before dying down; maybe things were a little more grounded in material reality. After the rise of the Internet and social media, everyone had the opportunity to instantaneously get attacked and insulted and traumatized by everyone at once, and to shoot back retorts of their own within seconds; the dynamic intensified. Anyone with an X account is living part of their life in a weird psychodrama where millions of bullies are brute-force-attempting to find the most enraging possible attack on the most intimate parts of their identity at all times. This will naturally multiply the importance of the psychological component of politics relative to the material one.
This sounds more like people behaving more according to animal instinct than rational thought. At no point does any of this support the idea that either side is misinformed — rather they all have the same information, but react differently due to different instincts.
He concludes:
So I think political persuasion is possible, both by reasoning about the issues themselves and by trying to address the underlying psychological needs.
This doesn’t follow from the article at all, and in fact, studies of political persuasion find it to be rather weak. Here is an article directly rebutting Y.2. Here is one rebutting Y.1 and Y.2, showing that “communication is much less influential than widely claimed.” From the manuscript above,
ps is the upper bound of the proportion of people who change position after being exposed to a meme. It assumes totally random meme exposure when the reality is that people who were going to go along with the meme anyway are more likely to seek it out. The effects tend to be extremely small.
From extended-family designs we find very high heritability for social attitudes and near-zero parent-to-child cultural transmission.
Conclusion: update your priors toward “genetic variance produces political variance.” While there is evidence for a short-term political cycle, it is probably an interaction between instinct, economics, and short-term population demographics like age structure and population density. This sits more or less outside the cognitive window of “mistake vs. conflict theory.”
How did we get here?
So why is Scott Alexander mistaken, at the logic layer as well as the empirical layer? The answer is that it’s hard to do logic with words — mathematics is basically the subset of language that one can perform logic on. And Scott does not engage deeply with the mathematical theory or the empirical evidence in this field.
Why? Well, we get some epistemological comments from Scott:
Political positions need to be explained in historical terms. This doesn’t make such a theory disprovable - the examples above at least claim to discredit conflict theory. But debate would have to be at a similarly careful level of analysis and not just a simple predictive checklist.
I hope this theory is natural enough that most people will be less interested in demanding a formal test than in discussing whether it effectively captures a position which is already widely shared but rarely put into words.
“Historical terms” is close to “should be a story.” But stories are what meteorology was in the days of Neptune; they are not what science is. And science is definitely not “${rationalistCommunity} user consensus” — yet this seems to be Scott’s epistemology when he says he’s less interested in demanding a “formal test” than in capturing a position that is “already widely shared but rarely put into words.”
But that is what bloggers do, is it not? That’s why they’re light on math and data but heavy on “influencing people” with words. How do you influence people with a pile of unevidenced claims? The readers had better already agree with you deep down. And I’m sure it’s a great way to become a huge blogger — readers really hate being contradicted. It’s an instinct!
I for one don’t think that random people who are not publishing scientific work on this topic have any special insight into what is going on any more than they do in physics or chemistry or any other science. So, this epistemology is flawed. Instead, we should follow the scientific method: form a measurable hypothesis, measure, verify, falsify, or iterate.
There are some people doing that, as I’ve discussed, and their results look entirely different from conflict vs. mistake theory.