Book Review: The Scout Mindset
In the introduction to this book, Galef introduces us to the concept of the scout mindset (TSM): the motivation to see things as they are, not as we wish they were. Galef tells us that this book is about the times in which we succeed in not fooling ourselves and what we can learn from those successes.
Quoting Galef's account of how she came to write it:

My path to this book began in 2009, after I quit graduate school and threw myself into a passion project that became a new career: helping people reason out tough questions in their personal and professional lives. At first, I imagined that this would involve teaching people about things like probability, logic, and cognitive biases, and showing them how those subjects applied to everyday life. But after several years of running workshops, reading studies, doing consulting, and interviewing people, I finally came to accept that knowing how to reason wasn't the cure-all I thought it was.
This reminded me of a quote from Ezra Klein's Why We're Polarized. In it, Klein states:
People invest their IQ in buttressing their own case rather than exploring the entire issue more fully and evenhandedly...People weren't reasoning to get the right answer, they were reasoning to get the answer that they wanted to be right...Among people who were already skeptical of climate change, scientific literacy made them more skeptical of climate change...It's a terrific performance of scientific inquiry. And climate change skeptics who immerse themselves in researching counterarguments, they end up far more confident that global warming is a hoax than people who haven't spent that time researching the issue. Have you ever argued with a 9/11 truther? I have and they are very informed about the various melting points of steel. More information can help us find the right answers, but if our search is motivated by aims other than accuracy, more information can mislead us, or more precisely, help us mislead ourselves. There's a difference between searching for the best evidence and the best evidence that proves us right.
Galef explains that her approach to adopting TSM has three aspects. The first is accepting that the truth isn't [necessarily] in conflict with our other goals, the second is learning tools that make it easier to see clearly, and the third is learning to appreciate the emotional rewards of TSM.
Chapter 1 opens with the story of Alfred Dreyfus. The book explains that Dreyfus was a Jewish member of the French military who was accused of leaking secrets to the German Embassy after a cleaning lady found a note indicating that someone was committing treason. Once this note was discovered, Dreyfus was used as a scapegoat, and people started coming up with post-hoc rationalizations for why it was definitely Dreyfus who was leaking secrets. Some flimsy evidence came to light, and enough people ran with it with enough conviction that Dreyfus was sentenced to life imprisonment. Dreyfus maintained his innocence throughout this time.
From this story, we are introduced to the concept of directionally motivated reasoning (or simply motivated reasoning), where our unconscious motives affect the conclusions we draw. Galef explains:
When we want something to be true...we ask ourselves, "Can I believe this?," searching for an excuse to accept it. When we don't want something to be true, we instead ask ourselves, "Must I believe this?," searching for an excuse to reject it.
Galef briefly mentions some military parlance that has made its way into the way we talk and think about our beliefs (e.g. changing our minds can feel like "surrendering", we can become "entrenched" in our beliefs, etc.). This leads us to another concept - the soldier mindset (TDM).
Back to the Dreyfus affair - a man named Georges Picquart was assigned to a counterespionage department and tasked with accumulating additional evidence against Dreyfus, in case the conviction was questioned. As he went about this task, some evidence came to light that suggested Dreyfus wasn't the spy people thought he was. Picquart pursued this new evidence which eventually led to Dreyfus being fully pardoned and reinstated to the army.
Galef then discusses the motivated reasoning that was used in the original trial - Dreyfus wasn't particularly well-liked, he was Jewish, etc. In contrast, Picquart was said to have demonstrated accuracy-motivated reasoning, a thought process in which ideas are filtered through the lens of "Is it true?"
In our relationships with other people, we construct self-contained narratives that feel, from the inside, as if they're simply objective fact. One person's "My partner is coldly ignoring me" can be another person's "I'm respectfully giving him space". To be willing to consider other interpretations - to even believe that there could be other reasonable interpretations besides your own - requires TSM.
This quote reminds me of Jonathan Haidt's The Righteous Mind. Unfortunately, I don't have a particular quote that I can cite from it, but the major (and frankly, pivotal) point that I took from it is that, charitably, people value different things and typically act according to those values. If someone doesn't value, say, fairness the way that I do, I don't know what to do or say to convince them that they should value it the same way. Going on about how X is unjust because it violates Y principle won't do much to convince someone if they don't really care about Y principle to begin with. As a result, I think the best way to convince someone of something is to appeal to the moral foundations that they actually do value in a way that they perhaps haven't considered.

I saw this play out once when a Democrat and a Republican were talking about gay marriage, and the Republican said she opposed gay marriage because she wanted the separation of church and state (after being initially confused as to why this would make her oppose gay marriage, I think her point was that she didn't want churches to be forced to perform gay weddings). The Democrat explained that opposing gay marriage meant tying marriage to something fundamental in religion, which is not separating church and state. The Republican ended up saying she hadn't thought about it like that before. Clip.

Coming to this realization (meeting people where they are in terms of what they value) has led me to become much more charitable in my interpretations of people's actions. I still believe some people are hypocritical, short-sighted, etc., but I find myself thinking things like, "Well, if they value X, which I have reason to believe they do, it makes sense that this would be their position" far more often than I did before. This leads me to view people as more consistent than I did previously (though not necessarily correct in their beliefs).
In chapter two, Galef explores the reasons why people adopt TDM. These reasons include comfort (avoiding unpleasant emotions), self-esteem (feeling good about ourselves), morale (motivating ourselves to do hard things), persuasion (convincing ourselves so we can convince others), image (choosing beliefs that make us look good), and belonging (fitting into our social groups).
Of note to me here is the example given for persuasion. Galef explains that Lyndon B. Johnson would, in an effort to convince people of something he needed them to believe when he didn't necessarily believe it himself, practice arguing "with passion, over and over, willing himself to believe it. Eventually, he would be able to defend it with utter certainty - because by that point, he was certain, regardless of what his views had been at the start." Galef later adds, "As Johnson used to say: 'What convinces is conviction.'" I have said previously that a "paucity of hedging indicates several things to me, virtually all unflattering", and I questioned whether people have really "won" an argument, or whether they just feel they have, when someone like me declines to engage with a person making statements with conviction. Applied here, was Johnson actually successful in convincing people of his positions, or were people letting him say what he wanted without confrontation while remaining unconvinced? I legitimately don't know the answer, though I suspect both happened to some degree.
For the point about image, I am somewhat suspicious of attributing beliefs to people based on the assumption that those beliefs make the person look good (which, to be clear, isn't necessarily what Galef is suggesting). She references Robin Hanson's Are Beliefs Like Clothes? in discussing this point. However, as I've said before, I've seen a lot of positions attributed to virtue signaling that I think people legitimately believe. I'm aware of preference falsification, where "if public opinion reaches an equilibrium devoid of dissent, individuals are more likely to lose touch with alternatives to the status quo than if dissenters keep reminding them of the advantages of change" (from Timur Kuran's Private Truths, Public Lies: The Social Consequences of Preference Falsification), and so I believe virtue signaling can and does happen. However, it's unclear to me how one can determine whether someone else is virtue signaling, and so I tend towards believing that people believe what they say unless I have a reason to think otherwise.
Galef closes this chapter by stating that TDM is often our default strategy, but that doesn't necessarily mean it's a good one. There are reasons for its existence, as discussed in this chapter, but next we will evaluate whether changing to TSM will allow us to "get the things that we value just as effectively, or even more so, without [TDM]."
In chapter three, Galef summarizes the functions of TSM vs. TDM. TSM allows people to see things clearly so they can make good judgment calls. TDM allows people to adopt and defend beliefs that provide emotional and social benefits. She makes the point that people can exemplify both mindsets at different times, leading to trade-offs. For example, someone might trade off between judgment and belonging in a situation where they fight off any doubts about their community's core beliefs and values so as to not rock the boat. People make these trade-offs all the time, and they tend to do so unconsciously, either furthering "emotional or social goals at the expense of accuracy" or "seeking out the truth even if it turns out not to be what we were hoping for."
Next, Galef explores whether people are actually any good at making these trade-offs. She borrows Bryan Caplan's term rational irrationality to analyze whether people are good "at unconsciously choosing just enough epistemic irrationality to achieve our social and emotional goals, without impairing our judgment too much." Her hypothesis, as you may have guessed, is that no, most people aren't rationally irrational. The biases that lead us astray in our decision making cause us "to overvalue [TDM], choosing it more often than we should, and undervalue [TSM], choosing it less often than we should." She argues the major benefit of adopting TSM "is in the habits and skills you're reinforcing." She also makes the point that our instinct is to undervalue truth, but that shouldn't be surprising, as "our instincts evolved in a different world, one better suited to the soldier." However, Galef believes that "more and more, it's a scout's world now."
In Chapter 4, we learn about the signs of a scout, and perhaps more importantly, about the things that make us feel like a scout even if we aren't.
The major warnings are that feeling objective and being smart and knowledgeable don't make you a scout. For the first point (feeling objective), she argues that people often think of themselves as objective because they feel objective, dispassionate, and unbiased, but being calm (for example) doesn't necessarily mean you're being fair. She warns, "the more objective you think you are, the more you trust your own intuitions and opinions as accurate representations of reality, and the less inclined you are to question them." She provides the (IMO stunning) example of when physicist Lawrence Krauss, a close friend of Jeffrey Epstein, was interviewed regarding the accusations against Epstein:
As a scientist I always judge things on empirical evidence and he always has women ages 19 to 23 around him, but I've never seen anything else, so as a scientist, my presumption is that whatever the problems were I would believe him over other people.
Galef criticizes this, stating:
This is a very dubious appeal to empiricism. Being a good scientist doesn't mean refusing to believe anything until you see it with your own two eyes. Krauss simply trusts his friend more than he trusts the women who accused his friend or the investigators who confirmed those accusations. Objective science, that is not. When you start from the premise that you're an objective thinker, you lend your conclusions an air of unimpeachability they usually don't deserve.
For the second point (being smart and knowledgeable), she argues that many people believe that others (and perhaps even they themselves) will come to the right (read: accurate) view on a topic if they gain more knowledge and reasoning ability. She cites a study done by Yale law professor Dan Kahan that surveyed Americans about their political views and their beliefs surrounding climate change. The survey found:
At the lowest levels of scientific intelligence, there's no polarization at all - roughly 33 percent of both liberals and conservatives believe in human-caused global warming. But as scientific intelligence increases, liberal and conservative opinions diverge. By the time you get to the highest percentile of scientific intelligence, liberal belief in human-caused global warming has risen to nearly 100 percent, while conservative belief in it has fallen to 20 percent.
Galef explains that "as people become better informed, they should start to converge on the truth, wherever it happens to be. Instead, we see the opposite pattern - as people become better informed, they diverge." The results of this survey and Galef's point recall Ezra Klein's quote mentioned earlier.
Next, Galef moves into things that you can do to make yourself a scout; namely, actually practicing TSM. She says that "the only real sign of a scout is whether you act like one" and explains the five signs of someone embodying TSM: telling other people when you realize they were right; reacting well to personal criticism (e.g. acting upon it in a constructive manner, welcoming it without retaliation, etc.); trying to prove yourself wrong; taking precautions to avoid fooling yourself; and searching out good critics for your ideas.
In chapter five, Galef introduces five common thought experiments people can use to help them notice bias. These tests include the double standard test (are you judging a person/group by a different standard than you would use for another person/group?), the outsider test (how would you evaluate the situation if it weren't your situation?), the conformity test (if other people no longer held this view, would you still hold it?), the selective skeptic test (if this evidence supported the other side, how credible would you judge it to be?), and the status quo bias test (if your current situation were not the status quo, would you actively choose it?). However, Galef cautions that thought experiments aren't oracles; they can't tell you what's true or fair, or what decision you should make.
While I believe these tests are all useful (and use them myself from time to time!), I believe they have limitations beyond what is discussed in the book. For example, for the double standard test, Galef states, "If you notice that you would be more forgiving of adultery in a Democrat than a Republican, that reveals you have a double standard." On the one hand, this could be true for some people. On the other, there is a distinct difference between judging someone based on your beliefs and judging someone based on their beliefs. I asked a question here about hypocrisy recently that I think alludes to this. To the extent that I personally care about any politician remaining faithful, I think it is absolutely fair to care more about the infidelity of someone who, for example, says they espouse family values (and people who say this trend Republican), because I think it's fair to care about hypocrisy.
I have seen the selective skeptic test in action many times in gender politics debates. For example, I've pointed out that I found it a little odd that pretty much all rape studies have been dissected for one reason or another by many non-feminists, but the one study that shows men and women are raped in roughly equal amounts is held as gospel by some of those same non-feminists, despite the fact that other parts of that same study are routinely dismissed. Another example: prior to 2015, I saw many feminists (including myself!) touting this study, and few on the non-feminist side paying much mind to it (I do recall a non-feminist acquaintance with an active interest in men's issues saying it was a damning study, however). In 2015, this study came out, and I saw many on the non-feminist side posting it basically everywhere I cared to venture online, and few feminists (including myself!) mentioning it. Scott wrote about this, pointing out the differences between the studies. Regardless, I see this as a very poignant example of the selective skeptic test playing out in real time (a test I have failed myself...).
Chapter 6 is relatively brief. In it, Galef discusses quantifying our uncertainties about beliefs. She provides a test with which you can check how well calibrated your knowledge of your own uncertainties is (if you're interested, my results are here. The orange line is where you should be if you're perfectly calibrated. Points above it indicate answering more questions correctly than expected, and points below it indicate answering more questions incorrectly than expected). She also provides an example of using a bet to help you quantify your certainty about something. If someone offered you $100 if X were to happen within Y timeframe (the example given is self-driving cars coming to market within a year), would you take that bet, or would you rather bet on pulling the one grey ball out of a sack that also contains three orange balls? How about if it contains seven orange balls? If you'd prefer the ball bet when your chance of drawing grey is 1/4, but prefer the car bet when the chance drops to 1/8, you can narrow your confidence in the car prediction down to less than 25% but more than 12.5%.
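This equivalent-bet comparison generalizes into a simple interval-halving procedure. Here's a minimal sketch (my own illustration, not code from the book): each bet comparison tells you whether your confidence sits above or below the ball bet's probability, halving the range of possible values.

```python
def equivalent_bet_interval(prefers_event_bet, lo=0.0, hi=1.0, rounds=3):
    """Narrow down your confidence in an event via repeated bet comparisons.

    prefers_event_bet(p) should return True if you'd rather bet $100 on the
    event than on drawing the single grey ball from a sack where the chance
    of grey is p. Each answer halves the remaining interval.
    """
    for _ in range(rounds):
        mid = (lo + hi) / 2
        if prefers_event_bet(mid):
            lo = mid  # the event bet feels better, so confidence > mid
        else:
            hi = mid  # the ball bet feels better, so confidence < mid
    return lo, hi

# Simulate someone whose gut-level confidence in the prediction is about 18%:
print(equivalent_bet_interval(lambda p: 0.18 > p))  # (0.125, 0.25)
```

Three comparisons already bracket the 18% gut feeling into the book's "more than 12.5%, less than 25%" interval; more rounds narrow it further.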
In chapter 7, Galef talks about coping with reality and the differences in the ways scouts handle setbacks compared to soldiers. She starts with the example of Steven Callahan, whose ship capsized during a solo voyage in the Atlantic Ocean. Callahan did the only thing he could do; he set off for the nearest landmass - the Caribbean islands, 1,800 miles away. During this time, Callahan faced extremely difficult decisions several times a day; for example, should he use a flare gun if he saw a ship that could potentially see him in return, or should he wait for the chance of passing one at a closer distance? Eventually, Callahan made it to the shores of Guadeloupe and was rescued. Galef explains that:
The trait that saved Callahan was his commitment to finding ways of keeping despair at bay without distorting his map of reality. He counted his blessings...He reminded himself that he was doing everything possible...And he found ways to calm his fears of death, not by denying it, but by coming to terms with it.
These traits, Galef explains, are coping strategies that don't require self-deception. Soldiers, however, have coping strategies of their own, including self-justification, denial, false fatalism, and sour grapes.
To better train yourself to adopt TSM, you can hone different skills for dealing with setbacks and their accompanying emotions. These include making a plan, making a point of noticing silver linings, focusing on a different goal, and recognizing that things could be worse.
Lastly, Galef discusses the research surrounding happiness and self-deception. Namely, she says:
The fact that the 'self-deception causes happiness' research is fatally flawed doesn't prove that self-deception can't cause happiness. It clearly can, in many cases. It just comes with the downside of eroding your judgment. And given that there are so many ways to cope that don't involve self-deception, why settle?
Chapter 8 is a relatively interesting chapter, though there isn't much to say about it in summary form. Galef discusses motivation without self-deception. She explains that an accurate picture of your odds can help you choose between goals. She encourages readers to consider the pursuit of a goal while asking, "Is this goal worth pursuing, compared to other things I could do instead?" She also states that an accurate picture of the odds can help you adapt your plan over time. She provides the example of Shellye Archambeau, who was determined to become CEO of a major tech company. Archambeau was climbing the ranks around the time the dot-com bubble burst. She recognized the bad timing of trying to fulfill her original dream at a time when Silicon Valley was flooded with highly sought-after executives. She acknowledged this and changed her goal - she became determined to become CEO of a tech company, dropping the requirement that it be a major one. When she did this, she ended up being hired as CEO of Zaplet, Inc., which was almost bankrupt at the time. She eventually grew the company into MetricStream, which is now worth over $400 million. Galef says that an accurate picture of the odds can help you decide how much to stake on success. She also explains that accepting inevitable variance gives you equanimity. She states, "As long as you continue making positive expected value bets, that variance will mostly wash out in the long run."
Chapters 9 through 12 are where I found statements that I feel are of particular import. In chapter 9, Galef differentiates between two types of confidence - epistemic confidence (certainty about what's true) and social confidence (self-assurance). She explains that we tend to conflate the two, assuming they come as a package deal. However, this isn't always (or even commonly!) the case. She provides the example of Benjamin Franklin, a man who was brimming with social confidence but displayed an intentional lack of epistemic confidence. Galef states:
It was a practice he had started when he was young, after noticing that people were more likely to reject his arguments when he used firm language like certainly and undoubtedly. So Franklin trained himself to avoid those expressions, prefacing his statements instead with caveats like "I think..." or "If I'm not mistaken..." or "It appears to me at present..."
This is a way of talking that I endorse and I find it particularly pleasant when engaging with others who do the same.
Next, Galef explains that people tend to judge others on social confidence, not epistemic confidence. That is, she assures the reader that saying something like "I don't know if this is the right call" has less of an impact on people's perception of your confidence than delivering "This is the right call" without a confident, factual vocal tone. She also says that there are two different types of uncertainty, and people react differently to them. The first type of uncertainty is due to your ignorance or inexperience (e.g. a doctor saying, "I've never seen this before"), and the second type is due to reality being messy and unpredictable (e.g. a doctor saying, "Having X and Y puts you in a higher risk category for this disease, but it's not easy to determine which risk group, given other factors such as A and B"). The best way to express uncertainty of the second kind is to show that the uncertainty is justified, give informed estimates, and have a plan to address other people's concerns about the uncertainty itself. Doing so allows you to be inspiring without overpromising.
In chapter 10, we move on to the broader topic of changing one's mind and, more specifically, how to be wrong. Galef mentions the work done by Philip Tetlock in measuring people's ability to forecast global events. There was a small group of people who did better than random chance - these people were dubbed superforecasters (incidentally, if you haven't read Superforecasting: The Art and Science of Prediction by Tetlock and Dan Gardner, I highly recommend it). Superforecasters have specific traits that make them good at predicting things, even if they aren't necessarily experts in any field of particular relevance to the prediction. These traits include changing their minds a little at a time (making subtle revisions as they learn new information, thereby effectively navigating complex questions as though they're captains steering a ship), recognizing when they are wrong and reevaluating their process (some people tend to say they would have been right if conditions had been different, but superforecasters don't tend to think this way), and learning domain-general lessons (working to improve their judgment in general). Galef goes on to explain the difference between "admitting a mistake" and "updating". People tend to view saying "I was wrong" as equivalent to saying "I screwed up". However:
Scouts reject that premise. You've learned new information and come to a new conclusion, but that doesn't mean you were wrong to believe differently in the past. The only reason to be contrite is if you were negligent in some way. Did you get something wrong because you followed a process you should have known was bad? Were you willfully blind or stubborn or careless?...You don't necessarily need to speak this way. But if you at least start to think in terms of "updating" instead of "admitting you were wrong," you may find that it takes a lot of friction out of the process. An update is routine. Low-key. It's the opposite of an overwrought confession of sin. An update makes something better or more current without implying that its previous form was a failure.
Galef mentions one of Scott's posts, Preschool: I Was Wrong, in which he provides an example of revising one's beliefs in response to new evidence and arguments. She states that if you're not changing your mind at times, you're doing something wrong, and that "knowing that you're fallible doesn't magically prevent you from being wrong. But it does allow you to set expectations early and often, which can make it easier to accept when you are wrong."
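The superforecasters' habit of revising beliefs a little at a time has a natural quantitative picture: repeated small Bayesian updates. A toy sketch (the probabilities here are invented for illustration, not taken from the book):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a claim after observing one piece of evidence."""
    hit = prior * p_evidence_if_true
    return hit / (hit + (1 - prior) * p_evidence_if_false)

# A forecaster starts at 60% and sees three mildly supportive clues,
# each twice as likely to appear if the claim is true (2:1 likelihood ratio).
belief = 0.60
for _ in range(3):
    belief = bayes_update(belief, 0.2, 0.1)
print(round(belief, 3))  # 0.923
```

Each clue moves the estimate from 60% to 75% to roughly 86% to roughly 92% - a series of routine, low-key updates rather than a single dramatic reversal, which is exactly the "updating" framing Galef contrasts with "admitting you were wrong".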
Chapter 11 is also relatively short. Galef encourages readers to lean in to confusion. She wants people "to resist the urge to dismiss details that don't fit your theories, and instead, allow yourself to be confused and intrigued by them, to see them as puzzles to be solved."
She explains that if people's actions or behaviors surprise you, then shrugging off or explaining away the times when they violate your expectations is exactly the wrong thing to do, but it is something people commonly do to avoid having to update.
Galef discusses the idea that while there are times in which a single observation can change one's worldview, it is more often the accumulation of many puzzling observations that changes one's mind - a paradigm shift, as described by Thomas Kuhn in The Structure of Scientific Revolutions. She provides the example of a woman involved in a multi-level marketing (MLM) scheme who began to notice that the promises and stories she was told didn't seem to match reality. The accumulation of these observations eventually led her to leave the MLM company she had joined.
Leaning in to confusion is about inverting the way you're used to seeing the world. Instead of dismissing observations that contradict your theories, get curious about them. Instead of writing people off as irrational when they don't behave the way you think they should, ask yourself why their behavior might be rational. Instead of trying to fit confusing observations into your preexisting theories, treat them as clues to a new theory.
In chapter 12, we learn about the importance of escaping our echo chambers, but also the importance of doing so in a mindful way. Galef starts by discussing "a Michigan magazine that attempted a two-sided version of the 'escape your echo chamber' experiment. It recruited one couple and one individual with very different views from each other who agreed to exchange media diets for one week." The liberals were two professors who were fans of NPR, the New York Times, and Jezebel. The conservative was a retired engineer who supported Donald Trump and was a fan of the Drudge Report and The Patriot. In the experiment, the liberals were to consume the media of the conservative man, and vice-versa, for a week. What was the main takeaway from the participants? "Everyone had learned that the 'other side' was even more biased, inaccurate, and grating than they previously believed." Another similar study, in which liberal users were exposed to a conservative Twitter bot and vice-versa for a month, found that participants' views had not been moderated by the foray outside of their echo chambers. Instead, conservatives became dramatically more conservative, and liberals became slightly more liberal (though the latter effect wasn't statistically significant). Galef explains that the real takeaway isn't, "Don't leave your echo chamber"; it's that:
To give yourself the best chance of learning from disagreement, you should be listening to people who make it easier to be open to their arguments, not harder. These tend to be people you like or respect, even if you don't agree with them; people with whom you have some common ground (e.g. intellectual premises, or a core value that you share), even though you disagree with them on other issues; and people whom you consider reasonable, who acknowledge nuance and areas of uncertainty, and who argue in good faith.
Sound familiar? :)
Galef moves into discussing a subreddit where I spent considerable amounts of time (at least when this book was being written) - /r/FeMRADebates. She explains some of the subreddit's rules that were successful early on in getting different gender politics groups to come together to debate and discuss issues - don't insult others, don't generalize, state specific disagreements with people or views, etc. I'll note that I've been less impressed with the subreddit the past few years, though this is most likely a result of the Evaporative Cooling of Group Beliefs.
Galef mentions another of Scott's posts - Talking Snakes: A Cautionary Tale - in which Scott recalls a conversation with a woman who was shocked that he believed in evolution, like one of those "crazy people". After discussing the matter with her in greater detail, it became clear that the woman's understanding of evolution was not at all sound. Galef uses this story to ask:
...are you sure that none of the absurd-sounding ideas you've dismissed in the past are also misunderstandings of the real thing? Even correct ideas often sound wrong when you first hear them. The thirty-second version of an explanation is inevitably simplistic, leaving out important clarifications and nuance. There's background context you're missing, words being used in different ways than you're used to, and more.
Next, we move into how beliefs become identities. Galef explains that there is a difference between agreeing with a belief and identifying with it. However, there are two things that can turn a belief into an identity: feeling embattled and feeling proud. She says, "Being mocked, persecuted, or otherwise stigmatized for our beliefs makes us want to stand up for them all the more, and gives us a sense of solidarity with the other people standing with us." The example she provides for this is the breastmilk vs. formula debate among certain parenting circles. She says:
Formula-feeders feel like they're constantly on the defensive, forced to explain why they're not breastfeeding and feeling judged as bad mothers, silently or openly...Breastfeeders feel embattled too, for different reasons. They complain about a society set up to make life difficult for them, in which most workplaces lack a comfortable place to pump breast milk, and in which an exposed breast in public draws offended stares and whispers. Some argue that this is a more significant form of oppression than that faced by the other side. "Because, let's face it...while you may feel some mom guilt when you hear 'breast is best', no one has ever been kicked out of a restaurant for bottle feeding their baby."
She explains that feeling proud and feeling embattled can play into each other; basically, some people might sound smug or superior when talking about a particular belief they hold, but that might be an understandable reaction to the negative stereotypes they feel constantly barraged by.
Galef explains that there are signs that indicate a belief might form part of someone's identity. These signs include using the phrase "I believe", getting annoyed when their ideology is criticized, using defiant language, using a righteous tone, gatekeeping, schadenfreude, using epithets, and feeling like they have to defend their view. I found this section to be iffy given some previous parts of the book. In this writeup, I have linked to a defense of the use of couching terms like "I think that..." or "I believe that..." as a way to signal an opinion rather than a fact. While I don't think Galef is saying that anyone who says "I believe..." is following it up with a piece of their identity, the way this section is written seems to contradict what she has defended elsewhere. I also think there can be value in gatekeeping that doesn't come from a place of shoring up one's identity. I have previously commented on the use of the word TERF to describe anyone who is vaguely transphobic. If you believe words have meaning, it is fair to critique someone's use of those words, particularly when the stakes of what is being said are high or the usage can lead to confusion.
Galef explains that people should hold their identities lightly; that is, they should view their identities in a "matter-of-fact way, rather than as a central source of pride and meaning...It's a description, not a flag."
She mentions Bryan Caplan's ideological Turing test as a way to determine whether you really understand someone's ideology. She explains that while the ideological Turing test is partially a test of knowledge, it also acts as an emotional test: are you holding your own identity lightly enough to avoid caricaturing your ideological opponents? She says that a strongly held identity prevents you from persuading others, and that understanding the other side is what makes it possible to change minds. She provides a quote from Megan McArdle: "The better your message makes you feel about yourself, the less likely it is that you are convincing anyone else." Galef then discusses some examples of how different types of activism score on the identity/impact dimensions. For example, effective protests can have a fairly moderate impact on change, but they also strongly reinforce identity. Venting to like-minded people doesn't really have an impact on change, but it can lightly reinforce identity. She explains:
Holding your identity lightly doesn't mean always choosing cooperation over disruption...To be an effective activist you need to be able to perceive when it will be most impactful to cooperate, and when it will be most impactful to disrupt, on a case-by-case basis.
Chapter 15 and Conclusion
Chapter 15 and the Conclusion are also relatively brief. Galef ties together many of the thoughts she has explained in this book. She briefly discusses effective altruism and states that "It's freeing to know that among effective altruists, disagreeing with the consensus won't cost me any social points, as long as I'm making a good-faith effort to figure things out." This is a sentiment I have felt participating in /r/themotte (and the rare time I venture over, /r/slatestarcodex too). She explains that in turning to TSM, you may need to make some choices - choices regarding what kind of people you attract, your online communities, and your role models.
Galef concludes by saying that you don't necessarily need to give up happiness to face reality. If you work on developing TSM, you will develop tools that help you cope with fear and insecurity, persevere in the face of setbacks, and fight effectively for change, all while understanding and working with what's real.
In summary, I give this book a solid 4/5 stars. It was engaging and thoughtful, and it made me think about some things in ways I hadn't considered before. The book is also relatively short (273 pages, including appendices, notes, etc.), so it's easy to recommend to others. That said, I don't consider it a must-read, though I do consider it a should-read for anyone interested in this kind of content or who wants to refresh their understanding of epistemology. The book's biggest weakness is that it is told almost exclusively through anecdotes. I don't fault Galef for this, as the book isn't intended to be original research, but it does make me think about the examples that didn't make it in (e.g. for the story about Callahan, how many people did exactly what he did but didn't survive? There's a form of survivorship bias at play that goes undiscussed). Of course, the book has to be finite, so at some point Galef must limit the examples she discusses, and the anecdotes are a large part of what makes the book interesting in the first place. I also think there are some minor contradictions throughout the book, though those can largely be avoided with some care on the part of the reader.