Reversals
Key Takeaway: Preference reversals — choosing A over B in single evaluation but preferring B in joint evaluation — reveal that our judgments are governed not by stable preferences but by context-dependent evaluations. Dolphins receive more donations than farmworkers in single evaluation (dolphins rank higher among endangered species than skin cancer ranks among public health issues), but the ordering reverses in joint evaluation, when the "human vs. animal" feature becomes salient. These reversals pervade jury awards, regulatory penalties, consumer choices, and every domain where items from different categories are evaluated one at a time.
Chapter 33: Reversals
Summary
This chapter demonstrates that human preferences are not stable internal states but context-dependent constructions that change depending on whether options are evaluated alone (#singleevaluation) or together (#jointevaluation). The core finding: judgments are coherent within categories but potentially incoherent across categories, and life mostly presents us with single evaluations (between-subjects), not comparisons.
The dolphin-vs-farmworker example is the chapter's centerpiece. In single evaluation, dolphins receive larger donations than farmworkers because dolphins rank high among endangered species while skin cancer in farmworkers ranks low among public health issues. Each cause is evaluated within its own category, and the within-category ranking drives the dollar amount through intensity matching. But in joint evaluation, the decisive feature — farmworkers are human, dolphins are not — becomes salient and reverses the preference. "The narrow framing of single evaluation allowed dolphins to have a higher intensity score."
Hsee's #evaluability hypothesis explains the music dictionary reversal (Dictionary A: 10,000 entries, like new; Dictionary B: 20,000 entries, torn cover). In single evaluation, A is preferred because condition is evaluable (you know what "like new" means) while 10,000 vs. 20,000 entries is not (you don't know if 10,000 is a lot). In joint evaluation, the entry count becomes evaluable through comparison, and B's superiority on the more important dimension becomes obvious.
The legal implications are profound: mock jurors awarded the burned child less than the defrauded bank in single evaluation (anchored on the financial loss amount), but reversed when shown both cases together (sympathy for the child overwhelmed the financial anchor). Yet jurors are "explicitly prohibited from considering other cases" — the legal system mandates single evaluation, guaranteeing the incoherence that joint evaluation would correct.
Sunstein's analysis of regulatory penalties shows the same pattern at the institutional level: within each agency, penalties are sensible (more severe violations get larger fines). But across agencies, fines are incoherent: a "serious" worker safety violation is capped at $7,000, while a Wild Bird Conservation Act violation can reach $25,000. The fines are products of separate legislative processes (single evaluation by different committees at different times), not a comprehensive assessment of societal priorities.
For the library, the evaluability insight explains why specific, vivid claims outperform vague ones in every persuasion context. Hormozi's emphasis in $100M Offers on quantifying value ("this will save you $50,000/year") makes the benefit evaluable; a vague "this will improve your business" is not evaluable in isolation and gets underweighted. Berger's #triggers concept in Contagious works because triggered products are automatically placed in a comparison context (joint evaluation with the trigger), making their distinctive features salient.
Key Insights
Preferences Are Constructed, Not Retrieved — Evaluations depend on which features are salient, which in turn depends on the comparison context. The same person can prefer A to B in isolation and B to A when compared — not from confusion, but because different features dominate in each context.
Life Is a Between-Subjects Experiment — We normally encounter options one at a time (single evaluation), which makes within-category ranking the dominant determinant. The cross-category comparisons that would correct incoherence require joint evaluation, which life rarely provides.
Evaluability Determines Influence — Features that are meaningless in isolation (10,000 entries, 20,000 entries) become decisive in comparison. This means that attributes which should matter most (number of entries) may matter least when presented alone.
The Legal System Mandates Incoherence — Jurors cannot consider other cases; regulatory penalties are set by separate agencies; compensation decisions are made case-by-case. Each institution produces internally coherent but globally incoherent outcomes.
Key Frameworks
Single vs. Joint Evaluation — Single: options evaluated one at a time, within their own category, governed by emotional intensity and within-category ranking. Joint: options compared directly, revealing cross-category features that were invisible in single evaluation. Joint evaluation is generally more rational (broader frame), but single evaluation is how life usually works.
The Evaluability Hypothesis (Hsee) — An attribute influences judgment only if it is "evaluable," meaning the decision-maker has a reference frame for interpreting its value. Number of dictionary entries is not evaluable alone but becomes evaluable in comparison; condition is always evaluable. Attributes that are most important objectively may be least influential in single evaluation because they are hard to evaluate without a comparison.
Direct Quotes
[!quote]
"We normally experience life in the between-subjects mode, in which contrasting alternatives that might change your mind are absent."
[source:: Thinking, Fast and Slow] [author:: Daniel Kahneman] [chapter:: 33] [theme:: singleevaluation]
[!quote]
"It is often the case that when you broaden the frame, you reach more reasonable decisions."
[source:: Thinking, Fast and Slow] [author:: Daniel Kahneman] [chapter:: 33] [theme:: jointevaluation]
Action Points
- [ ] Make important attributes evaluable by providing comparison context: When presenting products, proposals, or data, include benchmarks that make abstract numbers meaningful. "10,000 entries" means nothing; "10,000 entries — 50% more than the industry standard" makes it evaluable.
- [ ] Force joint evaluation for consequential decisions: When setting prices, penalties, compensation, or resource allocation, compare across categories rather than evaluating each case in isolation. The global coherence check reveals absurdities that single evaluation hides.
- [ ] Beware of emotional intensity substituting for importance in single evaluation: Dolphins outrank farmworkers in single evaluation because they're more emotionally engaging, not because they're more important. In your own decisions, ask: "Would this priority survive comparison with alternatives from other categories?"
- [ ] Use the evaluability principle in persuasion: Make your key differentiators evaluable by providing the reference frame your audience needs. Don't assume they know what "99.9% uptime" means — show them the industry average.
Questions for Further Exploration
- If the legal system mandates single evaluation (jurors can't consider other cases), should sentencing guidelines serve as a form of forced joint evaluation?
- How should organizations structure budget allocation to prevent the within-category coherence / across-category incoherence problem?
- The evaluability hypothesis suggests that the most important features may be least influential in isolated decisions. How does this affect hiring, where candidates are often evaluated one at a time?
Personal Reflections
Space for your own thoughts, connections, disagreements, and applications.
Themes & Connections
Tags: #preferencereversals #singleevaluation #jointevaluation #evaluability #narrowframing #categories #contextdependence #punitivedamages
Cross-book connections:
- $100M Offers Ch 5-8 — Hormozi's value quantification makes benefits evaluable; vague claims remain unevaluable in single evaluation
- Contagious Ch 2 — Berger's triggers create joint evaluation contexts that make products salient
- Influence Ch 1-2 — Cialdini's contrast principle forces joint evaluation of sequential options