A Whirlwind Tour of LW Rationality: 6 Books in 32 Pages - Single Page Edition

(Base PDF version. Current version: 1.0.0. The 32-page count does not include the title and introduction.)

(Split Version of this tour, spread over seven shorter posts, for people who don’t like scrolling or want to be able to more easily link to parts of it)

(Feedback is very welcome; I’m not the best writer, and I’d be happy to credit anyone who has better ways to phrase things or even wants to rewrite entire sections. The best approach is probably to add comments to the source Google Document, or to email me if you want to clear a larger piece of work before starting. The email address is “john”, at this website’s domain, excluding the subdomain.)

This is an attempt to summarise the propositions of the online ‘rationalist’ community, originally centred around Overcoming Bias and LessWrong, now largely dispersed to various communities and organisations like Slatestarcodex, the Center for Applied Rationality, the Machine Intelligence Research Institute, and the Effective Altruism movement, amongst others.

I have held off on writing this in the past out of a suspicion that I would not do it justice, but have decided that it is better done badly than not at all. My apologies to Eliezer Yudkowsky for mangling their work. I have reservations about parts of it, but in gist I agree with it or find it plausible.

The structure is that of a whirlwind tour, with little narrative beyond the ordering of the propositions, and with citations to the source post for each to permit drilling down into interesting or contentious parts and reading the community’s existing critique.

This is to enable useful examination of the ideas and their assumptions by people who have things to do other than reading millions of words on the topic, to permit those who have picked up ideas from the community to see their surrounding context and related ideas, and to serve as an index to enable those who disagree to identify their points of departure.

Hurrying Along, What Is “LW Rationality”?

So, roughly a mixture of analytic philosophy and pop cognitive science. The basic attitude to human cognition is that of Kahneman’s Thinking, Fast and Slow, which I recommend. The consensus reference for LW rationality itself is Eliezer Yudkowsky’s core Sequences: blog posts with examples and stories of a transhumanist, speculative flavour. They have since been collected into the book Rationality: From AI to Zombies, which is available for free and is the best place to start if seeking a fuller understanding of the propositions here. A description of how it compares to and connects with academia, with references to related works and research, was written by lukeprog.

Some parts of these are relatively well accepted; others proved controversial within the community. The tour follows, roughly a page per sequence, using the book’s ordering of the sequences.

Map and Territory A: Predictably Wrong

Epistemic rationality is using new evidence to improve the correspondence between your mental map and the world. Instrumental rationality is effectively accomplishing your goals. (What Do We Mean By Rationality?)

Rationality does not conflict with having strong feelings about true aspects of the world. (Feeling Rational)

Epistemic rationality is useful if you are curious, if you want to be effective, or if you regard it as a moral duty, the last of which can be problematic. (Why Truth? And…) A bias is an obstacle to epistemic rationality produced by the ‘shape’ of our mental machinery. We should be concerned about any obstacle. (…What’s A Bias, Again?)

We use an availability heuristic to judge the probability of something by how easily examples of it come to mind. This is imperfect, creating the availability bias. Selective reporting is a major cause. (Availability)

We use a judgement of representativeness to judge the probability of something by how typical it sounds. This suffers from the conjunctive bias, where adding more details increases perceived probability. (Burdensome Details)

We tend to examine only the scenario where things go according to plan. This suffers from the planning fallacy, in which difficulties and delays are underestimated. (Planning Fallacy)

We use our own understanding of words to evaluate how others will understand them. This underestimates differences in interpretation, leading to the illusion of transparency. (Illusion Of Transparency: Why No One Understands You)

Inferential distance is the amount of explanation needed to communicate one person’s reasoning to another. We routinely underestimate it, because background knowledge now differs between people far more than it did in the past. (Expecting Short Inferential Distances)

A metaphor for the human brain is a flawed lens that can see its own flaws. (The Lens That Sees Its Flaws)

Map and Territory B: Fake Beliefs

A belief should be something that tells you what you expect to see; it should be an anticipation-controller. (Making Beliefs Pay Rent (In Anticipated Experiences))

Taking on a belief can carry social implications, and this results in a variety of compromises to truth-seeking. (A Fable Of Science And Politics) It is possible to believe you have a belief while truly expecting to see the opposite; this is belief-in-belief. (Belief In Belief, Bayesian Judo) Holding a neutral position on a question is a position on it like any other. (Pretending To Be Wise)

Religious claims to be non-disprovable metaphor are a socially-motivated backdown from what were originally beliefs about the world, with claims to ethical authority remaining because they have not become socially disadvantageous. (Religion’s Claim To Be Non-Disprovable) At other times, we can see socially-motivated claims of extreme beliefs, as a way to cheer for something. (Professing And Cheering)

Belief as attire is belief that is professed in order to show membership of a group. (Belief As Attire)

Some statements exist simply to tell the audience to applaud and do not actually express any belief; we call these applause lights. (Applause Lights)

Map and Territory C: Noticing Confusion

When uncertain, we want to focus our anticipation into the outcome which will actually happen as much as possible. (Focus Your Uncertainty)

It means exactly what you think it means for a statement to be true. Evidence is an event entangled, by cause and effect, with what you want to know about. Things that react to that event can become entangled with what you want to know about in turn. Beliefs should be determined in a way that makes them entangled, as this is what makes them accurate. It must be conceivable that different observations would have led you to believe otherwise. (What Is Evidence?)

Scientific evidence and legal evidence are subsets of rational evidence. (Scientific Evidence, Legal Evidence, Rational Evidence)

The amount of entanglement needed to justify a strong belief depends on how improbable the hypothesis was to begin with, which is related to the number of possible hypotheses. (How Much Evidence Does It Take?, Einstein’s Arrogance)
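In the odds form of Bayes’ theorem that the cited posts rely on, each piece of evidence multiplies your odds by a likelihood ratio (the numbers below are chosen purely for illustration):

$$ \frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)} $$

Measured in bits (log base 2 of the likelihood ratio), a hypothesis that starts out as one of roughly 2^30 ≈ 10^9 equally plausible candidates needs about 30 bits of cumulative evidence just to reach even odds.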

Occam’s Razor is the principle that the correct explanation is the simplest that fits the facts. ‘Simplest’ must be measured as the length of the shortest program that fully specifies the explanation (or simulates a universe in which it holds), not as English sentence length. Solomonoff Induction is a formalisation of this; one variant predicts sequences by assigning each program a base probability of 2^(−bit length) and then weighting by how well its predictions fit. This definition penalises an explanation exactly to the extent that it simply embeds a copy of the observations, and so only rewards explanations which compress the observations. (Occam’s Razor)
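As a sketch of the variant described above (the notation is mine, not the post’s):

$$ P(\text{data}) \;=\; \sum_{p \;:\; p \text{ outputs data}} 2^{-\ell(p)} $$

where the sum ranges over programs $p$ and $\ell(p)$ is the bit length of $p$; each program starts with prior weight $2^{-\ell(p)}$ and keeps that weight only so long as its predictions fit the observed sequence.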

Your strength as a rationalist is your ability to notice confusion; your sense that your explanation feels forced. (Your Strength As A Rationalist)

Absence of evidence is evidence of absence. If something being present increases your probability of a claim being true, then its absence must decrease it, in amounts depending on how likely the presence was in either case. (Absence Of Evidence Is Evidence Of Absence) There is conservation of expected evidence. (Conservation Of Expected Evidence)
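The law behind both cited posts can be written as a single identity:

$$ P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \lnot E)\,P(\lnot E) $$

Since the prior is a weighted average of the two possible posteriors, if observing E would raise your probability of H, then failing to observe E must lower it, by an amount that depends on how likely E was in the first place.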

We have a hindsight bias which makes us think we already believed something when we read it. (Hindsight Devalues Science)

Map and Territory D: Mysterious Answers

A fake explanation is an explanation that can explain any observation. (Fake Explanations) Using scientific-sounding words in one is using science as attire, and not actually adhering to science. (Science As Attire) After seeing a thing happen, we tend to come up with explanations for how it was caused by a phenomenon, even when we couldn’t have predicted it ahead of time from our knowledge of that phenomenon. This is fake causality and made hard to notice by the hindsight bias. The hindsight bias is caused by failing to exclude the evidence we get from seeing a claim when evaluating how likely we thought it was before we saw it. (Fake Causality)

Positive bias is attempting to confirm rather than disconfirm theories, which fails to properly test them. (Positive Bias: Look Into The Dark)

There is a normal human behaviour when asked to proffer an explanation where we pull out phrases and offer them without a coherent model. We call this guessing the teacher’s password. (Guessing The Teacher’s Password) A good way to examine whether you truly understand a fact rather than have it memorised as a password answer is to ask whether you could regenerate it if forgotten. (Truly Part Of You)

It is not necessary to counter irrationality with irrationality, or randomness with randomness, despite this being the intuitive thing to do as a human. (Lawful Uncertainty)

A fake explanation often serves as a sign to end further examination despite containing no real understanding, in which case it is a semantic stopsign or curiosity-stopper. (Semantic Stopsigns) We should not expect answers to be ‘mysterious’, even for ‘mysterious questions’, such as the cause of fire or life. (Mysterious Answers To Mysterious Questions) Any time humans encounter a phenomenon, they can choose to try to explain it, worship it, or ignore it. (Explain/Worship/Ignore?)

The term ‘emergence’ is a contemporary fake explanation and semantic stopsign. (The Futility Of Emergence) The word ‘complexity’, invoked as a desirable ingredient to add, can be another. It is tempting to assign fake explanations to the mysterious parts of whatever you are trying to understand. This must be resisted. (Say Not Complexity)

Eliezer failed at this in his earlier days, despite knowing to reject the standard ‘fake explanations’; it takes a lot of improvement not to simply find new, interesting mistakes instead of the old ones. (My Wild And Reckless Youth) Solving a mystery should make it feel less confusing, but it is difficult to learn from history what believing the old fake explanations felt like from the inside, which is what would let us recognise new ones. (Failing To Learn From History) Trying to visualise yourself believing in ideas like “elan vital”, without being able to immediately see your error, may help. (Making History Available)

Explanations like ‘Science’ can serve as curiosity-stoppers, by telling us that someone else knows the answer. (“Science” As Curiosity-Stopper)

How To Actually Change Your Mind E: Overly Convenient Excuses

Humility is a complicated virtue, and we should judge it by whether applying it makes us stronger or weaker, and by whether it is an excuse to shrug. To be correctly humble is to take action in anticipation of one’s own errors. (The Proper Use Of Humility)

A package deal fallacy is where you assume things traditionally grouped together must always be so. A false dilemma is presenting only two options where more exist. Justifications for noble lies are usually one of the two; it is preferable to seek a third alternative, which may be less convenient. (The Third Alternative)

Human hope is limited and valuable, and the likes of lotteries waste it. (Lotteries: A Waste Of Hope, New Improved Lottery) There is a bias in which extremely tiny chances are treated as carrying more than tiny weight, and as justifying proclaimed belief in them. There is a tendency to arbitrarily choose to ‘believe’ or not believe a thing rather than reacting to probabilities. (But There’s Still A Chance, Right?)

The fallacy of grey is to regard all imperfection and all uncertainty as equal; in reality, being wrong comes in degrees. (The Fallacy Of Grey) There is a sizeable inferential distance from thinking of knowledge as absolutely true to understanding knowledge as probabilistic. (Absolute Authority) Eliezer says he would be convinced that 2 + 2 = 3 by the same processes that convinced him that 2 + 2 = 4: a combination of physical observation, mental visualization, and social agreement, such as observing that putting two more objects down beside two objects produced three objects. (How To Convince Me That 2+2=3)

Because of how evidence works, a probability of 100% or 0% corresponds to infinite certainty, and requires infinite evidence to correctly attain. As a result it is always incorrect. (Infinite Certainty) 0 and 1 are [in a sense] not probabilities. (0 And 1 Are Not Probabilities)
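One way to see this, used in the cited post, is to work in log odds, where Bayesian updates become addition:

$$ \operatorname{logodds}(p) \;=\; \log_2 \frac{p}{1-p}, \qquad \operatorname{logodds}(p) \to \pm\infty \ \text{as}\ p \to 1 \ \text{or}\ 0 $$

Each piece of evidence adds only a finite number of bits to your log odds, so no finite chain of evidence moves you all the way to 0 or 1.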

It is reasonable to care how other humans think, as part of caring about how the future and present look. This is somewhat dangerous, and so must be tempered by a solid commitment to respond to bad thinking only with argument. (Your Rationality Is My Business)

How To Actually Change Your Mind F: Politics and Rationality

Politics is the mind-killer. People cannot think clearly about politics close to them. In politics, arguments are soldiers. When giving examples, it is tempting to use contemporary politics. Avoid this if possible. If you are discussing something innately political, use an example from historic politics with minimal contemporary implications if possible. (Politics Is The Mind-Killer)

Policy debates should not appear one-sided. Actions with many consequences should not be expected to have exclusively positive or negative consequences. If they appear to, this is normally the result of bias. They may legitimately have lopsided costs and benefits. (Policy Debates Should Not Appear One-Sided)

Humans tend to treat debates as a contest between two sides, where any weakness in one side is a gain to the other and vice versa, and whoever wins is correct on everything while whoever loses is wrong on everything. This is correct behaviour for a single, strictly binary question, but an error for any more complicated debate. (The Scales Of Justice, The Notebook Of Rationality)

The fundamental attribution error is a tendency in people to overly attribute the actions of others to innate traits, while overly attributing their own actions to circumstance as opposed to differences in themselves. Most people see themselves as normal. (Correspondence Bias) Even your worst enemies are not innately evil, and usually view themselves as the heroes of their own story. (Are Your Enemies Innately Evil?)

Stupidity causes more random beliefs, not reliably wrong ones, so reversing the beliefs of the foolish does not create correct beliefs; reversed stupidity is not intelligence. Foolish people disagreeing does not mean that you are correct. (Reversed Stupidity Is Not Intelligence)

Authority can be a useful guide to truth before you’ve heard arguments, but is not so afterwards. (Argument Screens Off Authority) The more distant evidence is from the specific question, the weaker it is. You should try to answer questions using direct evidence: hug the query. Otherwise, learning abstract arguments, including arguments about biases, can make you less rather than more accurate. (Hug The Query)

Speakers may manipulate their phrasing to alter what aspects of a situation are noticed. (Rationality And The English Language) Simplifying language interferes with this, and allows you to recognise errors in your own speech. (Human Evil And Muddled Thinking)

How To Actually Change Your Mind G: Against Rationalization

Because humans are irrational to start with, more knowledge can hurt you. Knowledge of biases gives you ammunition to use against arguments, including knowledge of this one. (Knowing About Biases Can Hurt People)

Expect occasional opposing evidence for any imperfectly exact model. You should not look for reasons to reject it, but update incrementally as it suggests. If your model is good, you will see evidence supporting it soon. (Update Yourself Incrementally) You should not decide what direction to change your opinion in by comparing new evidence to old arguments; this double-counts evidence. (One Argument Against An Army)

The sophistication with which you construct arguments does not improve your conclusions; that requires choosing what to argue in a manner that entangles your choice with the truth. (The Bottom Line)

When reacting to evidence that you know someone has filtered, you must also react to the fact of the filtering. Knowing what is true can require looking at evidence from multiple parties. (What Evidence Filtered Evidence?)

Rationalization is determining your reasoning after your conclusion, and runs in the opposite direction to rationality. (Rationalization) You cannot create a rational argument this way, whatever you cite. (A Rational Argument)

Humans tend to consider only the critiques of their position that they know they can defeat. (Avoiding Your Belief’s Real Weak Points) A motivated skeptic asks if the evidence compels them to believe; a motivated credulist asks if the evidence allows them to believe. Motivated stopping is ceasing the search for opposing evidence earlier when you agree, and motivated continuation is searching longer when you don’t. (Motivated Stopping And Motivated Continuation)

Fake justification is searching for a justification for a belief which is not the one which led you to originally hold it. (Fake Justification) Justifications for rejecting a proposition are often not the person’s true objection, which when dispelled would result in the proposition being accepted. (Is That Your True Rejection?)

Facts about reality are often entangled with each other. (Entangled Truths, Contagious Lies, Of Lies And Black Swan Blowups) Maintaining a false belief often requires other false beliefs, including deception about evidence and rationality themselves. (Dark Side Epistemology)

How To Actually Change Your Mind H: Against Doublethink

In doublethink, you forget then forget you have forgotten. In singlethink, you notice yourself forgetting an uncomfortable thought and recall it. (Singlethink)

If you watch the risks of doublethink enough to do it only when useful, you cannot do it. If you do not, you will do it where it harms you. Doublethink is either not an option or harmful. (Doublethink (Choosing To Be Biased))

The above on doublethink may not be a dispassionate reporting of the facts; Eliezer admits that they may have been tempted into trying to create a self-fulfilling prophecy. They then say that it may be wise to at least tell yourself that you can’t self-deceive, so that you aren’t tempted to try. (Don’t Believe You’ll Self-Deceive)

It is possible to lead yourself to think you believe something without believing it. Believing that a belief is good can lead you to false belief-in-belief. (No, Really, I’ve Deceived Myself, Belief In Self-Deception) We often do not separate believing a belief from endorsing a belief. Belief-in-belief can create apparently contradictory beliefs. (Moore’s Paradox)

How To Actually Change Your Mind I: Seeing With Fresh Eyes

Anchoring is a behaviour in which we take a figure we’ve recently seen and adjust it to answer questions, making results depend on the initial anchor. A strategy for countering it might be to dwell on an alternative anchor if you notice an initial guess is implausible. (Anchoring And Adjustment)

Priming is an aspect of our brain’s architecture. Concepts related to ideas we’ve recently had in mind are recalled faster. This means that completely irrelevant observations influence estimates and decisions. This is known as contamination. It supports confirmation bias; having an idea in our head makes compatible ideas come to mind more easily, making us more receptive to confirming than disconfirming evidence for our beliefs. (Priming And Contamination)

Some evidence suggests that we tend to initially believe statements, then adjust to reject false ones. Being distracted makes us more likely to believe statements explicitly labeled as false. (Do We Believe Everything We’re Told?)

The hundred-step rule is the principle that because neurons in the human brain are slow, any hypothesised operation can be very parallel but must complete in under a hundred sequential neuron spikes. It is a good guess that human cognition consists mostly of cache lookups.

We incorporate the thoughts of others into this cache, and alone could not regenerate all the ideas we’ve collected in a single lifetime. We tend to incorporate and then repeat or act on cached thoughts without thinking about their source or credibility. (Cached Thoughts)

“Outside the box” thinking is a box of its own, and along with stated efforts at originality and subversive thinking follows predictable patterns; genuine originality requires thinking. (The “Outside The Box” Box) When a topic seems to have nothing to be said, it can mean we do not have any related cached thoughts, and find generating new ones difficult. (Original Seeing)

The events of history would sound extremely strange described to someone prior to them. (Stranger Than History) We tend to treat fiction as history which happened elsewhere. This causes us to favour hypotheses which fit into fun narratives, over other hypotheses that might be likely. (The Logical Fallacy Of Generalization From Fictional Evidence)

A model which connects all things contains the same information as a model that connects none. Information is contained in selectiveness about connections, and the more fine-grained this is the more information is contained. The virtue of narrowness is the definition and use of narrow terms and ideas rather than broad ones. (The Virtue Of Narrowness)

One may sound deep by coherently expressing cached thoughts that the listener hasn’t heard yet. One may be deep by attempting to see for oneself rather than following standard patterns. (How To Seem And Be Deep)

We change our mind less often than we think, and are resistant to it. A technique to mitigate against this is to hold off on proposing solutions as long as possible. (We Change Our Minds Less Often Than We Think, Hold Off On Proposing Solutions)

Because of confirmation bias, we should be suspicious of ideas that originally came from sources whose output was not entangled with the truth. However, to disregard other evidence entirely in favour of judging the original source would be the genetic fallacy. (The Genetic Fallacy)

How To Actually Change Your Mind J: Death Spirals and the Cult Attractor

The affect heuristic is when subjective impressions of goodness/badness act as a heuristic. It causes the manner in which a problem is stated and irrelevant aspects of a situation to change the decisions we make. (The Affect Heuristic) The halo effect is this applied to people; when our subjective impression of a person in one regard, such as appearance, alters our judgement of them in others. (The Halo Effect)

We overestimate the altruism of those who run less risk compared to those who run more, and attribute less virtue to people who are generous for lesser as well as greater need. (Superhero Bias) We lionize messiahs for whom doing great things is easy over those for whom it is hard. (Mere Messiahs)

We tend to evaluate things against nearby points of comparison. (Evaluability And Cheap Holiday Shopping) When we lack a bounded scale to put our estimates within, we make one up, inconsistently between people. (Unbounded Scales, Huge Jury Awards, And Futurism)

An affective death spiral is a scenario in which a strong positive impression assigned to one idea causes us to improve our impressions of related ideas, which we then treat as confirmation of the original idea in a self-sustaining cycle. (Affective Death Spirals) We can diminish the effect of positive impressions enough to prevent this by splitting big ideas into smaller ones we treat independently, reminding ourselves of the conjunctive bias and considering each additional claim to be a burdensome detail, and following the suggestions in the Against Rationalization sequence. (Resist The Happy Death Spiral)

Considering it morally wrong to criticise an idea accelerates an affective death spiral. (Uncritical Supercriticality) Evaporative cooling of group beliefs is a scenario in which as a group becomes more extreme, moderates leave, and as they are no longer acting as a brake, the group becomes yet more extreme, in a cycle. This is another reason why tolerating dissent is important. (Evaporative Cooling Of Group Beliefs)

A spiral of hate is the mirror image of an affective death spiral, in which a strong negative impression of a thing causes us to believe related negative ideas, which we then treat as strengthening the original impression. You can correspondingly observe it become morally wrong to urge restraint or to object to a criticism. It, too, leads to poor choice of action. (When None Dare Urge Restraint)

Humans, once divided into opposing groups, will naturally form positive and negative stereotypes of the two groups and engage in conflict. (The Robbers Cave Experiment) Every cause has a natural tendency for its supporters to become focused on defending their group, even if they declare ‘rationality’ to be their goal. (Every Cause Wants To Be A Cult)

Beware being primarily a guardian of the truth rather than primarily a seeker of it. (Guardians Of The Truth) The Nazis can be understood as would-be guardians of the gene pool. (Guardians Of The Gene Pool)

There are things we know now which earlier generations could not have known, which means that from our perspective we should expect elementary errors even in our historic geniuses. Progressing beyond the founders in this way is a defining attribute of scientific disciplines. It feels unfair to count things they could not have known as flaws in their ideas, but nevertheless they are. It is foolish to declare a system of ideas closed to further development. History already gives us examples of people who declared themselves to be about being Rational and fell into that trap. (Guardians Of Ayn Rand)

Two ideas for countering a tendency towards affective death spirals around a group are to prefer using and describing techniques over citing authority, and to deliberately look foolish to reduce the positive affect you give to the techniques you describe, so they are judged on their own merits. (Two Cult Koans)

We tend to conform to the beliefs of those around us, and are especially inclined to avoid being the first dissenter, for social reasons. Being the first dissenter is thus a valuable service. (Asch’s Conformity Experiment) If you do not believe you have any special advantage, it can be correct to believe that the majority opinion is more likely to be true, but it remains important to express your concerns. Doing so is generally just as socially discouraged as outright disagreement. (On Expressing Your Concerns)

Lonely dissent is often just a role people play in defined patterns. When it is real, it requires bearing the incomprehension of the people around you and discussing ideas that are not forbidden but outside bounds which aren’t even thought about. Doing this without a single other person is terrifying. Being different for its own sake is a bias like any other. (Lonely Dissent)

Cults vary from sincere but deluded and expensive groups, to “love bombing”, sleep deprivation, induced fatigue, distant communes, and daily meetings to confess impure thoughts. Lists of cult characteristics include things which describe other organisations, like political parties and corporations. The true defining aspect is the affective death spiral, which should be fought in any group, and judged independently of how weird the group is in other respects. (Cultish Countercultishness)

How To Actually Change Your Mind K: Letting Go

If we only admit small, local errors, we only make small, local improvements. Big improvements require admitting big errors. Rather than grudgingly admitting the smallest errors possible, be willing to consider that you may have made fundamental mistakes. (The Importance Of Saying “Oops”)

Reinterpreting your mistakes to make it so that you were right ‘deep down’, or morally right, or half-right, avoids the opportunity to see large errors in the route you are on and adjust. (The Crackpot Offer) Being ready to admit you lost lets you avoid turning small mistakes into bigger ones. (Just Lose Hope Already)

A doubt exists to potentially destroy a particular belief, on the basis of some specific justification. A doubt that fails to either be destroyed or destroy its belief may as well not have existed at all. Wearing doubts as attire does not make you more rational. (The Proper Use Of Doubt)

You can face reality. What is true is already so. Owning up to it doesn’t make it any worse. (You Can Face Reality)

Criticising yourself from a sense of duty leaves you wanting to have investigated, not wanting to investigate. This leads to motivated stopping. There is no substitute for genuine curiosity, so attempt to cultivate it. Conservation of expected evidence means any process you think may confirm your beliefs you must also think may disconfirm them. If you do not, ask whether you are looking at only the strong points of your belief. (The Meditation On Curiosity)

The laws governing evidence and belief are not social, but aspects of reality. They are not created by rationalists, but merely guessed at. No one can excuse you from them, any more than they may excuse you from the laws of gravity, regardless of how unfair they are in either case. (No One Can Exempt You From Rationality’s Laws)

When you have a cherished belief, ask yourself what you would do, assuming that it was false. Visualise the world in which it is false, without challenging that assumption. Answering this grants you a line of retreat, a calm and tolerable path forward, which enables you to consider the question. (Leave A Line Of Retreat)

When you are invested heavily and emotionally in a long-lived belief which is surrounded by arguments and refutations, it can be desirable to attempt to instigate a real crisis of faith about it, one that could go either way, as it will take more than an ordinary effort to displace if false. (Crisis Of Faith, The Ritual)

The Machine In The Ghost L: The Simple Math of Evolution

There are things which look purposeful in nature, which people historically treated as evidence of a designer. If you look at them without cherrypicking, you find parts which appear to be working at odds with other parts, inconsistent with the purposefulness you’d expect from a single designer. Similarly, you find a lot of the purposefulness seems cruel, inconsistent with benevolent design.

If evolution were able to explain anything, it would be useless. Evolution is consistent only with the kind of purposefulness which propagates a gene, with no filtering for kindness or any other kind of purposefulness. This is the kind of alien purposefulness we observe in nature. (An Alien God)

Evolution works incrementally. (The Wonder Of Evolution) Evolution is slow; a mutation multiplying the expected number of children by 1.03 has a 6% chance of reaching fixation, and takes an average of 768 generations to reach universality within a population of 100,000. The general formulae are 2s for the chance of fixation and 2 ln(N)/s for the number of generations, where N is the population size and s is the multiplier minus 1. Complex mutations take a very long time, as each step must reach fixation. (Evolutions Are Stupid (But Work Anyway))
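A quick check of the quoted numbers using those formulae (a sketch in Python; the variable names are mine):

```python
import math

s = 0.03       # selective advantage: expected children multiplied by 1.03
N = 100_000    # population size

p_fixation = 2 * s                 # chance the mutation reaches fixation: 0.06, i.e. 6%
generations = 2 * math.log(N) / s  # mean generations to fixation: about 767.5, i.e. ~768

print(p_fixation, round(generations))
```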

Price’s Equation is a very general equation stating that the change in average characteristic is equal to the covariance of the characteristic and relative fitness. It operates only to the extent that characteristics are heritable across the generations. If characteristics aren’t passed down more than a few generations, you will only ever observe a few generations’ worth of selective pressure.
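In its simplest form (omitting the transmission term), the equation reads:

$$ \Delta \bar{z} \;=\; \operatorname{Cov}\!\left(\frac{w_i}{\bar{w}},\, z_i\right) $$

where $z_i$ is the characteristic of individual $i$, $w_i$ is its fitness, and $\bar{w}$ is the mean fitness: the change in the population average of the characteristic equals the covariance between the characteristic and relative fitness.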

This means corporations do not significantly benefit from evolution. Similar for nanodevices with cryptographically protected replication instructions, as few changes would have high covariance. (No Evolutions For Corporations Or Nanodevices)

Selection being concerned only with competition between genes means genes that are better for the species can be outcompeted. Successful genes could make all descendants male, recursively, exist only to copy themselves, or cause the bystander effect. It is possible to evolve to extinction. (Evolving To Extinction)

Group selection overriding individual selection is generally mathematically implausible and was used to rationalise beliefs that outcomes would be what was better-for-the-species. (The Tragedy Of Group Selectionism)

Humans are very good at arguing that almost any optimisation criterion suggests almost any policy. Evolution is one of the few cases where we can examine what actually optimising for specific criteria, with no rationalisation or bias, looks like. (Fake Optimization Criteria)

We don’t consciously have the deliberate goal of optimising for our genes’ genetic fitness; it was not genetically fit for that goal to be encoded in us. We are adaptation-executers, not fitness maximisers. (Adaptation-Executers Not Fitness-Maximizers, Evolutionary Psychology) We want to optimise for other things. (Thou Art Godshatter)

Our psychological adaptations are tuned for success in the evolutionary environment. (An Especially Elegant Evpsych Experiment) The modern world contains things that match our desires more strongly than anything in the evolutionary environment. We call these superstimuli, and they may cause perverse behaviour. (Superstimuli And The Collapse Of Western Civilization)

The Machine In The Ghost M: Fragile Purposes

When observing an intelligent process, you can be certain about the expected end state while being uncertain about intermediary steps. This is because intelligence is an optimisation process. (Belief In Intelligence) We normally model intelligence by simulating it with our brain, and assume something analogous to our emotional architecture. This doesn’t work well for non-human intelligence. (Humans In Funny Suits)

Optimisation processes can find very small targets in large search spaces. Natural selection emerged accidentally, and is slow and stupid. Human brains are much better. Neither optimisation process is able to optimise itself. We could design an AI to do so. If the process did not require exponentially more optimisation power applied for each increase in optimisation power out, and the initial intelligence was sufficient, optimisation power could rise exponentially over time. (Optimization And The Singularity)

People tend to think of programming computers as if they contain a little ghost which reads and performs abstract instructions. Your instructions define the entirety of the logic performed. If you do not know how to define something in terms you can program, you cannot reference it. Conversely, there is no additional entity capable of deciding to not do what you defined. (Ghosts In The Machine) When we find a confusing gap in our knowledge, we should try to fill it rather than reason around it. (Artificial Addition)

Terminal values are ends, instrumental values are means. (Terminal Values And Instrumental Values) Any generalisations at the macroscopic level will have exceptions; they will be leaky abstractions. This extends to instrumental values. (Leaky Generalizations) We must make any sufficiently powerful and intelligent optimisation process optimise for our terminal values, as optimising for a described instrumental value may powerfully optimise for an easy exception we didn’t think of. (The Hidden Complexity Of Wishes)

Anthropomorphic optimism is where we expect non-human intelligent processes, such as natural selection, to choose a strategy a human might choose, because we tend not to bring to the surface candidate strategies we know no person wants, and we’re good at rationalization. (Anthropomorphic Optimism)

Dysfunctional organisations incentivise many actions internally which are detached from any original purpose of the action, and this can be recognised. Civilisation in general does this. (Lost Purposes)

The Machine In The Ghost N: A Human’s Guide To Words

Statements are only entangled with reality if the process generating them made them so. (The Parable Of The Dagger)

The logical implications of a given definition of a word are the same in all conceivable universes, and so do not tell us anything about our universe. Correlations between attributes do, but only so far as observations and those correlations are reliable. (The Parable Of Hemlock)

If you define a word rigidly in terms of attributes, and then state that something is that word, you assert it has all those attributes. If you then go on to say it thus has one of those attributes, you are simply repeating that assertion. The word only creates an illusion of inference. (Empty Labels)

If assigning a word a definition feels like it argues something, you may be making a hidden assertion of a connotation not in that definition. (Sneaking In Connotations) Alternatively, you may be incorrectly ignoring more direct evidence in favour of correlations between attributes represented by the words. (Arguing By Definition)

A concept is any rule for classifying things, and creates a category of things. The space of definable concepts is much larger than the space of describable things. We limit ourselves to relatively simple concepts in order to make their definition tractable. (Superexponential Conceptspace And Simple Words) Words are labels for concepts. (Words As Mental Paintbrush Handles)

Efficient communication uses shorter messages for common messages and longer messages for uncommon messages. We use shorter words for more common concepts and longer words for less common concepts. (Entropy And Short Codes) Creating a word defined by a list of attributes permits faster communication if and only if those attributes are correlated. Adding an uncorrelated attribute to a word means it takes more work to communicate accurately using that word than not using it, which will result in inaccurate communication. (Mutual Information And Density In Thingspace)
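The underlying relationship from information theory (standard notation, not the post’s own):

$$ \ell(m) \;\approx\; -\log_2 P(m) $$

An optimal code gives message $m$ a length of about $-\log_2$ of its probability, so frequent messages get short codes and rare ones get long codes, and the average length approaches the entropy of the message distribution.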

We automatically infer that the set of attributes that define a word are well correlated. We shouldn’t create definitions where that’s wrong. (Words As Hidden Inferences) Concepts can be misleading if they group things poorly. Using concepts that are similar to those used by others aids communication. (The Argument From Common Usage) Concepts dividing or excluding things on irrelevant criteria result in people assuming that there’s relevant differences correlated to those criteria. (Categorizing Has Consequences)

An intensional definition is a definition in terms of other words. An extensional definition is a definition provided by pointing at examples. The intension of a concept is the pattern in your brain that recognises it. The extension of a concept is everything matching that pattern. Neither type of definition fully describes its corresponding aspect.

Claiming that a concept with known extension includes a particular attribute ‘by definition’ hides the assertion that the things in its extension have that attribute. Claiming that a thing falls under a concept ‘by definition’ often hides the assertion that its attributes are typical of that concept. (Extensions And Intensions) Not all concepts we have have straightforward intensional definitions. Which concepts usefully divide the world is a question about the world. (Where To Draw The Boundary?)

You can think of any conceivable thing as described by a point in ‘thingspace’, whose dimensions include all possible attributes. Concepts describe clusters in thingspace. (The Cluster Structure Of Thingspace) These are similarity clusters. A dictionary is best thought of as a set of hints for matching labels to these clusters. (Similarity Clusters) People regard some entities in these clusters as more or less typical of them. (Typicality And Asymmetric Similarity)

Asking if something ‘is’ in some category is a disguised query for whether it should be treated the way things in that category are treated, for some purpose. You may need to know that purpose to answer the question for atypical cases. (Disguised Queries)

You can reduce connections in a neural network design by introducing nodes for categories, then inferring attributes from categories and categories from attributes rather than all attributes from all other attributes. (Neural Categories) Our brain uses a structure like this. If only some attributes match a category, the way this feels from the inside is like there’s a permanently unresolved question of fact about whether the thing is ‘in’ or not ‘in’ the category, because the ‘node’ is unsettled. (How An Algorithm Feels From Inside)
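For instance (my arithmetic, following the comparison described in the cited posts):

$$ \binom{n}{2} = \frac{n(n-1)}{2} \ \text{pairwise connections} \quad\text{versus}\quad n \ \text{connections to a central category node} $$

so with five observable attributes, the fully connected design needs 10 connections while the category-node design needs only 5.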

Disputes over definitions are disputes over what cluster a given label points at, but feel like disputes over what properties the things in that cluster have. (Disputing Definitions) What intension is associated with what word feels like a fact about the wider world rather than just a fact about human brains. (Feel The Meaning)

If you are trying to discuss reality, and you find your meaning for a label differs from another person’s, you should taboo that concept and use others to communicate. (Taboo Your Words) You can also taboo concepts and try to describe the relevant parts of thingspace directly as an effective way to clarify anticipated experience and notice which aspects of the concepts are relevant. (Replace The Symbol With The Substance)

Our map of the world is necessarily smaller than the world, which means we necessarily must compress distinct things in reality into a single point in our map. From the inside, this feels like we’re observing only one thing, rather than that we’re observing multiple things and compressing them together. Noticing where splitting a category is necessary is a key challenge in reasoning about the world. A good hint is noticing a category with self-contradictory attributes. (Fallacies Of Compression) Correct statements about different things merged into a single point may be inconsistent with each other; this does not mean part of reality is inconsistent. (Variable Question Fallacies)

Two variables have mutual information if they are correlated, and are independent if not. Conditional independence is where mutual information is shared between three or more variables, and conditional on one of those variables, the other two become independent. Where we have mutual information between many possible attributes of a thing, we create concepts to represent mutual information between attributes, and then treat the attributes as conditionally independent once we know that something matches that concept, as a simplification.

If there is a great deal of mutual information remaining between attributes after knowing something matches a concept defined using those attributes, this is an error. (Conditional Independence And Naive Bayes)
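A minimal Naive Bayes sketch in Python of the simplification described above: once the category is known, each attribute is treated as independent of the others. The categories, attributes, and data are invented for illustration.

```python
from collections import defaultdict

def train(examples):
    """examples: list of (category, {attribute: value}) pairs."""
    cat_counts = defaultdict(int)
    attr_counts = defaultdict(lambda: defaultdict(int))
    for cat, attrs in examples:
        cat_counts[cat] += 1
        for attr, val in attrs.items():
            attr_counts[cat][(attr, val)] += 1
    return cat_counts, attr_counts

def posterior(cat_counts, attr_counts, attrs):
    """P(category | attributes), treating attributes as conditionally independent."""
    total = sum(cat_counts.values())
    scores = {}
    for cat, n in cat_counts.items():
        p = n / total  # prior P(category)
        for attr, val in attrs.items():
            # conditional independence: multiply P(attribute | category) factors,
            # with add-one smoothing so unseen values don't zero the product
            p *= (attr_counts[cat][(attr, val)] + 1) / (n + 2)
        scores[cat] = p
    norm = sum(scores.values())
    return {cat: s / norm for cat, s in scores.items()}

# Hypothetical example data
examples = [
    ("blegg", {"colour": "blue", "shape": "egg"}),
    ("blegg", {"colour": "blue", "shape": "egg"}),
    ("rube",  {"colour": "red",  "shape": "cube"}),
    ("rube",  {"colour": "red",  "shape": "egg"}),
]
model = train(examples)
print(posterior(*model, {"colour": "blue", "shape": "cube"}))
```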

Words can be defined wrongly, in many ways. (37 Ways That Words Can Be Wrong)

Mere Reality O: Lawful Truth

Apparently independent surface-level rules of reality follow from more basic common rules. This means you can’t have a consistent world in which some surface-level rules keep working for the same reasons they always worked and others don’t work. (Universal Fire)

The universe almost certainly runs on absolute laws with no exceptions, although we have a much greater degree of uncertainty as to what those laws are. This feels like an unreasonably uncompromising social move to people used to thinking about human or moral laws. (Universal Law)

Reality remains uncertain because we don’t know the laws, because it isn’t feasible to work out the exact consequences of the laws, and because we don’t know which human in reality we will perceive ourselves as being. Reality is not fundamentally messy; only our perspective on it is. (Is Reality Ugly?)

Bayesian theorems are attractive because they’re laws, rather than because Bayesian methods are always the most practical tool. (Beautiful Probability) Mutual information is Bayesian evidence; anything which generates better than random beliefs must do so through processing Bayesian evidence. (Searching For Bayes-Structure)

A scientist who is not more selective in their beliefs outside the laboratory than a typical person has memorised rules to get by, but lacks understanding of what those rules mean. (Outside The Laboratory)

No part of a system can violate the first law of thermodynamics, conservation of energy, and so we reject systems claiming to. Liouville’s theorem says the space of possible states of a system is conserved; for any part whose state becomes more certain, another part becomes less certain.

The second law of thermodynamics, that total entropy cannot decrease, is a corollary. Maxwell’s demon is a hypothetical entity which, without itself generating entropy, lets only fast-moving gas molecules through a barrier, decreasing total entropy. If you knew the state of the gas for free, you could create one. This means that knowing things about the universe without observing them (and thereby generating entropy) would be a violation of the second law of thermodynamics. (The Second Law Of Thermodynamics, And Engines Of Cognition)

When people try to justify something without evidence, they often construct theories complicated enough that they can make a mistake and miss it, similar to people designing perpetual motion machines. (Perpetual Motion Beliefs)

Mere Reality P: Reductionism 101

For some questions, we should, rather than trying to answer or prove them nonsensical, try to identify why we feel a question exists. The result should dissolve that feeling. (Dissolving The Question)

A cue that you’re dealing with a confused question is when you cannot imagine any observation that answers it. (Wrong Questions) One way forward is to ask “Why do I think [thing]?” rather than “Why [thing]?”. The new question will lead you to the entanglement of your beliefs with reality that generated the belief, if it is not confused, and an explanation of your mind otherwise. (Righting A Wrong Question)

The mind projection fallacy is treating properties of our perception of a thing as inherent attributes of it. (Mind Projection Fallacy) The probability of an event is a property of our perception, not the event. (Probability Is In The Mind) We call something chaotic when we can’t predict it, but miss that this is a fact about our ability to predict. This causes us to miss opportunities to improve. (Chaotic Inversion) Rather than viewing reality as weird, resist getting caught up in incredulity, and let intuition adjust to view reality as normal. (Think Like Reality)

Probability assignments are not well modelled as true or false, but as having a level of accuracy. Your beliefs about your own beliefs are distinct from those beliefs, and can differ in accuracy. Differing beliefs between people are only ‘differing truths’ insofar as accurate statements about their differing maps differ; accurate statements about reality do not differ between people, because the map is not the territory. (Qualitatively Confused) The concept of a thing is not the same as the thing. If a person thinks one thing is two separate things, described by separate concepts, those concepts may differ despite referring to the same thing. (The Quotation is Not The Referent)

Reductionism is disbelief in a particular form of the mind projection fallacy. It is useful for us to use different models for different scales of reality, but this is an aspect of what is useful for us, not an aspect of the different scales of reality, and does not mean that they are governed differently. (Reductionism)

Explaining and explaining away are different. Non-fundamental things still exist. Explaining away something only removes it from the map; it was never in the territory. (Explaining Vs Explaining Away) A thing is only reduced if you know the explanation; knowing one exists only changes literary genre. (Fake Reductionism) We can tell human stories about humans. A non-anthropomorphic view of the world helps broader stories. (Savanna Poets)

Mere Reality Q: Joy In The Merely Real

You should be able to care about knowable, unmagical things. The alternative is existential ennui, because everything is knowable. (Joy In The Merely Real) Taking joy only in discovering something no one else knows makes joy scarce; instead, find joy in all discoveries. (Joy In Discovery)

By placing hope in and celebrating true things, you direct your emotions into reality rather than fiction. (Bind Yourself To Reality) If we lived in a world with magic, it would seem as mundane as science. If you can’t be excited by reality or put in great effort to change the world here, you wouldn’t there. (If You Demand Magic, Magic Won’t Help)

Many of our abilities, such as ‘vibratory telepathy’ (speech) and ‘psychometric tracing’ (writing), would be amazing magical powers if only a few had them. Even more so for the ‘Ultimate Power’: possessing a small, imperfect echo of the universe, and searching through probability to find paths to a desired future. We shouldn’t think less of these abilities for being common. (Mundane Magic)

Settled science is as beautiful as new science. Textbooks will offer you careful explanations, examples, test problems, and likely true information. Pop science articles offer wrong explanations of results the author likely didn’t understand, and have a high chance of not replicating. You cannot understand the world if you only read science reporting. (The Beauty Of Settled Science, Amazing Breakthrough Day: April 1st)

Irreligious attempts to imitate religious trappings and hymns always suck. However, a sense of awe is not exclusive to religion. There are things which would have been a good idea even if religion had never existed to imitate that can be awe-inspiring, such as space shuttle launches. For those things, the awe remains when they are mundane and explained. (Is Humanism A Religion-Substitute?)

Things become more desirable as they become less attainable; this is scarcity. Similarly, forbidden information appears more important. When something is attained it stops being scarce, leading to frustration. (Scarcity) If Science was secret, it would become fascinating. (To Spread Science Keep It Secret, Initiation Ceremony)

Mysteriousness, faith, unique incommunicability, separation of domains, and experientialism shield beliefs from criticism and declare the mundane boring. We shouldn’t have them. (The Sacred Mundane)

Mere Reality R: Physicalism 201

Concepts such as ‘your hand’ describe the same part of the world as lower-level concepts, such as ‘your palm and fingers’. They do not vary independently, but still ‘exist’. (Hands Vs Fingers) Concepts such as ‘heat’ and ‘motion’ can also refer to the same thing, even if you can imagine a world where they refer to separate things. (Heat Vs Motion) Concepts note only that a cluster exists, and do not define it exactly. (Reductive Reference)

Understanding how higher-level things such as ‘anger’ are created by lower-level things requires discovering the explanation, not just assertion. (Angry Atoms) Rationality is not social rules; rationality is how our brain works. (A Priori) Reality is that which sometimes violates expectations and surprises you. (Reductive Reference again)

The brain is a complex organ made of neurons. (Brain Breakthrough! It’s Made Of Neurons!) Before we realised that thinking involved a complex organ, Animism was a reasonable error. (When Anthropomorphism Became Stupid) A proposed entity is supernatural if it is irreducibly complex. Because our brains are reducible, no set of expectations can require irreducible complexity, but some expectations make irreducibility more likely than others. (Excluding The Supernatural, Psychic Powers)

A zombie, in the philosophical sense, is a hypothetical being which looks and behaves exactly like a human, including talking about being conscious, but is not conscious. It is alleged that if it is a coherent hypothetical, consciousness must be extra-physical. It is not coherent if ‘process which causes talking about consciousness’ and ‘consciousness’ refer to the same part of the world. We should believe they do, because the alternative is more complex. (Zombies! Zombies?, Zombie Responses, Zombies: The Movie) It is correct to believe in unobservable things if and only if the most succinct model of reality predicts them. (Belief In The Implied Invisible)

The generalised anti-zombie principle is that any change we shouldn’t expect to change the reasons we talk about consciousness is one we should expect to leave us still conscious. (The Generalized Anti-Zombie Principle) Conceivably, one could replace a human with a giant look-up table (GLUT) which would seem to violate this principle, but the process which selected the GLUT to use would need to have been conscious and make all the same decision-making choices as you in doing so. (GAZP Vs GLUT)

Mere Reality S: Quantum Physics and Many Worlds

(This sequence is controversial; mean probability assigned to MWI was 56.5% in the 2011 survey)

Quantum mechanics is not intuitive; this is a flaw in intuition. (Quantum Explanations)

Reality is composed of configurations with complex-valued amplitudes, and rules for calculating amplitude flows into other configurations. We cannot measure amplitudes directly, only the ratio of the absolute squares of some configurations. (Configurations And Amplitude) You sum all amplitude flows into a configuration to get its amplitude. Amplitude flows that put the same types of particle in the same places flow into the same configuration, even if the particles came from different places. Which configurations are the same is observable fact. If amplitude flows have opposite sign, they can cancel out to zero. If either flow had been absent, the configuration would have had non-zero amplitude. (Joint Configurations)
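An illustrative arithmetic sketch in Python (the numbers are invented, not taken from the post): amplitudes are complex numbers, flows into the same configuration add, and opposite-sign flows can cancel.

```python
# Hypothetical amplitudes for two flows arriving at the same configuration X
flow_a = complex(0, 0.5)    # amplitude reaching X via one path
flow_b = complex(0, -0.5)   # amplitude reaching X via another path, opposite sign

amplitude_x = flow_a + flow_b      # flows into the same configuration sum
print(abs(amplitude_x) ** 2)       # 0.0 -- with both flows present, X is never observed

# Had either flow been absent, the configuration would have had non-zero amplitude:
print(abs(flow_a) ** 2)            # 0.25
```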

A configuration is defined by all particles. If amplitude flows alter a particle’s state, then they cannot flow into the same configuration as amplitude flows which do not alter it. Thus, measuring amplitude flows stops them from flowing to the same configurations. (Distinct Configurations)

Collapse theories propose that at some point before a measurement reaches a human brain, there is a waveform collapse leaving only one random configuration with non-zero amplitude, discarding other amplitude flows. Many Worlds proposes that this doesn’t happen; configurations where we observe and don’t observe a measurement both exist with non-zero amplitude, too different from each other for their amplitude flows to flow into common configurations; we have macroscopic decoherence. Collapse would be very different to other physics. (Collapse Postulates) Living in multiple worlds is the same as living in one; we shouldn’t be unsettled by it. (Living In Many Worlds)

Decoherence is simpler (Decoherence Is Simple), while making the same predictions. (Decoherence Is Falsifiable And Testable) Privileging the hypothesis is selecting an unlikely hypothesis for attention, causing confirmation bias. Historical accident has privileged collapse theories (Privileging The Hypothesis, If Many-Worlds Had Come First, Many Worlds, One Best Guess) because people didn’t think of themselves as made of particles. (Where Philosophy Meets Science, Thou Art Physics) Declaring equations to be meaningless is wrong; there is something described. (Quantum Non-Realism)

Mere Reality T: Science and Rationality

Science is supposed to replace theories when experiments falsify them in favour of new theories, and is uninterested in simpler theories making the same predictions. This leads to different results than application of probability theory. (The Dilemma: Science or Bayes?) Science is this way because it doubts that flawed humans debating elegance will reach truth if not forced to experiment. Science distrusts your rationality. (Science Doesn’t Trust Your Rationality)

Science doesn’t help you get answers to questions that are not testable in the present day. It is incorrect to dismiss theories answering those questions because they’re scientifically unproven. You must try to use your reason. (When Science Can’t Help) Science does not judge your choice of hypothesis, and only requires you react to overwhelming evidence. It accepts slow, generational progress. You must have a private epistemic standard higher than the social one, or else you will waste a lot of time. (Science Isn’t Strict Enough)

It is a flaw that the teaching of Science doesn't include practice at resolving confused ideas (The Failures Of Eld Science), nor teach probability theory, awareness of the need for causal entanglement of belief with reality, or rationality more broadly. (Do Scientists Already Know This Stuff?) Teaching probability theory alone would not correct this. (The Definition Of Science)

There is nothing that guarantees that you are not a fool, not even Science, not even trying to use probability theory. You don’t know your own biases, why the universe is simple enough to understand, what your priors are, or why they work. The formal math is intractable. To start as a rationalist requires losing your trust that following any prescribed pattern will keep you safe. (No Safe Defense, Not Even Science)

The bulk of work in progressing knowledge is in elevating the right hypotheses to attention, a process Science depends on but does not specify, relying on normal reasoning. (Faster Than Science) Einstein did this well. Most will fail, but it remains valuable to practice. (Einstein’s Speed) Geniuses are not separate from humanity; with grit and the right choice of problem and approach, not all but many have potential. (Einstein’s Superpowers, Class Project)

We do not use the evidence of sensory data anywhere near optimally. (That Alien Message) Possible minds can be extremely smarter than humans. Basing your ideals on hypothetical extremely intelligent minds, rather than merely the best humans so far, helps you not shy away from trying to exceed them. (My Childhood Role Model)

Mere Goodness U: Fake Preferences

Human desires include preferences for how the world is, not just preferences for how they think the world is or how happy they are. (Not For The Sake Of Happiness Alone) People who claim their preferences reduce to a single principle have some other process by which they choose what they want, and then find a rationalisation for how what they want is justified by that principle (Fake Selfishness). Simple utility functions fail to compress our values, and we suffer from anthropomorphic optimism about what they suggest. (Fake Utility Functions)

People who fear that humans would lack morality without an external threat regard this as bad rather than liberating. This means they like morality, and aren’t just forced to abide by it. (Fake Morality)

The detached lever fallacy is the assumption that actions that trigger behaviour from one entity will trigger it from another, without any reason to think the mechanics governing the reaction are present in the second. The actions that make a human compassionate will not make a non-human AI so. (Detached Lever Fallacy) AI design is reducing the mental to the non-mental. Models of an intelligence which can’t predict what it will do other than by analogy to a human are incomplete. (Dreams Of AI Design) The space of possible minds is extremely large. Resist the temptation to generalise over all of mind design space. (The Design Space Of Minds-In-General)

Mere Goodness V: Value Theory

Justifying any belief leads to infinite regress. Rather than accepting any assumption, we should reflect on our mind’s trustworthiness using our current mind as best we can, and accept that. (Where Recursive Justification Hits Bottom) Approach such questions from the standpoint of whether we should want ourselves or an AI using similar principles to change how they choose beliefs. We should focus on improvement, not justification, and expect to change our minds. Don’t exalt consistency in itself, but effectiveness. Separate asking why an approach works from asking whether it does. We should reason about our own mind the way we do about the rest of the world, and use all available information. (My Kind Of Reflection)

There are no arguments compelling to all possible minds. For any system processing information, there is a system with inverted output which makes the opposite conclusion. This applies to moral conclusions, and regardless of the intelligence of the system. (No Universally Compelling Arguments, Sorting Pebbles Into Correct Heaps) A mind must have a process that adds beliefs, and a process that acts, or no argument can convince it to believe or act. (Created Already In Motion)

Some properties can be thought of either as taking two parameters and giving a result, or as a space of one-parameter functions, with different people using different ones. For example, ‘attractiveness(admirer, admired) -> result’ vs ‘attractiveness_1…9999(admired) -> result’. Currying specifies that a two-parameter function is equivalent to a one-parameter function returning another function, and unifies these. For example, ‘attractiveness(admirer) -> attractiveness_712(admired) -> result’. This reflects the ability to judge a measure independently of the user, but also that the measure used is variable. (2-Place And 1-Place Words)
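
A minimal Python sketch of the same idea (the admirer data and scoring rule are invented for illustration): the 2-place function and its curried, 1-place form give the same result.

```python
def attractiveness(admirer, admired):
    """2-place view: one function taking both the admirer and the admired."""
    return admirer["tastes"].get(admired, 0)

def curry_attractiveness(admirer):
    """Curried view: fix the admirer, returning a 1-place function of the admired."""
    def attractiveness_for_admirer(admired):
        return attractiveness(admirer, admired)
    return attractiveness_for_admirer

admirer_712 = {"tastes": {"alice": 7, "bob": 3}}   # invented example data
attractiveness_712 = curry_attractiveness(admirer_712)

print(attractiveness(admirer_712, "alice"))  # 7, via the 2-place function
print(attractiveness_712("alice"))           # 7, via the equivalent 1-place function
```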

If your moral framework is shown to be invalid, you can still choose to act morally anyway. (What Would You Do Without Morality?) It’s important to have a line of retreat to be able to seriously review your metaethics. (Changing Your Metaethics) You must start from a willingness to evaluate in terms of your moral intuition in order to find valid metaethics. (Could Anything Be Right?) What we consider to be right grows out of a starting point. To get a system that specifies what is right requires it fit that starting point, which we cannot define fully. (Morality As Fixed Computation) Concepts that we develop to describe good behaviour are very complex. Depictions of them have many possible concepts that fit them, and an algorithm would pick the wrong one. You cannot fix a powerful optimisation process optimising for the wrong thing with patches. (Magical Categories) Value is fragile; optimising for the wrong values creates a dull future. (Value Is Fragile) Our complicated values are the gift that we give to tomorrow. (The Gift We Give To Tomorrow)

The prisoner’s dilemma is a hypothetical in which two people can both either cooperate (C) or defect (D), and each one prefers (D, C) > (C, C) > (D, D) > (C, D). The typical example involves two totally selfish prisoners, but humans can’t imagine this. A better example would have the first entity as humans trying to save billions, vs an entity trying to maximise numbers of paperclips. (The True Prisoner’s Dilemma)
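
As a toy illustration, the dilemma can be written as a payoff table and the defining preference ordering checked directly; the payoff numbers below are invented, and only their ordering matters.

```python
# (my move, their move) -> (my payoff, their payoff); numbers are illustrative.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_payoff(mine, theirs):
    return payoffs[(mine, theirs)][0]

# Each player prefers (D, C) > (C, C) > (D, D) > (C, D) from their own side.
assert my_payoff("D", "C") > my_payoff("C", "C") > my_payoff("D", "D") > my_payoff("C", "D")
```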

We understand others by simulating them with our brains, which creates empathy. It was evolutionarily useful to develop sympathy. An AI wouldn’t use either approach; an alien might. (Sympathetic Minds)

A world with no difficulty would be boring. We prefer real goals to fake ones. We need goals which we prefer working on to having finished, or which have no end state. (High Challenge) A utopia with no problems has no stories. Pain can be more intense than pleasure. Pleasure that scaled like pain would trap us. We can be rid of pain that breaks or grinds down people, and pointless sorrow, and keep what we value. Whether we will get rid of pain entirely someday, EY does not know. (Serious Stories)

Mere Goodness W: Quantified Humanism

Scope insensitivity is ignoring the scope of a problem, such as the number of people or animals or the area affected, when deciding how important an action is. Groups were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in oil ponds, and answered $80, $78, and $88. We visualise a single bird, react emotionally, and cannot visualise the scope. To be an effective altruist, we must evaluate the numbers. (Scope Insensitivity) Saving one life feels as good as saving many, but is not as good. Saving lives is not a virtue to be satisficed, such that once you have saved one you may ignore the rest. (One Life Against The World)
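
The per-bird valuation implied by those answers makes the insensitivity stark; a quick calculation from the numbers quoted above:

```python
birds = [2_000, 20_000, 200_000]
willingness_to_pay = [80, 78, 88]   # dollars, from the study cited above

for n, dollars in zip(birds, willingness_to_pay):
    print(f"{n:>7} birds: ${dollars} total, ${dollars / n:.5f} per bird")
# 2,000 birds come out at $0.04000 each; 200,000 birds at $0.00044 each.
```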

The certainty effect is a bias where going from a 99% chance to a near-100% chance of getting what we want is valued more than going from, say, a 33% chance to a 34% chance. This causes the Allais paradox, where we prefer a fixed prize over a 33/34 chance of a bigger prize, but prefer a 33% chance of a larger prize to a 34% chance of a smaller prize. This cannot be explained by non-linear marginal utility of money, permits extracting money from you, and shows a failure of intuition to steer reality. (The Allais Paradox, Zut Allais!)
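
A short expected-value check, using illustrative prize amounts that are not from the source text: the second pair of gambles is just the first pair with both winning probabilities multiplied by 0.34, so no fixed valuation of the two prizes can rank the sure thing above the gamble in the first pair while ranking the gamble above the smaller prize in the second.

```python
# Illustrative prize amounts (not from the source text).
small_prize, large_prize = 24_000, 27_000

# First pair: a certain smaller prize vs a 33/34 chance of the larger prize.
ev_1a = 1.00 * small_prize
ev_1b = (33 / 34) * large_prize

# Second pair: the same two options with both probabilities multiplied by 0.34.
ev_2a = 0.34 * small_prize
ev_2b = 0.33 * large_prize

print(ev_1a, round(ev_1b, 2))  # 24000.0 vs ~26205.88
print(ev_2a, round(ev_2b, 2))  # 8160.0 vs 8910.0
# Scaling both probabilities by the same factor preserves the ordering of
# expected values, so preferring 1A over 1B while preferring 2B over 2A is
# inconsistent for any fixed valuation of the prizes.
```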

A certain loss feels worse than an uncertain one. By changing the point of comparison so the certain outcome is a loss rather than a gain, you reverse intuition. You must multiply out costs and benefits, or you will fail at directing reality. This reduces nice feelings, but they are not the point. (Feeling Moral)

Intuition is what morality is built on, but we must pursue reflective intuitions or we won’t accomplish anything due to circular preferences. (The Intuitions Behind Utilitarianism) Making up probabilities can trick you into thinking they’re more grounded than they are, and override working intuitions. (When (Not) To Use Probabilities)

Ends don’t justify the means among humans. We run on corrupted hardware; we rationalise using bad means, past the point that benefits us, let alone anyone else. Otherwise we wouldn’t have developed ethical injunctions. Follow them as a higher-level consequentialist strategy. (Ends Don’t Justify Means Among Humans, Ethical Injunctions)

To pursue rationality effectively, you must have a higher goal that it serves. (Something To Protect) Newcomb’s problem is a scenario in which an entity that can predict you perfectly offers two boxes, and says that box A contains $1000, and box B contains $1,000,000 if and only if they predicted you would only take box B. Traditional causal decision theory says you should take both boxes, as the money is either already in the box or not. Rationally, you should take only box B. Doing so makes you win more, and rationality is about winning, not about reasonableness or any particular ritual of thought. (Newcomb’s Problem And Regret Of Rationality)
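
A rough expected-payoff sketch, assuming for illustration that the predictor is right with probability p rather than being literally perfect, shows how quickly one-boxing comes out ahead:

```python
def expected_one_box(p):
    # Box B holds $1,000,000 exactly when the predictor foresaw one-boxing.
    return p * 1_000_000

def expected_two_box(p):
    # Two-boxers always get box A's $1,000, plus the million only if mispredicted.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.99, 0.90, 0.51):
    print(p, expected_one_box(p), expected_two_box(p))
# One-boxing has the higher expected payoff for any accuracy above ~50.05%.
```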

Becoming Stronger X: Yudkowsky’s Coming Of Age

Yudkowsky grew up in an environment which praised experience over intelligence as justification for everything, including religion. This led them to the opposite, an affective death spiral around intelligence as the solution to everything. They thought that being very intelligent meant being very moral. They tended to go too far the other way in reaction to someone else’s stupidity. (My Childhood Death Spiral)

Because previous definitions of intelligence had been lacking, they thought it could not be defined tidily. This led to avoiding premature answers. They believed the field of AI research was sick; this led to studying cognitive science. Errors which lead to studying more are better errors. (My Best And Worst Mistake) They regarded regulation of technology as bad, and this reduced attention to existential risks. When convinced risks existed, rather than reviewing mistakes, they just decided we needed AI first. (Raised In Technophilia)

They were good at refuting arguments, and felt they were winning the debate on whether intelligence implied morality. They had a rationale for proceeding with their best ideas, without resolving confusion. Reality does not care whether you are using your best ideas. You can’t rely on anyone giving you a flawless argument, and you can’t work around underlying confusion. (A Prodigy Of Refutation, The Sheer Folly Of Callow Youth)

An incongruous thought, coupled with some perfectionism and a view that less than morally upright interactions were unacceptable, led to investigating it seriously. Doing that, regardless of the reason, led to pursuing a backup plan. (That Tiny Note Of Discord) That they were pursuing a backup plan gave them a line of retreat from their earlier views, but they only shifted gradually, without acknowledging fundamental errors. (Fighting A Rearguard Action Against The Truth)

They only saw the error when they realised that a mind was an optimisation process which pumps reality towards outcomes, and you could pump towards any outcomes. (My Naturalistic Awakening) They realised that they could have unrefuted arguments, and nature could still kill them if the choice was wrong. Their trust in following patterns broke, and they began studying rationality. (The Magnitude Of His Own Folly) We all need to lose our assumption of fairness. (Beyond The Reach Of God) They realised that an idea seeming very good didn’t permit being sure; it needed to be provably equivalent to any correct alternative, like Bayesian probability. (My Bayesian Enlightenment)

They recognise that there are people more formidable than them, and hope that their writings might find a younger one of them who can then exceed them. (The Level Above Mine)

Becoming Stronger Y: Challenging The Difficult

Wanting to become stronger means reacting to flaws by doing what you can to repair them rather than with resignation. Do not ritualistically confess your flaws unless you include what you intend to do about them. (Tsuyoku Naritai! (I Want To Become Stronger)) If you are ashamed of wanting to do better than others, you will not make a real effort to seek higher targets. You should always reach higher, without shame. (Tsuyoku Vs The Egalitarian Instinct)

The difference between saying that you are going to do something, and that you are going to try to do something, is that the latter makes you satisfied with a plan, rather than with success, and allows the part where the plan has to maximise your odds of success to get lost. Don’t try your best; either win or fail. (Trying To Try) People don’t make genuine efforts to win even for five minutes. (Use The Try Harder, Luke)

A desperate effort is a level above wanting to become stronger, where you try as though your life were at stake. And there is a step above that, an extraordinary effort; it requires being willing to go outside of a comfortable routine, tackle difficulties you don’t have a mental routine for, and bypass usual patterns, in order to achieve an outcome that is not the default that you care greatly about. It is riskier than even a desperate effort. (Make An Extraordinary Effort)

A problem being impossible sometimes only means that when we query our brain for a strategy, we can’t think of one. This is not the same as being proven to be impossible. Genuine effort over years can find routes forward. Reality can uncaringly demand the impossible. We should resist our urge to find rationalisations for why the problem doesn’t matter (On Doing The Impossible), and sometimes we should shut up and do the impossible; take success at the impossible as our goal and accept nothing less. (Shut Up And Do The Impossible!)

We need to ask ourselves what we want, what it will require to accomplish, and set out to do it with what we know. (Final Words)

Becoming Stronger Z: The Craft and the Community

The prevalence of religion, even in scientific circles, warns us that the baseline grasp of rationality is very low. Arguing against religion specifically fails to solve the underlying problem. We should also be trying to raise the sanity waterline. (Raising The Sanity Waterline)

A reason that people don’t want to learn more about rationality is that they don’t see people who know about it as happier or more successful. A large part of this is that even the people who know a lot about it still know very little, compared to experts in other fields; we have not systematised it as a field of study, subject to large-scale investment and experimentation. One reason for this is that traditional rationalists/skeptics do not look at our lack of visible formidability and conclude that we must be doing something wrong; we treat rationality as a mere hobby. (A Sense That More Is Possible) It can take more than an incremental step in the direction of rationality to get an incremental increase in winning. (Incremental Progress And The Valley)

Martial arts dojos suffer from epistemic viciousness; a treatment of the master as sacred, exaltation of historic knowledge over discovery, a lack of data, and a pretense that lack of data isn’t a real problem. Hypothetical rationality dojos risk the same problems. (Epistemic Viciousness) If an air of authority can substitute for evidence, traditions can proliferate and wield influence without evidence. (Schools Proliferating Without Evidence)

Verification methods can be stratified into three levels. Reputational verification is the basic practice of trying to ground reputations in some real-world or competitive performance. Experimental verification is randomised, replicable testing, although this can involve very simple measures that are only correlated with the variables of interest. Organisational verification is that which, when everyone knows the process, is resistant enough to gaming to continue working. (3 Levels Of Rationality Verification)

Groups which do not concern themselves with rationality can praise agreement, encourage the less agreeing to leave, and enter an affective death spiral, which binds them all together and makes them cooperate. Typical rationalist groups do not cooperate; they speak and applaud disagreement but not agreement. If you are outperformed by irrational groups, then you are not rational, because rationality is about winning. Actual rationality should involve being better at coordinating, and we should work out how to be. Being half a rationalist is dangerous. (Why Our Kind Can’t Cooperate, Bayesians Vs Barbarians) Until atheist groups can outperform religious groups at mobilisation and output, any increase in atheism is a hollow victory. (Can Humanism Match Religion’s Output?) We need new models of community to replace the old, with new goals. (Church Vs Taskforce)

Do not punish people for being more patient than you; you should tolerate tolerance. (Tolerate Tolerance) We incentivise groups to improve by refusing to join them if they don’t meet our standards. The non-conformist crowd tends to ask way too much. If joining a project is good, you should do it if the problems are not too distracting, or if you could fix them. If you don’t see a problem as worth putting in the time to fix, it is not worth avoiding a group over. If we want to get anything done, we need to move in the direction of joining groups and staying in them. (Your Price For Joining)

Many causes benefit from the spread of rationality. We should not think of other good causes as in competition for a limited pool of reasonable thinkers, but instead cooperate with them to increase the number of reasonable thinkers. We should think of ourselves as all part of one common project of human progress. (Rationality: Common Interest Of Many Causes) We are very bad at coordinating to fulfil aligned preferences of individuals. Large flows of money tend to be controlled by the incentives of organisations. (Helpless Individuals)

Donating time is inefficient compared to donating money. Allocating money is how we allocate resources. Money is the unit of caring. If you’ll never spend it, you don’t care. (Money: The Unit Of Caring) We enjoy having done kind things, but the things that bring us enjoyment often do much less good than calculated effort, and enjoyment and social status can be had much cheaper when you don’t try to achieve them through your giving. Get enjoyment, status, and results separately; purchase fuzzies and utilons separately. (Purchase Fuzzies And Utilons Separately)

The bystander effect is a bias in which a group is less likely to react to an emergency than a single individual. (Bystander Apathy) This applies to problems encountered over the Internet, where you are always observing them as part of a group of strangers. (Collective Apathy And The Internet)

When we write advice, we are not working from universal generalisations, but surface level tricks. This means it validly works for some people but not others. We should beware other-optimising, because we are not good at knowing what works for others, and beware assuming that other people are simply not trying what worked for us. (Beware Of Other-Optimizing) Practical advice based on established theories tends to be more generalisable. (Practical Advice Backed By Deep Theories)

The danger of underconfidence is missing opportunities and not making a genuine effort. Sticking to things you always win at is a way smart people become stupid. You should seriously try to win, but aim for challenges you might lose at. When considering a habit of thought, ask whether it makes you stronger or weaker. (The Sin Of Underconfidence)

There is more absent than present in these writings. Defeating akrasia and coordinating groups are particular absences. But, hopefully, there is enough to overcome the barriers to getting started in the matter of rationality without immediately going terribly wrong. The hope is that this art of answering confused questions will be enough to go and complete the rest. This will require drawing on many sources, and require having some specific motivating goal. Go forth and create the art, and return to tell others what you learned. (Go Forth And Create The Art!)

And A Few Third-Party Sequences and Primers

Yvain has a primer on game theory. Lukeprog has a sequence on scientifically-backed advice for winning at life, to the extent that such advice is available. Orthonormal has a primer on decision theory, covering the motivation for discussing alternative decision theories and their implications, such as acausal trade. These three areas were popular topics for further discussion on Less Wrong.