(Base PDF Version, Current Version 1.0.0; the 32-page count does not include the title and introduction.)
(Single Page Edition of this document)
(Feedback is very welcome; I’m not the best writer, and contributions from anyone who has better ways to phrase things, or even wants to rewrite entire sections, would be welcome and happily credited. The best approach is probably to add comments to the source Google Document, or to email me if you want to clear a larger piece of work before starting. The email address is “john”, at this website’s domain.)
This is an attempt to summarise the propositions of the online ‘rationalist’ community, originally centred around Overcoming Bias and LessWrong, now largely dispersed to various communities and organisations like Slatestarcodex, the Center for Applied Rationality, the Machine Intelligence Research Institute, and the Effective Altruism movement, amongst others.
I have held off on writing this in the past out of a suspicion that I would not do it justice, but have decided that it is better done badly than not at all. My apologies to Eliezer Yudkowsky for mangling his work. I have reservations about parts of it, but in gist I agree with it or find it plausible.
The structure is that of a whirlwind tour, with little narrative beyond the ordering of the propositions; each is cited to its source post, to permit drilling down into interesting or contentious parts and reading the community’s existing critiques.
The aim is to enable useful examination of the ideas and their assumptions by people who have things to do other than read millions of words on the topic; to let those who have picked up ideas from the community see their surrounding context and related ideas; and to serve as an index by which those who disagree can identify their points of departure.
Hurrying Along, What Is “LW Rationality”?
- An empiricist, methodologically reductionist, materialist, atheist set of beliefs about epistemology, decision theory, cognition in general, and human cognition in particular, with proposed limitations and common errors resulting from the use of imperfect heuristics.
- A set of beliefs about how reductionism and materialism are grounded in epistemology.
- A set of beliefs about human values, in particular the belief that our true preferences are consequentialist, and that we pursue our preferences ineffectively.
- A partial set of strategies for mitigating or avoiding proposed errors in human cognition.
- A very partial set of strategies for more effectively achieving our values.
- A coined jargon of labels for these beliefs, limitations, errors, and strategies, used to reference them quickly and debate them and their further implications.
So, roughly a mixture of analytic philosophy and pop cognitive science. The basic attitude to human cognition is that of Kahneman’s Thinking, Fast and Slow, which I recommend. The consensus reference for LW rationality itself is Eliezer Yudkowsky’s core Sequences, blog posts with examples and stories of a transhumanist, speculative flavour. They have since been collected into the book Rationality: From AI to Zombies, which is available for free and is the best place to start if seeking a fuller understanding of the propositions here. A description of how it compares to and connects with academia, with references to related works and research, was written by lukeprog.
Some of these propositions are relatively well accepted; others proved controversial within the community. The tour follows, at roughly a page per sequence, using the book’s ordering of the sequences.