The Machine In The Ghost L: The Simple Math of Evolution
There are things in nature which look purposeful, and which people historically treated as evidence of a designer. If you look at them without cherrypicking, you find parts which appear to be working at odds with other parts, inconsistent with the purposefulness you’d expect from a single designer. Similarly, much of the purposefulness seems cruel, inconsistent with benevolent design.
If evolution could explain any possible observation, it would be useless as an explanation. In fact, evolution is consistent only with the kind of purposefulness which propagates a gene, with no filtering for kindness or any other kind of purposefulness. This is the kind of alien purposefulness we observe in nature. (An Alien God)
Evolution works incrementally. (The Wonder Of Evolution) Evolution is slow; a mutation multiplying the expected number of children by 1.03 has a 6% chance of reaching fixation, and takes an average of 768 generations to reach universality within a population of 100,000. The general formulae are 2s for the chance of fixation, and 2 ln(N) / s for number of generations, where N is the population size, and s is the multiplier minus 1. Complex mutations take a very long time, as each step must reach fixation. (Evolutions Are Stupid (But Work Anyway))
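A minimal Python sketch of those approximations (the function names are mine; the formulae are the ones quoted above):

```python
import math

def fixation_probability(s: float) -> float:
    """Approximate chance that a beneficial mutation with selective
    advantage s eventually reaches fixation (valid for small s)."""
    return 2 * s

def generations_to_fixation(population_size: int, s: float) -> float:
    """Approximate number of generations for a successful beneficial
    mutation to spread through the whole population."""
    return 2 * math.log(population_size) / s

s = 0.03       # multiplies the expected number of children by 1.03
N = 100_000    # population size

print(fixation_probability(s))         # 0.06, i.e. a 6% chance
print(generations_to_fixation(N, s))   # ~767.5, i.e. roughly 768 generations
```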
Price’s Equation is a very general equation stating that the change in the average value of a characteristic is equal to the covariance of that characteristic with relative fitness. It operates only to the extent that characteristics are heritable across the generations. If characteristics aren’t passed down more than a few generations, you will only ever observe a few generations’ worth of selective pressure.
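In standard notation (a conventional statement of the equation, not a quotation from the source), with $z_i$ the value of the characteristic in individual $i$, $w_i$ that individual’s fitness, and $\bar{z}$, $\bar{w}$ the population averages:

$$\bar{w}\,\Delta\bar{z} = \mathrm{Cov}(w_i, z_i) + \mathrm{E}\!\left(w_i\,\Delta z_i\right)$$

The covariance term is the selective pressure described above; the expectation term captures how faithfully the characteristic is transmitted from parent to offspring, which is why selection only accumulates to the extent that the characteristic is heritable.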
This means corporations do not significantly benefit from evolution. The same holds for nanodevices with cryptographically protected replication instructions, since with almost no heritable variation the covariance term is negligible. (No Evolutions For Corporations Or Nanodevices)
Because selection is concerned only with competition between genes, genes that are better for the species can be outcompeted. A successful gene might make all of its carriers’ descendants male (recursively), exist only to copy itself, or cause the bystander effect. It is possible to evolve to extinction. (Evolving To Extinction)
Group selection overriding individual selection is generally mathematically implausible, and was historically invoked to rationalise the belief that evolution would produce whatever outcome was better for the species. (The Tragedy Of Group Selectionism)
Humans are very good at arguing that almost any optimisation criterion supports almost any policy. Evolution is one of the few cases where we can examine what optimising for a specific criterion, with no rationalisation or bias, actually looks like. (Fake Optimization Criteria)
We don’t consciously have the deliberate goal of maximising our genetic fitness; it was not genetically fit for that goal to be encoded in us. We are adaptation-executors, not fitness maximisers. (Adaptation-Executers Not Fitness-Maximizers, Evolutionary Psychology) We want to optimise for other things. (Thou Art Godshatter)
Our psychological adaptations are tuned for success in the evolutionary environment. (An Especially Elegant Evpsych Experiment) The modern world contains things that match our desires more strongly than anything in the evolutionary environment. We call these superstimuli, and they may cause perverse behaviour. (Superstimuli And The Collapse Of Western Civilization)
The Machine In The Ghost M: Fragile Purposes
When observing an intelligent process, you can be confident about the expected end state while being uncertain about the intermediate steps, because intelligence is an optimisation process. (Belief In Intelligence) We normally model other intelligences by simulating them with our own brains, implicitly assuming something analogous to our own emotional architecture. This doesn’t work well for non-human intelligence. (Humans In Funny Suits)
Optimisation processes can find very small targets in large search spaces. Natural selection emerged accidentally, and is slow and stupid. Human brains are much better. Neither optimisation process is able to optimise itself, but we could design an AI to do so. If each further increase in optimisation power out did not require exponentially more optimisation power in, and the initial intelligence were sufficient, optimisation power could rise exponentially over time. (Optimization And The Singularity)
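A toy model of that last point, as a Python sketch (the function and its parameters are invented for illustration, not taken from the source): if reinvested optimisation power earns a constant, non-diminishing return, it compounds exponentially.

```python
def recursive_improvement(initial_power: float, returns: float, steps: int) -> list[float]:
    """Toy model: at each step the process reinvests all of its optimisation
    power and gains `returns` units of new power per unit invested."""
    power = initial_power
    history = [power]
    for _ in range(steps):
        power += returns * power   # output proportional to power applied in
        history.append(power)
    return history

# With a constant 10% return per step, power grows roughly as 1.1 ** steps.
print(recursive_improvement(1.0, 0.1, 10))
```

If `returns` instead shrank as power grew, the same loop would level off rather than explode; that is the case where each increase in power out requires exponentially more power in.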
People tend to think of programming computers as if they contain a little ghost which reads and performs abstract instructions. Your instructions define the entirety of the logic performed. If you do not know how to define something in terms you can program, you cannot reference it. Conversely, there is no additional entity capable of deciding to not do what you defined. (Ghosts In The Machine) When we find a confusing gap in our knowledge, we should try to fill it rather than reason around it. (Artificial Addition)
Terminal values are ends, instrumental values are means. (Terminal Values And Instrumental Values) Any generalisations at the macroscopic level will have exceptions; they will be leaky abstractions. This extends to instrumental values. (Leaky Generalizations) We must make any sufficiently powerful and intelligent optimisation process optimise for our terminal values, as a process given a described instrumental value may powerfully optimise for an easy exception we didn’t think of. (The Hidden Complexity Of Wishes)
Anthropomorphic optimism is when we expect a non-human process, such as natural selection, to choose strategies a human might choose, because we tend not to bring to the surface candidate strategies we know no person wants, and we’re good at rationalisation. (Anthropomorphic Optimism)
Dysfunctional organisations internally incentivise many actions that are detached from the original purpose of those actions, and this can be recognised. Civilisation in general does the same. (Lost Purposes)
The Machine In The Ghost N: A Human’s Guide To Words
Statements are only entangled with reality if the process generating them made them so. (The Parable Of The Dagger)
The logical implications of a given definition of a word are the same in all conceivable universes, and so do not tell us anything about our universe. Correlations between attributes do, but only so far as observations and those correlations are reliable. (The Parable Of Hemlock)
If you define a word rigidly in terms of attributes, and then state that something is that word, you assert it has all of those attributes. If you then go on to say that it thus has one of those attributes, you are simply repeating that assertion. The word only creates an illusion of inference. (Empty Labels)
If assigning a word a definition feels like it argues something, you may be making a hidden assertion of a connotation not in that definition. (Sneaking In Connotations) Alternatively, you may be incorrectly ignoring more direct evidence in favour of correlations between attributes represented by the words. (Arguing By Definition)
A concept is any rule for classifying things, and creates a category of things. The space of definable concepts is much larger than the space of describable things. We limit ourselves to relatively simple concepts in order to make their definition tractable. (Superexponential Conceptspace And Simple Words) Words are labels for concepts. (Words As Mental Paintbrush Handles)
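A rough sense of that scale, in a short Python sketch (the choice of 40 binary attributes is mine, purely for illustration):

```python
# With n binary attributes there are 2**n describable things, and a concept
# (an arbitrary rule for classifying things) picks out some subset of them,
# so there are 2**(2**n) definable concepts.
n = 40
things = 2 ** n
print(f"{things:,} describable things")      # about 1.1 trillion
print(f"2**{things:,} definable concepts")   # far too many to ever enumerate
```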
Efficient communication uses shorter codes for common messages and longer codes for uncommon ones. Likewise, we use shorter words for more common concepts and longer words for less common concepts. (Entropy And Short Codes) Creating a word defined by a list of attributes permits faster communication if and only if those attributes are correlated. Adding an uncorrelated attribute to a word’s definition means it takes more work to communicate accurately using that word than without it, which in practice results in inaccurate communication. (Mutual Information And Density In Thingspace)
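A small Python illustration of the coding principle, with made-up message frequencies:

```python
import math

# Hypothetical message frequencies: common messages get short codes.
frequencies = {"yes": 0.5, "no": 0.25, "maybe": 0.125, "unknown": 0.125}

for message, p in frequencies.items():
    # An optimal prefix code assigns roughly -log2(p) bits to a message
    # that occurs with probability p.
    print(f"{message!r}: about {-math.log2(p):.0f} bits")

# The expected code length then equals the entropy of the distribution.
entropy = -sum(p * math.log2(p) for p in frequencies.values())
print(f"expected bits per message: {entropy:.2f}")   # 1.75
```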
We automatically infer that the set of attributes that define a word are well correlated. We shouldn’t create definitions where that’s wrong. (Words As Hidden Inferences) Concepts can be misleading if they group things poorly. Using concepts that are similar to those used by others aids communication. (The Argument From Common Usage) Concepts dividing or excluding things on irrelevant criteria result in people assuming that there’s relevant differences correlated to those criteria. (Categorizing Has Consequences)
An intensional definition is a definition in terms of other words. An extensional definition is a definition provided by pointing at examples. The intension of a concept is the pattern in your brain that recognises it. The extension of a concept is everything matching that pattern. Neither type of definition fully describes its corresponding aspect.
Claiming that a concept with known extension includes a particular attribute ‘by definition’ hides the assertion that the things in its extension have that attribute. Claiming that a thing falls under a concept ‘by definition’ often hides the assertion that its attributes are typical of that concept. (Extensions And Intensions) Not all concepts we have have straightforward intensional definitions. Which concepts usefully divide the world is a question about the world. (Where To Draw The Boundary?)
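One way to picture the intension/extension distinction in code, as a minimal Python sketch (the ‘even number’ concept is an invented example): the intension is the rule that recognises members; the extension is the collection of things the rule matches.

```python
# Intension: the rule or pattern that recognises members of the concept.
def is_even(n: int) -> bool:
    return n % 2 == 0

# Extension: everything matching that rule (here, only a finite slice of it).
extension = [n for n in range(10) if is_even(n)]
print(extension)   # [0, 2, 4, 6, 8]
```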
You can think of any conceivable thing as described by a point in ‘thingspace’, whose dimensions include all possible attributes. Concepts describe clusters in thingspace. (The Cluster Structure Of Thingspace) These are similarity clusters. A dictionary is best thought of as a set of hints for matching labels to these clusters. (Similarity Clusters) People regard some entities in these clusters as more or less typical of them. (Typicality And Asymmetric Similarity)
Asking if something ‘is’ in some category is a disguised query for whether it should be treated the way things in that category are treated, for some purpose. You may need to know that purpose to answer the question for atypical cases. (Disguised Queries)
You can reduce connections in a neural network design by introducing nodes for categories, then inferring attributes from categories and categories from attributes rather than all attributes from all other attributes. (Neural Categories) Our brain uses a structure like this. If only some attributes match a category, the way this feels from the inside is like there’s a permanently unresolved question of fact about whether the thing is ‘in’ or not ‘in’ the category, because the ‘node’ is unsettled. (How An Algorithm Feels From Inside)
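A quick sense of the saving, as a Python sketch (the two network shapes are the ones described in the source; the function names are mine):

```python
def fully_connected_links(n_attributes: int) -> int:
    # Network 1: every observable attribute linked directly to every other.
    return n_attributes * (n_attributes - 1) // 2

def category_node_links(n_attributes: int) -> int:
    # Network 2: every attribute linked only to one central category node.
    return n_attributes

for n in (5, 20, 100):
    print(n, fully_connected_links(n), category_node_links(n))
# 5 attributes: 10 vs 5 links; 100 attributes: 4950 vs 100 links.
```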
Disputes over definitions are disputes over what cluster a given label points at, but feel like disputes over what properties the things in that cluster have. (Disputing Definitions) What intension is associated with what word feels like a fact about the wider world rather than just a fact about human brains. (Feel The Meaning)
If you are trying to discuss reality, and you find your meaning for a label differs from another person’s, you should taboo that concept and use others to communicate. (Taboo Your Words) You can also taboo concepts and try to describe the relevant parts of thingspace directly as an effective way to clarify anticipated experience and notice which aspects of the concepts are relevant. (Replace The Symbol With The Substance)
Our map of the world is necessarily smaller than the world, which means we must compress distinct things in reality into single points on our map. From the inside, this feels like observing one thing, rather than observing multiple things and compressing them together. Noticing where a category needs to be split is a key challenge in reasoning about the world; a good hint is a category with apparently self-contradictory attributes. (Fallacies Of Compression) Correct statements about different things merged into a single point may be inconsistent with each other; this does not mean part of reality is inconsistent. (Variable Question Fallacies)
Two variables have mutual information if knowing one tells you something about the other, and are independent if not. Conditional independence is where mutual information is shared among three or more variables such that, conditional on one of them, the others become independent. Where we have mutual information between many possible attributes of a thing, we create concepts to represent that mutual information, and then, as a simplification, treat the attributes as conditionally independent once we know that something matches the concept.
If a great deal of mutual information remains between attributes even after knowing that something matches a concept defined using those attributes, treating them as conditionally independent given the concept is an error. (Conditional Independence And Naive Bayes)
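A minimal Naive Bayes sketch in Python, using the bleggs and rubes of the source posts with probabilities invented for illustration:

```python
# Once the category is known, the attributes are treated as conditionally
# independent, so their likelihoods simply multiply.
priors = {"blegg": 0.5, "rube": 0.5}
likelihoods = {
    "blegg": {"blue": 0.95, "egg_shaped": 0.90, "furred": 0.90},
    "rube":  {"blue": 0.05, "egg_shaped": 0.10, "furred": 0.10},
}

def posterior(observed_attributes):
    """P(category | observed attributes) under the Naive Bayes assumption."""
    scores = {}
    for category, prior in priors.items():
        score = prior
        for attribute in observed_attributes:
            score *= likelihoods[category][attribute]
        scores[category] = score
    total = sum(scores.values())
    return {category: score / total for category, score in scores.items()}

print(posterior(["blue", "egg_shaped"]))   # strongly favours "blegg"
```

If two attributes still carried substantial information about each other even after the category was known, multiplying their likelihoods like this would be the wrong calculation; that is the error described above.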
Words can be defined wrongly, in many ways. (37 Ways That Words Can Be Wrong)