> The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions."
I really like the depth of analysis in your comment, but I think there's one important element missing, which is that this is not an individual decision but a group heuristic to which individuals are then sensitized. Individuals don't typically go so far as to (consciously or unconsciously) extrapolate others' logic forward and decide that it's dangerous. Instead, people get creeped out when others don't adhere to social patterns and principles that their culture has normalized as safe, because the consequences are unknown and therefore potentially dangerous; or when they do adhere to patterns that are culturally believed to be dangerous.

This heuristic can successfully identify things that really are dangerous, but it has a high false positive rate (people with disabilities, gender identities, or physical characteristics that are uncommon or unaccepted within the beholder's culture can all trigger it, despite posing no immediate or inherent threat) as well as a high false negative rate (many serial killers are noted to have been very charismatic, because they put effort into studying how to behave so as not to trigger this instinct). When we speak of something being normalized, we're talking about it becoming accepted by the mainstream so that it no longer triggers the ‘creepy’ response in the majority of individuals.

As far as I can tell, the social conservative basically believes that the set of normalized things has been carefully evolved over many generations, and therefore should be maintained (or at least modified only very cautiously) even if we don't understand why it is as it is. The social liberal believes that we, the current generation, are capable of making informed judgements about which things are and aren't harmful, to the degree that we can (and therefore should) continuously iterate on that set, approaching an ideal state in which the only things left outside it (i.e. still treated as creepy or off-limits) are those factually known to be harmful.
As an interesting aside, the ‘creepy’ emotion (at least, IIRC, in women) is triggered not by obviously dangerous situations but by ambiguously dangerous ones, i.e. situations that don't clearly match the pattern of either known-safe or known-unsafe.
> Sometimes people don't or can't practice this protection for various reasons, and that's fine; it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors".
The problem with the ‘us before them’ approach is that if two neighbourhoods each prioritize themselves over the other and compete (or go to war) to better themselves at the other's cost, generally both are left worse off than they started, at least in the short term: both groups making locally optimal choices leads (without further constraints) to globally highly suboptimal outcomes. In recognition of this, a lot of people, not just capital-R Rationalists, now believe that, at least in the abstract, we should really be trying to optimize for global outcomes.
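To make the "locally optimal but globally suboptimal" point concrete, here's a minimal prisoner's-dilemma sketch in Python. The payoff numbers are purely illustrative assumptions of mine, not anything from the discussion above:

    # Illustrative payoffs for two neighbourhoods, each choosing to
    # "cooperate" with or "compete" against the other.
    # Entries are (payoff to A, payoff to B); the numbers are made up.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "compete"):   (0, 5),
        ("compete",   "cooperate"): (5, 0),
        ("compete",   "compete"):   (1, 1),
    }

    # Each side's locally optimal move is "compete" no matter what the other
    # does (5 > 3 and 1 > 0), yet both competing yields (1, 1), which is
    # worse for everyone than both cooperating at (3, 3).
    for a in ("cooperate", "compete"):
        for b in ("cooperate", "compete"):
            print(a, b, PAYOFFS[(a, b)])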
Whether anybody realistically has the computational ability to do so effectively is a different question, of course. Certainly I personally think the future-discounting ‘bias’ is a heuristic that acknowledges the inherent uncertainty of any future outcome we might try to assign moral weight to, and that it should be accorded some respect. Perhaps you can make the same argument for the locality bias, but I suspect Rationalists (generally) either believe that you can't, or at least that we have a moral duty to optimize over the largest scope our computational power allows.
yeah, my model of the "us before them" question is that it is almost always globally optimal to cooperate, once a certain level of economic productivity is present. The safety that people are worried about is guaranteed not by maximizing their wealth but by minimizing their chances of death/starvation/conquest. Up to a point this means being strong and subjugating your neighbor (cf. most of antiquity?), but eventually it means collaborating with them, including them in your "tribe", and extending your protection to them. I have no respect for anyone who argues to undo this, which I think is basically the ethos of the Trump movement: by convincing everyone that they are under threat, they get people to turn on those who are actually working in concert with them (in order to enrich/empower themselves). It is a schema problem: we are so very, very far away from an us-vs.-them world that it takes genuine delusion to believe we're in one.
(...that said, progressivism has largely failed to dispel this delusion. It is far too easy to feel as though progressivism/liberalism exists to prop up power hierarchies and economic disparities, because in many ways it does, or has been co-opted to do so. I think on net it does not, but it should be much more cut-and-dried than it is. For that to be the case, progressivism would need to find a way to effectively turn on its parasites, that is, rent-extracting capitalism and status-extracting moral elitism.)
re: the first part of your reply. I sorta agree, but I do think people do more extrapolation on their own than you're suggesting. The extrapolation is largely based on pattern-matching to known things: we have a rich literature (in the news, in art, in personal experience and storytelling) of societal failure modes, including all kinds of examples of people inventing new moral rationalizations and using them to disregard personal morality. I think when people extrapolate rationalists' ideas and find things that creep them out, they're largely pattern-matching to arguments they've seen elsewhere. It's not just that they're unknowns. And those arguments are, well, real arguments that require addressing.
And yeah, there are plenty of examples of people being afraid of things that today we think they should not have been afraid of. I tend to think that's just how things go: it is the arc of social progress to figure out how to change things from unknown+frightening to known+benign. I won't fault anyone for being afraid of something they don't understand, but I will fault them for being closed-minded about it, for being unempathetic or cruel, or for not giving people the chance to prove themselves.
All of this is rendered much more opaque and confusing by the fact that everyone places way too much stock in words, though (e.g. the OP I was replying to, who was taking all these criticisms of the rationalists at face value). IMO this is a major trend that fucks royally with our ability as a society to make moral progress: we have come to believe that words supplant emotional intuition, in a way that wrecks our ability to actually understand what people are upset about (I like to blame this trend for much of modern political polarization). A small example, which I think everyone has experienced, is a person discounting their own sense of creepiness about somebody else because they can't come up with a good reason to explain it and it feels unfair to treat someone coldly on a hunch. That should never have been possible: everyone should be trusting their hunches.
(which may seem to conflict with my preceding paragraph... should you trust your hunches or give people the chance to prove themselves? well, it's complicated, but it also really depends on what the result is. Avoiding someone personally because they creep you out is always fine, but banning their way of life when it doesn't affect you at all or directly harm anyone is certainly not.)