
The overlap between the Effective Altruism community and the Rationalist community is extremely high. They’re largely the same people. Effective Altruism gained a lot of early attention on LessWrong, and the pessimistic focus on AI existential risk largely stems from an EA desire to avoid “temporal-discounting” bias. The reasoning is something like: if you accept that future people count just as much as current people, and that the number of future people vastly outweighs everyone alive today (or who has ever lived), then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
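To make the arithmetic concrete, here is a toy version of that expected-value comparison; every number is invented purely for illustration, not a claim about actual probabilities:

    # Toy longtermist expected-value comparison; all figures are made-up
    # assumptions for illustration only.
    future_people = 1e16      # assumed number of potential future people
    risk_reduction = 0.001    # assumed cut in extinction probability

    ev_xrisk = risk_reduction * future_people   # expected future lives saved
    ev_direct = 1e6                             # assumed lives saved by a direct intervention
    print(ev_xrisk, ev_direct)                  # 1e13 vs 1e6: the x-risk term dominates

Once the assumed future population is large enough, almost any assumed risk reduction swamps any direct intervention, which is exactly why the premises deserve scrutiny.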

People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.






Intelligence and rational thought are useful, but like any strategy they have their tradeoffs and limitations. No amount of intelligence can overcome the chaos of long time horizons, especially when we're talking about human civilization. IMHO it's reasonable to pick a long-term problem/risk and focus on solving it. But it's pure hubris to think rationality will give you anything approaching high confidence about what the biggest problems and risks actually are on a 20-50 year time horizon, let alone 200-500 years or longer.

The whole reason we even have time to think this way is that we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the situation we live in is not stable at all: "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (e.g. peak oil, global warming), we really have no idea what the triggers and inflection points will be that change the social fabric in unforeseeable ways and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean, 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.


I think one can levy a much more specific critique of rationalism: rationalism is in some sense self-defeating. If you are rational you will necessarily conclude that the fundamental dynamic that drives the (interesting parts of) the universe is Darwinian evolution, which is not rational. It blindly selects for reproductive fitness at the expense of all else. If you are a gene, you can probably produce more offspring in an already-industrialized environment by making brains that lean more towards misogyny and sexual promiscuity than gender equality and intellectual achievement.

The real conflict here is between Darwinism and enlightenment ideals. But I have yet to see any self-styled Rationalists take this seriously.


I always liken this to the idea that we're all asteroids floating in space. There's no free will and everything is determined. We just see the whole thing unfold from one conscious perspective.

Emotionally I don’t subscribe to this view. Rationally I do.

My critique of rational people is that they don't seem to fully take experience into account. Assumptions + rationality + experience/data + whatever strong inclinations one has: that seems to be the full picture for me.


> no free will

That always seemed like a meaningless argument to me. To an outside observer, free will is indistinguishable from a random process over some range of possibilities. You aren't going to randomly go to sleep with your hand in a fire; there's some hard-coded biology preventing that choice. But that only means human behavior isn't completely random, hardly a groundbreaking discovery.

At the other end, we have no issue making an arbitrary decision where there's no way to predict what the better choice is. So what exactly does free will bring to the table that we're missing without it? Some sort of mystical soul? Well, what if that's also deterministic? Unpredictability is useful in game theory, but computers can get that from a hardware RNG based on quantum processes like radioactive decay, so it doesn't mean much.
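As a concrete sketch of that point: any program can pull unpredictability from the operating system's entropy pool, no soul required. A minimal mixed-strategy player, using Python's standard secrets module as a stand-in for a true hardware RNG:

    import secrets

    # Optimal play in matching pennies is an unpredictable 50/50 flip.
    # secrets draws from OS entropy; a hardware RNG based on, say,
    # radioactive decay could be swapped in without changing the logic.
    def play():
        return "heads" if secrets.randbelow(2) == 0 else "tails"

    print([play() for _ in range(10)])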

Finally, subjectively the answer isn’t clear so what difference does it make?


> That always seemed like a meaningless argument to me.

Same here: it does not match the lived experience. I notice that I care about free choice.

The idea that there's no free will may be a pessimistic outlook to some, but to me it's a strictly neutral one. It used to feel a bit negative, until I looked more closely and saw that there's a difference between looking at a situation objectively and having a lived experience. When it comes to my inclinations and how I want to live life, lived experience takes precedence.

My thoughts on this aren't fully sharp, but I don't think the concept even exists philosophically; I think that's also what you're getting at. It's a conceptual remnant from the past.


"Free choice" is the first step towards the solution to this paradox: free will is what a deterministic choice feels like from the inside. The popular notion of free will is that our decisions are undetermined, which must imply that there is a random element to them.

But though that is the colloquial meaning, it doesn't line up with what people say they want: you want to make your choice according to your own reasons. You want free choice. But unless your own reasoning includes a literal throw of the dice, your justifications deterministically decide the outcome.

"Free will" is the ability to make your own choices, and for most people most of the time, those choices are deterministic given the options and knowledge available. Free will and determinism are not only compatible, but necessarily so. If your choices weren't deterministic, it wouldn't be free will.


This is the position that is literally called compatibilist.

But when you probe people, while a lot of people will argue in ways that a philosopher might call compatibilist, my experience is that people will also strongly resist the notion that the only options are randomness and determinism. A lot of people have what boils down to a religious belief in a third category that is not merely a combination of those two, but infuses some mysterious third option in which they "choose" in a way they can't explain.

Most of the time, people who believe there is no free will (and can't be), like me, take positions similar to what you described, positions that - again - a proponent of free will might describe as compatibilist. But sometimes we oppose the term for the reason above: a lot of people genuinely believe in a "third option" for how choices are made.

And so there are really two separate debates on free will: Does the "third option" exist or not, and does "compatibilist free will" exist or not. I don't think I've ever met anyone who seriously disagrees that "free will" the way compatibilists define it exists, so when compatibilists get into arguments over this, it's almost always a misunderstanding...

But I have met plenty of people who disagree with the notion that things are deterministic "from the outside".


I'm a regular practitioner of magic, have written essays about it on Quora, and I can identify this mysterious third option as "the universe responding to your needs." You can use any number of religious terms to refer to it, like serendipity and the like, but none of them can capture the full texture of precisely how free will operates.

Approaching this subject from a rational perspective divorces you from the subject and makes it impossible to perceive. You have to immerse yourself in it, and one way to do that is magical practice. Having direct experience of the universe responding to your actions and mindset eventually makes it absurdly clear that the universe bears intelligence, and it's in this intelligence that free will operates.

I'd never thought before now to connect magic this directly to free will. Thanks for the opportunity to think this through! If you're interested in a deeper discussion, happy to jump on a call.


lmao, magic? seriously?

You betcha.

It is stronger than compatibilism. Compatibilism argues that free will and determinism are orthogonal. The argument I summarized is that free will must necessarily imply determinism.

I think that is a distinction without a difference, in as much as it's an excuse not to deal with it. But compatibilist "free will" must imply determinism unless some "magic" third alternative exists, because there isn't another option, and there is no evidence to suggest such a third alternative exists. So in practice every compatibilist I've had this discussion with has fallen back on arguing free will is compatible with determinism.

A definition doesn't spell out all its implications. In practice compatibilists do the deeper analysis that reveals that determinism is required for free will.

Opposing the term gives a wrong result too, as people jump to hard determinism.

However "hard" your determinism, there is no support for the notion of agency, and that is all that matters. Without agency, free will is nothing but an illusion, with same moral consequences.

This isn’t true. The position I argued for above is that agency derives from determinism because it is necessarily causal: you have agency because you make decisions in line with your goals. If you didn’t make decisions that were deterministically selected from your goals, you’d actually lack agency!

And agency as inner motivation exists and is determined by your character; otherwise it would not be yours.

I think a reasonable interpretation of the colloquial sense of incompatibilist free will is that people want to be (or have the experience that they are) their own causal origins or prime movers. That they originate an action that is not (purely) the effect of all other actions that have occurred, but in such a way that they decided what that action was.

From the outside, this is indistinguishable from randomness. But from the inside, the difference is that the individual had a say in what the action would be.

Where this tends to get tangled up with notions of a soul, I think, is that one could argue that such a free choice needs some kind of internal state. If not, then the grounds by which the person makes the choice is a combination of something that is fixed and their environment, which then intuitively seems to reduce the free-will process to a combination of determined and random. So the natural thing to do is then to assign the required "being-ness" (or internal state if you will) to a soul.

But there may exist subtle philosophical arguments that sidestep this dilemma. I am not a philosopher: this is just my impression of what commonsense notions of free will mean.


My point is that from the outside this doesn't look like randomness at all, unless you are mistaking ignorance of their motives for a random oracle. If you can infer what set of goals drives their decision making, and the decision making process itself (e.g. ADHD brain vs careful considered action), you can very much predict their decisions. Marketing and PR people do this every single day. People don't behave like random oracles; they behave like deterministic decision makers with complex, partly unknown goals, so our predictions of their behavior are not always correct. That's not the same thing as random.

People get emotional about free will because if you come to believe there is no free will it makes you question a lot of things that are emotionally difficult.

E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.

Similarly, wealth disparities can't be excused by someone choosing to work harder, because they had no agency in the "decision".

You can still justify some degree of punishment and reward, but a lack of free will changes which justifications are reasonable very substantially.

E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals, and that has emotionally difficult consequences. For example, for non-premeditated murders carried out in passion rather than e.g. gang crimes, the odds of someone committing another are extremely low, and the odds that the fear of a long prison sentence is an actual deterrent are generally low, and so long prison terms are hard to justify once vengeance is off the table.

And so holding on to a belief in free will is easier for a lot of people than the alternative.

My experience is that there are few issues where people get angry as easily as when you suggest we don't have free will, once they start thinking through the consequences (and some imagined ones...).


I think that whether or not you have free will is not so important when making these considerations.

Whether or not you have a choice and free will, you can influence and be influenced by other stuff, since that is how everything works.

> punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals

I do agree with that, and I think that whether or not you have free will is not significant. Being emotionally difficult is not what makes it good or bad in this case (and it also does not seem to be so emotionally difficult to me, anyways). Reducing reoffending rates is what is important.

(Another issue is knowing if they are actually guilty (you shouldn't arrest people who are not actually guilty of murder); this is not always certain, either.)

I also think that it should mean that prisoners should not be treated badly and that prison sentences should not be too long. (Also, they shouldn't take up too much space by the prisons, since they should have free space for natural lands and for other buildings and purposes, but that is not quite the same issue, though.)

However, there may be cases where a fine might be appropriate, in order to pay for damages (although if someone else is willing to forgive them then such a fine may not be required). This does not justify a prison sentence or stuff like that, though.

Also, some people will just not like them anymore if they are accused of murder, even if they are not put in prison and not fined. This is not the issue for police and legal things; it is just what it will be. And, if it becomes known, people who disagree with the risk assessment can try to avoid someone.

And, if someone does commit a crime again and may have the opportunity to do so again in the future, then the earlier assessment can be considered to have been wrong the first time, and this time hopefully you can know better.


That’s more effective as an argument to get rid of the most extreme forms of punishment (e.g. being drawn and quartered), not all forms of retribution.

In a world without free will, crimes of passion are simply the result of the situation, which means that person would always choose murder in that situation. People who would respond with murder in an unacceptably wide range of situations are an edge case worth considering even without free will. Alternatively, if we want nobody to respond with murder in a crime-of-passion situation, evolutionary pressure could eventually work even without free will.

> E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals, and that has emotionally difficult consequences. For example, for non-premeditated murders carried out in passion rather than e.g. gang crimes, the odds of someone committing another are extremely low, and the odds that the fear of a long prison sentence is an actual deterrent are generally low, and so long prison terms are hard to justify once vengeance is off the table.

That’s assuming absolute certainty about what happened. Punishments may make sense as a logical argument, even if only useful in a subset of cases, if you can't be absolutely sure which case something happened to be.

Uncertainty does a lot to align emotional heuristics and logical actions.


Whether or not you have free will is not relevant, as I had described in other comments.

> In a world without free will, crimes of passion are simply the result of the situation, which means that person would always choose murder in that situation. People who would respond with murder in an unacceptably wide range of situations are an edge case worth considering even without free will.

This is a significant argument. However, it is also worth considering whether that is actually accurate, and whether such a situation will occur (in a case where whoever would be killed would not effectively protect themself from this).

> That’s assuming absolute certainty about what happened. Punishments may make sense as a logical argument, even if only useful in a subset of cases, if you can't be absolutely sure which case something happened to be.

It is true that you do not have absolute certainty, but neither should you arrest someone who is not guilty.

> Uncertainty does a lot to align emotional heuristics and logical actions.

In some cases, yes, but it is not always valid. But, even if it is, this does not mean that you should not consider it logically if you are able to do so.


If there is no free will, thoughts about free will are predetermined and so is punishment. The punishers don’t have agency either. You seem to say that punishers do have free will, but criminals don’t?

I didn't say anything about whether free will exists or not, actually. The comment was specifically worded to explain why some people react to coming to believe there is no free will.

But, sure, I personally do not believe in free will. I'm saying there is no rational basis for thinking anyone has free will ever. I'm saying there is no evidence to suggest free will is possible. In fact, I'll go so far as to say that believing in free will is a religious belief with no support.

But that doesn't mean that events do not have effects on what happens next, just that we don't have agency. That an IF ... THEN ... ELSE ... statement is purely deterministic for deterministic inputs does not mean that changing the inputs won't affect the outputs.
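A minimal sketch of that analogy (the names are invented for illustration):

    # A purely deterministic rule still responds to its inputs: change
    # the input and the output changes, even though nothing "chooses".
    def respond(argument_is_convincing):
        if argument_is_convincing:
            return "update beliefs"
        else:
            return "keep current beliefs"

    print(respond(True))    # -> update beliefs
    print(respond(False))   # -> keep current beliefs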

If you "choose" to lay down and do nothing because you decide nothing matters because you don't have free will, you will still lose your job and starve. That it wasn't a "true" "free" choice does not change the fact that it has consequences.

One of the consequences of coming to accept that free will is an illusion is that you need to come to terms with what that means for your beliefs about a wide range of things.

Including that vengeance, which might seem moral to some extent if the person who did something to you or others had agency, suddenly becomes immoral. But we still have the feelings and impulses. Reconciling that is hard for a lot of people, and so a lot of people in my experience, when faced with a claim like the one I made above that we have no free will, tend to react emotionally to the idea of its consequences.


Are there deterministic solutions to the three body problem? Or the double pendulum? Or can you tell the temperature at any point on earth for, say, a given millisecond in, say, 6h (feel free to choose a preferred point and time)? And what precision could you realistically produce in that?

If there are non-deterministic processes that can be proven to exist, and those interact with deterministic processes, doesn't it follow that the deterministic process becomes non-deterministic (since the result of the interaction is necessarily non-deterministic), and that it is not continually deterministic?

So - can you see how nothing can be deterministic other than in isolation (or thought experiment really)?



There are deterministic solutions to the three body problem or the double pendulum in Newtonian mechanics.

We can’t measure things to arbitrary precision due to quantum mechanics, but philosophy isn’t bound by the actual physical universe we inhabit. Arbitrary physical models allow for the possibility of infinite precision in measurement and calculation, resulting in perfect prediction of future states forever. Alternatively, you could have a universe of finite precision (think Minecraft), which also allows for perfect calculation of all future states from initial starting conditions.
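To illustrate the determinism-vs-precision distinction, here is a sketch using the logistic map, a standard chaotic system standing in for the double pendulum to keep the code short:

    # The map x -> r*x*(1-x) is fully deterministic: the same seed always
    # gives the same trajectory. But a 1e-12 difference in the seed grows
    # until the trajectories are unrelated, so finite measurement
    # precision, not any lack of determinism, is what ruins prediction.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-12
    for step in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(step, abs(x - y))

With infinite precision the two runs would agree forever; with any finite precision they eventually diverge completely.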


I agree, and indeed there are solutions to chaotic systems - the problem being precision, as you mentioned. To me the precision problem is important: it reframes the "mechanical universe" as being way out of our grasp not because of our understanding but because of its structure. You got me!

Not certain that philosophy is not bound by our universe - is that something you could elaborate (or lend a link) on?

To apply these hypotheticals to our universe implies (from my understanding) that the density of information present at any and all times since its inception was present (while compressed) at its creation/whatever. I imagine I can find some proof of a theoretical maximum for information density and information compression, and compare that to the earliest state of the universe we can measure, to have a better idea if it tracks.


> Not certain that philosophy is not bound by our universe - is that something you could elaborate (or lend a link) on?

I simply mean it’s happy to assume perfect information, perfect clones, etc. The trolley problem generally ignores the possibility that choosing a different track could with some probability result in derailment, because the inherent question is simplified by design. We don’t need the possibility of perfect cloning to exist in order to consider the ramifications of such, etc.


I think I see a distinction in that a hypothetical universe with perfect information is pertinent precisely because it is comparable to our measurable universe and could be tested against.

I guess that's the point of any hypothetical, exploring a simplified model of something complex, but it's not easy to simplify the fabric of reality itself.


I don't find the consequences very hard to bear:

For example

> E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.

and

> E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals

are simply logical to me (even without assuming any lack of free will).

So what is emotionally difficult about this, as you claim?


I agree; they seem logical to me too, whether or not you have free will.

However, it would seem that not everyone believes that, though.

(It is not quite as simple as it might seem, because the situation is not necessarily always that clear, but other than that, I would agree that it is logical and reasonable, that punishment is only justified from the point of view of reducing offending and reoffending rates and only if it actually achieves those goals.)


Then you're highly unusual (in a good way). Look at the number of comments on social media with outcries over "too short" sentences, for example, and the lack of political support for shortening sentences or improving prison standards.

I'm saying it's emotionally difficult for people because I've had this discussion many times over the last 30+ years, and I've seen first hand how most people I have this conversation with tend to get angry and agitated over the prospect of not having moral cover for vengeance.


> Then you're highly unusual (in a good way). Look at the number of comments on social media with outcries over "too short" sentences, for example, and the lack of political support for shortening sentences or improving prison standards.

I live in Germany.

When I observe the whole societal and political situation in the USA from the outside, it seems to me that there are rather two blocks, where within each there is somewhat of an internal consensus regarding quite a few political positions, while each of the two blocks is actively fighting the other one.

For Germany, on the other hand, I would claim that opinions in society consist of lots of very diverse stances (though, in contrast to the USA, less pronounced at the extreme ends) on a lot of topics, which makes it hard to reach a larger set of followers or some consensus in a larger group, i.e. in-fighting about all kinds of topics without these positions forming political camps (and the factions for different opinions can easily change when the topic changes).

Thus, in the given example, this means that for a person crying out on social media that sentences are "too short," you will very likely find one crying out the opposite position.


“E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.”

False, the punisher also has no will, so it doesn’t matter.


I have much less patience for C++ than I would in a world with free will.

Since there's no free will, outcomes are determined by luck, and what matters is how lucky we can make people through pit-of-success environments. Rust makes people luckier than C++ does.

I also have much less patience for blame than I do in a world with free will. I believe, for example, that blameless postmortems lead to much better outcomes than trying to pretend people had free will to make mistakes, and therefore blaming them for those mistakes.

You can get to these positions through means other than rejection of free will, but the most robust grounds for them are fundamentally deterministic.


If there is no free will, then all arguments about what should be done are irrelevant, since every outcome is either predetermined or random, so you have no influence on whether the project at work will choose Rust or C++. This choice was either made 13 billion years ago at the Big Bang, or it is an entirely random process.

> If there is no free will, then all arguments about what should be done are irrelevant, since every outcome is either predetermined or random, so you have no influence on whether the project at work will choose Rust or C++.

This is not correct. Whether or not you have free will, stuff influences and is influenced by other stuff, so these arguments are not meaningless or worthless.

> This choice was either made 13 billion years ago at the Big Bang, or it is an entirely random process.

I had thought of this before, and what I had decided is that both of these are also independent of having free will. For example, if the initial state includes unknown and uncomputable transcendental numbers which can somehow "encode" free will and then the working of physics is deterministic, then it is still possible (although not necessarily mandatory) to have free will, even though it is deterministic.


Lack of free will doesn’t prevent logical arguments from seeming to work.

Depends on whether you consider facts or theory. Facts don't prevent logical arguments from seeming to work, but lack of free will is theory. When theory doesn't match facts, theory is wrong.

We have built systems that don’t have free will and respond to logical arguments, so no theory is required here.

Random processes can’t use logic.

Fuzzy logic deals with truth values between 0 and 1. You can for example map water temperatures in such systems without having arbitrarily important cutoff points.

Such systems often deal with uncertainty quite well, including random noise on their inputs. The output ends up a function of both logic and randomness, but can still be useful.
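A minimal sketch of the water-temperature example, with thresholds invented for illustration:

    # Fuzzy membership: instead of a hard cutoff like "hot if > 40C",
    # each temperature gets a degree of hotness between 0 and 1.
    def hot(temp_c):
        if temp_c <= 30:
            return 0.0
        if temp_c >= 50:
            return 1.0
        return (temp_c - 30) / 20.0   # linear ramp between 30C and 50C

    for t in (25, 35, 40, 45, 55):
        print(t, hot(t))   # 0.0, 0.25, 0.5, 0.75, 1.0

Noisy inputs then shift the output gradually rather than flipping it across an arbitrary cutoff point.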


Agreed, I don’t believe a system like that can access the platonic realm of mathematical truths. It’s clear that an electron doesn’t carry the laws of physics with it as it travels.

Why not? The human brain is hardly a perfect system of logic but can emulate one.

This is a strawman argument extended by those who rely on supernatural explanations. In reality, people's utterances and actions are part of the environment that determines future actions, just like everything else.

Sure, but that still doesn't matter: the fact that I wrote my previous comment is what caused you to write your response, but it's not like I had a choice to write that comment or some other one. The fact that I wrote that comment, as well as everything that led to me writing it (conversations with teachers, my parents letting me watch English cartoons so I learned English, etc.), was predetermined the moment the Big Bang happened, or is just a quantum fluctuation.

What I'm saying is that there's no logical point to the concept "should" unless you have some concept of free will: everything that happens must happen, or is entirely random.


Randomness-based free will still gives you a non-inevitable future.

So does a computer without free will acting on a physical RNG. Therefore it’s the RNG that matters, not free will.

For naturalistic libertarians, free will is partly constituted by indeterminism, not something entirely different.

Divorced from a religious context, it doesn't make any difference.

Which religious context, and why?

If you get down to the quantum level there is no such thing as objective reality. Our perception that the world is made of classical objects that actually exist at particular places at particular times and have continuity of identity is an illusion. But it's a really compelling illusion, and you won't go far wrong treating it as if it were the truth in 99% of real-world situations. Likewise, free will is an illusion, nothing more than a reflection of our ignorance of how our brains work. But it is a really compelling illusion, and you won't go far wrong treating it as if it were the truth, at least some of the time.

> If you get down to the quantum level there is no such thing as objective reality.

What do you mean by that? It still exists doesn't it? Albeit in a probabilistic sense that becomes non-probabilistic at larger scales.

I don't know much about quantum other than the high level conceptual stuff.


> It still exists doesn't it?

It's controversial, but here is the argument that the answer is "no": See https://flownet.com/ron/QM.pdf

Or if you prefer a video: https://www.youtube.com/watch?v=dEaecUuEqfc


That's a non sequitur.

>Under QIT, a measurement is just the propagation of a mutually entangled state to a large number of particles.

*eyeroll* So it's MWI in disguise, but MWI is quantum realism. The illusion they talk about is that the observed macroscopic state is part of a bigger superposition (incomplete observation). But that's dumb: even if it's part of a bigger state, it's still real, because it's not made up, but observed.


> it's MWI in disguise

That's kind of like saying that GRW is Copenhagen in disguise. It's not wrong, but only because it's making the word "disguise" do some pretty heavy lifting.

> MWI is quantum realism

No, it isn't because it can't account for the Born rule. See:

https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...


It's a strange conclusion. You seemingly consider one measurement and expect to see the Born rule, and when it doesn't manifest, then MWI is wrong? But the Born rule doesn't manifest at sample size one in any interpretation; it manifests only in a long string of measurements. If you consider a long string of measurements, you will see the Born rule as <Ψ|Born rule> = 1 - O(exp(-N)), which is basically a definition of empirical tendency.
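The sample-size point can be sketched with a plain classical simulation (this illustrates the statistics only, not any particular interpretation); assume a state with Born-rule probability P(0) = 0.36:

    import random

    p0 = 0.36   # assumed Born-rule probability of outcome 0
    for n in (1, 10, 1000, 100000):
        freq = sum(random.random() < p0 for _ in range(n)) / n
        print(n, freq)   # empirical frequency approaches 0.36 only for large n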

Well, now I see that QIT isn't quite there. You say classical behavior emerges by tracing, mathematically, not as a physical process? In MWI classical behavior emerges as a physical process, not by tracing. That "look at part of the system (in which case you see classical behavior)" is provided by linear independence of different branches, so each observer naturally observes their branch from inside, and it looks isolated from other branches.


> You seemingly consider one measurement and expect to see Born rule

Huh??? No, of course not. The Born rule is about probabilities. It cannot manifest in a single measurement.

> classical behavior emerges by tracing, mathematically, not as a physical process?

No. The mathematical description of classical outcomes emerges by tracing, which is to say, by discarding information. The physical interpretation of that is left open.
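For what it's worth, the tracing operation being discussed fits in a few lines: start from a Bell state, trace out (discard) one qubit, and what remains is a classical 50/50 mixture. A sketch using numpy:

    import numpy as np

    # Bell state |Phi+> = (|00> + |11>) / sqrt(2)
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho = np.outer(phi, phi)            # pure-state density matrix (4x4)

    # Partial trace over the second qubit: discard its information.
    rho_A = np.zeros((2, 2))
    for b in range(2):                  # sum over the discarded qubit's basis
        for i in range(2):
            for j in range(2):
                rho_A[i, j] += rho[2 * i + b, 2 * j + b]

    print(rho_A)   # [[0.5, 0], [0, 0.5]]: classical probabilities, coherence gone

The off-diagonal (interference) terms vanish only because information was thrown away, which is exactly the point of contention here.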

> In MWI classical behavior emerges as a physical process

That's right. MWI commits to a physical interpretation of the math. But there is no scientific or philosophical justification for this, and in fact, when you dig into the details all kinds of problems emerge that are swept under the rug by its proponents. Nonetheless, many MWI proponents insist that it is the One True Interpretation, including some who really ought to know better.

> each observer naturally observes their branch from inside, and it looks isolated from other branches.

Yes, I know. But this doesn't solve the problem. In order to get a mathematical description of me I have to trace the wave function in my preferred basis, which is to say, I have to throw out all of the other branches. And this is not just a computational hack. It's mathematically necessary. Discarding information is the only way to get classical, irreversible processes (like measurement) out of the unitary dynamics of the wave function. So a reasonable interpretation of the math is that I exist only if parallel universes don't. And I'm pretty sure I exist.

I'm not telling you this because I expect you to accept it, merely to show you that the MWI is not self-evidently the One True Interpretation.


If you insist that MWI must mean "a discrete number of clearly separated worlds", then yes, such interpretation would have a problem with the Born rule.

(That is apparently the definition the author of the linked article uses, guessing by his reaction: "Wait, what??? There is no 'well defined notion of how many branches there are?'")

I can only say that I have never met a proponent of MWI who meant this.


I am the author.

> I can only say that I have never met a proponent of MWI who meant this.

What can I say? There are a lot of MWI proponents who profess to believe this. Here, for example, is Sean Carroll answering the question, "How many parallel universes are there?"

https://www.youtube.com/watch?v=7tQiy5iCX4o

Of course, he doesn't actually give a concrete answer, but he very strongly implies that the question has an answer, i.e. that the question is a meaningful one to ask, and that implies that the MWI does in fact mean that there is a discrete number of clearly separated worlds.

In fact, I challenge you to find a single example of a prominent MWI proponent saying something in public (which is to say, in a public forum or a publication whose target audience is the general public) that even implies that the many-worlds of the MWI are not distinct, countable entities. I only know of one example, and it is very well hidden.

There is a more fundamental problem: if the MWI does not mean "a discrete number of clearly separated worlds" then it fails as an interpretation of QM, i.e. as a solution to the measurement problem. The whole point is that measurements appear to produce discrete outcomes despite the fact that the math says that everything is one big quantum superposition. If all you have to say about this is, "Yeah, it's all one big quantum superposition" then you have failed to solve the problem. You have simply swept the hard part under the rug.


> Of course, he doesn't actually give a concrete answer, but he very strongly implies that the question has an answer, i.e. that the question is a meaningful one to ask, and that implies that the MWI does in fact mean that there is a discrete number of clearly separated worlds.

In the video, Sean Carroll talks to a non-expert audience, so he must simplify some things, and then it is your or my guess what the unsimplified version was supposed to be. He says something like: "we don't know, even whether it is finite or infinite, but if it is finite it is a very large number such as 10^10^123". But notice that he also uses as an analogy an interval from 0 to 1, which can be split in half as many times as you need.

You see this as him believing in discrete separated universes, of which there is a definite number (potentially infinite). Yes, that makes sense.

I see another possible understanding: that he is talking about "meaningfully different" universes, because that is what we care about on the macro level. To explain what I mean, imagine that we observe two particles. Either of them can be in a huge number of possible positions, moving in a huge number of possible directions, at a huge number of possible speeds. But if we ask whether those two particles hit each other and transformed into another particle, that kinda collapses this huge possibility space into a "yes / no" question. Out of practically infinity, two meaningfully different options.

On a macro level, either the cat is alive or it is dead. Those are two meaningfully different states. If we focus on one particle in the cat's body, there is a continuum of where precisely that particle could be, and what momentum it has. So from the particle's perspective, there is a continuum of options. But from the cat's perspective, and the cat's owner's perspective, this continuum does not matter; unless it changes the macro state, i.e. the particle kills the cat, or at least maybe hits its neuron and makes it do something differently. So it seems possible to me that Sean Carroll talks about the number of worlds that are different from human perspective.

Then there is another problem in physics that we don't know how/whether the very space and time are quantized. We use the mathematical abstraction of a "real number" that has an infinite number of digits after the decimal dot, but of course that infinite number of digits can never be observed experimentally. We don't know. Maybe it is something like what Wolfram says, that on a deep level, spacetime is a discrete graph evolving according to some rules. If something like that would be the case, that would reduce the possible number of states in the universe, even on the micro level, to a huge but finite number. And the mixed state of the multiverse would consist of this finite number of branches, each of them assigned a tiny complex amplitude. So that's another way how things could get finite.

And I am saying this just as a random guy who never studied these things, I just sometimes read something on the topic, and some ideas feel to me like obvious consequences of the stuff that is "in the water supply". So I believe that if I see a solution to a problem, then if it makes sense, someone like Sean Carroll is 10000x more likely to notice the problem and the solution, and develop it much further than I ever could. Or when you make a survey, and a half or a third of people who study quantum physics for living say that some version of MWI seems like the correct interpretation to them, I don't believe there is a simple devastating argument against it that all of these people have simply missed.


> I am saying this just as a random guy who never studied these things

OK, well, let me tell you as a non-random guy who has studied these things extensively that the MWI is very commonly misrepresented. It is not a case of simplification for a lay audience; it is flat-out lying, at least most of the time. The math does not say that there are parallel universes. All the math tells you is that in order to recover the results of experiments you have to throw away some of the information contained in the wave function. MWI proponents interpret this by saying that the discarded information has to correspond to something real, and they call that thing "parallel universes". But there are three problems with this. First, the MWI does not explain the Born rule. Second, the math doesn't tell you whether or not the discarded parts of the wave function describe something real. It is possible that the mathematical operation of discarding parts of the wave function actually corresponds to a real physical phenomenon, i.e. that whatever is described by the discarded parts of the wave function actually ceases to exist. This is a tenable scientific hypothesis. It's not easy to actually make it work, but it can be done and has been done. It's called GRW collapse [1]. So anyone who tells you that the MWI is the only possible scientifically tenable interpretation of QM is lying. And anyone who leaves open even the possibility that the "parallel universes" contained in the wave function are discrete is also lying. The only MWI proponent I've ever seen be intellectually honest about this is David Deutsch, in his book "The Beginning of Infinity", chapter 11.

The third problem with the MWI is something called the "preferred basis problem". This one is harder to describe succinctly, and some people claim it has been solved, but I don't agree with them. In a nutshell, all two-state QM experiments rely on some macroscopic apparatus to split a particle into a superposition of two states. But if you model the entire universe as a quantum system, this apparatus is itself a quantum system that can be in a superposition of states, so you can't say, "The polarizing beam splitter is aligned vertically or it is aligned horizontally" any more than you can say "the cat is alive or it is dead" without begging the question.

---

[1] https://en.wikipedia.org/wiki/Ghirardi%E2%80%93Rimini%E2%80%...


It’s probabilistic at all length scales. For example our solar system may suddenly come undone according to simulations.

There is no local realism. That doesn't at all add up to all-in-the-head idealism.

That's true. There is a metaphysical reality "out there", but it is radically different from what we perceive. Hence: an illusion. Note that an illusion is emphatically NOT the same thing as a delusion. Illusions are real sensory experiences common to nearly all humans. They just happen not to correspond to reality.

It can't be that different, either, or our senses would be of no practical use.

That's not true. What our senses perceive (classical reality) is an emergent phenomenon of the underlying metaphysical truth (quantum mechanics). Those two things are about as radically different as you can get. That's why the measurement problem is a thing. But that doesn't mean our senses are of no practical use.

How would you know? If all that's known is either known through the senses or drawn out by reason from what is known through the senses, then by declaring that sense data do not reflect reality, you've cut yourself off from the possibility of knowing reality altogether.

> How would you know?

Because that is the best explanation for what I observe.

> by declaring that sense data do not reflect reality, you've cut yourself off from the possibility of knowing reality altogether

That is true, but only in the uninteresting sense that I can never completely eliminate the possibility that I am living in the Matrix. So yes, it's possible that I'm wrong about the existence of objective reality. But if objective reality is itself an illusion, it's a sufficiently compelling illusion that I'm not going to go far wrong by acting as if it were real.


> That is true, but only in the uninteresting sense that I can never completely eliminate the possibility that I am living in the Matrix. [...] But if objective reality is itself an illusion, it's a sufficiently compelling illusion that I'm not going to go far wrong by acting as if it were real.

That seems squishy, as what constitutes "going far wrong" is not meaningful under skeptical assumptions.

A better stance is one of cognitive optimism that avoids the irrationality of skepticism. Skepticism is irrational, because it leads to incoherence, and because there is no rational warrant to categorically doubt the senses. For doubt to be rational, there must be a reason for it. To doubt without reason is not to be rational, but to be willful, and willful beliefs cannot be reasoned with; they are not the product of evidence or inference — and they certainly aren't self-evident — but rather the product of arbitrary choice. The logical possibility of living in the Matrix is no reason for doubting the senses, just as the logical possibility of there being poison in your sandwich is no reason for doubting you'll survive eating it.

The difference between our positions is that I begin from a position of natural trust toward the senses and toward reason as the only rational possibility and default. I have no choice but to reason well or to reason poorly. I recognize that my senses and my inferences can err, but it does not follow that they always err. Indeed, the very claim that they can err presumes I can tell when they do.

So, if my inferences lead me to a position that undermines their own coherence, then I must conclude that my inferences are wrong (including those that led me to adopt a certain interpretation of, say, scientific measurements).

> Because that is the best explanation for what I observe.

But if your explanation involves contradiction of what you observe, then that is not only not the best explanation, but no explanation at all! An explanation cannot deny the thing it seeks to explain. Thus, by denying the objective reality of what you perceive, you are barred from inferring that denial.


> what constitutes "going far wrong" is not meaningful under skeptical assumptions.

I can be more precise about this. It means that the predictions I make on the basis of this assumption are very likely to be correct.

> Skepticism is irrational

No, it isn't. The vast majority of my beliefs about the world are not a result of direct observations, but nth-hand accounts. I believe, for example, that the orbit of Mercury precesses, but not because I've ever measured it myself, but rather because I heard it from a source that I consider credible. But assessing the credibility of a source is hard and error-prone, especially nowadays. There is always the possibility that a source is mistaken or actively trying to deceive you. And even for things you observe first-hand there are all kinds of cognitive biases you have to take into account. So skepticism is warranted.

> I begin from a position of natural trust toward the senses

That will lead you astray because your senses are unreliable.

> if your explanation involves contradiction of what you observe

But it doesn't. At worst it involves a contradiction of what I think I observe.


“ There’s no free will and everything is determined.”

Objects without free will aren’t able to come to conclusions like this.


I'd like to believe that there is no such thing as free will, but I just can't decide.

Look into Chaos Theory - the universe is not deterministic, you're good.

Or for a tldr look for the three body problem or try to find a solution to a double pendulum!


To the contrary, here's a series of essays on the subject of evolutionary game theory, the incentives created by competition, and its consequences for human wellbeing:

https://www.lesswrong.com/s/kNANcHLNtJt5qeuSS

"Moloch hasn't won" is a lengthy critique of the argument you are making here.


That doesn't seem to be on point to me. I'm not talking about being "caught in bad equilibria". My assertion is that rationalism itself is not stable, that the (apparent) triumph of rationalism since the Enlightenment was a transient, not an equilibrium. And one of the reasons it was a transient is that self-styled rationalists believed (and apparently still believe) that rationalism will inevitably triumph because it is rational, because it is in more intimate contact with reality than religion and superstition. But this is wrong because it overlooks the fact that what triumphs in the long run is simply reproductive fitness. Being in contact with reality can be actively harmful to reproductive fitness if it leads you to, say, decide not to have kids because you are pessimistic about the future.

> it overlooks the fact that what triumphs in the long run is simply reproductive fitness.

Why can't that observation be taken into account? Isn't the entire point of the approach accounting for all inputs to the extent possible?

I think you are making invalid assumptions about the motivations or goals or internal state or etc of the actors which you are then conflating with the approach itself. That there are certain conditions under which the approach is not an optimal strategy does not imply that it is never competitive under any.

The observation is then that rationalism requires certain prerequisites before it can reliably out compete other approaches. That seems reasonable enough when you consider that a fruit fly is unlikely to be able to successfully employ higher level reasoning as a survival strategy.


> Why can't that observation be taken into account?

Of course it can be. I'm saying that AFAICT it generally isn't.

> rationalism requires certain prerequisites before it can reliably out compete other approaches

Yes. And one of those, IMHO, is explicit recognition that rationalism does not triumph simply because it is rational, and coming up with strategies to compensate. But the rationalist community seems too hung up on things like malicious AI and Roko's basilisk to put much effort into that.


This argument proves too much. If rationalism can't "triumph" (presumably over other modes of thought) because evolution makes moral realism unobservable, then no epistemic framework will help you - does empirically observing the brutality of evolution lead to better results? Or perhaps we should hypothesise that it's brutal and then test that prediction against what we observe?

I'm sympathetic to the idea that we know nothing because of the reproductive impulse to avoid doing or thinking about things that led our ancestors to avoid procreation, but such a conclusion can't be total, because otherwise it is self-defeating: it is contingent on rationalist assumptions about the mind's capacity to model knowledge.


The point being made is that rationalism is a framework. Having a framework does not imply competent execution. At lower levels of competence other strategies win out. At higher levels of competence we expect rationalism to win out.

Even then that might not always be the case. Sometimes there are severe time or bandwidth or energy or other constraints that preclude carefully collecting data and thinking things through. In those cases a heuristic that is very obviously not derived from any sort of critical thought process might well be the winning strategy.

There will also be cases where the answer provided by the rational approach will be to conform to some other framework. For example where cult type ingroup dynamics are involved across a large portion of the population.


> Having a framework does not imply competent execution.

Exactly right. It is not rationalism per se that is the problem, it is the way that The Rationalists are implementing it, the things they are choosing to focus their attention on. They are worried about things like hostile AI and Roko's Basilisk when what they should be worried about is MAGA, because that is not being driven by rationalism, it is being driven by Christian nationalism. MAGA is busily (and openly!) undermining every last hint of rationalism in the U.S. government, but the Rationalist community seems oddly unconcerned with this. Many self-styled Rationalists are even Trump supporters.


> Being in contact with reality can be actively harmful to reproductive fitness if it leads you to, say, decide not to have kids because you are pessimistic about the future.

The fact that you can write this sentence, consider it to be true, and yet still hold in your head the idea that the future might be bad but it's still important to have children suggests that "contact with reality" is not a curse.


A couple of points on that. Mainly, why should one have to follow Darwinian evolution just because we are a product of it? It's similar to the natural law argument against homosexuality, that unnatural sex is wrong. The counter to that is that natural biology does not inform what is good or what we should do.

I'm sure you would be able to predict what a rationalist will say when you ask them what future they prefer: one that maximizes the number of humans, or one with fewer humans but better lives.


> why should one have to follow Darwinian evolution

That depends on what you mean by "follow". You have to "follow" Darwinian evolution for the same reason you have to "follow", say, the law of gravity. That doesn't mean you can't build airplanes and spacecraft, but you still have to acknowledge and "follow" the law of gravity. You can't just glue feathers to your arms, jump off a cliff, and hope for the best. (Actually, rationalists aren't even gluing feathers to their arms. They are doing the equivalent of jumping off a cliff because they just don't believe gravity applies to them.)

[UPDATE]

> unnatural sex is wrong

The problem with that argument is that homosexuality is not unnatural. Many, many species have homosexual relations. Accounting for this is a little bit challenging, but the fact is undeniable.

https://en.wikipedia.org/wiki/Homosexual_behavior_in_animals


Pointing to other species can be futile since they also eat their young and are horrible partners :)

Well, yeah, the whole "against nature" argument is bogus to begin with. But I think a counter-argument is stronger if it can be made even while accepting the other side's premises.

An issue with this line of reasoning is that Darwinian evolution fails to accurately describe real evolutionary processes.

A counterexample is meiotic drive, where alleles disrupt the meiotic process in order to favour their own transmission, even if the alleles in question ultimately produce a less fit organism.

Whilst this is not an inherently positive observation, I think it does illustrate that the fatalistic picture you're painting here is incorrect. There's room for tentative optimism.


> Darwinian evolution fails to accurately describe real evolutionary processes.

That is not correct. Darwin did make a mistake, but it was not in the fundamental dynamics of the process; rather, he chose the wrong unit of selection. Darwin thought that selection selected for individuals or species when in fact it selects for genes. Richard Dawkins is the person who popularized this correction, but Darwin knew nothing about genes (OoS was published years before Gregor Mendel's work became known), so he still gets the credit notwithstanding this mistake.


But rationalism, of the modern sort, isn't supposed to be descriptive; it's supposed to be normative -- rationality lets you win.

The problem is not with rationalism, it's with Rationalism (with a capital R), the cult-like phenomenon that has grown up around rationalism that fetishizes things like Bayes's theorem, hostile AI, Roko's basilisk, and the MWI.

Darwinism isn't a weakness of rationality. Teleology has a fine-tuning problem, while Darwinism is minimally fine-tuned to work from scratch, which can be said to be optimal. Also, Darwinism doesn't select for reproductive fitness; that's only a proxy goal. The true goal is survival, so you can produce more offspring only in a way compatible with the true goal.

> Darwinism isn't a weakness of rationality

I didn't say it was. I said that the Rationalist community is not taking the implications of Darwinism into account when they choose where to focus their attention. This is what leads them to fixate on hostile AI and the MWI when what they should be worried about is the rise of MAGA. But not only is that not what they are worried about, many self-styled Rationalists are Trump supporters.


I hesitate to nitpick, but Darwinism (as far as I know) is not really the term to use because Darwin's theory was limited to life on earth. Only later was the concept generalised into "natural selection" or "survival of the fittest".

I'm not sure I entirely understand what you're arguing here, but I absolutely do agree that the most powerful force in the universe is natural selection.


The modern understanding of Darwin's theory (even the original theory, not necessarily neo-Darwinian extensions of it) applies to the origins of life and to non-biological systems as well. Darwin himself was largely concerned with biology and restricted his published writings to that topic, but even he saw the application to the origin of life, and the implications for religion. Even if he hadn't, we generally still attach the discoverer's name to a theory even when it is applied to a domain outside their original area of concern.

The term "survival of the fittest" predates Darwin's Origin of Species, and was adopted by Darwin within his lifetime, btw.


The term "Darwinian evolution" applies to any process that comprises iterated replication with random mutation followed selection for some quality metric. Darwin himself would not have defined it that way, but he still deserves the credit for being the first to recognize and document the power of this simple process.

> If you are a gene, you can probably produce more offspring in an already-industrialized environment by making brains that lean more towards misogyny and sexual promiscuity than gender equality and intellectual achievement.

The fact that humans are intelligent at all, and that Enlightenment peoples currently dominate the world, suggests otherwise.


> Enlightenment peoples currently dominate the world

Huh??? How do you figure? AFAICT the world is dominated by Donald Trump, Xi Jinping, and Vladimir Putin (if you reckon by power) or Christians and Muslims (if you reckon by population). None of these individuals or groups can be properly categorized as "Enlightenment peoples", certainly not with a capital E.

https://en.wikipedia.org/wiki/Age_of_Enlightenment


Western nations have dominated the world for centuries in science, technology, and culture. Russia is a bit player and has been for some time. Even if China were to ascend to prominence equal to the US, it would be an outlier in an otherwise clear trend.

> Western nations have dominated the world for centuries

I guess that depends on what you consider "the world". It makes no sense to even talk about the West dominating "the world" before 1492. The first truly global Western empire was Britain's, but it was also the last. It was replaced by the U.S., but U.S. power was never really global. Even at the height of American power after WW2, the USSR was a credible rival. After the fall of the USSR in 1991 the U.S. was the sole undisputed superpower for a little while, but that came to an abrupt end on September 11, 2001 and in the subsequent wars in Afghanistan and Iraq.

I think you are over-extrapolating the past into the future. The mindset and culture that produced U.S. hegemony in the 20th century seems to me to be mostly extinct. The U.S. was indeed ruled by rationalism (more or less) from the time of its founding through the mid-to-late 20th century, but there is precious little of that left today. Certainly the power structure in the U.S. today is actively hostile to rationalism, and I don't see a whole lot of rationalism in play in the opposition either.


You got Darwinism exactly backwards. Darwinism and nature do not select like an algorithm. There is no cost function in reality and no population selection and reproduction algorithm. What you're seeing is the illusion of selection due to selection bias.

If gender equality and intellectual achievement don't produce children, then that isn't "darwinism selecting rationality out". You can't expect the continued existence of finite-lifespan organisms if there are no replacement organisms. Raising children is hard work. The people who believe in gender equality and intellectual achievement made the decision to not want more of themselves, particularly when their belief in gender equality entails not wanting male offspring. The alternative is essentially freeloading: expecting others, who do not share the beliefs, to produce children for you, and also to teach them the "enlightened" belief of forcing "enlightened" beliefs onto others (note the circularity; the initial conditions are usually irrelevant and often just a fig leaf to perpetuate the status quo).


> no population selection and reproduction algorithm

I never said there was. Darwin said it because he didn't know anything about genes, but that mistake was corrected by Dawkins.

> If gender equality and intellectual achievement don't produce children, then that isn't "darwinism selecting rationality out".

Why not?

> The people who believe in gender equality and intellectual achievement made the decision to not want more of themselves

That's the wrong way to look at it. Individuals are not the unit of reproduction. Genes are. Genes that build brains that want to have (and raise) children are more likely to propagate than genes that build brains that don't, all else being equal. So it is not rationality per se that is the problem -- rationality can provide a reproductive advantage because it lets you, for example, build technology. The problem is that non-rational brains can parasitically benefit from the phenotype of rational brains, at least for a while. But in the long run this is not a stable equilibrium.


I used to think that massive, very long term droughts might cause serious instability, but I have since changed my mind. In highly developed nations, the amount of irrigation infrastructure built in the last 100 years is simply stunning. Plus, national agriculture research programmes are always researching how to use less water and grow the same amount of product. About drinking water: rich places just build desalination plants. Sure, it is much more expensive than natural water sources (rivers, lakes, aquifers), but not expensive enough to cause political instability or serious economic harm. To be clear: everything I wrote is from the perspective of highly developed nations. In middle income nations and below, droughts are incredibly challenging to overcome. The political and economic impacts can be enormous.

See 'America Is Using Up Its Groundwater Like There’s No Tomorrow' (https://waterwatch.org/america-is-using-up-its-groundwater-l...) on what it has taken to support that stunning irrigation infrastructure.

You should not interpret that historical success to imply future success as it depended on non-sustainable groundwater extraction.

Eg, https://en.wikipedia.org/wiki/Ogallala_Aquifer

> Many farmers in the Texas High Plains, which rely particularly on groundwater, are now turning away from irrigated agriculture as pumping costs have risen and as they have become aware of the hazards of overpumping.

> Sixty years of intensive farming using huge center-pivot irrigators has emptied parts of the High Plains Aquifer.

> as the water consumption efficiency of the center-pivot irrigator improved over the years, farmers chose to plant more intensively, irrigate more land, and grow thirstier crops rather than reduce water consumption--an example of the Jevons Paradox in practice

How will the Great Plains farmers get water once the remaining groundwater is too expensive to extract?

Salt Lake City cannot simply build desalination plants to fix its water problem.

I expect the bad experiences of Okies during the internal migration of the Dust Bowl will be replicated once the temporary (albeit century-long) relief of using fossil water is exhausted.


> Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.

Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.

Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.

The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.


> Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.

The Rationalist community understands that very well. They even know how to put bounds on the unknowns and on their own lack of information.

> The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.

Right. Good thing they'd agree with you 100% on this.


> They even know how to put bounds on the unknowns and their own lack of information.

No they don't. They think they can do this because they've accidentally reinvented the philosophy "logical positivism", which philosophers gave up on because it doesn't work. (This is similar to how they accidentally reinvented reconstructing arguments and called it "steelmanning".)

https://metarationality.com/probability-limitations



The nature of unknowns is that you don't know them.

What's the probability of AI singularity? It has never happened before so you have no priors and any number you assign will be pure speculation.


The same is true of anything you're trying to forecast, by definition of its being in the future. And yet people have figured out how to make predictions more narrow than shrugging.

"It is difficult to make predictions, especially about the future."

Most of the time we make predictions based on how similar events happened in the past. For completely novel situations it's close to impossible to make a prediction and reckless to base policy on such a prediction.


That's strictly true, but I feel like you're misunderstanding something. Most people aren't actually doing anything truly novel, hence very few people ever actually have to even attempt to predict things in this way.

But it was necessary at the beginning of flight, and the flight to the moon would never have been possible without a few talented people being able to make predictions about scenarios they knew little about.

There are just way too many people around nowadays, which is why most of us never get confronted with such novel topics, and consequently we don't know how to reason about them.


>> It has never happened before

> Same is true about anything you're trying to forecast, by definition of it being in the future

There might be some flaws in this line of reasoning...


Yes, but making it more nuanced still doesn't change the point.

The singularity obviously never happened before, and if anyone bothered to read up on what they're talking about, they'd realize that no one is trying to predict what happens then, because the singularity is defined as the point at which changes accelerate to such a degree that we have no baseline to make any predictions whatsoever.

So when people speculate on when that is, they're trying to forecast the point forecasting breaks; they do it by extrapolating from known examples and trends, to which we do have baselines.

Or, in short: we know what it is to ride an exponent, we just never rode one long enough to fall off of it. Predicting the singularity is predicting when the exponent gets too steep to follow, which is not unlike predicting any other trend. Same methods and caveats apply.


"And the general absolutist tone of the community. The people involved all seem very... Full of themselves ?"

>And yet people have figured out how to make predictions more narrow than shrugging

And?


That's only one flaw in the theory.

There are others, such as the unproven, narcissistic and frankly unlikely-to-be-true assumption that humanity continuing to exist is a net positive in the long run.


"net positive" requires a human being existing to judge.

> "net positive" requires a human being existing to judge.

This is effectively a religious belief you are espousing.


What do you mean? Religion seems like a total non sequitur here. If anything I would argue the opposite: you would need religion or something like it for moral statements to have any meaning without the existence of sentient beings.

A net positive for whom?

Everything, of which humanity is a minuscule part.

In what sense are people in those communities "quite smart"? Stupid is as stupid does. There are plenty of people who get good grades and score highly on standardized tests, but are in fact nothing but pontificating blowhards and useless wankers.

They're members of a religion which says that if you do math in your head the right way you'll be correct about everything, and so they think they're correct about everything.

They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.

Unfortunately this led them to the conclusion that computers have more IQ than they do and so would automatically win any intellectual DBZ laser-beam fight against them / enslave them / take over the world.


If only I could +1 this more than once! I have learned valuable things occasionally from people in the rationalist community, but this overall lack of humility —and strangely blinkered view of the humanities and of important topics like the history of science relevant to “STEM”—ultimately turned me off to the movement as a whole. And I love science and math! It just shouldn't belong to people with this (imo) childish model of people, IQ, etc.

According to rationalists, humans don't work together, so you can't add up their individual intelligence to get more intelligence. Meanwhile building a single giant super AI is technologically feasible, so they weigh the intelligence of a single person vs all AIs operating as a collective hivemind.

> According to rationalists, humans don't work together, so you can't add up their individual intelligence to get more intelligence.

An actual argument would be that intelligence doesn't work like that. Two people with IQ 100 cooperating together does not produce an IQ 200 solution.

There is the "wisdom of crowds". If a random member of a group is more than 50% likely to be correct, the average of the group is more likely to be correct than its members individually. But that has a few assumptions, for example that each member tries to figure out things independently (as opposed to everyone waiting for the highest-status member to express their opinion, and then agreeing with it -- in that case the entire group is only as smart as the highest-status member).

But you cannot leverage this by simply inviting 1000 random people into your group and asking them to invent a Theory of Everything, because the assumption that each member is more than 50% likely to be correct does not apply in this case. So that is one of the limits of people working together.

(And this already conveniently ignores many other problems found in real life, such as conflict of interests, etc.)
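For what it's worth, the >50% claim above is just Condorcet's jury theorem, and it's easy to check numerically. A rough Python sketch (the 0.6 accuracy and the group sizes are arbitrary choices for illustration):

    import random

    # Condorcet's jury theorem, checked by simulation: if each member is
    # independently correct with probability p > 0.5, a majority vote is
    # more reliable than any single member, and improves with group size.
    p = 0.6           # arbitrary individual accuracy
    trials = 100_000

    for n in (1, 11, 101):  # odd group sizes, to avoid ties
        wins = sum(
            sum(random.random() < p for _ in range(n)) > n / 2
            for _ in range(trials)
        )
        print(f"group of {n:3d}: majority correct ~{wins / trials:.3f}")

    # Prints roughly 0.600, 0.753, 0.979. Drop the independence
    # assumption (everyone defers to the highest-status member) and the
    # group falls back to a single member's 0.6.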


For anyone who takes Effective Altruism or Rationalism seriously, I strongly recommend reading Jaron Lanier's "You Are Not a Gadget". It was written more than ten years ago, was prescient about the issues of social media, and also contains one of the most devastating critiques of EA: the idea of a circle of empathy.

You don't have to agree with any of this. I am not defending every idea the author has. But I recommend that book.


Thanks for the reco.

Technically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end. One way or another. The odds are just 1. At some point, there are no more future humans. The number of humans are zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.

I get the feeling these people often want to seem smarter than they are, regardless of how smart they are. And they want to get money to ostensibly "consider these issues", but really they want money for nothing.

If they wanted to do right by the future masses, they should be looking to the things that are affecting us right now. But they treat those issues as if they'll work out in the wash.


> Technically "long-termism" should lead them straight to nihilism. Because, eventually, everything will end. One way or another. The odds are just 1. At some point, there are no more future humans. The number of humans are zero. Also, due to the nature of the infinite, any finite thing is essentially a rounding error and not worth concerning oneself with.

The sums currently invested and donated to altruist causes are rounding errors compared to the GDPs of countries, so the revealed preference of those investing and donating is to care about the present as well as the future.

Are you saying that they should give a greater preference to help those who already exist rather than those who may exist in the future?

I see a lot of Peter Singer’s ideas in modern “effective” altruism, but I get the sense from your comment that you don’t think that they have good reasons for doing what they do, or that their reason leads them to support well-meaning but ineffective solutions. I am trying to understand your position without misrepresenting your point or goals. Are you naysaying or do you have an alternative?

https://en.wikipedia.org/wiki/Peter_Singer


I think it's essentially a grift. An excuse to do nothing while looking like you care and reaping rewards.

If they wanted to help, they should be focused on the now. Global poverty, climate change, despotic world leaders. They should be aligning themselves against such things.

But instead what we see is essentially not that. Effective altruism is a lot like the Democratic People's Republic of Korea, a bit of a misnomer.


Agreed.

To be pedantic, the DPRK is run via the will of the people to a degree comparable to any country. A bigger misnomer is the West calling liberal democracy just "democracy".


Even if I understand what you mean, DPRK is very far away from democratic countries. It's not comparable at all.

My point is that most people who see that name think countries like America, or those in Europe, are democracies, and that is partly why it would be a misnomer, when it is arguable that the DPRK state reflects the will of the people more than the West does.

The elite letting the people choose between a few candidates is not a democracy. There are no “democratic” countries in that way.


“When it is arguable that DPRK state is more a will of the people than the west.”

Ba da Ching


What do people like you use to prove the DPRK is authoritarian, a dictatorship, and so bad, while so smugly believing they have “freedom” and “democracy” better than the Global South? Western state departments and western state-directed media manufacturing consent - ba da ching

Don’t forget — the enemies of the west are all bad and evil. And the west is free and fair and democratic and not a scourge on the rest of the world.


Because when doctors go there to cure the blind from easily treatable malnutrition, the people drop to the floor in fear, praising the moon lord leader.

> When it is arguable that DPRK state is more a will of the people than the west.

It's not arguable, it's simply wrong. But to understand that you would have to understand much more about the DPRK.

> The elite letting the people choose between a few candidates is not a democracy.

It is, but more importantly, your framing is wrong. Every democracy has several levels of democratic institutions (local, state, federal, etc.); there are often 'surprise' election winners; there are also non-elective ways for non-elite people to influence policy. The DPRK has none of this.


They have aligned themselves in favor of global poverty, climate change and despotic world leaders.

A lot of them argue that poor countries essentially don't matter, that climate change is not an extinction event, and that there should be an authoritarian world government to prevent nuclear conflict, to minimize the risk of nuclear extinction.

>In his dissertation On the Overwhelming Importance of Shaping the Far Future (2013), supposedly “one of the best texts on existential risks,”[9] Nicholas Beckstead meditates on the “ripple effects” a human life might have for future generations and concludes “that saving a life in a rich country is substantially more important than saving a life in a poor country” due to the higher level of innovation and economic productivity attained in these countries.[10]

https://umbau.hfg-karlsruhe.de/posts/philosophy-against-the-...


The site you reference quotes Beckstead out of context, and either reading the context or looking at what he spent the next decade of his life working on would make it clear that he thinks the marginal dollar is better spent on saving lives in poor countries than rich ones. He well understands that his "other things being equal" in his dissertation essentially never holds in practice, and was writing for a philosophy audience where this kind of hypothetical is expected.

Meanwhile, on an actual EA website: https://www.givewell.org/charities/top-charities

> But they treat those issues as if they'll work out in the wash.

There's something pathologically, virally sick about wealth accumulation's primary function being the further accumulation of wealth. For a movement rooted in "rationalism," EA seems pretty irrationally focused on excusing antisocial behavior.


> wealth accumulation's primary function being the further accumulation of wealth

Then we are lucky that EA promotes giving more to charity as the primary function of accumulation of wealth.

> EA seems pretty irrationally focused on excusing antisocial behavior.

Is a guy getting a well-paid job at Microsoft and donating half of his salary to African charities really your best example of antisocial behavior?


What it promotes and what happens are two different things.

“The purpose of a system is what it does.”


> Then we are lucky that EA promotes giving more to charity as the primary function of accumulation of wealth.

No, we are not lucky. EA-good-because-charity-good is a brain-pretzel way of lobbying against equitable taxation.

> Is a guy getting a well-paid job at Microsoft and donating half of his salary to African charities really your best example of antisocial behavior?

You're inventing a competitive debate regarding a hypothetical "best example of antisocial behavior". I didn't target anyone specifically with any part of my post.


Well, I base my opinion about EA on specific people I happen to know, such as https://www.jefftk.com/news/ea , but of course that shouldn't stop anyone from making up edgy interpretations and posting them as a fact. Because what is the point of discussing actual effective altruists when imaginary villains are a much more interesting topic.

> then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.

This is the logic of someone who has failed to comprehend the core ideas of Calculus 101. You cannot use intuitive reasoning when it comes to infinite sums of numbers with extremely large uncertainties. All that results is making a fool out of yourself.
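To make that concrete, here's a toy Python sketch (every number invented for illustration): when the expected value is one huge payoff times a tiny probability that nobody can actually estimate, the conclusion swings by orders of magnitude on adjustments far smaller than any honest error bar.

    # Toy expected-value calculation dominated by a single uncertain
    # tail term. All numbers are invented for illustration.
    future_lives = 1e15      # hypothesized far-future population
    certain_good = 1_000     # lives saved by a concrete intervention

    # Three "estimates" of how much a project reduces extinction risk,
    # differing by factors of 100 -- well within honest uncertainty.
    for p in (1e-10, 1e-12, 1e-14):
        ev = p * future_lives
        print(f"p={p:.0e}  EV={ev:,.0f} lives  "
              f"beats the sure thing: {ev > certain_good}")

    # The same argument "proves" the speculative bet is worth 100,000
    # lives, 1,000 lives, or 10 lives -- so the comparison against a
    # concrete intervention flips on an unknowable parameter.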


This summarizes what irks me about the community.

They use technical terms (e.g., expected value, KL divergence) in their verbal reasoning only to sound rational, but don't ever mean to use those terms technically.


What's the seductive-but-wrong part of EA? As far as I can tell, the vast majority of the opposition to it boils down to "Maybe you shouldn't donate money to pet shelters when people are dying of preventable diseases" vs "But donating money to pet shelters feels better!"

That hasn't defined EA for at least 10 years or so. EA started with "you should donate to NGOs distributing mosquito nets instead of the local pet shelter".

It then moved into "you should work a soulless investment banking job so you can give more".

More recently it was "you should excise all expensive fun things from your life, and give 100% of your disposable income to a weird poly sex cult and/or their fraudulent paper hedge fund because they're smarter than you."


> That hasn't defined EA for at least 10 years or so.

Meanwhile, on an actual EA website: https://www.givewell.org/charities/top-charities

* Medicine to prevent malaria

* Nets to prevent malaria

* Supplements to prevent vitamin A deficiency

* Cash incentives for routine childhood vaccines


GiveWell is an example of the short-termist end of EA. At the long-termist end, people pay their friends to fantasize about Skynet at 'independent research institutes' like MIRI and Apollo Research. At the "trendy way to get rich people to donate" end, you get buying a retreat center in Berkeley, a stately home in England, and a castle in Czechia so Effective Altruists can relax and network.

It's important to know which type of EA organization you are supporting before you donate, because the movement includes all three.


I assume that GiveWell is the most popular of them. I mean, if you donate to MIRI, it is because you know about MIRI and because you specifically believe in their cause. But if you are just "hey, I have some money I want to donate, show me a list of effective charities", then GiveWell is that list.

(And I assume that GiveWell top charities receive orders of magnitude more money, but I haven't actually checked the numbers.)


Even GiveWell partnered with the long-termist/hypothetical risk type of EA by funding something called Open Philanthropy. And there are EA organizations which talk about "animal welfare" and mean "what if we replaced the biosphere with something where nothing with a spinal cord ever gets eaten?" So you can't trust "if it calls itself EA, it must be highly efficient at turning donations into measurable good." EA orgs have literally hired personal assistants and bought stately homes for the use of the people running the orgs!

This is what some EAs believe, I don't think there was ever a broad consensus on those latter claims. As such, it doesn't seem like a fair criticism of EA.

You can’t paint a whole movement with a broad brush, but it’s true of the leadership of the EA organizations. Once they went all-in on “AI x-risk”, there ceased to be a meaningful difference between them and the fringe of the LW ratsphere.


The people who run the forum you linked to.

That same link puts AI risk under the "far future" category, basically the same category as "threats to global food security" and asteroid impact risks. What's unreasonable about that?

It's not all bad, some conclusions are ok. But it also has ideas like "don't donate to charity, it's better to invest that money in, like, an oil fund, and grow it 1000x, and then you can donate so much more! Bill Gates has done much more for humanity than some Red Cross doctor!". Which is basically just a way to make yourself feel good about becoming richer, much like "prosperity gospel" would for the religious.

This is not a commonly-held position in EA.

It might be the parts that lead a person to commit large scale fraud with the idea that the good they can do with the stolen money outweighs all the negatives. Or, at least, that’s the popular idea of what happened to Sam Bankman-Fried. I have no idea what was actually going through that man’s mind.

In any case, EA smells strongly of “the ends justify the means” which most popular moral philosophies reject with strong arguments. One which resonates with me is that there are no “ends.” The path itself is the goal.


> “the ends justify the means” which most popular moral philosophies reject with strong arguments.

This is a false statement. Our entire modern world is built on the basis of "the ends justify the means": every time money is spent on long-term infrastructure instead of giving poor kids food right now, every time a war is fought, every time a doctor triages injuries at a disaster.


I don't think it's useful to conflate "the ends justify the means" with "cost-benefit analysis". You sometimes use the latter to justify certain means, but you don't have to, that's why they're different. When you believe that the ends justify the means, you can also just give no consideration at all to the ethics of the means. No doctor triaging patients would ever shoot a patient in the head so he could move onto one they thought was more important. Yes they might let patients die, but that's different than actively killing them.

In the framework of modern civilian medicine, sure.

I'm sure exactly what you described was done plenty of times in WW1 and similar conflicts of that era, and was seen as perfectly moral and rational.


> there are no “ends.” The path itself is the goal.

How does this apply to actual charity? "Curing malaria is not the goal. Our experiences during voluntourism are the true goal."


I've noticed this with a lot of radical streamers on both sides. They don't care about principles, they care about "winning" by any means necessary.

Winning at things that align with your principle is a principle. If you don't care about principles, you don't care about what you're winning at, thereby making every victory hollow and meaningless. That is how you turn into a loser at everything you do.


Yes, but they’re smug about it, which means they’re wrong.

Of course it sounds ridiculous when you spell it out this way.


Or something like: focusing on a high-paying job and donating money does absolutely nothing to solve any root or structural problems, which defeats the supposed purpose of caring about future people or being an effective altruist in the first place.

Of course the way your comment is written makes criticism sound silly.


Pet shelters is just the pretty facade. The same ideology would have you step over homeless elderly ladies and buy mosquito nets instead, because the elderly lady cannot have children so mosquito nets maximize the number of future humans more efficiently.

This tracks based on my limited contact with LessWrong during the whole Roko's Basilisk thing.

I quickly lost interest in Roko's Basilisk, but that is what brought me in the door and started me looking around the discussions. At first, it was quite seductive. There was a strange fearlessness there, a willingness to say and admit some things about humanity, our limitations, and how we tend to think, things that other great thinkers maybe danced around in the past. After a while it became clear that while there were a select few individuals who had found some balance between purely rational thinking and how reality actually works, most of the rest had their heads so far up their asses that they'd fart and call it a cool breeze. Reminded me of my brief obsession with Game Theory and realizing that even it's creators knew it's utility was not quite as advertised to the layman (as in, it would not really help you predict or plan for anything at all, just model how decisions might be made).


Pardon the "it's" autocorrect; I can't seem to edit on mobile (Harmonic).

Just musing out loud. "Utility" is like, for

Physics postgrads: "gauge"

Physics undergrads: "wavefunction"

Grade schoolers: "temperature"

These concepts are definitely useful for the hw sets; no understanding needed (or expected!)


I'm not familiar with any of these communities. Is there also a general bias towards one side between "the most important thing gets the *most* resources" and "the most important thing gets *all* the resources"? Or, in other words, the most important thing is the only important thing?

IMO it's fine to pick a favorite and devote extra resources to it. But that becomes less fine when you also start working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)


It's not the case. Effective altruists give to dozens of different causes, such as malaria prevention, environmentalism, animal welfare, and (perhaps most controversially) extinction risk. EA can't tell you which root values to care about; it just asks you to consider whether the charity is impactful.

Even if an individual person chooses to direct all their donations to a single cause, there's no way to get everyone to donate to a single cause (nor is EA attempting to). Money gets spread around because people have different values.

It absolutely does take some money away from other causes, but only in the sense that all charities do: if you give a lot to one charity, you may have less money to give to others.


The general idea is that on the margin (in the economics sense), more resources should go to the most effective and neglected thing, and the amount of resources I control is approximately zero in a global sense, so I personally should direct all of my personal giving to the highest-impact thing.

And in their logic the highest impact is to donate money, take high-paying jobs regardless of morality, and not focus on any structural or root issues.

Yeah, the logic is basically "sure there are lots of structural or root issues, but I'm not confident I can make a substantial positive impact on those with the resources I have whereas I am confident that spending money to prevent people (mostly kids who would otherwise have survived to adulthood) from dying of malaria is a substantial positive impact at ~$5000 / life saved". I find that argument compelling, though I know many don't. Those many are free to focus on structural or root issues, or to try to make the case that addressing those issues is not just good, but better than reducing the impact of malaria.

The other weird direction it leads is space travel.

If you assume we eventually figure out long distance space travel and humanity spreads across the galaxy, there could in the future be quadrillions of people, growing at some kind of exponential rate. So accelerating the space race by even an hour is equivalent to bringing billions of new souls into existence.


I don't see how bringing new souls (whatever those are) into existence should naturally qualify as a good thing?

Perhaps you're arguing as an illustration of the way this group of people think, in which case I understand your point.


You can make arguments for it starting with "killing billions of people would definitely be bad" or "some fraction of those people will likely share my genes, which I have a biological drive to pass on".

It encodes a slight bias towards human existence being a positive thing for us humans, but I don't think it's the shakiest part of that reasoning.


Nitpick: it bottoms out at quadratic growth in the limit, not exponential.
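To spell the nitpick out (a back-of-the-envelope, assuming settlement spreads at some bounded speed v through a roughly two-dimensional galactic disk at fixed population density rho):

    N(t) \approx \rho \cdot \pi (v t)^2 \sim t^2

Exponential growth only holds while the frontier isn't the binding constraint; once it is, a disk gives quadratic growth (and fully three-dimensional expansion would give cubic, \sim t^3, still not exponential).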

I think most everyone can agree with this: Being 100% rigorous and rational, reasoning from first principles and completely discarding received wisdom is a great trait in a philosopher but a terrible trait in a policymaker. Because for the former, exploring ideas for the benefit of future generations is more important than whether they ultimately reach the right conclusion or not.

The big problem is that human values are similar to neural network weights that can't be clearly defined into true/false axioms like "human life is inherently valuable" and "murder is wrong". The easiest axiom to break is something like "every human life is equally valuable", or even "every human life is born equal".

> Being 100% rigorous and rational, reasoning from first principles

It really annoys me when people say that those religious cultists do that.

They derive their bullshit from faulty, poorly thought out premises.

If you fuck up the very first calculations of the algorithm, it doesn't matter how rigorous all the subsequent steps are. The results are going to be all wrong.


It seems self-evident to me that epistemic humility _requires_ temporal discounting. We should not be confident we can predict the future well enough to integrate utility centuries forward.

EA always rubbed me the wrong way.

(1) The kind of Gatesian solutions they like to fund like mosquito nets are part of the problem, not part of the solution as I see it. If things are going to get better in Africa, it will be because Africans grow their economy and pay taxes and their governments can provide the services that they want. Expecting NGOs to do everything for them is the same kind of neoliberal thinking that has rotted state capacity in the core and set us up for a political crisis.

(2) It is one thing to do something wrong, realize it was a mistake, and then make amends. It's another thing to plan to do something wrong and try to offset it somehow. Many of the high-paying jobs that EA wants young people to enter are "part of the problem" when it comes to declining state capacity, legitimation crisis, and not dealing with immediate problems -- like the fact that one of these days there's going to be a heat wave that is a mass casualty event.

Furthermore

(3) Time discounting is a central part of economic planning

https://en.wikipedia.org/wiki/Social_discount_rate

It is controversial as hell, but one of the many things the Soviet Union got wrong before the 1980s was planning with a discount rate of zero, which led to many economically and ecologically harmful projects (see the sketch after this list for how much the rate matters). If you seriously think it should be zero, you should also be considering whether anybody should work in the finance industry at all, or whether we should have dropped a hydrogen bomb on Exxon's headquarters yesterday. At some point speculations about the future are just speculation. When it comes to the nuclear waste issue, for instance, I don't think we have any idea what state people are going to be in in 20,000 years. They might be really pissed that we buried spent nuclear fuel someplace they can't get at it. Even the plan to burn plutonium completely in fast breeder reactors has an air of unreality about it; even though it happens on a relatively short 1000-year timescale, we can't be sure at all that anyone will be around to finish the job.

(4) If you are looking for low-probability events to worry about I think you could find a lot of them. If it was really a movement of free thinkers they'd be concerned about 4,000 horsemen of the apocalypse, not the 4 or so that they are allowed to talk about -- but talk about a bunch of people who'll cancel you if you "think different". Somehow climate change and legitimation crisis just get... ignored.

(5) Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion; not least, "The Singularity" was talked about by the Jesuit priest Teilhard de Chardin long before sci-fi writer Vernor Vinge used it as the hinge of a mystery novel.
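Re: (3), here is the sketch promised above -- a minimal Python illustration of why the choice of discount rate dominates long-horizon planning. The present value of a benefit B delivered t years out is B/(1+r)^t; the rates and horizons below are arbitrary.

    # Present value PV = B / (1 + r)**t of a fixed future benefit,
    # under a few arbitrary social discount rates.
    B = 1_000_000  # benefit in today's units (arbitrary)

    for r in (0.00, 0.01, 0.05):
        for t in (10, 100, 1000):
            pv = B / (1 + r) ** t
            print(f"r={r:.2f}  t={t:5d}  PV={pv:,.2f}")

    # r=0.00 values a benefit 1,000 years out at the full 1,000,000 --
    # the far future swamps every present concern (the Soviet case).
    # r=0.05 values the same benefit at roughly 6e-16, i.e. zero.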


> When it comes to the nuclear waste issue, for instance ...

Nuclear waste issues are 99.9% present-day political/ideological. Huge portions of the Earth are uninhabitable due to climate and/or geology. Lead, mercury, arsenic, and other naturally-occurring poisons contaminate large areas. Volcanoes spew CO2 and toxic gasses by the megaton.

Vs. when is the last time you heard someone get excited over toxic waste left behind by the Roman Empire?


I agree with your lead (the issue is 99% political), but man does that last bit demand a rebuttal. Waste left behind by the Roman Empire isn't even remotely comparable to long-term radioactive material. I suggest having a look through the Wikipedia list of orphan source incidents to get an idea of what happens when people unknowingly come across radioactive material. https://en.wikipedia.org/wiki/List_of_orphan_source_incident...

I think of the widespread heavy metal contamination in China which existed even before the modern age of manufacturing. Some might be naturally occurring but you had people like this guy

https://en.wikipedia.org/wiki/Mausoleum_of_Qin_Shi_Huang


That's an interesting list - but the injury & death rates are several orders of magnitude below even lightning strikes. https://en.wikipedia.org/wiki/Lightning_injury#Epidemiology

Also, PaulHoule's original comment said "in 20,000 years". Cobalt 60 (for example) has a half-life of 5 1/4 years - so there really won't be any of it left by then.
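(Worked out, with the 5 1/4-year figure above: 20,000 years is roughly 3,800 half-lives, so the remaining fraction is about

    (1/2)^{3800} \approx 10^{-1144}

which is zero for any physical purpose -- a mole of cobalt-60 contains only about 6 x 10^{23} atoms to begin with.)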


What's the rate conditioned on being near an incident though? And these are small, isolated incidents. How does what we see extrapolate to large scale nuclear waste storage, a state that failed a few hundred years ago, and someone inadvertently digging it up?

No one is talking about stuffing cobalt 60 in yucca mountain (at least as far as I know).


Compared to abandoned/forgotten mines (that eventually cave in) and mega-scale chemical waste dumps/sites/spills, nuclear waste sites - especially ones that'll still be seriously dangerous centuries or millennia from now - are profoundly rare.

And the tech to detect that you're digging into radioactive stuff is far simpler than the tech to detect that you're digging into some sort of chemical waste, or a failing old mine or tunnel.

If millennia-in-the-future humans care all that much about what we did with our nuclear waste, it'll either be political/ideological, or (as PaulHoule suggested) just one more "they didn't leave it somewhere really convenient for us" deal.


>when is the last time you heard someone get excited over toxic waste left behind by the Roman Empire?

For archeologists, pretty much every time.


(3) "Controversial" is a weasel word AFAIC :)

The difficulty is in deriving any useful utility function from prices (even via preferences :), and as you know, econs can't rid themselves of that particular intrusive thought

https://mitsloan.mit.edu/sites/default/files/inline-files/So...

E: know any econs taking Habermas seriously? Not a rhetorical q:

http://ecoport.org/storedReference/558800.pdf


What makes it really bad is that you can't add different people's utility functions, or for that matter, multiply the utility function of some imagined galactic citizen by some astronomical amount. The question of "what distribution of wealth maximizes welfare" [1] is unanswerable in that framework and we're left with the Randian maxim that any transaction freely entered into is "fair" because nobody would enter into it if it didn't increase their utility function.

[1] Though you might come to the conclusion that greedier people should have the money because they like it more


Comic sophism ain't gonna make hypernormies reconsider lol

(Aside from the semi-tragic one to consider additive dilogarithms..)

One actionable (utility-agnostic) suggestion: study the measurable consequences of (quantifiable) policy on carbon pricing, because this is already quite close to the uncontroversial bits


E.g. precisely how inflationary can we make carbon credits?

E: by "uncontroversial", I meant amongst the Orthodox econs, so not Graeber & sympathetic heterodox.


> "The Singularity" was talked about by Jesuit Priest Teilhard de Chardin long before sci-fi writer Vernor Vinge used it as the hinge of a mystery novel.

Similarly, Big Bang was talked about by Catholic priest Georges Lemaître, and Bayes' Theorem was invented by Presbyterian minister Thomas Bayes. Does that prove anything beyond the fact that there are many smart religious people?


> If things are going to get better in Africa, it will be because Africans grow their economy and pay taxes and their governments can provide the services that they want

The problem with this argument is that the path to achieve this is unclear, and everyone who has tried has failed. In the absence of a clear path, it seems rational to set aside lofty ideals and do whatever good you can now.

> If you are looking for low-probability events to worry about I think you could find a lot of them.

Name one that hasn't already been considered that would be a serious threat to modern technological civilization.

> Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion

Supposing you're correct this thought is incomplete. Take it to its logical conclusion: a religion centered around open rational debate is bad because...?


>People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.

They don't even do this.

If you're reasoning in a purely logical and deductive way, it's blatantly obvious that living beings experience way more pain and suffering than pleasure and joy. If you do the math, humanity getting wiped out is in effect the best thing that could happen.

Which is why accelerationism ignoring all the AGI risks is the correct strategy, presuming the AGI will either wipe us out (good outcome) or provide technologies that improve the human condition and reduce suffering (good outcome).

Logical and deductive reasoning based on completely baseless and obviously incorrect premises is flat out idiotic.

You can't deprive non-existent people of anything.

And if you do, I hope you're ready for the purely logical, deductive follow-up: every droplet of sperm is sacred and should be used to impregnate.



