It’s moments like reading this article that make me wonder why I’m putting so much energy into building a better SPA or engineering faster, more performant web animations. I feel like I’m not using my technical acumen to advance human civilization, but rather to fill my own coffers and those of the corporation I work for.
With that said, it is always good to keep in mind just how fragile civilization as a whole can be in the face of global catastrophic events. As this article highlights, these are events that could have happened but thankfully did not; if they had, the world would be very different today.
I write Haskell, I make computer games, I make web apps. Mostly because it's fun and satisfying. I'm also quite good at it and it comes easily to me.
I remember when I decided to go into engineering, a peer of mine from high school said "whateveracct, you're top of your class. Why aren't you going into medicine in order to do something more Worthwhile with your life?"
Stuck with me ever since. I was repulsed by the mindset but couldn't articulate why at the time. I later realized that it smelled of a deeply nihilistic (as in Nietzsche's ideas) view of the world. Ressentiment comes to mind.
Spending my conscious hours working with computers is a less nihilistic use of my time. I am not deferring this life's happiness and agency in order to "have made an impact" when my life is over and useless to me.
If you want more philosophy, consider Plato's Republic. An ideal society doesn't necessarily have everyone doing Most Important and Dire Work. It doesn't even have them doing what they're most "skilled" at! Instead, it has everyone living in alignment with their souls' desires and preferences. (e.g. A frail person with a Warrior's Soul should be a soldier before a strong person with an Artisan's Soul.)
You're looking for balance of "saving the world" (i.e. our responsibility with civilization and all lifeforms) and "enjoy your life". Clearly if everyone is concerned exclusively with enjoying their lives (i.e. hyper-hedonism), society collapses; that's deeply irresponsible. If everyone is also hyper-focused on self-propagation of our species with zero regard to our actual experience as conscious beings with rich inner lives, then clearly there's the risk of indeed making our inner lives much worse than they could be.
A framework I've seen recommended here for thinking about it (I've seen it related to Ikigai, a Japanese concept) is this: you need to find a balance between your needs and experience, your skills and potential, and what's good for society at large (in a soft max-min).
I think overall, however, that if we give it a little thought, it's easy to find something aligned with our interests and potential that can really make a good impact. If you're interested, I recommend the Effective Altruism community for a take on this (they're largely focused on more tangible things like Earning to Give) and 80,000 Hours. In all likelihood, just by being a functional member of our society (and giving what you can), as long as you don't work for some obviously evil enterprise (idk, making hyper-addictive things, oil field discovery, or something like that), you're probably helping society.
I encourage a different path as well: if you can program (or develop technology) and you're entrepreneurial (many people around here?) you can most likely make something that will make a good impact on society and even civilization at large. Furthering education with online tools, making educational games (or otherwise that promote growth and reflection), making tools more accessible, ... , improving the robustness and reliability of our systems, ..., the list goes on -- why not fulfill your potential to the best you can? Invent the future, Hack the planet.
> why not fulfill your potential to the best you can? Invent the future, Hack the planet.
This is exactly what I'm talking about, though. If I just use my programming skills to automate some common tasks, make some nice webapps, make some fun or artful video games, etc., that feels like enough.
As far as "why not?" goes: because the things I list are more interesting and fun for me. And they're already constrained enough by capitalism to "provide value."
And to act like they are not worthwhile life pursuits because they aren't pushing humanity forward is a nihilistic viewpoint, because it's focused on spending my life so that what comes after my life benefits, as if that makes it more morally correct.
Structurally, that's the same mindset as "do good deeds so you go to heaven." Just with a secular greater good. Nihilism nonetheless.
> And to act like they are not worthwhile life pursuits because they aren't pushing humanity forward is a nihilistic viewpoint, because it's focused on spending my life so that what comes after my life benefits, as if that makes it more morally correct.
What's so special about minds that exist at a later time? Note that under special relativity, spatial separation and temporal separation are relative to frame velocity. In effect, they're basically the same: other minds, at other places in spacetime. If you accept that minds other than yours are important (as important as yours; otherwise, what makes you so special to be the only consciousness that matters?), then you should help them whether they exist now or at some point in the future (of course, there's the question of how reliably you can impact the future, but those are more practical questions).
As others mentioned, I don't see how this ethic amounts to nihilism, much less Nietzsche's nihilism.
Your view seems more akin to solipsism: the denial that other minds exist or have value as well. There are strong arguments against solipsism from the computational theory of mind (which I may have discovered; I need to publish those).
If you're interested in this kind of stuff, I'm working on a formalization of ethics that aims to answer this sort of question more helpfully and precisely than we have historically (for now all I have to refer you to is a subreddit...).
> A nihilist is a man who judges of the world as it is that it ought not to be, and of the world as it ought to be that it does not exist. According to this view, our existence (action, suffering, willing, feeling) has no meaning: the pathos of 'in vain' is the nihilists' pathos – at the same time, as pathos, an inconsistency on the part of the nihilists.
Where is Nietzsche's nihilism in what you describe? How is your peer's worldview nihilistic when they place a high value on the effect of your potential actions on the world?
Surely for a nihilist, "to have made an impact" is a non-goal.
> Surely for a nihilist, "to have made an impact" is a non-goal.
Not when the reason for making that impact is, say, leaving a legacy after your death, getting into heaven (Christianity is very nihilistic according to Nietzsche), or "having done 'something' with your life." Those are all very nihilistic-as-in-Nietzsche.
Nihilism is first and foremost deferring this life for something else, and I think my peer's views definitely fell into that definition.
Hm it was how my professors in school taught nihilism and Nietzsche. It was definitely a high level takeaway from many readings we did and not a single quote or citation. But it was on the exam ;)
I don't have any sources behind that atm, but I'd guess it would be somewhere around the discussion of the afterlife and how it is a Christian driver of nihilism.
Yes, I genuinely do enjoy the work and I derive happiness from my field; in fact, I want to do more, not less, to drive the field forward.
That does not mean I don’t sometimes feel the guilt of not being a part of something that drives humanity forward, though, even if all I could contribute was working on the website that raises awareness! I just know what I’m capable of, but the reality vs. the ideal is the philosophical struggle here for me.
I instead try to make conscious investments to move things forward, and to make personal choices consciously on issues like this as best I reasonably can.
I still feel guilty sometimes that I’m not working on something like improving public perception of the safety of nuclear energy, ya know?
> I later realized that it smelled of a deeply nihilistic (as in Nietzsche's ideas) view of the world.
Who says you can't enjoy work in a different field, e.g. medicine? Perhaps you can even combine those skills with computer programming. E.g. build an exoskeleton that makes partially paralyzed people walk again.
I know how you feel. You might be interested in the 80,000 hours podcast as a way to gain a better understanding of the problems humanity faces: https://80000hours.org/podcast/
In terms of what to _do_ about it as a software developer, I'm still trying to figure that one out. I currently work at a BCorp which tends to make me feel better about the work I'm doing which at a minimum isn't doing harm to the world. You could try looking for a meaningful job at https://techjobsforgood.com/
I share the same thoughts, and I hope one day we reach a level of automation/AI/cheap energy where everyone gets "for free" some sort of living allowance to pursue their own objectives in science or art.
I’m not saying I’m going to crack some crazy software problem like general AI or something, but theoretically I’d love to be working on something that pushes renewable energy forward.
In what capacity, I don’t know. I’m pretty convinced we are overlooking geothermal energy, in the USA at least. I often wonder if we could relieve potentially dangerous pressure below Yellowstone while simultaneously harnessing its geothermal energy, for example.
But alas, I think the reality is I lack the expertise to even talk about this with any authority
Not saying you should, but it is possible to gain a pretty respectable level of knowledge in almost any area within just a couple of years. Take online classes or even complete a fully online MS program. There are many examples of engineers and entrepreneurs who started in CS and then transitioned to more personally meaningful areas. (That’s my plan anyway!)
Easiest solution to this is finding a company that can put your skills towards something that advances humanity. If no such company exists in your view, you can make one.
SPAs and code are tools. You can use them for all sorts of endeavours.
Exactly, this article is pretty goofy. There was never a real possibility that the Trinity test would ignite the atmosphere; the idea was proposed largely in jest and dismissed with the briefest of calculations. The modern-day equivalent would be the protesters who argued that the LHC (and before it, the RHIC at Brookhaven) shouldn't be powered up because it might create a black hole that swallows the Earth. Or those who argued that the RTG on the Cassini probe might break apart in a launch accident and poison vast swathes of life on Earth. Why didn't the author mention those concerns? They had just as much merit.
The Cuban Missile Crisis, on the other hand, was no joke. We were amazingly lucky that Murphy was on vacation that month. Our reliance on luck to avoid destroying humanity is much, much scarier than our reliance on reason.
- Something with the spread rate of COVID-delta, and a high lethality rate after a long incubation period.
- Enriching uranium in a rather small facility. This may already have happened and been kept quiet. Laser enrichment was talked about a lot in the early 1990s, and then suddenly, after some announcements from Lawrence Livermore, things got much quieter.[1][2] As high-powered lasers get better, this gets easier. There's now a startup in Australia working on this process again.
- Long term, a birth rate that's below replacement rate. That's the current normal in the developed world.
Long conjectured: some disillusioned uni student with the latest tech combining the R number of measles with the lethality of Ebola (and a dash of HIV on the side).
As a twist, "The Giving Plague" by David Brin is an interesting (short) take on it.
> Something with the spread rate of COVID-delta, and a high lethality rate after a long incubation period.
I sometimes think how "lucky" (obviously, luck is relative) humanity is that HIV is an STD and not an airborne virus with the transmissibility of Delta.
For some reason I feel HIV treatments would have been found much more quickly had that been the case. Just like I'm pretty sure mosquitoes would have been eradicated (or another solution would have been found) if they had killed as many people in developed countries.
I think that's a good and fair point. Reagan famously didn't even mention AIDS publicly until 1985, 4 years after it was discovered. Urban gay ghettos in NY and SF were basically experiencing a plague with young, otherwise healthy men dropping like flies, and the world at large either (a) didn't care, or (b) was glad that AIDS was "killing all the right people".
Your first point is something I’ve wondered as the biotech revolution gets under way.
I wonder how many years away we are from home hackers having the tools necessary to create a horror. Say, something with the spread rate of Covid Delta and which acts as an airborne prion disease.
I've always wanted to have a black-hole file-shredder on display on my desk. Not a big one, just a tiny speck that would generate crazy space-time distortion effects.
I figured that it would be hard to contain on earth because it would fall to the center of the earth. So the trick is to build it in orbit.
It seems far-fetched, but once you decide to work in space it opens up plenty of engineering shortcuts to scale up the LHC. Space is very big, and it's already cold and a good enough vacuum, so you just need to hold a few superconducting electromagnets in position.
You collide a few high-energy particles to form one, and you nurture it to make it grow.
Initially you move it by shining light on it or throwing things into it; while the black hole is under 1 kg, momentum conservation makes this as easy as playing marbles.
Once it is in position you feed it anything you want, and you build your space station and desk around the black hole. The more you shred into it, the more mass it gains and the harder it is to move around, but the greater the space-time distortion.
Funds, you ask? Price per kg to orbit has gone down tremendously. And there are plenty of rich people ready to use cryonics to attain immortality, so it didn't take much to convince one of them to hedge with a safer alternative for buying time. Because, you see, time passes slower near a black hole; thanks to Einstein's general relativity, that has been known for more than a century. So instead of dying, you move closer to your personal black hole and fast-forward to the future until the tech is ready to save you.
How could I have predicted that another stealth start-up (sponsored by the same guy, as I later discovered!) would have exactly the same idea? Now there are two black holes orbiting Earth and no way to divert them. Once they collide, in exactly 1337 days, their combined momentum won't allow them to orbit Earth anymore...
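For what it's worth, the time-dilation premise in the story above can be sanity-checked with the Schwarzschild factor sqrt(1 - 2GM/(r c^2)). A toy sketch (the masses and radii are purely illustrative; note that a desk-sized black hole gives essentially no dilation at any survivable distance):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Event horizon radius r_s = 2GM / c^2 for a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

def time_dilation_factor(mass_kg: float, radius_m: float) -> float:
    """Proper time elapsed per unit of far-away coordinate time for a
    static observer hovering at radius r: sqrt(1 - r_s / r)."""
    rs = schwarzschild_radius(mass_kg)
    if radius_m <= rs:
        raise ValueError("observer is inside the event horizon")
    return math.sqrt(1 - rs / radius_m)

# A 1 kg black hole has a horizon of ~1.5e-27 m, far smaller than a
# proton, so at desk distances the dilation is utterly negligible:
print(schwarzschild_radius(1.0))       # ~1.5e-27 m
print(time_dilation_factor(1.0, 0.5))  # ~1.0 at half a metre

# To meaningfully "fast-forward the future" you need a massive hole and
# a close orbit, e.g. hovering at 1.1 horizon radii of 10 solar masses:
m = 10 * 1.989e30
print(time_dilation_factor(m, 1.1 * schwarzschild_radius(m)))  # ~0.30
```

So the cryonics-alternative pitch would need a stellar-mass black hole, not a desk ornament; the marbles-and-momentum part of the story is the more plausible half.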
There are at least two sci fi stories based on the premise of vacuum-creating quantum black holes. In one, ("The Hole Man" by Larry Niven) aliens left such a device on Mars. Astronauts discover it, and one of them manages to release it from the force field that holds it in place, with disastrous implications for Mars.
In the other story (whose title I don't recall, but it was probably published in Analog), a bare something (perhaps a wormhole whose other end is in open space) is created, and begins to suck the Earth's air away. The hero builds a dome around it, but leaves a valve in the side to sell vacuum.
I imagine the first thing we would use programmable time dilation for would be to skip build times. Sure, it doesn't make you any more productive in non-dilated time, but just think how much you could extend your life by :-)
> Officials decided instead to open the door, and retrieve the men by raft and helicopter (see picture at the top of this article). While they wore biocontamination suits and entered the quarantine facility on the ship, as soon as the capsule was opened at sea, the air inside flooded out. ... that decision to prioritise the short-term comfort of the men could have released it into the ocean during that brief window.
Thinking about this, I am curious what the original procedure would've been. How did they plan on retrieving the astronauts, in a capsule on the ocean, without allowing the air inside to escape?
Does SETI not count? Easy to imagine the first received ETI signal being some world-ending infohazard.
[edit]: Targeting humans: a viral ideological, philosophical, or religious meme that causes us to self-destruct. A technological gift with insidious Trojan Horse functionality (a biotechnological machine with subtle side effects; a physics device that touches physics we haven't discovered). Irresistible instructions on how to join the friendly galactic internet -- which actually route to a paranoid deviant ETI that destroys anything that transmits (ala Dark Forest hypothesis).
Targeting machines: exploiting a buffer overflow in the Allen Telescope Array's signal processing pipeline to upload onto this planet a self-replicating superintelligent AI.
Targeting the superior chthonic race living in the Earth's mantle that we don't know about: friendly instructions on how to terraform the terrestrial surface and become a spacefaring species.
I myself think that if any proposed resolution of the Fermi Paradox holds water, it’s likely the Aurora Hypothesis, which simply states that colonizing space is incredibly dangerous and therefore hard to do at scale beyond your immediate solar system, hence why we haven’t seen anything in ours.
I don’t quite buy it, but it feels like the most plausible of all the alternative explanations for Fermi.
Because of phenomena like Simultaneous Discovery[0], I feel that we are among the first of the many civilizations in the Type I to Type II transition.
I personally believe we are on the precipice of finally colonizing another planet (or planets) long term. Given the distinct possibility that life all basically started around the same time (simultaneous discovery), we may just be among the first civilizations ever. I don’t currently believe that other galactic civilizations existed, or exist now, that are not at roughly the same pace as us. I admit I have little evidence here, but I suspect simultaneous discovery applies beyond just ideas; there may be some version of this phenomenon in natural evolution. Making the (albeit big) assumption that this is true, and that evolution elsewhere follows paths like our own, we just happen to be among the first civilizations at our level, and other alien life is likely to be similarly or less advanced than us.
I’m either very right or very very very wrong, I figure
I'm sceptical of any notion that "we are just among the first of civilizations ever", because as far as I understand from studies of Sun-like star systems that could be hospitable to life, Earth is a bit on the late side: close to the average, but definitely not an "early" formation. The time advantage or "development head start" that a sizeable fraction of other stars/planets comparable to ours (I'm not talking about "first generation" hydrogen stars here) have had is immense (e.g. a billion years).
In essence, the idea of us being among the first in our galaxy is compatible with the notion that intelligent life forms so rarely that there will only ever be a couple of civilizations in our galaxy, or that we're alone. Because if life is common enough that our galaxy would have (for example) a hundred intelligent-civilization-spawning events, then the expectation is that half of them would have come before us, and it would be really, really unlikely that all of them are on Earth-like planets younger than us while none are on the multitude of Earth-like planets orbiting Sun-like stars older than us.
Also, given all the time-consuming steps required for life to form, "around the same time" would optimistically mean something on the scale of +/- a million years. If the Proterozoic had taken 0.1% more or less time, that would be a difference of two million years; so if some civilization started much earlier or later than us, the gap would be far larger than that. If we encountered a planet within +/- a hundred thousand years of our progress, that would mean we started and progressed at a remarkably coincidentally equal pace; and if we encountered a civilization just a thousand years of technological development ahead of or behind us, that would be such an unbelievable coincidence that I'd consider some kind of intelligent designer necessary to explain it.
The concept of simultaneous discovery happens because the discoverers share an environment where the prerequisites for that discovery appear at the same time; this would not be plausible for civilizations forming naturally through evolution without any contact or influence between them. That would work only if they were, e.g., intentionally designed and "planted" on planets by some previous civilization.
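The scale mismatch in the argument above is easy to make concrete: even tiny relative differences in how long a biosphere takes to produce a civilization translate into absolute offsets that dwarf all of recorded history. A rough sketch (the ~4-billion-year figure for the age of life on Earth is an approximation):

```python
DEVELOPMENT_TIME_YEARS = 4_000_000_000  # rough age of life on Earth

# Relative difference in development time -> absolute head start
for relative_difference in (1e-3, 1e-4, 1e-6):
    head_start = DEVELOPMENT_TIME_YEARS * relative_difference
    print(f"{relative_difference:.4%} faster/slower -> {head_start:,.0f} year offset")
```

Even a one-in-a-million deviation yields a 4,000-year offset, so meeting a civilization within ~1,000 years of our own level would require a far finer coincidence than that.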
With that said, I just want to put this thought out there:
Since we don’t really know 100%, I posit that in order for life to evolve at all, intelligence is a prerequisite in nature, down to the microbial level. I’m not saying, nor do I believe, that there are little microbial societies or anything, but a certain amount of attainable intelligence must be there and shareable at that level of function.
Therefore, is simultaneous discovery simply an extension of nature, in that it’s within natural systems of all kinds to have this, or is it only in the context of civilization, which is synthetic to the natural world?
I suspect you're right, but I've also wondered at the fact that while Homo sapiens has been around and intelligent for at least 45k years, within perhaps 3000 years at least three and possibly six civilizations independently invented writing: the Babylonians and Egyptians and Minoans + Mycenaeans (probably three independent inventions), China, and the Maya.
> we are just among the first of civilizations ever,
Well maybe could say "one of":
We recall that the big bang was ~14 billion years ago.
Our solar system was made ~5 billion years ago (very rough arithmetic) out of the results of an exploded star. The star may have exploded ~7 billion years ago. In that case it may have been a first generation, hydrogen star and took ~6 billion years to form, make heavy elements, and explode (and make more heavy elements).
So that makes our star a second-generation star, and our solar system, formed from an exploded first-generation star, one of the first.
If all that is true, then, okay, "we are just among the first of civilizations ever".
IIRC (it's been a long time), first generation stars were very low in "metals" (which term astronomers use to mean any element heavier than helium), and what they did have was mostly (maybe entirely) lithium. They may not have had any planets, or if they did, the planets were pure gas (hydrogen and helium). Thus no life, much less intelligent life.
> Easy to imagine the first received ETI signal being some world-ending infohazard.
An interesting variation on this theme: a future Elon Musk figure dons his black hat and launches a satellite capable of stationkeeping at a point in line with a nearby star. The satellite emulates such an ETI.
Shame there wasn't a reference to Iain M. Banks's work Excession, which dealt with this subject.
"An Outside Context Problem was the sort of thing most civilizations encountered just once, and which they tended to encounter rather in the same way a sentence encountered a full stop."
I'm not sure I agree with these, honestly. Take the first one (opening Apollo 11's doors "prematurely"). No matter what decision was made, no ill could come of it, since there weren't any microbes to be unleashed.
Deciding not to launch a "counterattack" with nuclear weapons based on a hunch that the alert you're getting is false; THAT'S a world-changing decision.
Feels like gene editing is at a point where more frequent lab escapes of self-replicating airborne pathogens are just a question of time. Not to mention that the speed of progress in biotech and synthetic biology means this stuff can soon be done in a garage.
Beyond pathogens, perhaps when life (inevitably) learns to change the code that makes up life, the recursion leads to implosion soon after
The article fails to note reasons why neither apparent existential risk was possible.
We already had moon rocks on Earth, blasted off of the moon by bolide strikes. Likewise Mars rocks, and bits of asteroids and of other planets' moons. Maybe even Venus rocks.
Relatedly, the energy released in certain bolide strikes on Earth far exceeds anything achieved even in Tsar Bomba, itself thousands of times more powerful than Fat Man.
I have not seen any analysis of whether a bolide strike might incidentally produce substantial fusion activity. At the pressure and temperature produced, it is hard to imagine it not occurring.
It seems like there ought to be long-lived products of such fusion detectable in the K-T layer, alongside whatever the bolide carried. Some might be weakly radioactive and thus detectable at very tiny concentration.
"We already had moon rocks on Earth, blasted off of the moon by bolide strikes." True, but they (and any other meteorites) were likely sterilized during passage through the Earth's atmosphere.
If you entertain the MWI (many worlds interpretation) of quantum mechanics, then maybe you could say these moments did end humanity on other branches of spacetime. The idea is useless in practice, but fun to think about, if the popular fiction of the day is any indication. :)
In some of these universes, the LHC did indeed spawn a world-destroying black hole.
And in our universe, the LHC failed several times when they were first starting it up (the Quench Incident, for example), causing a year's delay--which seemed so unlikely at the time that I really wondered...
I take it for granted that human beings won’t last forever as a species, and that’s probably a good thing. It would appear we have the unique privilege of creating our replacements from scratch. Hopefully we don’t screw that up and finish with a gray goo scenario.
(I'm interpreting your comment as "portfolio of planets")
But the corporations would want to spread to all of them. Just like big cities all over the globe look the same, you can buy the same stuff. Trade is good, and I love corporations (never understood why anyone on the left would be against the concept - it's ideally suited to create a "container" for cooperating individuals), but even on our own main planet we can see that we have strong forces of equalization and winner takes all.
You would have to somewhat isolate the places if you want diversity. Same thing that helps with biological diversity. Otherwise it will all just be near-copies of sameness.
In that sense, it would just be like distributed storage of the same content. It helps when one place gets wiped out by accident, but it does not provide true robustness against the "unknown unknowns" (to quote Rumsfeld) that the universe occasionally throws at us.
The article has some great examples of risks that were hyped out of proportion to their true severity. Should these things be considered? Yes. But only a little. If you halt programs completely for tail risks you'll never get anywhere.
Modern examples, IMO, are:
- Kessler Syndrome
- Trying to prevent an asteroid hitting the earth
- Nuclear war making the entire earth uninhabitable
In the documentary The Man Who Saved The World, Stanislav Petrov travels to meet Kevin Costner, his favourite actor, at home. Costner asks him, if he hadn't acted as he did, how many people would have died. You expect him to say, 50 million, or something. He says, everyone. Everyone on earth would've died. It's a chilling moment. (And there's been a lot more than that one near miss.) What kind of crazy species are we that we build a system that when it malfunctions (as it did that day) seems likely to kill everyone on the planet?!
Now I read on HN that the danger of nuclear war is hyped out of proportion to its true severity, and should be considered only a little. Sorry, maybe I misunderstand. I have read a similar thing on HN a few times though, people that seem to think nuclear war really would be no big deal at all.
But it always seems weird to me how some people are so worried to the point of obsession about global warming without apparently ever giving a thought to the ever-present risk of full-scale nuclear war, something infinitely worse. (Well, hardly "war", just a flurry of button-pressing for a few minutes.)
Nuclear war (especially during the cold war, when we had many more warheads than now) would absolutely be a big deal and a horrible mass death, but it would not have ended humanity.
Like, if there's a scale of catastrophic events from 0 to 10, where 0 is no big deal and 10 is human extinction, then the worst events humanity has ever seen are somewhere below 1 on that scale, and absolutely horrific mass death is something like 2/10, because the gap between the damage required for that and the damage required for extinction is so much larger than the gap between no big deal and worse mass death than we have ever seen. Arguably the worst damage that life on Earth has seen is the dinosaur-ending asteroid, and IMHO a fraction of Homo sapiens (though perhaps not our civilization) could survive even that. A full-scale USSR-USA exchange in the 1960s might perhaps have killed most people in the northern hemisphere and caused a nuclear winter, decreasing crop yields with an associated famine; but if just a fraction of people in South Asia, Africa, and South America survive the famine while the North nukes itself to radioactive glasslands, that's very, very far from extinction.
Killing half of humanity would literally be an unprecedented level of horror, but it would not end our civilization. Killing 90% of humanity would likely end our civilization-as-we-know-it but would not end our species; it would bring us back to the population level that Earth had in the 1700s. And killing 99.99% of humanity would definitely destroy our civilization, but it would "just" push our population back to the numbers we had ~70,000 years ago: horrific for every individual, but still not an extinction event.
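The survivor arithmetic above can be checked in a few lines (the 8-billion baseline is an assumption; the exact current population doesn't change the orders of magnitude):

```python
WORLD_POPULATION = 8_000_000_000  # rough 2020s baseline (assumption)

scenarios = [
    (0.50, "unprecedented horror, but civilization persists"),
    (0.90, "roughly the global population of the 1700s"),
    (0.9999, "roughly the population ~70,000 years ago"),
]

for fraction_killed, analogue in scenarios:
    survivors = round(WORLD_POPULATION * (1 - fraction_killed))
    print(f"{fraction_killed:.2%} killed -> {survivors:,} survivors ({analogue})")
```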
And, in particular, I also think that nuclear winter is another one of these over-hyped scenarios. If you do the napkin-math it doesn't really work out.
Nuclear war will probably result in every major metropolitan area in the participating countries being obliterated. But I'm willing to bet that non-participating countries will survive with their civilization intact, especially countries with high food security that can deal with a total collapse of international trade.
Every action carries that risk somewhere in its very long tail. You have to assess the likelihood of the bad event occurring, and there is a point where it is so unlikely that it need not be considered at all. I don't think humanity is quite stupid enough to knowingly release its Ice-9 just yet.
> If you halt programs completely for tail risks you'll never get anywhere.
If your tail risk is the end of civilization then it doesn't matter how small the probability. You'd be fucked with certainty on any long enough timeframe.
Some tail risks are too large to take. Eventually your number comes up.
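The "long enough timeframe" point is just geometric compounding. A quick sketch (the one-in-a-million annual probability is purely illustrative):

```python
def prob_at_least_once(p_per_year: float, years: int) -> float:
    """Probability that an independent annual risk fires at least once
    over the given horizon: 1 - (1 - p)^years."""
    return 1 - (1 - p_per_year) ** years

# Even a one-in-a-million annual chance compounds over deep time:
for years in (100, 10_000, 1_000_000, 10_000_000):
    print(f"{years:>10,} years -> {prob_at_least_once(1e-6, years):.4f}")
```

Over a million years the one-in-a-million annual risk has fired with probability about 1 - 1/e, roughly 63%; over ten million years it is near certainty.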
I think you mean "probability" rather than severity. The severity of setting the atmosphere on fire is extreme, but the probability turned out to be vanishingly small. Or perhaps you mean risk, probability times severity. Which also turned out to be negligible for the events in the article.
Some other hyped risks:
- Artificial General Intelligence
- CRISPR gene editing
- Gain-of-function work with viruses
I have no way of assessing the risks, and there is a lot of hyperventilation in some circles.