snowfarthing's comments | Hacker News

I'm not convinced this is true at all. The American Left fully understood the "paradox of tolerance", and they pushed their own intolerance of the "intolerant" to the point where, much to the relief of many Americans, President Trump won his second term.

The problem with the "Paradox of Tolerance" is that intolerance of the "intolerant" drives the "intolerant" underground. They may feel isolated for long periods, but when they realize that they are the majority rather than the minority, power can shift dramatically. This is called a "preference cascade", and it's particularly common under totalitarian regimes.

The only way to prevent preference cascades is to have just enough tolerance of views you consider hateful that they don't get driven underground -- and you do this by rigorously debating anyone you find to be wrong, without going after their livelihoods (among other things).


> They may feel isolated for long periods of time, but when they realize that they are the majority, rather than the minority, power can shift dramatically.

If the intolerant are the majority you run into the paradox of democracy:

"a democratic majority could vote for a tyrant to rule, thus ending democracy."

The issue here, of course, is that once you have a dictatorship, you cannot vote to get your democracy back. Thus the paradox.

I believe many Americans don't realize that point, because it has never happened to them -- yet.


This essay fleshes out a lot of the conclusions that have been nebulously growing in my mind over the last few months and even years!

And now that I think of it, the fear of AI somewhat resembles the fear of runaway nanobots -- the notion that we can create something that will turn the entire world into grey goo. To hold this fear, among other things, you have to overlook the fact that we are already surrounded by nanobots. We just call them "bacteria".


When I first saw the headline, I imagined a huge organ with 100 keys and pedals, lots of stops, and giant pipes. I was very pleasantly surprised to see something small, with maybe 48 keys and a handful of creatively placed pipes!

Sometimes when we imagine a project, we think about how big it is, and get discouraged -- but this is a good reminder that some of the best projects are small, and thus, are less daunting and more easily tackled!


If you want to see a big pipe organ, try lookmumnocomputer on YouTube. He's got a giant setup, and added MIDI!


Sam is a real hacker too. Such a delightful personality.


On the contrary, the so-called "capitalism" that emerges when we respect individual rights is the one way we can escape this.

Too much emphasis is placed on government regulation and corporate environments to "make things right" when, in the end, they are all just rigid bureaucratic structures that trap people and force them into hierarchies.

For my entire life, I have mostly tried to conform to this -- albeit mostly focusing on startups, because they are more likely to value individuals and less likely to have rigid structure and tradition -- but I'm only just now realizing that pigeon-holing my autistic and ADHD tendencies even this much is a recipe for the burnout I've experienced for most of my adult life. I need to try something different!

And if I lived in a more rigid society (all non-capitalist countries are far more rigid than capitalist ones -- pretty much by definition) my options for fleeing rigidity would be vanishingly small.


This is an unfathomably based take, and you have improved my day significantly. Thanks.


I just had a random thought: perhaps it would be a good idea to have a project that doesn't do optimizations, but just focuses on fast compiling.

Then again, I now can't help but wonder if LLVM (or even GCC) would be fast, if you just turned off all the optimizations ...

(Of course, at this point, I can't help but think "you don't need to worry about the speed of compilation" in things like Common Lisp or Smalltalk, because everything is compiled incrementally and immediately, so you don't have to wait for the entire project to compile before you can test something ...)


> Then again, I now can't help but wonder if LLVM (or even GCC) would be fast, if you just turned off all the optimizations ...

It's not the optimizations really, it's the language front ends. Rust and C++ are extremely analysis-heavy. Try generating a comparable binary in C (or a kernel build, which is likely to be much larger!) and see how fast these compilers can be.


Go’s internal compiler and linker are kind of like that. So is qbe [0], IIRC.

https://c9x.me/compile/


I think this is more a problem with the nature of technology in general.

If we want simple and fast, we can do that, but sometimes it doesn't cover the corner cases that the slow and complicated stuff does -- and as you fix those things, the "simple and fast" becomes "complicated and slow".

But, as others have observed about GCC vs LLVM (with LLVM having had a similar life cycle), the added competition forced GCC to step up its game, and both projects have benefited from that competition -- even if, as time goes on, their capabilities converge more and more.

I think all our efforts suffer from the effects of the Second Law of Thermodynamics: "You can't win. You can't break even. And it's the only game in town."


I was puzzled by the source at first -- it looked like assembly, but was too high-level for that, and too complex to be BASIC (or possibly JOSS, I suppose). I was a little taken aback to see the filename extension "pl1", because I had sort of thought PL/I was higher-level than that!


When I was in college in 1995/96 (and later 1999/2000), this was somewhat how I programmed too. While I had access to computers at both home and school, my access to them was somewhat limited, and the tools I had at my disposal were very limited (especially at home).

I cannot help but reflect on how my approach was a "hybrid" between pencil-and-paper and the modern CLI-and-IDE workflow -- we were coming out of the age of really simple home computers, but not yet in the age of super fast computers with large monitors.


Many years ago, when I was learning about pair programming, I remember someone (possibly even Kent Beck) saying that "Pair programming is kryptonite for incompetent introverts!" and I remember thinking, "Well, yeah, but I bet it's the haven of incompetent extroverts!"

While I haven't really been in forums debating the merits and perils of pair programming, I cannot help but be amused by this essay, which pretty much confirms that initial thought I had about it!


I recall a psychologist or psychiatrist who challenged the notion of "oppositional defiant disorder" by observing two things:

(1) Many of these "defiant" people merely didn't trust credentials -- they were perfectly fine with authorities who earned deference by proving they really did know what they were doing, and

(2) The psychology/psychiatry profession, which consists of people with Masters degrees and PhDs, has to "suck up" to a lot of credentialed authority, without question, to earn those degrees -- and thus it's only natural for its members to expect everyone to unconditionally respect credentials!

(For the record, I have a PhD, but it's in pure math, which is possibly simultaneously both the least practical and most practical thing you could possibly learn -- but as such, I'm tangential to engineering and physics -- and I'm pretty sure that all three of these fields have a certain "fine, you have a credential, but can you really walk the walk?" element to them.)

