
Yeah, I was angry when I was writing it, not denying it.

Anger is often the only way to motivate action: when you were calm, you didn't care enough about solving it.

Author here. I did not expect to see my post on HN!

It was a rant, I was venting; it’s not supposed to be an objective statement about the state of tech. It’s shouting into the void about the things I find unfair and unbearable, and I don’t think it’s great HN material.

I made up parts of the story because it didn’t happen to me and I didn’t want to share details of somebody else’s situation.


FWIW I think it captures the sentiment many users have towards software today fairly accurately, and every software engineer who has even a modicum of pride in their trade really ought to pay attention.

One thing I'd like to point out, though, is that this kind of stuff isn't really about micro-optimizations. Most software in the "good old days" wasn't really micro-optimized either. No, what this is about is bloat. Layers upon layers of abstractions that, in most cases, amount to rearranging the pieces in the way the author deemed most aesthetically pleasing. When I look at call stacks while debugging most modern software, I can't help but feel that it spends most of its time calling functions that call functions etc, 20-30 levels deep. Most data flow isn't from component to component, but within the component between those layers. And it all adds up.


Well, it actually did happen to me (precisely to pay the rent), and I burst into applause after reading your blog post.

It really makes me happy to know that my writing struck a chord for at least one person - thank you!

I guess “unfair and unbearable” is paying your rent at the last minute after business hours on a phone with 2012 specs. Wouldn’t want to plan ahead and pay on time or pay with a check or pay with autopay or pay with a laptop or pay with a computer at the library for free.

You say it’s not an objective statement about the state of tech and I would agree: it’s highly subjective, a literally made up story, and a dumbass opinion.

Getting a phone capable of doing an online bill pay is trivial. Literally free with a discount shit cheap phone plan.

The victim mentality will destroy you if you let it take over.


> Getting a phone capable of doing an online bill pay is trivial. Literally free with a discount shit cheap phone plan.

No matter how cheap your phone is, if you're poor someone will always judge you because your phone is too expensive.

"Why do you have a more recent phone that 2012 if you're on benefits?"


I don’t think this has been true in the last 5-8 years where phones are incrementally different and essentially look identical.

I saw an ad yesterday for a free iPhone 13 with a discount carrier and that phone is almost indistinguishable from the current iPhone on sale.


The thing is, until he said it was a made-up story, I thought it was real. That shows how situations like that are becoming the norm.

I also don't think the author has a victim mentality; it's more a reminder to other developers that they can do so much better.


Just because the story is believable doesn’t mean that the people who are in the story are without responsibility or fault.

When you get into a lease agreement you should know how to pay going into it. If your landlord doesn’t take a form of payment you can handle you don’t sign the lease in the first place. That’s your life responsibility as a functional adult. Paying rent every month isn’t a surprise. Being low on funds isn’t an excuse and being low on funds isn’t the issue at hand.


Learning Rust by building a simple database using it.

I’ve done my share of programming languages (PHP, C++, Python, Ruby, Haskell) and for the last 10 years I’ve been working in OCaml (which I love so much) but Rust would be a nice addition IMO.

And I’ve never implemented an LSM-style database before! So that’s fun.

I only just started and the pace will be slow (I have 3h/week to spend on it on a good week). If you are curious: https://github.com/happyfellow-one/crab-bucket


LOL, love the name.

LSM style should be an interesting path, especially when it comes to optimization.


Hah, thanks!

I really wanted an optimisation rabbit hole and it seems like this project is going to deliver on that :)

I also tweet about the progress on @onehappyfellow if you’re interested


You should play a lot of hands, or organise a tournament, for Figgie to really make sense. It's fun but I prefer to play in person.


How do you play in person? Are the cards for sale somewhere?


Just a normal deck is fine, you have to prepare it but it’s not a big deal.


I think that is kind of a big deal. You either need to prepare many identical decks in advance or come up with an elaborate selection procedure you repeat every 4 minutes.


I don't think it would be too hard. After the round, players throw their cards in face up by suit into 4 piles, and every suit gets topped up to 12 cards. Have one person take all four suits under the table and then hand them in random order to the dealer. The dealer reorders them again themselves so nobody can know which suit is which, removes the required number of cards from the randomly chosen suits and discards them, and shuffles the deck.

Obviously it isn't super quick, but once people know what's going on it doesn't take that long either.
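Something like this, as a rough sketch in OCaml (assuming the usual Figgie split of 12/10/10/8 cards across the four suits, i.e. 40 cards total; the suit names and function names are just for illustration):

    (* Start from four suits already trimmed to 12 cards each, as described
       above, then secretly assign each suit its Figgie size. *)
    let () = Random.self_init ()

    let suits = [| "spades"; "hearts"; "diamonds"; "clubs" |]

    (* In-place Fisher-Yates shuffle. *)
    let shuffle a =
      for i = Array.length a - 1 downto 1 do
        let j = Random.int (i + 1) in
        let tmp = a.(i) in
        a.(i) <- a.(j);
        a.(j) <- tmp
      done

    (* Returns each suit paired with how many of its 12 cards stay in the
       deck; the dealer removes the rest face down and discards them. *)
    let prepare_deck () =
      let sizes = [| 12; 10; 10; 8 |] in
      shuffle sizes;
      Array.map2 (fun suit n -> (suit, n)) suits sizes

The physical version is the same idea: handing the piles over in a random order plays the role of shuffling the size array.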


Just switch dealers each round and have them sit out...


I assume this was originally played with normal decks, but the screenshots show special decks with only suits.

Doesn't this make a difference to gameplay? With normal suits, if you trade a few times, you may be able to establish that twelve distinct cards are in a suit even though you never saw more than, say, four of them at once.

Maybe there typically isn't enough trading for this to make a difference? If there is, it does introduce an extra skill to the game of remembering the cards that have been seen before. That's valuable in lots of card games including some poker variants, but it doesn't seem like something JS would emphasize training.

If you have 12 identical decks you can turn them into 13 Figgie decks by combining all the cards of each value.


For related background read the (horrifying) description of Leverage, a “research” institution with links to Zizians: https://medium.com/@zoecurzi/my-experience-with-leverage-res...

There’s an undercurrent of cults and cult-like institutions in the rationalist crowd (think: lesswrong.com folks) and this is one instance of this.


I think it’s worth discussing the fact that many folks in EA- and rationalist-adjacent circles are deeply panicked about developments in AI, because they believe these systems pose a threat to human existence. (Not all of them, obviously. But a large and extraordinarily well-funded contingent.) The fear I have with these organizations is that eventually panic - in the name of saving humans - leads to violent action. And while this is just a couple of isolated examples, it does illustrate the danger that violent action can become normalized in insular organizations.

I do hope people “on the inside” are aware of this and are working to make sure those organizations don’t have bad ideas that hurt real people.


I think this is a variant of Walter Russell Mead's abrahamic bomb [1].

When you remove the theology from a psychology based on Judeo-Christian morality, it recreates its facsimile. In this case, an AI Judgment Day precedes either eternal damnation or salvation. Ziz, like other "radical rationalists", even believes that the singularity AI will punish people retroactively for their moral failings, such as eating meat.

https://www.hudson.org/domestic-policy/abraham-bomb-walter-r...


So my speculation from your comment is that the data scientists in the group saw what they were doing up close as leading to something anathema to what they wanted the world to be. It seemed like a weird, trivial connection in the news article, but in the context of a violent rejection of AI it makes sense. https://www.sfgate.com/bayarea/article/bay-area-death-cult-z...


Scared to click the link!


I think the surprise part is under-appreciated. The quickcheck tests I write are frequently of the “let’s spend 15 minutes and see what happens” kind - and I almost always find bugs that way. The return on investment is bonkers.

I do think that immutable by default OCaml + good PBT tooling there helps a lot.
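To make the “spend 15 minutes and see what happens” style concrete, here is a minimal sketch using the QCheck library (just one of the OCaml PBT tools; the property is deliberately a toy one, checking that sorting is idempotent and length-preserving):

    (* Generate random int lists and check an invariant over each of them. *)
    let sort_is_idempotent =
      QCheck.Test.make ~count:1_000 ~name:"sort is idempotent"
        QCheck.(list int)
        (fun l ->
           let sorted = List.sort compare l in
           List.sort compare sorted = sorted
           && List.length sorted = List.length l)

    (* Non-zero exit if any property fails. *)
    let () = exit (QCheck_runner.run_tests [ sort_is_idempotent ])

In real code the function under test is whatever you just wrote and the property is an invariant you believe should always hold; the payoff is how often the generator finds an input you didn't think of.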


How come e.g. Jane Street uses it so much? It’s the second most common type of test I write.


Jane Street uses OCaml, property based tests are easiest when dealing with pure functions, and they are usually taught in FP classes, so I assume it’s that: easier to set up, and the right target audience.

Edit: also a numerical domain, which is the easiest type to use them for in my experience!


The same reason Google burns $50M+ in electricity each year using protobufs instead of a more efficient format. An individual company having specific needs isn't at odds with a general statement being broadly true.


How’s that comparable at all? There are no network effects from writing property based tests; people use them if they are helpful, i.e. if they test enough of the code for a reasonable amount of effort. Nobody’s forcing people to write tests, unlike Google, which forces protobuf on all projects there.


It's comparable in the way described in sentence #2:

> An individual company having specific needs isn't at odds with a general statement being broadly true.

Google needs certain things more than reduced carbon emissions, and Jane Street needs certain things more than whatever else they could spend that dev time on.


Fine, but cutting the thought process at "it depends" is not a great way to understand what's happening here. You can explain anything happening at any company by saying "they need certain things more than whatever else they could spend that time on".

Why is PBT useful at Jane Street, at least more than in other places? Is it the use of a functional language? The average Jane Street dev being more familiar with PBT? Is the domain particularly suited to this style of testing?

Explicitly, my claim is that the biggest bottleneck is education on how to use PBT effectively. Jane Street is not using it to get an extra mile in safety; they use it because it's the easiest way to write a large chunk of their tests.


>Why is PBT useful at Jane Street, at least more than in other places?

Because trading firms write a lot more algorithmic code than most businesses. Trading strategy code is intensely algorithmic and calculation-heavy by its very nature, as is a lot of the support code written around it.

At least, that's what it was like when I worked in a trading firm. Relatedly, it was one of the few projects I'd worked on where having 95% unit tests and 5% integration tests made perfect sense. It fitted the nature of the code, which wasn't typical of most businesses.

Somebody else wrote that they wrote a lot of numerical code in another business for which property testing is extremely useful, and again, I don't doubt that either. 95% is still != 100% though.


Not to derail, but what’s more efficient in your view? We compared messagepack, standard http/json and protobufs for an internal service and protobufs came out tops on every measure we had.


The gold standard is a purpose-built protocol for each message, usually coming in ~20x faster and ~2-8x smaller than a comparable proto (it's perhaps obvious why Google doesn't do this, since the developer workload is increased for every message even in a single language, and it's linear in the number of languages you support, without the ability to shove most of the bugginess questions to a single shared library, and backwards compatibility is complicated with custom protocols -- they really do want you to be able to link against most g3 code without interop concerns). I've had a lot of success in my career with custom protocols in performance-sensitive applications, and I wouldn't hesitate to do it again.

Barring that though, capnproto and flatbuffers (perhaps with compression on slow networks) are usually faster than protos. Other people have observed that performance deficit on many occasions and made smaller moderately general-purpose libraries before too (like SBE). They all have their own flavors of warts, but they're all often much faster for normal use cases than protos.

As a hybrid, each project defining its own (de)serializer library can work well too. I've done that a few times, and it's pretty easy to squeeze out 10x-20x throughput for the serialization features your project actually needs while still only writing the serialization crap once and reusing it for all your data types.

Recapping on a few reasons why protos are slow:

- There's a data dependency built into the wire format which is very hard to work around. It blocks nearly all attempts at CPU pipelining and vectorization.

- Lengths are prefixed (and the data is variable-length), requiring (recursively) you to serialize a submessage before serializing its header -- either requiring copies or undersized syscalls.

- Fields are allowed to appear in any order, preventing any sort of code which might make the branch predictor happy.

- Some non-"zero-copy" protocols are still quite fast since you can get away with a single allocation. Since several decisions make walking the structure slow, that's way more expensive than it should be for protos, requiring either multiple (slow) walks or recursive allocations.

- The complexity of the format opens up protos to user error. Nonsense like using a 10-byte slow-to-decode-varint for the constant -1 instead of either 1, 4, or 8 fast-to-decode bytes (which _are_ supported by the wire format, but in the wild I see a lot of poorly suited proto specs).

- The premise in the protocol that you'll decode the entire type exactly as the proto defines prevents a lot of downstream optimizations. If you want a shared data language (the `.proto` file), you have to modify that language to enforce, e.g., non-nullability constraints (you'd prefer to quickly short-circuit those as parse errors, but instead you need extra runtime logic to parse the parsed proto). You start having to trade off reusability for performance.

And so on. It's an elegant format that solves some real problems, but there are precious few cases where it's a top contender for performance (those cases tend to look like bulk data in some primitive type protos handle well, as opposed to arbitrary nesting of 1000 unrelated fields).

Specific languages might have (of course) failed to optimize other options so much that protos still win. It sounds like you're using golang, which I've not done much with (coming from other languages, I'm mildly surprised that messagepack didn't win any of your measurements), and by all means you should choose tools based on the data you have. My complaints are all about what the CPU is capable of for a given protocol, and how optimization looks from a systems language perspective.


What does a 'purpose-built protocol for each message' look like? You avoid type/tagging overhead, but other than that I'd expect a ""sufficiently smart"" generic protocol to be able to achieve the same level of e.g. data layout optimization. Obviously ProtoBuf in particular is pessimising for the reasons you describe, but I'm thinking of other protocols (e.g. Flatbuffers, Cap'n Proto, etc.)


The problem is that "sufficiently smart" does a lot of heavy lifting.

One way to look at the problem is to go build a sufficiently smart generic protocol and write down everything that's challenging to support in v1. You have tradeoffs between size (slow for slow networks), data dependencies (slow for modern CPUs), lane segmentation (parallel processing vs cache-friendly single-core access vs code complexity), forward/backward compatibility, how much validation the protocol should do, .... Any specific data serialization problem usually has some outside knowledge you can use to remove or simplify a few of those "requirements," and knowledge of the surrounding system can further guide you to have efficient data representations on _both_ sides of the transfer. Code that's less general-purpose tends to have more opportunities for being small, fast, and explainable.

A common source of inefficiencies (protobuf is not unique in this) is the use of a schema language in any capacity as a blunt weapon to bludgeon the m x n problem between producers and consumers. The coding pattern of generating generic producers/consumers doesn't allow for fine-tuning of any producer/consumer pair.

Picking on flatbuffers as an example (I _like_ the project, but I'll ignore that sentiment for the moment), the vtable approach is smart and flexible, but it's poorly suited (compared to a full "parse" step) to data you intend to access frequently, especially when doing narrow operations. It's an overhead (one that reduces the ability for the CPU to pipeline your operations) you incur precisely by trying to define a generic format which many people can produce and consume, especially when the tech that produces that generic format is itself generic (operating on any valid schema file). Fully generic code is hard enough to make correct, much less fast, so in the aim of correctness and maintainability you usually compromise on speed somewhere.

For that (slightly vague) flatbuffers example, the "purpose-built protocol" could be as simple as almost anything else with a proper parse step. That might even be cap'n proto, though that also has problems in certain kinds of nested/repeated structures because of its arena allocation strategy (better than protobuf, but still more allocations and wasted space than you'd like).
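To make "purpose-built" concrete, here is a minimal sketch in OCaml of a fixed-layout codec for a hypothetical fill message (the field names, widths and offsets are made up for illustration; a real protocol would still need a story for versioning and compatibility, as discussed above):

    (* One message type, one hand-written layout: an 8-byte order id, then
       two 4-byte little-endian fields at fixed offsets. No varints, no
       length prefixes, no field-order branching, so encode/decode are
       straight-line code the CPU can pipeline. *)
    type fill = { order_id : int64; price_ticks : int32; quantity : int32 }

    let size = 16 (* 8 + 4 + 4 bytes, always *)

    let encode { order_id; price_ticks; quantity } =
      let b = Bytes.create size in
      Bytes.set_int64_le b 0 order_id;
      Bytes.set_int32_le b 8 price_ticks;
      Bytes.set_int32_le b 12 quantity;
      b

    let decode b =
      if Bytes.length b < size then invalid_arg "fill: short buffer";
      { order_id    = Bytes.get_int64_le b 0;
        price_ticks = Bytes.get_int32_le b 8;
        quantity    = Bytes.get_int32_le b 12 }

The trade-off is exactly the one described upthread: every new field or language binding is more hand-written code, which is why this only pays off on hot paths.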


Just because a company uses something doesn't mean all companies should. May as well use monorepos in that case.


Trading companies are unusual in writing a lot of algo-heavy code. Did you assume every company was like this?

I can assure you they aren't.


Even trading companies have a ton of projects and code which you'll find at any reasonably sized tech company; the algo-heavy code is a small fraction of the total code they write. In this sense, they are not such an outlier just based on the business they are in - I think the use of a functional language, good tooling and education around PBT are much more important factors.


>the algo-heavy code is a small fraction of the total code they write

Wasn't the case in the trading firm I worked at.

Do you work at Jane Street? Have you worked elsewhere?


The classic: train your mind to believe anything you want, use it to improve your life, try believing some really kooky stuff, woah, you actually believe it, no shit, and you descend into crazier and crazier stuff.

I’m currently leaning towards believing that learning intense focus through meditation + setting intentions for yourself in that state is genuinely useful, just like rituals are (saying “yes” to your partner in front of friends and family changes you). But the border between it and “I’m a trans-dimensional shaman in psychiatric A&E” is a bit too thin for my liking.


This stuff works, it's just that people interpret what's happening in different ways. You can believe you are visiting an astral plane that is a separate physical dimension or whatever, or you can believe you are essentially having a type of lucid dream. There are people who believe regular dreams are seeing the future or visiting alternate dimensions. That doesn't mean dreams don't exist, but it also doesn't mean that dreaming is traveling to some alternative physical reality.


I can believe that it works. But there's a risk of actually starting to believe that the astral plane and other stuff is real, particularly if someone has predispositions for schizophrenia.

There's also a degree of danger. Believing your dreams tell you the future is one thing, but if you immerse yourself in the occult, in texts about daemons, and start believing they might have their own agendas - and I mean really believe - you can see how dangerous it can get.


On the one hand, you're right, but on the other hand, what you're describing is essentially just religion. Believing in spells is little different than believing in prayer. The "astral plane" may as well be Heaven or Hell, or Purgatory. The Bible is an "occult" text about demons, containing spells and rituals, and many people believe in a literal God and Devil and intercession by angelic and demonic forces.

And of course there are plenty of schizophrenics who claim to hear the voice of God and claim to be doing His will, something which might have gotten one canonized centuries ago, but no one pathologizes Christianity when that happens. If we're going to consider magical thinking normal in one sense, we should consider it normal in every sense, because the only difference between "religious" and "occult" practice is cultural acceptance.

Or else accept that all of it is equally ridiculous, that Aleister Crowley is no more divine or absurd than the Pope.


To be honest, my thinking on this is not crystallised, but I do think there's an important difference between a religion like Christianity and other "occult" practices - it's about social guardrails against going insane.

Practicing Christians will congregate weekly, reinforce their beliefs, chat about them, confess to the same priest - all of this stops you from drifting into crazy corners of the belief space. Also, it's normal to be openly Christian (at least where I'm from), people have vague ideas about which beliefs are roughly in that category - and can call out deviations.

I'm basing this all on my guesses to be clear! Still curious to see if others have similar thoughts.


That's a sound theory. I'm a fairly staunch atheist, so I do believe both are equally absurd. Even with that frame, though, it's easy to acknowledge that the community that forms around (most) religion serves to align group behaviour. This can be good or bad depending on what those behaviours are in a larger social context.

You still get a similar community in a lot of alternative spirituality / occult practices, but from the slices of that community I've been exposed to, it focuses much more on personal exploration, which opens the door to the recursive degeneration that can follow.


> but no one pathologizes Christianity when that happens

Many people, even many believers, certainly pathologize those that become too embroiled in Christian beliefs. Sure, they admire the deeply devout priest or nun or monk who is spending their whole life in the Church. But if that person starts telling them that God is speaking to them, or that God is showing them far off events, or that they can pray to obtain physical results directly - the way this book suggests is possible - they will certainly take a few steps back and stop listening so intently.

Not to mention, there are vast differences between religious and occult practices. While religious people do sometimes pray for material improvements to their life, many religious practices are moral or social rather than magical. People eat a certain way because they believe it is the right thing to do, they help the poor or others in their communities, they worship their god or gods just because.

Wishing to live a good life because you believe that will guarantee you'll have a good after-life (i.e. religion in a nutshell) is extremely different from performing spells that you think will fix your life here and now (occult magical practices).


Want to second that: not giving people an opportunity to like you can turn you into that person at the top who’s disconnected from line engineers’ work - and people won’t trust you enough to tell you when you’re making a mistake. Guess how I learnt that :)


Happy New Year everyone! I hope this coming year you’ll find the meaning you’re yearning for, spend it with people who deeply care for you, love and be loved. That you’ll flourish, feel content and happy. That you’ll experience more good than evil.

Learn OCaml in 2025

