I just read the article and feel I learned nothing. It's all the obvious lessons, without any real implementation advice. Am I the wrong audience?

Unfortunately, despite how obvious all this is, these are common real-world problems that management everywhere gets wrong time and time again.

Yeah, but unfortunately the SQLite team doesn't include that tool with their "autotools" tarball, which is what most distros (and brew) use to package SQLite. The only way to use the tool is to compile it yourself.

Yeah, that’s a bummer. It does appear to be in nixpkgs, though:

  nix-shell -p sqlite-rsync

Realistically, are you using SQLite if you can’t compile and source control your rev of the codebase? Is that really a big deal?

Yes, it's extremely common to be using it and not even be compiling anything yourself, let alone C or any support libraries.

`sqlite3_rsync` must be installed on the remote host too, so now you're cross-compiling for all the hosts you manage. It also must be installed somewhere on the PATH that ssh uses for non-interactive commands, which on a number of operating systems doesn't include /usr/local/bin. So I guess you're now placing your sshd config under configuration management to allow that.

These tasks aren't that challenging but they sure are a yak shave.
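For anyone hitting this, a quick way to see the PATH a non-interactive ssh command actually gets (a sketch; remote-host is a placeholder) is:

    ssh remote-host 'echo $PATH'

If /usr/local/bin isn't in that list, either install sqlite3_rsync into a directory that is, or change the PATH sshd hands out, which is exactly the sshd configuration-management step above.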


Allow -> Tarpit -> Block should be done by ASN

You probably want to check how many IPs/blocks a provider announces before blocking the entire thing.

It's also not a common metric you can filter on in open firewalls, since you must look up and maintain a cache of IP-to-ASN mappings, which has to be evicted and updated as blocks move around between ASNs.
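For the lookup itself, one well-known option (just an illustrative sketch; the address is a placeholder) is Team Cymru's whois-based IP-to-ASN service:

    whois -h whois.cymru.com " -v 203.0.113.7"

You still have to cache the answers yourself and refresh them periodically, since prefixes do get re-announced under different ASNs.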


Something that might work for getting your kids interested in modular arithmetic: The Chicken McNugget Theorem.
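For two coprime box sizes a and b, the theorem says the largest order you can't make exactly is a*b - a - b. A tiny worked example:

    a = 5, b = 7  ->  5*7 - 5 - 7 = 23
    24 = 2*5 + 2*7,  25 = 5*5,  26 = 1*5 + 3*7,  27 = 4*5 + 1*7,  28 = 4*7
    but 23 = 5x + 7y has no solution with x, y >= 0

(The name comes from the original 6/9/20 McNugget box sizes, where the largest unbuyable number works out to 43.)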


> only one allocation per node

I believe the implication is that the new API needs fewer than one allocation per node: you allocate contiguous memory once and use it to store n elements.


No, that's not stated.

> The new version isn't generic. Rather, you embed the linked list node with your data. This is known as an intrusive linked list and tends to perform better and require fewer allocations. Except in trivial examples, the data that we store in a linked list is typically stored on the heap. Because an intrusive linked list has the linked list node embedded in the data, it doesn't need its own allocation.

That is simply a misunderstanding. The storage layout will be the same for the generic and the intrusive one.

The benefit of intrusive linked lists is that each node can be a member of several linked lists with a single allocation per node. This is used heavily e.g. in the Linux kernel.

The cost is that you need to keep track of the offset from the object's address to whichever linked list pointer you're currently dealing with. That's often a compile-time constant, but it makes the API more awkward. In this case it seems to be the string "node", which seems nice enough. C libraries often use the preprocessor to do something similar.
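A rough C sketch of that kernel-style pattern (struct and field names here are made up, and the kernel's real list_head/container_of machinery is more elaborate):

    #include <stddef.h>

    struct list_node { struct list_node *prev, *next; };

    /* One allocation of struct task carries the payload plus membership
       in two independent lists. */
    struct task {
        int id;
        struct list_node run_node;    /* linked into the run queue */
        struct list_node wait_node;   /* linked into a wait queue  */
    };

    /* Recover the containing object from a pointer to an embedded node;
       the offset is the compile-time constant mentioned above. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* e.g. struct task *t = container_of(n, struct task, run_node); */

Walking the run queue hands you run_node pointers, and container_of turns each one back into its task without any extra allocation or indirection.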


    new Node<TStruct>[16];

    new TStructContainingNode[16];
What’s the difference?


With the first, there are 16 contiguously stored pointers to 16 non-contiguously stored TStructs (they may happen to be contiguous, but you can't assume that from the type). With the second, there are 16 contiguously stored TStructContainingNodes.
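A short C sketch of the two layouts (assuming, as the comment above does, that the generic node holds a pointer to its payload):

    struct TStruct { int x; };

    /* Generic node: the links are contiguous, the payload is referenced. */
    struct Node {
        struct Node *prev, *next;
        struct TStruct *data;            /* allocated elsewhere */
    };

    /* Intrusive variant: the links live inside the payload itself. */
    struct TStructContainingNode {
        struct TStructContainingNode *prev, *next;
        int x;
    };

    /* An array of 16 Nodes is 16 contiguous link/pointer records whose payloads
       may be scattered; an array of 16 TStructContainingNodes keeps links and
       payload together in one contiguous block. */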


I love the verbose flag[0] to regex, so I can write comments inline.

[0] https://docs.python.org/3/library/re.html#re.VERBOSE


It's been proven that people are extraordinarily poor drivers for the first few seconds after they take over driving from a computer.


I would say any activity that demands focus shows the same pattern. Anyone who has driven a car, ridden a bike, etc. can tell you that it takes a while to get back into a focused mode if you let it drift even for a short while.

It's much more pronounced if you've ever raced a car on a track, ridden a fast bike on tricky paths, or even driven go-karts: if your mind wanders for a split second, it takes a few seconds of active focusing to get back to the baseline where you're in "flow" with the activity again.

Expecting drivers who have let a machine control their machine, stepping out of the control feedback loop, to regain focused control for split-second decisions is just absurd to me.


I solve most of those issues with a Docker Bakefile; I'm confident I could solve the rest with Bakefiles if I had to. Reasonable developer experience.


The last time I tried something beyond Buildx that Docker itself put out, the experience was bad - something about caching not working properly. I'll have to give this another shot sometime, though.


I think Yarn zero install is now the default, and does the same thing you're advocating? I'm not really a JS person, but it looks like it's done reasonably competently (validating checksums etc).


Didn't the Vitess team found PlanetScale?


Yes! The founders of PlanetScale were the co-creators of Vitess at YouTube, where it was built to handle MySQL scalability. PlanetScale builds on Vitess but offers a managed, developer-friendly experience.

