Hacker News new | past | comments | ask | show | jobs | submit login

I'm pretty sure I can fix that global lock, but I'm focusing on getting the basics solid before turning to performance.

My idea was that if there are concurrent transactions, then you just need a merge when the second one commits.

This is "optimistic locking": since concurrent transactions are probably updating different subdirectories, the merge would usually be trivial. If they update the same attribute of the same entity (i.e. they both change the same line of the same attributes.json file), there's a conflict and the second commit fails.
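To make that concrete, here's a toy sketch of the scheme (the `Store`/`Txn` names are made up for illustration, nothing from the real codebase): each transaction snapshots the state at begin, and at commit time only fails if another commit has changed a path it also wrote.

```python
class ConflictError(Exception):
    pass


class Store:
    """Stands in for the repository: `data` is the committed state,
    `version` stands in for the commit the branch ref points at."""
    def __init__(self):
        self.data = {}
        self.version = 0

    def begin(self):
        return Txn(self, dict(self.data), self.version)


class Txn:
    def __init__(self, store, snapshot, base_version):
        self.store = store
        self.snapshot = snapshot    # state as of begin()
        self.base = base_version
        self.writes = {}            # path -> new value

    def put(self, path, value):
        self.writes[path] = value

    def commit(self):
        store = self.store
        if store.version != self.base:
            # Someone committed after we began, so merge: a path we
            # wrote that has also changed since our snapshot is the
            # "same line of the same attributes.json" case -> fail.
            for path in self.writes:
                if store.data.get(path) != self.snapshot.get(path):
                    raise ConflictError(path)
        store.data.update(self.writes)
        store.version += 1
```

Two transactions touching different paths both commit; two touching the same path make the second one raise. In the real thing, only `commit()` would need the lock.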

This still requires a lock, but only a tiny one, just at the moment of actually updating refs/heads/&lt;branchname&gt; (whereas right now it's around the whole transaction, which obviously sucks).
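For that tiny lock, git's own update-ref plumbing already does an atomic compare-and-swap when you pass it an expected old value, so the critical section can be exactly the ref write. A rough sketch (assuming a git binary on PATH; the function name is mine):

```python
import subprocess


def cas_update_ref(repo, ref, new, expected_old):
    """Point `ref` at `new` only if it still points at `expected_old`.

    git takes the ref lock, checks the old value, and writes -- an
    atomic compare-and-swap. Returns False when someone else committed
    first, i.e. the caller should merge and retry."""
    result = subprocess.run(["git", "update-ref", ref, new, expected_old],
                            cwd=repo, capture_output=True)
    return result.returncode == 0
```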

Are there any flaws in that?

The reason I haven't done it yet, even though it sounds pretty trivial, is that with Grit (the Ruby Git library) the merge would need to use the working directory, and that's a problem for concurrency (and also because I want the working directory to be usable by a human). I was thinking that with a bit of extra work I could make it happen in memory somehow.
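One possible way around the working-directory problem (an assumption on my part, not something Grit provides): point GIT_INDEX_FILE at a throwaway index and let `git read-tree -m -i` do the 3-way merge entirely between that index and the object database, then `git write-tree` the result. Roughly:

```python
import os
import subprocess
import tempfile


def run(args, repo, env=None):
    """Run a git command in `repo`, returning stripped stdout."""
    full_env = dict(os.environ, **(env or {}))
    return subprocess.run(args, cwd=repo, env=full_env, check=True,
                          capture_output=True, text=True).stdout.strip()


def merge_trees(repo, base, ours, theirs):
    """3-way merge of three tree ids, never touching the working dir.

    The throwaway GIT_INDEX_FILE keeps the merge away from both the
    real index and the checkout; write-tree fails (raising, via
    check=True) if read-tree left unmerged entries behind."""
    with tempfile.TemporaryDirectory() as tmp:
        env = {"GIT_INDEX_FILE": os.path.join(tmp, "index")}
        run(["git", "read-tree", "-m", "-i", base, ours, theirs], repo, env)
        return run(["git", "write-tree"], repo, env)
```

This only covers tree-level merges (files added, removed, or changed on one side); two sides editing different lines of the same file would still need a content merge on top, e.g. with `git merge-file` on the conflicting blobs.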

Paul



