
I had the same thought as I read that line. I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

But for the rest of us (especially myself), it seems to be more like an interplay between thinking of what to write, writing it, testing it, thinking some more, changing some minor or major parts of what we wrote, and so on, until it feels good enough.

In the end, it's a bit of an art, coming up with the final working version.




Git is a special case, I would say, because it is fairly self-contained: it had minimal dependencies on external components, mostly relying on the filesystem API. Everything else was “invented” inside of Git.

This is special because most real-world systems have a lot more dependencies. That's when experimentation is required: one cannot know all the relevant APIs and their behaviors beforehand, so the only way is to do it and find out.

Algorithms are, in essence, mathematical problems; they are abstract and can therefore be solved in your head or with pen and paper.

The reality is that most programming problems are not algorithms but connecting and translating between systems. And these systems are like black boxes that require exploration.


> Git is a special case, I would say, because it is fairly self-contained: it had minimal dependencies on external components, mostly relying on the filesystem API. Everything else was “invented” inside of Git.

This type of developer tends to prefer building software like this. There is a whole crowd of hardcore C/C++/Rust devs who eschew taking dependencies in favour of writing everything themselves (and they mostly nerd-snipe themselves with excessive NIH syndrome, like Jonathan Blow off writing PowerPoint[1]...)

Torvalds seems to be mostly a special case in that he found a sufficiently low-level niche where extreme NIH is not a handicap.

[1]: https://www.youtube.com/watch?v=t2nkimbPphY&list=PLmV5I2fxai...


It's really easy to remember the semantics of C. At least if you punt a bit on the parts that savage you in the name of UB. You know what libc has in it, give or take, because it's tiny.

Therefore, if you walk through the woodland thinking about a program to write in C, there is exactly zero interruption to check docs to see how some dependency might behave. There is no uncertainty over what it can do, or over what it will cost you to write things that another language does reasonably.

Further, when you come to write it, there's friction when you think "oh, I want a trie here", but that's of a very different nature to "my python dependency segfaults sometimes".

It's probably not a path to maximum output, but from the "programming is basically a computer game" perspective it has a lot going for it.

Lua is basically the same. An underappreciated feature of a language is never, ever having to look up the docs to know how to express your idea.


I.e. closed-off systems that don't interact with external systems.

And these are the type of coders who also favor types.

Whereas if you write the 'informational' type of programs described by Rich Hickey, i.e. ones that interact with outside systems a lot, you will find a lot of dependencies, and types get in the way.


I tend to see this as a sign that a design is still too complicated. Keep simplifying, which may include splitting into components that are separately easy to keep in your head.

This is really important for maintenance later on. If it's too complicated now to keep in your head, how will you ever have a chance to maintain it 3 years down the line? Or explain it to somebody else?


I'm more than half the time figuring out the environment. Just as you learn a new language by doing the exercises, I'm learning a bunch of stuff while I try to port our iptables semantics to firewalld: [a] GitLab CI/CD instead of Jenkins, [b] getting firewalld (requires systemd) running in a container, [c] the ansible firewalld module doesn't support --direct, which is required for destination filtering, [d] inventing a test suite for firewall rules, since the prebuilt ones I've found would involve weeks of yak shaving to get operating. So I'm simultaneously learning about four environments/languages at once - and this is typical for the kind of project I get assigned. There's a *lot* of exploratory coding happening. I didn't choose this stuff - it's part of the new requirements. I try for simple first, and often the tools don't support simple.


This is the only practical way (IMHO) to do a good job, but there can be an irreducibly complex kernel to a problem which manifests itself in the interactions between components even when each atomic component is simple.


Then the component APIs need improvement.


Without an argument for this always being possible, this just looks like unjustified dogma from the Clean Code era.


At the microlevel (where we pass actual data objects between functions), the difference in the amount of work required between designing data layout "on paper" and "in code" is often negligible and not in favor of "paper", because some important interactions can sneak out of sight.

I do data flow diagrams a lot (to understand the domain, figure out dependencies, and draw rough component and procedure boundaries) but leave the details of data formats and APIs to exploratory coding. It still makes me change the diagrams, because I've missed something.


The real-world bank processes themselves are significantly too complicated for any one person to hold in their head. Simplification is important, but only up to the point where it still delivers 100% of the required functionality.

Code also functions as documentation for the actual process. In many cases, "whatever the software does" is the process itself.


If you can do that, sure. Architecting a clear design beforehand isn't always feasible, though, especially when you're doing a thing for the first time or you're exploring what works and what doesn't, like in game programming, for example. And then, there are also the various levels at which design and implementation take place.

In the end, I find my mental picture is still the most important. And when that fades after a while, or for code written by someone else, then I just have to go read the code. Though it may exist, so far I haven't found a way that's obviously better.

Some things I've tried (besides commenting code) are drawing diagrams (they lose sync over time) and using AI assistants to explain code (not very useful yet). I didn't feel they made the difference, but we have to keep learning in this job.


Of course it can be helpful to do some prototyping to see which parts still need design improvements and to understand the problem space better. That's part of coming up with the good design and architecture, it takes work!


Sometimes, as code gets written, it becomes clearer what kind of component split is better - which things can be cleanly separated and which less so.


I don't do it in my head. I do diagrams, then discuss them with other people until everyone is on the same page. It's amazing how convoluted "get data from the db, do something to it, send it back" can get, especially if there is a queue or multiple consumers in play, when it's actually the simplest thing in the world - which is why people get over-confident and write super-confusing code.
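A minimal sketch of the shape being described - one queue, several consumers, results written back to shared state. The names and the trivial transform are made up for illustration; a list stands in for the database:

```python
import queue
import threading


def process(item):
    # "Do something to it" - here just a trivial transform.
    return item * 2


def run_pipeline(items, n_consumers=3):
    """Read items, transform them on several consumer threads, collect results."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def consumer():
        while True:
            item = work.get()
            if item is None:           # sentinel: shut this consumer down
                work.task_done()
                return
            out = process(item)
            with lock:                 # the results list is shared state
                results.append(out)
            work.task_done()

    threads = [threading.Thread(target=consumer) for _ in range(n_consumers)]
    for t in threads:
        t.start()
    for item in items:                 # "get data from the db"
        work.put(item)
    for _ in threads:                  # one sentinel per consumer
        work.put(None)
    work.join()
    for t in threads:
        t.join()
    return sorted(results)             # "send it back"


print(run_pipeline([1, 2, 3, 4, 5]))   # → [2, 4, 6, 8, 10]
```

Even in this toy form you can see where the confusion creeps in: the lock, the sentinels, and the fact that result order is no longer the input order.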


Diagrams are what I tend to use as well. My background is engineering (the non-software kind); for solving engineering problems, one of the first things we are taught to do at uni is to sketch out the problem, and I have somewhat carried that habit over to when I need to write a computer program.

I map out on paper the logical steps my code needs to follow, a bit like a flow chart tracking the changes in state.

When I write code, I'll create a skeleton with the placeholder functions I think I'll need as stubs and fill them out as I go. I'm not wedded to the design - sometimes I'll remove/replace whole sections as I get further in - but it helps me think about it if I have the whole skeleton "on the page".


Well that explains why Git has such a god awful API. Maybe he should've done some prototyping too.


I'm going to take a stab here: you've never used cvs or svn. git, for all its warts, is quite literally a 10x improvement on those, which is what it was (mostly) competing with.


I started my career with ClearCase (ick) and added CVS for personal projects shortly after. CVS always kind of sucked, even compared with ClearCase. Subversion was a massive improvement, and I was pretty happy with it for a long time. I resisted moving from Subversion to Git for a while but eventually caved like nearly everyone else. After learning it sufficiently, I now enjoy Git, and I think the model it uses is better in nearly every way than Subversion's.

But the point of the parent of your post is correct, in my opinion. The Git interface sucks. Subversion's was much more consistent, and therefore better. Imagine how much better Git could be if it had had a little more thought and consistency put into the interface.

I thought it was pretty universally agreed that the Git interface sucks. I'm surprised to see someone arguing otherwise.


Subversion was a major improvement over CVS, in that it actually had sane branching and atomic commits. (In CVS, if you commit multiple files, they're not actually committed in a single action - they're individual file-level transactions that are generally grouped together based on commit message and similar (but not identical!) timestamps.) Some weirdness like using paths for branching, but that's not a big deal.

I actually migrated my company from CVS to SVN in part so we could do branchy development effectively, and also so I personally could use git-svn to interact with the repo. We ended up eventually moving to Mercurial, since Git didn't have a good Windows story at the time. Mercurial and Git are pretty much equivalent in my experience; they just decided to give things confusing names (git fetch/pull and hg fetch/pull have their meanings swapped).


> I thought it was pretty universally agreed

Depends what you consider “universally agreed”.

At least one person (me) thinks that the git interface is good enough as is (function > form here), and that regexps are not too terse - being terse is the whole point of them.

Related if you squint a lot: https://prog21.dadgum.com/170.html


It's really hard to overstate how much of a sea change git was.

It's very rare that a new piece of software just completely supplants existing solutions as widely and quickly as git did in the version control space.


And all that because the company owning the commercial version control system (BitKeeper) they had been using free of charge until that point got greedy, and wanted them to start paying for its use.

Their greed literally killed their own business model, and brought us a better versioning system. Bless their greedy heart.


What do you mean by API? Linus's original git didn't have an API, just a bunch of low-level C commands ('plumbing'). The CLI ('porcelain') was originally just wrappers around the plumbing.
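The plumbing layer is small enough that parts of it are easy to reproduce by hand. As a sketch (in Python purely for illustration), this is how a plumbing command like `git hash-object` derives a blob's object ID in a SHA-1 repository:

```python
import hashlib


def git_blob_id(data: bytes) -> str:
    """Compute the object ID git assigns to a blob.

    A blob is stored as the header b"blob <size>\\0" followed by the raw
    content; the object ID is the SHA-1 of that whole byte string
    (assuming a SHA-1 repository, which is still git's default).
    """
    store = b"blob %d\x00" % len(data) + data
    return hashlib.sha1(store).hexdigest()


# The empty blob has a famously recognizable ID:
print(git_blob_id(b""))  # → e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Everything else (trees, commits, refs) is layered on the same content-addressed store, which is why the porcelain could start out as thin wrappers.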


Those C functions are the API for git.


On the other hand, the hooks system of git is very good API design, IMO.
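Part of why: a hook is just an executable dropped into `.git/hooks/`, run by git at a fixed point, with a nonzero exit status aborting the operation. A sketch of a hypothetical `pre-commit` hook (the trailing-whitespace policy here is made up for illustration):

```python
import subprocess
import sys


def has_trailing_whitespace(text: str) -> bool:
    # True if any line ends in spaces or tabs.
    return any(line != line.rstrip() for line in text.splitlines())


def main() -> int:
    # List the files staged for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True,
    ).stdout.splitlines()
    failed = 0
    for name in staged:
        try:
            with open(name, encoding="utf-8") as f:
                if has_trailing_whitespace(f.read()):
                    print(f"{name}: trailing whitespace", file=sys.stderr)
                    failed = 1  # nonzero exit aborts the commit
        except (OSError, UnicodeDecodeError):
            continue  # deleted or binary files: skip
    return failed


# Installed as .git/hooks/pre-commit, the script would end with:
#     sys.exit(main())
```

No plugin API, no registration, no library to link against - the contract is just "be executable and exit 0 to proceed", which is why hooks work in any language.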


Yeah, could be... IIRC, he said he doesn't find version control and databases interesting. So he just did what had to be done, did it quickly, and then delegated, so he could get back to more satisfying work.

I can relate to that.


baseless conjecture


> I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

That sounds a bit weird. As I remember, Linux developers used a semi-closed system called BitKeeper for many years; for some reason, the open-source systems at the time weren't sufficient. The problems with BitKeeper were constantly discussed, so it might be that Linus had been thinking about the problems for years before he wrote git.


Well, if you want to take what I said literally, it seems I need to explain..

My point is, he thought about it for some time before he was free to start the work, then he laid down the basics in less than a week, so that he was able to use Git to build Git, polished it for a while, and then turned it over.

Here's an interview with the man himself telling the story 10 years later, a very interesting read:

https://www.linuxfoundation.org/blog/blog/10-years-of-git-an...

https://en.wikipedia.org/wiki/Git#History


>> …and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

How very biblical. “And Torvalds saw everything that he had made, and behold, it was very good. And there was evening, and there was morning—the sixth day.”


> And there was evening, and there was morning—the sixth day.

I presume you're using zero-based numbering for this?


You were downvoted, but that part made me smile for the same reason :-)

> The seventh day he rested

This is obviously Sunday :-)



