
> I believe the core issue is devs like above never take the time to “grok” Git

I believe the same, and it's not just a problem you see with git; it's all over the place. Some developers seem so eager to use something that they just skim the documentation, doing the least amount of reading and understanding needed to implement something, but they often miss some fundamental detail and have to jump back. Or they fundamentally misunderstand the tool altogether, yet push forward with their own idea of what the tool is rather than stepping back and learning it properly.



You say that like it's bad, but that's just life. We're trying to fit a whole universe into 3 pounds of meat. If we stopped to truly understand anything before taking action, we'd never get anywhere.

That's especially true in technology, where whole armies of people are working to complicate things as fast as possible. For the best of reasons, of course. But when I started out, I could read one short book and have a pretty good understanding of my computer's processor, and from there it wasn't much further to understanding the whole computer. Now I could spend days just understanding, say, processor cache strategies [1]. A field that is super interesting, but if I am to get any of my actual work done, I can't afford to dig into that and the many, many other similar subtopics. I'm going to get a casual understanding of something, do my best to make something useful, and only dig in further when I'm forced to.

When I do have to dig in, it comes in two cases for me. One is necessary complexity that I would have to learn about regardless. E.g., if something is too slow, I need to learn what happens under the hood to do proper performance tuning. Great, fine, I will learn it.

But then there's the other bucket, which includes unnecessary complexity, bad abstractions, poorly considered UX, and the like. For me, git is clearly in that bucket. I intentionally have very simple development flows. Git can do a great deal, 98% of which I not only don't need, I actively don't want. [2] So I'm going to do my best to remain ignorant of its inner workings, stick with my small number of commands, and very occasionally refer to "Oh Shit, Git!?!" [3] And I'm perfectly happy with that until it gets replaced with a technology that better matches the domain.

[1] e.g.: https://chipsandcheese.com/2022/05/21/igpu-cache-setups-comp...

[2] An example of what I don't want: https://www.tiktok.com/t/ZTRpPPuKf/

[3] https://ohshitgit.com/


I'm very happy to fundamentally misunderstand Git's internals, because I want to perform a finite (and small) set of operations with it; I don't care HOW it operates under the covers.

Conflating the two, like you do, is elitism.

"If you just stopped and read to understand...", a-ha, sure, I'll do that for every one of the no less than 500 tools I've used over the course of my career and will never get anything done on time.

There's no time. We've got real work to do, and no, we can't switch to a company that gives us this time. There's a big world outside Silicon Valley.

Git is a huge UX failure, and seeing people pretend otherwise makes me question my -- or their -- sanity.


You won't have to put your entire life on hold in order to understand the fundamentals of git and why it works the way it works. Going through https://jwiegley.github.io/git-from-the-bottom-up/ and really understanding the material will take you a couple of hours at most, but will save you a lot of time in the future.

Wanting to understand things before using them is hardly elitism, not sure why you would think that.

Just like you probably don't want to fix bugs without understanding the cause, it's hard to use a tool correctly unless you know how the tool works.


Git is one (egregious) example, though; do I need to understand the fundamentals of every tool that I use/interact with every day? That's just not feasible. If not, where do you draw the line? To many, git is a means to an end, in the same way that $(insert_internal_tool_here) is. Nobody expects you to know the details of a B-tree and an R-tree to use MySQL, so why is it OK to expect people to understand the implementation details of git to use centralized version control?


Except that not knowing about B-trees for MySQL will likely mean you don't understand how to write good indices.


I like your analogy because it highlights the flaw of Git: we have to "fix bugs" in its UX so we can get our job done with it. :\

Also yeah, I agree learning Git is not a huge sacrifice, but with time I've built up a huge internal resistance to it so... dunno. ¯\_(ツ)_/¯

Maybe I'll get to it one day, in the meantime I am OK relearning cherry-pick and a few others every time I need them. I don't know, it just doesn't make sense to me. Guess to this day I don't see why it had to be a graph DB.


This article contains some very dangerous advice. In particular, it says that the difference between reset --hard and checkout is merely that working-tree changes are not preserved. What it does not mention is that reset will obliterate commit history subsequent to the requested revision 5f1bc85.

From the source:

Here’s me being straight up loco and resetting the head of my working tree to a particular commit:

  $ git reset --hard 5f1bc85
The --hard option says to erase all changes currently in my working tree, whether they’ve been registered for a checkin or not (more will be said about this command later). A safer way to do the same thing is by using checkout:

  $ git checkout 5f1bc85
The difference here is that changed files in my working tree are preserved.
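A hypothetical throwaway repo makes the distinction concrete: both commands land on the older commit, but only reset moves the branch ref itself, which is what puts the later commits at risk (command names and contents here are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m first
first=$(git rev-parse HEAD)
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m second
second=$(git rev-parse HEAD)
branch=$(git symbolic-ref --short HEAD)

git checkout -q "$first"      # detached HEAD; the branch still points at 'second'
git checkout -q "$branch"
git reset -q --hard "$first"  # the branch ref itself now points at 'first';
                              # 'second' is reachable only through the reflog
```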


> reset will obliterate commit history subsequent to the requested revision 5f1bc85.

It's not obliterated. It is still in the reflog, and in the history of any other branches or tags that have those commits in their history.

You have to try quite hard or wait 2 months to obliterate something that was committed or stashed into a repo.

You can even recover changes that were only staged.

Compare this to most other programs (word processors, GIMP, etc) which will happily genuinely obliterate if you undo multiple times and then do anything other than immediately redo.
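A hypothetical throwaway repo shows the recovery path: a commit dropped by reset --hard stays in the reflog, and the branch can be pointed right back at it (names and file contents below are invented for the demo):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@b \
       GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@b
echo one > file && git add file && git commit -qm one
echo two > file && git commit -qam two
lost=$(git rev-parse HEAD)

git reset -q --hard HEAD~1    # 'two' disappears from the branch...
git reflog -n 2               # ...but the reflog still records where HEAD was
git reset -q --hard "$lost"   # so the branch can be pointed right back at it
```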


This language is, IMO, damaging to the professionalism and image of software engineering. IIRC I am echoing content from "The Clean Coder", but in "professional" careers it is expected that the practitioner is competent and stays up to date on the latest tools and techniques in their field.

When I see a doctor, I expect them to be familiar with the latest medical research. I expect they will treat my illnesses with modern medicines and employ the right tools, correctly, and understand how they work at a sufficient level of depth to do the job correctly. For example, I used to make electrical medical staplers; surgeons need not care about how RTOSes work, but they need to know how to interact with the software enough to do their job. Similarly, I'm not saying we all ought to be able to build Git from scratch. I'm saying we ought to master it to the extent necessary. If your use case is committing alone on a single branch, learn "git commit". But for most devs, understanding the tree structure that Git uses to store data and what basic operations are doing "under the covers" builds a mental model that makes Git easy.
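That mental model can be built by poking at the objects directly in a hypothetical scratch repo: a commit points at a tree, which points at blobs, and git cat-file shows each layer (filenames here are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
echo hello > hello.txt && git add hello.txt
git -c user.name=a -c user.email=a@b commit -qm hello

git cat-file -t HEAD             # the object type: "commit"
git cat-file -p HEAD             # its contents: tree id, author, message
git cat-file -p 'HEAD^{tree}'    # the tree: mode, type, id, and name of each entry
git cat-file -p HEAD:hello.txt   # the blob: the file's content, "hello"
```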

We have to make the time to learn the tools that help us do our jobs well. You can no more neglect that duty to do "real work" than a doctor can neglect learning how to use a scalpel so they can "get on with the surgery".


> it is expected that the practitioner is competent and stays up-to-date on the latest tools and techniques in their field.

Even when the tools themselves suck?

The thing about our profession is that anyone can build better tools. Just like Linus took a week or two to sketch Git with some C code and a bunch of shell scripts (seriously, that is what happened -- but he had the BitKeeper design in mind and knew how to improve on it). There is no regulation that says that in order to build a tool for millions of people to use you need a certification or anything.


Actually, I believe a deliberate goal was to write something distinctly different from how BitKeeper operated.


Yes, but without being informed by BitKeeper's design and shortcomings he would never have designed Git so quickly.


We all know the ideal theory, man.

Citing it as if it even needed to be said is kind of ironic on a forum of mostly programmers who, I am pretty sure, by and large possess a fair amount of critical-thinking and quick-analysis skills.

A ton of people have to make do with partial schedules. And I mean a ton, likely no less than 85% of all programmers everywhere.

You might want to take a good, deep look at whether you're coming from a position of severe privilege and a very positive filter bubble.


I agree with you, but I also have the thought of "well, git is one of the _top_ tools of the 500 I use," so I think I'm a bit more inclined to fill in a few more gaps as I encounter them. Ultimately though if you have the right balance of knowledge about the tool, you can always stop learning more about it until you learn otherwise…


Also true, and I agree. As mentioned in a sibling comment, with time I've kind of started loathing the idea of learning Git's internals, so here we are. We'll see; these things tend to fall away with time.


[flagged]


The only one complaining here is you. Also this is not Reddit, chill.


You can either learn the tools of the trade, or you can go online and complain about how your hammer is too hard to use and so you refuse to hold it right. How is that not complaining?


You tell the guy using a brick as a hammer that it isn't actually a good hammer, no matter how many other people are also using bricks as terrible hammers. Git is a brick.


Memories of the "PHP Is A Fractal of Bad Design" article...


That's not a good analogy. A better analogy is someone telling you that you need to understand materials science to be able to use an impact driver properly.


Can you learn materials science in an afternoon of minimal effort?


No, and you can't learn git in an afternoon either. Here's a very very simple scenario. You and I are working on a fork of a project. You make a branch and push it. I want to update an unrelated branch with the changes from the fork, so I follow [0] (note all of the various adjustments in the comments), and suddenly git switch doesn't work for your branch anymore.

Git has dozens of failure modes like this where the behaviour is completely unintuitive unless you understand the internals of git.

[0] https://stackoverflow.com/questions/7244321/how-do-i-update-...


Not sure what you're going on about. I tried replicating what you describe, but `git switch` keeps working just fine.

    ~/tmp/foo git init repo1
    Initialized empty Git repository in /home/user/tmp/foo/repo1/.git/
    ~/tmp/foo cd repo1
    tmp/foo/repo1 ‹master› echo hi > hi
    tmp/foo/repo1 ‹master› git add hi
    tmp/foo/repo1 ‹master*› git commit -m hi
    [master (root-commit) d0cf572] hi
    1 file changed, 1 insertion(+)
    create mode 100644 hi
    tmp/foo/repo1 ‹master› cd ..
    ~/tmp/foo git clone repo1 repo2
    Cloning into 'repo2'...
    done.
    ~/tmp/foo cd repo1
    tmp/foo/repo1 ‹master› echo hi2 > hi 
    tmp/foo/repo1 ‹master*› git add hi
    tmp/foo/repo1 ‹master*› git commit -m hi2
    [master 3a82bdc] hi2
    1 file changed, 1 insertion(+), 1 deletion(-)
    tmp/foo/repo1 ‹master› git checkout -b someotherbranch
    Switched to a new branch 'someotherbranch'
    tmp/foo/repo1 ‹someotherbranch› git 
    tmp/foo/repo1 ‹someotherbranch› echo otherchange > otherchange
    tmp/foo/repo1 ‹someotherbranch› git add otherchange 
    tmp/foo/repo1 ‹someotherbranch*› git commit -m "otherchange"
    [someotherbranch 99ec880] otherchange
    1 file changed, 1 insertion(+)
    create mode 100644 otherchange
    tmp/foo/repo1 ‹someotherbranch› cd ../repo2
    tmp/foo/repo2 ‹master› git remote add upstream ../repo1 
    tmp/foo/repo2 ‹master› git fetch upstream
    remote: Enumerating objects: 8, done.
    remote: Counting objects: 100% (8/8), done.
    remote: Compressing objects: 100% (3/3), done.
    remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
    Unpacking objects: 100% (6/6), 448 bytes | 448.00 KiB/s, done.
    From ../repo1
    * [new branch]      master          -> upstream/master
    * [new branch]      someotherbranch -> upstream/someotherbranch
    tmp/foo/repo2 ‹master› git checkout master
    Already on 'master'
    Your branch is up to date with 'origin/master'.
    tmp/foo/repo2 ‹master› git rebase upstream/master
    Successfully rebased and updated refs/heads/master.
    tmp/foo/repo2 ‹master› git switch someotherbranch
    branch 'someotherbranch' set up to track 'upstream/someotherbranch'.
    Switched to a new branch 'someotherbranch'


Git can't be learned quickly by everyone. I, for example, haven't needed a graph DB even once throughout a 21-year career.


No, and neither can one learn git in an afternoon.


Speak for yourself.


We speak for more than one person and you should stop pretending otherwise.

Obviously not everyone can learn it quickly. That's a fact. Denying it is ruining the forum discourse.


Mercurial.

Also, not having another option would hardly make git optimal (or even good).


No, it doesn't. It does make learning it practical considering it's what everyone uses.

If you think you can do better, please do! Let me know when you've gotten a few projects to switch over and I'll gladly learn that, too. Not a lot of projects using mercurial these days.


How many projects/companies do you need? There's still a fair number using it.


Practically speaking, anything mainstream that I actually use. And the "still" qualifier there is the problem. That number should be growing, not shrinking.


OK. Mainstream... you use it... How about: nginx, sudo, PyPy, Mozilla/Firefox, Facebook?

And. totally agreed, the number should be growing (especially for such a nice piece of tech with a far better toolset). Now that you've signed on to learning, hopefully that will be the case.


Serious Stockholm syndrome on display


It's insane to me how many developers simply refuse to read documentation or spend any time at all learning the new tools they're supposed to use.


This is all Stockholm syndrome. Git has a lot of random accidental complexity, from reset and checkout doing too many things (yes I'm aware of switch and restore), to stacks being a pain to work with. The idea that you're supposed to just be in the middle of an interactive rebase most of the time is mind-boggling.

A better thing to say is "yeah we've been saddled with this horrible tool, yeah we know it sucks, but it'll suck a bit less when you learn it. Oh and sorry if you're not a professional developer and have to use git, I hope we can do better next time."
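For what it's worth, the narrower commands alluded to above do exist (Git 2.23+): switch only changes branches and restore only touches files, instead of checkout doing both. A hypothetical throwaway repo, with invented file contents, sketches the split:

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
echo v1 > file && git add file
git -c user.name=a -c user.email=a@b commit -qm v1

git switch -c feature    # branch creation/switching only (was: checkout -b)
echo v2 > file           # an unwanted edit...
git restore file         # ...discarded from the working tree (was: checkout -- file)
```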


Yes, we all know it has some rough edges and could be more convenient. Unless someone actually makes that idyllic more convenient tool and it becomes widespread, none of that matters. We're stuck with the hammer that everyone else is using. No sense refusing to learn how to use it just because you're stubborn, everyone else managed just fine.


Well no, that's not the case. In fact many projects (gitless for example) clearly show that a consistent UI for git is possible, but unless git developers decide to rework the tool completely, we are stuck with what we have. And no, using a 3rd party tool and convincing every team that gitless is better is simply not going to happen. The way out is for someone to swallow their pride, admit they did a half-assed job and fix it. Not holding my breath though.

I say that as someone who uses git daily, who has learned git internals, who uses cli almost exclusively, who helps the teammates out of their git problems and who still hates the cli inconsistencies.


How to tell apart the professionals from those who just do the equivalent of button mashing to get something to work.


I've seen this phenomenon as well. I've noticed a few factors that I hypothesize contribute to this:

1) Making Computer Science programs in universities overly simple. When I was in college, I tutored students in CS. When I started college, the CS 101 course was taught in Java, and when I finished, it was being taught in Python. There was an immense gap between the understanding of the Python-first vs. Java-first students.

The Java-first students understood fundamental concepts like arrays, passing parameters by reference vs. by value, "OOP" concepts, and other common paradigms in languages. The Python-first students would use "lists" and "dictionaries" for many problems without understanding that those structures impacted the time complexity of their solutions, or that they used memory allocation, reallocation, hashing, etc. Python is great for hacking together a non-fault-tolerant program that does X as quickly as possible, but I saw it damage the way the students thought about computing.

It made the students think it ought to be easy. It made them think that they should be able to write, in nearly plain English, what they wanted the computer to do, and that it ought to do it. It made them think they should be able to code intuitively, without spending any time learning the craft, and that they could do anything an advanced programmer could do by typing a few lines of simple code. They were less willing to accept that sometimes problems are hard, sometimes they would have to think, and sometimes they would have to write more than a for loop to solve problems. The Java students' problem sets covered implementing data structures and complex applications, while the Python students struggled to put together even basic implementations.

For a humorous and broader explanation of this subject, see James Mickens' excellent article The Night Watch [0]. For example: "That being said, if you find yourself drinking a martini and writing programs in garbage-collected, object-oriented Esperanto, be aware that the only reason that the Esperanto runtime works is because there are systems people who have exchanged any hope of losing their virginity for the exciting opportunity to think about hex numbers and their relationships with the operating system, the hardware, and ancient blood rituals that Bjarne Stroustrup performed at Stonehenge."

2) Bad documentation, and bad users of documentation. I dislike documentation that neglects how the tool works and instead creates a cookbook that its users can copy verbatim. It has similar effects to what I described above. Developers that use this type of documentation find themselves helpless when they encounter a problem they've not memorized the solution to. I think it also creates a learned helplessness wherein users cannot question the model the tool uses or think outside that model to solve complex problems. I prefer documentation that teaches the ethos of the subject in question so that I can understand better and imagine new solutions.

I've also heard the complaint that Git's documentation is awful; some say that it is only useful if you already know where to look, and useless if not. In other words, it lacks an "apropos" (aka "man -k") equivalent. This is my gripe with "wiki" style documentation. It is scattered and unsearchable and has no cohesion. We as developers need to do a better job creating documentation that is searchable, coherent, and useful to new and experienced users alike. Various sections of documentation should have ample cross-references to help users understand the connections between parts of the system and form a mental model of how the tooling works.

[0]: https://www.usenix.org/system/files/1311_05-08_mickens.pdf



