Hacker News | new | past | comments | ask | show | jobs | submit | zephen's comments

> why involve Git at all then?

I made a similar point 3 weeks ago. It wasn't very well received.

https://news.ycombinator.com/item?id=47411693

You don't actually need source control to be able to roll back to any particular version that was in use. A series of tarballs will let you do that.
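A minimal sketch of that idea, in Python for illustration (the file names and layout here are hypothetical): keep one timestamped tarball per released version, and "rolling back" is just extracting the one you want.

```python
import pathlib
import tarfile
import tempfile

# Hypothetical layout: one timestamped tarball per released version.
root = pathlib.Path(tempfile.mkdtemp())
src = root / "src"
releases = root / "releases"
src.mkdir()
releases.mkdir()

def snapshot(name):
    # Archive the source as a release artifact.
    with tarfile.open(releases / name, "w:gz") as tar:
        tar.add(src / "app.conf", arcname="app.conf")

(src / "app.conf").write_text("v1")
snapshot("app-2024-01-01.tar.gz")

(src / "app.conf").write_text("v2")
snapshot("app-2024-02-01.tar.gz")

# "Rolling back" is just extracting the tarball for the version you want:
with tarfile.open(releases / "app-2024-01-01.tar.gz") as tar:
    tar.extractall(src)

assert (src / "app.conf").read_text() == "v1"
```

What this can't give you, of course, is any of the change-set reasoning described below.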

The entire purpose of source control is to let you reason about change sets to help you make decisions about the direction that development (including bug fixes) will take.

If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners, or are they still using it because they don't want to admit to themselves that they've completely lost control?


> are they still using it because they don't want to admit to themselves that they've completely lost control?

I think this is the case, or at least close.

I think a lot of people are still convincing themselves that they are the ones "writing" it because they're the ones putting their names on the pull request.

It reminds me of a lot of early Java, where it would make you feel like you were being very productive because everything that would take you eight lines in any other language would take thirty lines across three files to do in Java. Even though you didn't really "do" anything (and indeed Netbeans or IntelliJ or Eclipse was likely generating a lot of that bootstrapping code anyway), people would act like they were doing a lot of work because of a high number of lines of code.

Java is considerably less terrible now, to a point where I actually sort of begrudgingly like writing it, but early Java (IMO before Java 21 and especially before 11) was very bad about unnecessary verbosity.


> If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners,

does it have to be free to be useful? the CD part is even more important than before, and if they still use git as their input, and everyone including the LLM is already familiar with git, what's the need to get rid of it?

there's value in git as a tool everyone knows the basics of, and as a common interface of communicating code to different systems.

passing tarballs around requires defining a bunch of new interfaces for those tarballs, which adds a cost to every integration that you'd otherwise get essentially for free if you used git


A series of tarballs is version control.

Git gives you the series of past snapshots, if that's all you want it for, in infrastructure you don't need to re-invent.


A series of tarballs is really unwieldy for that though. Even if you don't want to use git, and even if the LLM is doing everything, having discrete pieces like "added GitHub oauth to login" and "added profile picture to account page" as different commits is still valuable for when you have to ask the LLM "hey, about the profile picture on the account page...".

Your example is only for dumping memory.

> this is a weak argument for what computers should do; if LE is more efficient for machines then let them use it

Computers really don't care. Literally. Same number of gates either way. But for everything besides dumping it makes sense that the least significant byte and the least significant bit are numbered starting from zero. It makes intuitive mathematical sense.
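To illustrate that size-independence (a Python sketch; the 8-byte width is chosen arbitrarily): in a little-endian layout, byte n always carries weight 256**n, so the numbering never depends on how wide the integer is.

```python
x = 0x1122334455667788
le = x.to_bytes(8, "little")

# Byte n of a little-endian value is just (x >> 8*n) & 0xFF:
for n in range(8):
    assert le[n] == (x >> (8 * n)) & 0xFF

# Widening or narrowing the integer never renumbers the existing bytes:
assert (x & 0xFFFFFFFF).to_bytes(4, "little") == le[:4]
```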


> Same number of gates either way

Definitely not, which is why many 8-bit CPUs are LE. Carries propagate upwards, and incrementers are cheaper than a length-dependent subtraction.


So, to be clear, I was writing about when you design a computer. It truly is the same number of gates either way. I have written my fair share of Verilog. At one level, it's just a convention.

For the use of a computer, yes, if you are doing multi-word arithmetic, it can matter.

OTOH, to be perfectly fair and balanced, multi-word comparisons work better in big-endian.
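That advantage is easy to demonstrate (a Python sketch): for equal-width unsigned integers, big-endian byte strings sort in the same order as the numbers, so a plain byte-wise comparison like memcmp orders them correctly, while little-endian byte strings do not.

```python
a, b = 1, 256

# Big-endian bytes compare lexicographically in numeric order,
# so a byte-wise comparison (memcmp-style) gets the answer right:
assert (a.to_bytes(4, "big") < b.to_bytes(4, "big")) == (a < b)

# Little-endian bytes do not: 256 starts with a 0x00 byte and so
# sorts before 1, the opposite of the numeric order:
assert (a.to_bytes(4, "little") < b.to_bytes(4, "little")) != (a < b)
```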


Not only dumping, but yes, I agree it only matters when humans are in the loop. My most annoying encounters with endianness were when writing and debugging assembly, and I assure you dumping memory was not the only pain point.

I've done plenty of assembly language. It was the bulk of my career for over 20 years, and little endian was just fine, and big endian was not.

> Computers really don't care. Literally. Same number of gates either way.

Eh. That depends; the computer architectures used to be way weirder than what we have today. IBM 1401 used variable-length BCDs (written in big-endian); its version of BCDIC literally used numbers from 1 to 9 as digits "1" to "9" (number 0 was blank/space, and number 10 would print as "0"). So its ADD etc. instructions took pointers to the last digits of numbers added, and worked backwards; in fact, pretty much all of indexing on that machine moved backwards: MOV also worked from higher addresses down to lower ones, and so on.


> BE is intuitive for humans who write digits with the highest power on the left.

But only because when they dump memory, they start with the lowest address, lol.

Why don't these people reverse numberlines and cartesian coordinate systems while they're at it?


A lot of graphics APIs do actually reverse the y-coordinate for historical reasons.

Right. I've done plenty of postscript/PDF.

But 99% of the time the x-coordinate and numbers increment from left to right.


From personal experience, especially don't point out that you foresaw the problem and warned against the path in a putative "lessons learned" meeting, lest ye be admonished that the true meaning of "disagree and commit" includes "and forget this conversation ever happened," even though the singular point you were trying to make at the "lessons learned" meeting was about how paying attention to concerns might actually be useful in future projects.

> In closing, let me reiterate this point so it is crystal clear. If you are a maintainer of a libre software project and you refuse a community port to another architecture, you are doing a huge disservice to your community and to your software’s overall quality.

Linus Torvalds disagrees. Vehemently.

https://www.phoronix.com/news/Torvalds-No-RISC-V-BE

> For those who don’t know, endianness is simply how the computer stores numbers. Big endian systems store numbers the way us humans do: the largest number is written first.

Really, what's first? You're so keen on having the big end first, but when it comes to looking at memory, you look... starting at the little end of memory first??? What's up with that?

> I happen to prefer big endian systems in my own development life because they are easier for me to work with, especially reading crash dumps.

It always comes back to this. But that's not a good rationale for either the inconsistency of mixed-endianness where the least significant bit is zero but the most significant byte is zero, or true big endianness, where the least significant bit of a number might be a bit numbered 7 or numbered 15, or even 31 or 63, depending on what size integer it is.

> (Porting to different endianness can help catch obscure bugs.)

Yeah, I'm sure using 9 bit bytes would catch bugs, too, but nobody does that either.


BE was a huge mistake. Arabic numerals originated in a right-to-left language too.

> depending on what size integer it is

That's the worst part about BE: values that have a size-dependent term in them, in addition to a subtraction. 2^n vs. 2^(l-1-n) and 256^N vs. 256^(L-1-N).
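A sketch of that width dependence (the helper names here are made up for illustration): with LE-style numbering, bit n has weight 2**n at any width; with BE-style numbering, where bit 0 is the MSB, bit n has weight 2**(width-1-n), so the same logical bit gets a different number at every width.

```python
def bit_le(x, n):
    # LE-style numbering: bit n has weight 2**n, width-independent.
    return (x >> n) & 1

def bit_be(x, n, width):
    # BE-style numbering: bit 0 is the MSB, so bit n has weight
    # 2**(width - 1 - n), which depends on the word width.
    return (x >> (width - 1 - n)) & 1

x = 0b1010

# The bit with weight 2 is always "bit 1" in LE numbering:
assert bit_le(x, 1) == 1

# In BE numbering, that same bit is bit 6 in an 8-bit word
# but bit 14 in a 16-bit word:
assert bit_be(x, 6, 8) == 1
assert bit_be(x, 14, 16) == 1
```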

According to Linus, BE has been "effectively dead" for at least a decade: https://news.ycombinator.com/item?id=9451284


Arabic numerals originated in India, where languages are written left to right.

> After a while you learn to ignore criticism.

Valid and useful criticism is rare.

Critics providing other sorts of criticisms fall into a bimodal distribution. There are those who criticize because your proposal seems risky and they don't want to see you fail, and then there are those who criticize because they don't want to see you succeed.


> This is bullshit.

Yeah, the there-are-no-problems-only-opportunities crowd hates to be told that their proposal contains an insurmountable opportunity.


> This is a classic meta shutdown - the exact thoughtless criticism the article rails against.

No, it's not. Read the rest of his comment. I agree with it wholeheartedly. The article describes a terrible way of surfacing a new idea, and if you keep trying to get buy-in that way, you will keep failing.

> It doesn't help to waste oodles of time learning about mistakes made by others under different contexts and constraints.

Intelligence is practically defined by the ability to learn from others' mistakes.

> Avoiding mistakes is hard.

But useful. I once read about a machinist who started at a new job. His boss caught him trying to rework a piece he had screwed up, took the piece away from him and threw it on the discard pile. "We want you to focus on doing things right the first time, not fixing your mistakes."


Not creating gatekeepers is good advice.

The sort of meeting described in the article seems custom-made for gatekeeper creation.

If a meeting is about something else, don't raise your irrelevant idea. If a meeting is about your idea, then why are you having it if you haven't fleshed out your idea and already talked individually with a lot of stakeholders?

The only reason I can see for this sort of behavior is a kind of naivete that ascribes immense valuation to ideas, coupled with fear of losing credit if the idea loses association with your name.


> does this actually save me time at all?

Soooooo....

As one who hasn't taken the plunge yet -- I'm basically retired, but have a couple of projects I might want to use AI for -- "time" is not always fungible with, or a good proxy for, either "effort" or "motivation".

> How much money have I wasted in tokens?

This, of course, may be a legitimate concern.

> If it generates the slop version in a week but it takes me 3 more weeks to clean it up, could I have just done it right the first time myself in 4 weeks instead?

This likewise may be a legitimate concern, but sometimes the motivation for cleaning up a basically working piece of code is easier to find than the motivation for staring at a blank screen and trying to write that first function.


Well for me, the amount of time/effort as a function of my motivation has acted as a natural gatekeeper to bad ideas. Just because I can do something with AI now doesn’t necessarily mean that I should. I am also wary of trading time and effort for outright money right out of my own pocket to find out, especially when I find the people I’d be giving money to so reprehensible. I don’t live somewhere where developers make a lot of money. I’m not poor by any stretch, but not rich enough that I can waste money on slop for funsies. But I can spend a month on validating a side project because I find coding as a hobby enjoyable in and of itself, and I don’t care if I throw out a few thousand lines of code after a little while and realize I’m wasting my time.

Cleaning up agent slop code by hand is also a miserable experience and makes me hate my job. I already do it at $DAYJOB because my boss thinks “investing” in third worlders for pennies on the dollar and just giving them a Claude subscription will be better than investing in technical excellence and leadership. The ROI on this strategy is questionable at best, at least at my current job. Code review by humans is still the bottleneck, and delivering properly working features has not accelerated, since they require much more iteration because of the slop.

Would much rather spend the time making my own artisanal tradslop instead if it’s gonna take me the same amount of time anyway - at least it’s more enjoyable.


Your position makes an immense amount of sense for your described situation.

As I said, I'm retired, and so I've never had to clean up AI slop at $DAYJOB.

Since the whole AI thing would be a learning experience for me, it would include trying to toilet train the AI itself, as others have intimated can be done in some cases, rather than dealing with a bunch of already-checked-into-the-repo-slop.

And that may be a losing proposition. I don't know; haven't tried it yet.

> Would much rather spend the time making my own artisanal tradslop instead if it’s gonna take me the same amount of time anyway - at least it’s more enjoyable.

Although I haven't had the AI experience you describe, I have had a similar experience with coworkers who moved fast and broke all kinds of shit. That was similarly no fun. It's like trying to work on your wife's minivan, but she won't pull over and let you properly fix it.

Given sufficient time, I enjoy polishing/perfecting/refactoring code. My final output often looks radically different from my prototype. It is clear to me that I would hate the situation you describe. It is not clear to me that starting with prompted slop and wrangling it into submission would be much less enjoyable to me than writing my own slop and then wrangling it into submission.

> especially when I find the people I’d be giving money to so reprehensible.

This is a bit of a concern, but I'm pretty sure that, at the moment, every token you burn costs them more than it costs you.


The biggest thing that has changed in my experience (at least in a professional setting) is that now that people have AI agents they don’t really have any motivation to improve. If you tell them something needs to be changed, they just reprompt the agent until it’s good enough - but the most sinister thing is they keep making the same mistakes over and over again. There is no growth, no shared understanding that disseminates through review - just re-prompting. They often just directly use my review comments as prompts! People don’t understand code they generated themselves just a few days later. Not in an “oh, just let me reread this again real quick” kind of way, but in an “I have absolutely no clue wtf I am even looking at” way.

I’ve been sounding the alarm in my own circles about the lack of junior roles now because of AI - which will lead to a shortage of seniors in just a few years - but there is something even more sinister: juniors no longer improve enough to be intermediates and seniors, and worse…seniors and intermediates have regressed to juniors through laziness and cognitive offloading.

Like if I’m just sending code review to a middle man prompter - why not just skip the middle man? I’m already wrangling a handful of AI agents myself every day, so what is even the point of this extra person anyway? I don’t want to replace people with AI but if the person is so lazy that even I would probably prefer just doing the prompting myself then why shouldn’t I replace them with AI?


That does sound like an intractable problem.

My problem, if and when I get started, would be tangential to this. It is clear that communication with LLMs is changing so rapidly that there may not be any universal long-lived lessons to be learned from optimizing your interactions with a particular model.

I know that one-shotting things is probably not best, but determining how far to take it and when to cut over and finish it myself is something that I want to learn, but perhaps not too well.

My skills are an eclectic mix of high- and low-level. I know exactly what, for example, a frequency analyzer can do for me, but controlling the $400K frequency analyzer is often best left to the guy who lives and breathes it.

Likewise, my debugging skills are exceptional, but I am not as proficient with any particular debugger as are people who live in the debugger daily because they write terrible code. My debugging skills are mostly predicated on a big part of your daily life -- reading code.

(To be fair, I have known a very few people who live in the debugger because they are dealing with intractable problems caused by other people, but those are the rarities. I, myself, used to live in the debugger a lot when I was writing graphics drivers for the mostly undocumented Windows 3.1.)

Which brings us to your reports and/or co-workers. These people have always existed. They pride themselves on, and partly derive their value from, the tools they think they know inside-out.

In truth, they don't know the tools, but they are intimately familiar with the controls of the tool, like a child who knows how to make a smartphone do exactly what their parent needs it to do.

So, as long as it's a tool you need, but it's too painful for you to control directly, these people are useful. In your case, you already have cause to use the LLM directly on a regular basis, so, as you point out, the value of these people is diminishing and maybe already negative.

> why shouldn’t I replace them with AI?

You probably should. Or, at a minimum, if possible, you should restructure things so that the people who are doing things that you are already proficient at are doing them for someone else who isn't as proficient at the tools, and you can get out of that loop.

One reason I am not yet completely insane is that I realized about 40 years ago that the place I hated most being was inside someone else's debug loop. Because most people are objectively stupid, and this goes double for people who need you in that loop. So I always work to structure my responsibilities and work setup to avoid this. If I find a bug in an internal supplier's code, I create an MVCE and hand it over to them. If an internal customer claims to find a bug in my code and doesn't provide an MVCE, I figure out what they are attempting to do, create my own MVCE for their function, and either fix it if it was really my problem, or hand it back to them, and ask them to expand on it until it breaks and get back to me.

Reflecting on this, I realize that I am probably not too likely to succumb to interminable prompting loops, because that wouldn't feel much different from what I have avoided most of my life. On the few occasions over the last four decades where being involved in someone else's debug loop was completely unavoidable, the most useful thing I brought to the table when they were out of ideas and ready to throw a lot of effort at trying random things was a series of questions like "What are you going to learn from that? What will your decision points be?"

And I'm not much of a gambler, so I won't be spending too many tokens hoping "the next time, for sure!"

