AdieuToLogic's comments | Hacker News

> I wrote this the other day:

>> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.

A detailed counterargument to this position can be found here[0]. In short, what is colloquially described as "LLM hallucinations" does not serve any plausible role in software design other than to introduce an opportunity for software engineers to stop and think about the problem being solved.

See also Clarke's third law[1].

0 - https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphiz...

1 - https://en.wikipedia.org/wiki/Clarke%27s_three_laws


Did you mean to post a different link? The article you linked isn’t a detailed counterargument to my position and your summary of it does not match its contents either.

I also don’t see the relevance of Clarke’s third law.


> They wondered why they weren't fired, when their prank had no negative side effects ...

Copyright infringement lawsuits are a real thing and can include both the offending company (Apple) and all parties identified as potential violators.

In other words, management may have saved this guy's ass from being named in a very costly lawsuit.


After having introduced an Easter egg and being called out for it, the author states:

  I became a cautionary tale though and would occasionally 
  warn off the new hires who might have had an inkling to do 
  something similar. And true to my word, I would tread very 
  carefully from that day on with an eye to what Apple HQ 
  would think about any of my actions — and potential 
  consequences (intended or not). 
It is very likely that management weighed the author's value to the organization against the cost, real or perceived, to rectify this particular situation. An additional potential value which was ultimately realized is the author became an extension of organizational policy "at ground level."

IMHO, this is an optimal resolution and should be applauded. Management reaped a 20-year reward and the author kept his job.


Reminds me of a story (which I've heard in slightly varying versions) that happened in the early days of IBM. A man had made a costly mistake, but Thomas Watson didn't fire him, saying "we just spent all that money educating you!"

People learn by screwing up, and those can be the best lessons.


I did.

In one of my early jobs, I spammed a bunch of the company customers, trying to promote my music. This was before the Internet (1987 or so), and before spam, so I was a “proto-spammer.”

It became a watershed in my life, and I took my work and career, very, very seriously, after that.


[flagged]


It was a joke.

> Reminds me of a story (which I've heard in slightly varying versions) that happened in the early days of IBM. A man had made a costly mistake, but Thomas Watson didn't fire him, saying "we just spent all that money educating you!"

> People learn by screwing up, and those can be the best lessons.

Precisely.

Had the author been fired for his actions, the only people who could learn from the situation would be those employed at that time. As each cycled out of the company, this lesson would be lost.

By addressing the error in judgement while retaining the author, the organization ensured the lesson would be passed on to each new generation of engineers without having to prescribe the policy dictatorially.


This is also a much better way of handling the problem than how OP's blunder was handled in the article.

Berating in an office might teach one person. Not berating and admitting process failure in public can be educational for the whole company.

Reminds me of GitLab's database deletion thing a few years ago. Not only did they spend money educating the one engineer, but by making the story public they also educated their other employees, and dare I say the whole industry. I still hear people referring to that story from time to time.


That's a classic case of the sunk cost fallacy. Just because you spent a lot educating someone doesn't mean that you shouldn't get rid of them.

It does sound like the classic sunk cost fallacy. But it also implies that the would-be-fired person has become better after being "educated", and probably better than the average newly-hired person replacing them if they are fired...

When a generally smart person makes a humiliating million-dollar mistake, then you can trust that person, more than any of their coworkers, to never make that specific mistake again. That's the "expensive education" here.

Depending on the mistake it could also mean that they are more likely to make the same mistake. Especially after the memory of the event fades, they may regress to the old way they acted.

"all that money" here refers to the money they lost due to the mistake. The sunk cost fallacy refers to money you intended to spend...

You may want to refresh yourself on the definition. Sunk cost is resources already spent, which cannot be recovered - the inability to recover it is what drives the reluctance in the case of the fallacy.

If considered an "investment" (will produce value if retained) it's not a sunk cost.

I wouldn't say it's an optimal resolution, because it was a complete overreaction by management in the first place. The optimal resolution would have been that the author was gently but firmly told "that isn't acceptable here, don't do it again". There was no need for his manager to tear him a new one over such a minor incident.

It was probably not an overreaction: if they'd had to pull millions of CDROMs off the shelves, that would be a very big deal.

That's why it's often critical to implement systems to second-check important stuff.

If I may lapse into garrulous-old-fart mode: Soon after I made partner in my BigLaw IP firm, as the resident software(-adjacent) geek I was tasked by the management committee with overseeing the firm's docketing operation: Six very nice people (ladies of varying ages, as it happened) who for many years had opened all our incoming mail from courts, agencies, etc., and logged future deadlines into our by-then antiquated, home-grown, pre-PC calendaring system, still running on a mini-computer.

The proximate cause of my tasking was that the other partners were getting increasingly frustrated by the human-origin calendaring errors that kept showing up in lawyers' tractor-feed greenbar printouts. If we'd blown a court deadline, we likely would have gotten zero sympathy from the court, the client, and our malpractice-insurance carrier.

The first thing I did was implement a Navy-nuke system in which every deadline entry got second-checked. That produced some grumbling among the docketing staff because it made extra work for them.

(We mitigated that by eliminating some calendaring tasks that no longer made sense, such as logging every date, in ink, into a bound ledger book, which no one had looked at in years — I called it our WORN drive: Write Once, Read Never.)

I reassured the staff that no one would get fired for making a mistake, because we all do that. But they very well might get fired if their mistake got out the door because they'd bypassed the second-checking system.

Happily, the rate of calendaring errors showing up on lawyer greenbar printouts dropped to essentially zero. No one had to be fired, and several staff members said that they liked the second-checking system.

(Soon after, we switched to a then-modern, PC-based system.)


No, it would still be an overreaction in that case. When someone makes their first mistake, even if it's an expensive mistake for the company, you don't start with the kind of berating that the author related. Certainly you might have to escalate to that level of intensity if the person doesn't improve, but you don't start there.

I think you're missing a key point here.

The (business) reason you don't punish people for a mistake as a general policy isn't that you're being a forgiving soul and giving them a second chance. It's because humans have a natural unavoidable tendency to eventually make certain errors in certain roles, and it's counterproductive to punish the poor soul that happened to make that mistake. It could've been anyone else, and you're punishing the one who has the best experience for avoiding the same mistake in the future. You should instead use their experience to design a process that catches those mistakes.

That rationale goes completely out the window when you're talking about lapses in judgment that the average employee would absolutely not make, like a random unauthorized Easter egg that could put the company in legal jeopardy. It's like if a surgeon brought his Playstation to the operating room. It's not a case of "we just spent $cost training him" anymore - that was never even part of his job description in the first place, nor something anyone would have expected him to even try. There was simply no reason for it; it was just an employee fooling around and planning to apologize for it afterward. (!) At that point you're dealing with something that's much closer to an insider threat than a human error.

So, as a matter of general policy, firing him for that would have absolutely made sense. Of course individual cases may warrant different reactions, but the general reasoning for blameless mistakes simply would not have applied here.


It's really, really important to understand the context here. From the article:

> I was hired on at Apple in October of 1995. This was what I refer to as Apple’s circling the drain period. Maybe you remember all the doomsaying — speculation that Apple was going to be shuttering soon

Everyone in the company was presumably on edge, and expensive mistakes were starting to border not on career ending, but on _company_ ending. The difference between firing someone for making expensive mistakes and laying them off is nearly immaterial, IMO.


I get that in cases where, like, you accidentally drop a database table. Deliberately adding unauthorized code to a build in 1995 was much less OK than that. Regardless: he wasn't fired.

I really think these reactions come down to people not having worked in shrink-wrap software in the mid-1990s. You weren't missing much, though.


I think the argument here is that the dressing-down is done in hindsight, which the developer didn't have. It's not fair to vary the punishment by the cost of the mistake, if the person didn't intentionally make the mistake (and thus didn't take the cost into account), as that's just revenge.

What you want instead is corrective action, which is achieved fine by saying "this cost us $X million in recall costs because we don't want a copyright infringement lawsuit", and then counting on the employee to now know not to make that mistake again.

You could, I guess, argue that if you yell at an employee, they're less likely to make that mistake again, but then you'd have to be yelling at them for every mistake, since they didn't know which mistakes would end up being costly (otherwise they wouldn't make them).


What's weird is that even this developer seems to disagree with people here. It's not complicated, I don't think: we just have a rooting interest in IC developers, and in Easter eggs. I'm really only here to keep saying that 1995 was nothing like 2015, nothing at all like it.

I agree with you on that, I'm just saying that yelling at people is rarely productive. Even firing should be something that's done after multiple issues, not because of one mistake, even if it's sizable. That's just my opinion, though.

I'm 100% on the same page with the "don't freak out at the early-career developer who accidentally drops a table" people; seems like a good management lesson (it also makes me glad I don't manage people). I just read this thread and the Jamiroquai and Sublime started playing in my head and I was teleported back to cubicle culture, which I am here to report is totally different than modern dev culture. :)

Yeah, we really had to make sure software didn't have too many bugs back when we'd have to issue a patch release a year later. I'm not sure I miss it.

But they didn't have to, and a bit of thoughtful consideration would have (and presumably did) make that clear.

This is less of a "caught driving drunk" situation and more a "caught driving with one taillight out" situation. You want to make sure it doesn't happen again, but there was no real danger from this single instance.


If he'd actually caused a recall, he'd probably have been fired. Instead, he got chewed out. Sounds about right.

> As I say in the post, you shouldn't use this for docking operations.

Brilliant. :-D


You totally could use it for docking. A real ISS docking manoeuvre takes several hours. Orbits are very predictable and I'm quite confident that the error you'd get projecting your orbit 15 minutes into the future would be good enough to get within close radar range for the final approach. In fact you probably could do it even if your spacecraft doesn't have DNS at all, and you have to do the DNS resolve from a ground laptop before you board it. Soyuz can dock within 3 hours of launch. Orbits are very predictable in this timeframe.

If there's no timestamp, all you know is a Lat/Long that was accurate sometime in the last 15 minutes (or more, "best effort basis"). But you don't know when, and you don't know the altitude. That's gonna make using that information for docking...difficult.

I shall make the suggestion to NASA that they start using this ;-)

Sure they're predictable, but since you don't get the exact timestamp for those expired coordinates, it's still useless.

Oh, and accuracy is shit anyway (altitude is rounded to 10m)


> CGI has a very long history of security issues stemming primarily from input validation or the lack thereof.

And a Go program reading from a network connection is immune from the same concerns how?
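To make the point concrete, here is a minimal sketch of the same validation burden in a long-running Go handler (the route, parameter name, and bounds are invented for illustration): the query string is untrusted input that has to be checked whether the code runs as a CGI child process or inside a persistent server.

  package main

  import (
    "fmt"
    "net/http"
    "strconv"
  )

  // greetHandler treats the query string as untrusted input and
  // validates it before use, exactly as a CGI script would have to.
  func greetHandler(w http.ResponseWriter, r *http.Request) {
    count, err := strconv.Atoi(r.URL.Query().Get("count"))
    if err != nil || count < 1 || count > 10 {
      http.Error(w, "count must be an integer from 1 to 10", http.StatusBadRequest)
      return
    }
    for i := 0; i < count; i++ {
      fmt.Fprintln(w, "hello")
    }
  }

  func main() {
    http.HandleFunc("/greet", greetHandler)
    if err := http.ListenAndServe("127.0.0.1:8080", nil); err != nil {
      panic(err)
    }
  }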


It's not, you have to use Rust :)

> It's not, you have to use Rust :)

If only I could borrow such confidence in network data... :-D


> I believe but cannot cite that fork() got a lot cheaper over the last 30 years as well ...

The fork[0] system call has been a relatively quick operation for the entirety of its existence. Where latency is introduced is in the canonical use of an execve[1] equivalent in the newly created child process.
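For a rough sense of that per-request overhead, here is a hedged Go sketch that times the full fork-plus-execve round trip a CGI-style deployment pays on each request (/bin/true and the iteration count are arbitrary assumptions, and this measures the combined spawn cost rather than isolating the two calls):

  package main

  import (
    "fmt"
    "os/exec"
    "time"
  )

  func main() {
    // Spawn a child, exec a trivial program, and wait for it to
    // exit: the same lifecycle a CGI request triggers.
    const n = 200
    start := time.Now()
    for i := 0; i < n; i++ {
      if err := exec.Command("/bin/true").Run(); err != nil {
        panic(err)
      }
    }
    fmt.Printf("average spawn cost: %v\n", time.Since(start)/n)
  }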

> ... cgi bin works really well if you don’t have to pay for ssl or tcp connections to databases or other services, but you can maybe run something like istio if you need that.

Istio[2] is specific to Kubernetes and thus unrelated to CGI.

0 - https://man.freebsd.org/cgi/man.cgi?query=fork&apropos=0&sek...

1 - https://man.freebsd.org/cgi/man.cgi?query=execve&sektion=2&a...

2 - https://istio.io/


> It was indeed, and I spent much time wailing and gnashing my teeth as a Perl programmer that nothing similar existed in Perl.

mod_perl2[0] provides the ability to incorporate Perl logic within Apache httpd, if not other web servers. I believe this is functionally equivalent to the cited PHP Apache module documentation:

  Running PHP/FI as an Apache module is the most efficient 
  way of using the package. Running it as a module means that 
  the PHP/FI functionality is combined with the Apache 
  server's functionality in a single program.
0 - https://perl.apache.org/docs/2.0/index.html

The key difference was that you had to adapt your Perl to work with mod_perl, where mod_php "just worked" in the same way CGIs did -- you could throw your .php scripts up over FTP and they'd benefit from mod_php being installed. This was a massive difference in practice.

EDIT: I have managed to dig out slides from a talk I gave about this a million years ago with a good section that walks through history of how all this worked, CGIs, mod_perl, PSGI etc, for anyone who wants a brief history lesson: https://www.slideshare.net/slideshow/psgi-and-plack-from-fir...


Just a story of my past.

I got into web dev in the tail end of perl and cgi-bin. I remember my first couple scripts which were just copy/paste from tutorials and what not, everyone knows how it goes. It was very magical to me how this "cgi-bin" worked. There was a "script kiddy hacking tool" I think named subseven (or similar) written partially in perl that you would trick your friends into running or you'd upload on filesharing. The perl part gave you your web based C&C to mess with people or open chats or whatever. I really got into programming trying to figure out how this all worked. I soon switched over to PHP and in my inexperience never realized the deployment model was so similar.

I do think this model of running the script once per request and then exiting really messed with my internal mental model of how programs and scripts worked. Once I was exposed to long-running programs that could maintain state, keep their own internal data structures, handle individual requests in a loop, etc., it was a real shock and took me a while to conceptualize.
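For anyone who never made that jump, a minimal Go sketch of the long-running model (the route and counter are made up for illustration): one process serves many requests and keeps state in memory between them, which a run-once-and-exit CGI script simply cannot do.

  package main

  import (
    "fmt"
    "net/http"
    "sync/atomic"
  )

  // views lives as long as the server process does; it resets to
  // zero on restart, which is why durable counts still belong in a
  // database or file.
  var views atomic.Int64

  func main() {
    http.HandleFunc("/page", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprintf(w, "view #%d\n", views.Add(1))
    })
    if err := http.ListenAndServe("127.0.0.1:8080", nil); err != nil {
      panic(err)
    }
  }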


I had the same experience! I started out with Perl and CGI, then moved to PHP. Switching to a world where the web application kept running across multiple requests took me quite a bit of effort to get used to.

It is so odd how request-centric my worldview was back then. I literally couldn’t fathom how “old legacy crusty languages” like c/c++ could possibly be behind a website. Learning google was built this way blew my mind.

It’s strange thinking back to the days where persisting information as simple as a view counter required persisting data to a flatfile* or something involving a database.

These days with node and our modern languages like go and rust it’s immediately obvious how it’s done.

I think it’s both a mix of me learning and growing and the industry evolving and growing, which I think all of us experience over time.

* for years using flat files was viewed as bad practice or amateurish. fun to learn years later that is how many databases work.


> It’s strange thinking back to the days where persisting information as simple as a view counter required persisting data to a flatfile* or something involving a database

> These days with node and our modern languages like go and rust it’s immediately obvious how it’s done.

Okay I'll bite. How is it done now and why is the new way better than using a DB?


Persisting information should be done using a database, though. Otherwise your view counter will reset to zero on server restart. Overall I still think PHP's request-centric model is the best fit for the web.

sub7 was a windows binary (client and server), but it’s possible there was an unofficial perl interface for it or something similar. the perl era definitely saw a lot of precursors to modern C2 dashboards

I might be mixing it up with a similar tool around the same time period, although thanks for confirming there did exist a thing called sub7.

I do remember being exposed to the arcane Perl syntax for the first time so it must have been a different program.


Ah yes, it would seem little "Bobby Tables"[0] strikes again.

0 - https://xkcd.com/327/
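To spell out the fix Bobby's school needed, a short Go sketch using database/sql (the Students table and the ? placeholder are assumptions for the example; placeholder syntax varies by driver): user input goes in as a bound parameter, never spliced into the SQL string.

  package students

  import "database/sql"

  // Vulnerable pattern, shown only as a comment: splicing input into
  // the statement lets "Robert'); DROP TABLE Students;--" run as SQL.
  //
  //   db.Exec("INSERT INTO Students (name) VALUES ('" + name + "')")

  // addStudent binds the name as a parameter, so the driver sends it
  // as data rather than as executable SQL.
  func addStudent(db *sql.DB, name string) error {
    _, err := db.Exec("INSERT INTO Students (name) VALUES (?)", name)
    return err
  }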


> There's that famous quote "There are only two hard things in Computer Science: cache invalidation and naming things.", and, sure, it's a bit ironical, but there's some truth in there.

The joke form of this quote goes along the lines of:

  There are only two hard things in Computer Science: cache 
  invalidation, naming things, and off-by-one errors.
:-D

I rather like the snark of:

there's two hard problems in computer science: we only have one joke and it's not funny.

Apparently⁰ by Philip Scott Bowden¹

⁰ https://martinfowler.com/bliki/TwoHardThings.html

¹ https://x.com/pbowden/status/468855097879830528


Just remembered another one: there are 10 types of people in the world: those who understand binary and those who don't. :)

Which leads to

> I don't see what's so hard about DNS, it's just cache invalidation and naming things.


Oh that's gooood. Got a cite or is it yours?

... and avoiding off-by-one errors.

My favorite variation only really works in text:

There are three hard problems in Computer Science:

1) Cache invalidation

2) Naming th3) Concurings

rency

4) Off-by-one errors


So many people yearn for LLMs to be like the Star Trek ship computer, which, when asked a question, unconditionally provides a relevant and correct response needing no verification.

A better analogy is that LLMs are closer to the "universal translator", with an occasional interaction similar to[0]:

  Black Knight: None shall pass.
  King Arthur: What?
  Black Knight: None shall pass!
  King Arthur: I have no quarrel with you good Sir Knight, But I must cross this bridge.
  Black Knight: Then you shall die.
  King Arthur: I command you, as King of the Britons, to stand aside!
  Black Knight: I move for no man.
  King Arthur: So be it!
  [they fight until Arthur cuts off the Black Knight's left arm]
  King Arthur: Now, stand aside, worthy adversary.
  Black Knight: 'Tis but a scratch.
  King Arthur: A scratch? Your arm's off!
  Black Knight: No, it isn't.
  King Arthur: Well, what's that then?
  Black Knight: I've had worse.
0 - https://en.wikiquote.org/wiki/Monty_Python_and_the_Holy_Grai...
