breadbox's comments

Not as much fun that way.


A valid point! Clearly the correct solution would be for the kernel module to check if the filename contains the substring "32", and if so it should load it as a 32-bit binary.


Have you considered building your program into the binfmt module and only running 0 byte executables?


So I read the other article, and I saw that bit that disagreed with my essay. My first thought was, "Oh, of course that's how it works. How did I get that so wrong?" My only excuse was that this essay was originally a tech talk and I was under a deadline. (But I really should have caught it when I wrote it up as an essay.)

So I was going to go edit my essay, when I learned that it had also been posted on Hacker News. And now I discover that someone has already called out my error before I could fix it. Sigh.

Anyway, I just thought I should acknowledge this before I go fix it.


Thank you so much for responding! I really appreciate you clearing this up for me.

And don't beat yourself up too much. This was a phenomenal article and it gave me the courage to dig into the kernel code myself.


And kudos to you for following up with your own exploration. It's so often the case that the most interesting stuff is hidden in what other people are wrong about.


I couldn't help myself. Despite my better judgement, I wrote an extension to my original essay specifically dissecting this response. (It's now linked at the bottom of the original page.)

Thanks for sharing this transcript with me.


I don't think that's a fair characterization of the statement. It's the type of mistakes, not the bare fact of them, that suggests a lack of what we could call "understanding".


Hey, if we can distract Chat-GPT with becoming an expert INTERCAL programmer, then I say "Win-win!"


If I complain about an answer given to me by one librarian, I'm complaining about that answer and that librarian. If you can find a more knowledgeable librarian somewhere else, that doesn't affect my complaint.

But to be clear, there are no screenshots in the essay. I assimilated the HTML directly into the document.


Look, I understand your point. I really do. But I feel that (perhaps due to one or more of your acknowledged biases) you're applying the wrong context to the situation at hand.

Namely: this is INTERCAL. There is no freaking standard. The "standard" is a 60-page text file written in 1973. The current compiler was written 90% based on this joke-filled document, and 9% on new ideas, because ESR came up with something even more ridiculous. (The remaining 1% was Don Woods responding to emailed questions by consulting his memory. He still has paper copies of the SPITBOL source code, but the original compiler hasn't been run in over 50 years.) There is no standard because at any given time there are at most three people on the planet who care one iota about INTERCAL standardization, and they are only willing to put in any effort if it would be funny. So, for example, the question of what sort of randomness the double-oh-seven operator is contractually required to use is simply not a contextually relevant question.

You seem to be suggesting that it might be possible, even thinly so, that Chat-GPT somehow misapplied a strict standard of randomness in the formation of its response, instead of simply papering over a hole in its knowledge with a bit of improvised plausible-sounding guesswork, a well-documented behavior of both Chat-GPT and thinking entities worldwide. If not, then I humbly apologize for misunderstanding your point. Otherwise, I must politely agree to disagree.

And I'm sorry that my essay rubbed you the wrong way. Perhaps one day you will find it better than you do now, but if not then I hope it passes from your memory quickly.


> But I feel that (perhaps due to one or more of your acknowledged biases) you're applying the wrong context to the situation at hand.

That's likely it. I know it might have looked like a joke, but I was serious about my biases, and wanted to state them up front, especially given that GP was curious about motivation and psychology (and honestly, I was curious too, in a self-reflective kind of way).

> You seem to be suggesting that it might be possible, even thinly so, that Chat-GPT somehow misapplied a strict standard of randomness in the formation of its response, instead of simply papering over a hole in its knowledge with a bit of improvised plausible-sounding guesswork, a well-documented behavior of both Chat-GPT and thinking entities worldwide. If not, then I humbly apologize for misunderstanding your point. Otherwise, I must politely agree to disagree.

You're pretty much on point here. I started with a vague intuition, and writing those comments was helpful in clarifying it; what I was suggesting is that, as GPT-4 was trained on a lot of code and coding-related discussions, it surely encountered many texts where readings and misreadings of language standards, and good and bad implementations, were discussed. From my personal experience, this would definitely be the case if it ingested a significant amount of material on C++ or Common Lisp - but this kind of "language lawyering" also shows up in the context of HTML, CSS, JavaScript, POSIX, etc. So the general pattern of "not going beyond what is written" (in the context of programming, though it applies in other domains too) is something I believe GPT-4 could've picked up on.

Now, I understand and acknowledge the strong tendency for LLMs to "paper over a hole in its knowledge with a bit of improvised plausible-sounding guesswork". What I was thinking in writing those comments is that the "language lawyering" attitude, had GPT-4 picked it up, isn't competing with hallucinations, but rather modulating/complementing them. This would explain why its plausible-sounding guesswork leaned towards denying the PRNG-ness of the %-operator, instead of the (more obvious to us) assumption that it is a proper PRNG.

This isn't a strong defense; I'm not going to die on that hill or anything. But it's something I think is at least possible, and I thought it worth bringing up to counter the common assumption that GPT-4 is plain old getting confused and making plausible-sounding shit up at random. I.e., I was suggesting that, while still wrong, it might be wrong for a deeper reason, perhaps a more excusable one.

But I also wrote it because it was a knee-jerk counterpoint and I felt it hit a sweet spot of being deep enough, contrarian enough, and reasonable enough to warrant posting - and, for some reason, I didn't manage to stop myself from hitting "submit".

> And I'm sorry that my essay rubbed you the wrong way. Perhaps one day you will find it better than you do now, but if not then I hope it passes from your memory quickly.

The more I think about it (in large part through writing this response), the more I realize it's me, not you. So don't be sorry - in fact, I apologize for a mostly knee-jerk reaction that was aimed one third at your text, one third at the HN commentary I saw for it, and one third at things I saw in LLM-related threads in the past month. I didn't want to make you feel bad or annoyed, and I promise to give your text a more unbiased second read.

Thanks for replying and arguing your case so thoroughly!


And thank _you_ for your thoughtful response!


And again, a heady mix of accurate observations with complete bullshit. Separating out the misinformation is, as always, left as an exercise for the reader.


Which means it's significantly better than your average news outlet.


Unless `write()` returns an error, in which case rax will contain a negative value (-errno) rather than a byte count. I've had to ditch that shortcut many times because of this.
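
If it helps, here's a minimal sketch of that failure mode (my own illustration, not anything from the essay; it assumes x86-64 Linux with GCC or Clang inline asm, and fd 99 is a made-up invalid descriptor chosen to force EBADF):

    /* Raw write(2) syscall on a bad fd: the kernel reports errors
     * as -errno in rax, not as a byte count, so reusing rax as a
     * "bytes written" value without a sign check is unsafe. */
    #include <stdio.h>
    #include <errno.h>

    int main(void) {
        const char msg[] = "hello\n";
        long rax = 1;                       /* __NR_write on x86-64 */

        __asm__ volatile ("syscall"
                          : "+a"(rax)                   /* rax: nr in, result out */
                          : "D"(99L), "S"(msg), "d"(6L) /* rdi=fd, rsi=buf, rdx=len */
                          : "rcx", "r11", "memory");    /* syscall clobbers these */

        /* The shortcut would treat rax as "bytes written" here. */
        if (rax < 0)
            printf("rax = %ld: that's -errno (EBADF = %d), not a length\n",
                   rax, EBADF);
        else
            printf("wrote %ld bytes\n", rax);
        return 0;
    }

On a typical Linux box this should print rax = -9, since EBADF is 9; a size-reuse trick that assumes rax still holds a length silently goes wrong at that point.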

