
Of course, what you're describing is just machine language, but at a different level of abstraction. How many C++ coders these days can read x86 or ARM assembly? Not many, because they almost never need to.

It's well past time for traditional "high level" programming languages to meet the same fate.



Formal language to formal language provides a level of determinism. I know what kind of code gcc will generate for a loop, generally. But if I care about vectorization or whatever, I will need to inspect the assembly to ensure it is generating it correctly (or write it myself).
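
To make that concrete, here's a rough sketch of the kind of check I mean (the loop and flags are just illustrative, and assume gcc):

    // axpy.cpp -- a loop you'd expect gcc to auto-vectorize at -O3.
    // Build with e.g.:  g++ -O3 -S -fopt-info-vec axpy.cpp
    // then grep axpy.s for packed instructions (vmulps / vaddps / vfmadd...)
    // to confirm the vectorized code was actually emitted.
    #include <cstddef>

    void axpy(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];  // no loop-carried dependency: a vectorization candidate
    }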

Natural language to formal language does not provide that. How the hell would I debug or operate a system by just looking at a prompt? I can't intuit which way the LLM generated anything. I will always have to be able to read the output.

AFAICT, the only people who say you can remove code are people who don't code. I never hear this from actual devs, even if they are bullish on AI.


Yeeeeah, determinism... going to have to leave that behind, I'm afraid. Otherwise you will be outcompeted by people who do. If you thought test-driven development was important before, you ain't seen nothin' yet.

SIMD optimization is already handled well by the current generation of models [1]. There will be no point in doing that by hand before too long. An exercise for antiquarians, like building radios from vacuum tubes.

> I never hear this from actual devs, even if they are bullish on AI.

You're hearing it from one now. Five years from now the practice of programming will look quite different. In ten to fifteen years it will be unrecognizable.

1: https://github.com/ggml-org/llama.cpp/pull/11453


You said:

> How many C++ coders these days can read x86 or ARM assembly? Not many, because they almost never need to. It's well past time for traditional "high level" programming languages to meet the same fate.

There is a misunderstanding; let me rephrase. How will I operate and maintain that software without a high-level language to understand it? Or do you think we will all just be debugging asm? The same language you just said people don't bother to learn? Or am I supposed to debug the prompt, which will nondeterministically change the asm, which I can't verify because I can't read it?

However it evolves, some human-readable high-level language that deterministically generates instructions will always be needed. Trying to replace that is kind of counterproductive, imo. LLMs are good at generating high-level language. Leave the compilers to do what they are good at.


I hear what you're saying, understand it perfectly well, and sympathize to some extent... but it's not going to play out like that.


People have been saying that about code for several decades now. I have on my shelf a book published in 1980 discussing this very theme.

Data compression on a massive scale, with NLP search on top of it, will not be the thing that finally does it. Code is logically constrained, so it can be load-bearing.

If NLP coding is ever solved, that might change. But LLMs did not solve NLP; they improved massively on the state of the art, but they are still riddled with glaring issues, like devolving into nonsense often and in unpredictable ways.

All LLM-as-AI hype hinges on some imaginary version of it that is just around the corner and solves the current limitations. It's not about what is there, but about what ought to be there in the minds of people who think it's the silver bullet.


What you're missing is that we now understand things about the functional nature of language itself that nobody had the faintest clue about in 1980.

I was big into writing text adventures in those days, where the central problem was how to get the computer to understand what the user was saying. It was common for simple imperative sentences to be misinterpreted in ways that made players want to scream in frustration. Even the best text parsers — written by the greatest minds in the business — could seem incredibly obtuse, because they were. Now the computer is writing the fucking game.

You had to be there to understand what a big deal this is, I guess. If you went back in time and brought even our current primitive LLM tech with you, you'd be lucky not to be burned at the stake. NLP is indeed 'solved,' in that it is now more of a development problem than a research problem. The Turing Test has been passed: you can't say for sure if you're arguing with a bot right now. That's pretty cool.


You assume a lot about me.

But you're right. The NLP interface is cool. It's kinda like VR. It would be awesome if it worked the way we dream it could.

Maybe that's why we keep getting hung up on implementations that make it seem like they got it figured out. Even when they clearly haven't, we still avert our eyes from the fraying edges and make believe.

Maybe that's why both fields are giant money pits.


> You assume a lot about me.

... which is my point as well. You can no longer tell if your interlocutor is even human, and yet you're still thinking and talking about books written in the 1980s.

(NLP wasn't much of a money pit back in the 80s, I know that much. If it was, somebody else must've been getting all the money...)


> ... which is my point as well. You can no longer tell if your interlocutor is even human

I don't really think modern chatbots pass the Turing test. It's not that hard to figure it out.

> (NLP wasn't much of a money pit back in the 80s, I know that much. If it was, somebody else must've been getting all the money...)

No, the point is that it's become one now that LLMs make it seem like we finally got the NLP interfaces we've been dreaming about for decades.


> I don't really think modern chatbots pass the Turing test. It's not that hard to figure it out.

A lot of people on Reddit didn't figure it out, if you've followed that story ( https://old.reddit.com/r/changemyview/comments/1k8b2hj/meta_... ).

You can file that under the apparently-infinite set of Things That Are Only Going to Get Worse.


I encourage you to read through the comments on that pull request.

The initial PR did introduce a buffer overflow.

Also keep in mind that there was no novel vectorization; there were already multiple SIMD implementations for other ISAs.
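
For anyone who hasn't reviewed SIMD code: buffer overflows in hand-rolled vector loops are usually the classic tail-handling mistake. A generic sketch of that failure mode (illustrative only, not the actual code from the PR):

    // Summing floats 8 at a time with AVX, without handling n % 8.
    #include <immintrin.h>
    #include <cstddef>

    float sum_buggy(const float* x, std::size_t n) {
        __m256 acc = _mm256_setzero_ps();
        for (std::size_t i = 0; i < n; i += 8)                 // BUG: the last iteration can read
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));  // up to 7 floats past the end of x
        float tmp[8];
        _mm256_storeu_ps(tmp, acc);
        return tmp[0] + tmp[1] + tmp[2] + tmp[3] + tmp[4] + tmp[5] + tmp[6] + tmp[7];
    }

The fix is the unglamorous part: stop the vector loop at n - n % 8 and finish the remainder with a scalar loop. That's exactly the kind of detail a human reviewer still has to catch.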


A buffer overflow. OMFG. A buffer overflow. "That does it. I'm taking this talking dog right back to the pound."

This is what it must have felt like when a few people started suggesting that horses were probably not going to be the way people got around for much longer, and other people giggled and guffawed at them and said "LOL" in Morse code, or whatever the memetic currency of the day was.

All the first group could do was wrinkle their noses and reach for the doorknob, once they realized that winning an argument with such people was neither possible nor necessary.




