


> If you define a grammar for a new programming language and feed it to an LLM and give it NO EXAMPLES can it write code in your language?

Yes. Give a model with a 2024 training cutoff the documentation for a programming language written in 2025 and it can write code in that language.
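For anyone who wants to try this themselves, here is a minimal sketch of that experiment. The toy grammar and ask_llm are placeholders I made up (ask_llm stands in for whatever chat-completion client you use); the point is just that the prompt contains the grammar and a task, and zero example programs.

    # Sketch of the "grammar in context, zero examples" experiment.
    # `ask_llm` is a hypothetical stand-in for whatever LLM client you use;
    # only the prompt construction matters here.

    TOY_GRAMMAR = """
    program   := statement+
    statement := 'let' IDENT '=' expr ';'
    expr      := NUMBER | IDENT | expr ('+' | '*') expr | '(' expr ')'
    """

    def build_prompt(grammar: str, task: str) -> str:
        # The grammar is the only description of the language the model sees:
        # no example programs, just the rules plus a concrete task.
        return (
            "Here is the complete grammar of a new programming language:\n"
            f"{grammar}\n"
            "You have never seen an example program in this language. "
            f"Write a program in it that {task}"
        )

    prompt = build_prompt(TOY_GRAMMAR, "computes 3 * (4 + 5) and stores it in `result`.")
    print(prompt)              # inspect what the model would receive
    # reply = ask_llm(prompt)  # hypothetical call to your LLM of choice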


I have found this not to work particularly well in practice. Maybe I’m holding it wrong? Do you have any examples of this?


In my experience it generally has a very good understanding and does generate the relevant test cases. Then again I don't give it a grammar, I just let it generalize from examples. In my defense I've tried out some very unconventional languages.

Grammars are an attempt at describing a language. A broken attempt if you ask me. Humans also don't like them.


For natural language you are right. The language came first, the grammar was retrofitted to try to find structure.

For formal languages, of which programming languages (and related ones like query languages, markup languages, etc.) are an instance, the grammar defines the language. It comes first, examples second.

Historically, computers were very good at formal languages. With LLMs we are entering a new age where machines are becoming terrible at something they once excelled at.

Have you tried asking Google lately whether it's 2025? The very first data-keeping machines (clocks) were also pretty unreliable at that. Full circle, I guess.
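To make the "grammar comes first" point concrete, here is a toy sketch using the standard textbook arithmetic grammar (my own example, nothing from this thread): write the rules down, and a recursive-descent recognizer falls out of them almost mechanically. A string is in the language exactly when the grammar can derive it.

    # Grammar (written first, before any example strings exist):
    #   expr   := term ('+' term)*
    #   term   := factor ('*' factor)*
    #   factor := NUMBER | '(' expr ')'

    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

    def tokenize(src):
        # Split the input into numbers and single-character symbols.
        return [number or other for number, other in TOKEN.findall(src)]

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, tok):
            if self.peek() != tok:
                raise SyntaxError(f"expected {tok!r}, got {self.peek()!r}")
            self.pos += 1

        def expr(self):    # expr := term ('+' term)*
            self.term()
            while self.peek() == "+":
                self.eat("+")
                self.term()

        def term(self):    # term := factor ('*' factor)*
            self.factor()
            while self.peek() == "*":
                self.eat("*")
                self.factor()

        def factor(self):  # factor := NUMBER | '(' expr ')'
            tok = self.peek()
            if tok is not None and tok.isdigit():
                self.pos += 1
            elif tok == "(":
                self.eat("(")
                self.expr()
                self.eat(")")
            else:
                raise SyntaxError(f"unexpected token {tok!r}")

    def in_language(src):
        # A string belongs to the language iff the grammar can derive it
        # and the derivation consumes the whole input.
        parser = Parser(tokenize(src))
        try:
            parser.expr()
        except SyntaxError:
            return False
        return parser.pos == len(parser.tokens)

    print(in_language("3 * (4 + 5)"))  # True
    print(in_language("3 + * 5"))      # False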


I've had LLMs do that more than once.


> NO.

YES! Sometimes. You’ll often hear the term “zero-shot generation”, meaning creating something new given zero examples; this is something many modern models are capable of.


> If you define a grammar for a new programming language and feed it to an LLM and give it NO EXAMPLES can it write code in your language?

Neither can your average human. What's your point?


That's the bar now? Some tech is great if the average human is worse at it?


There are lots of bars depending on context.

But being better than an average human is usually one of the higher bars.


Well, it depends. For example, calculators have been around for a while, and a calculator whose only claim is that it beats the average human is not very useful. Sorting algorithms are another example.

Autocomplete/intellisense in an IDE is probably the most salient example. An autocomplete that performs _as well_ as the average programmer is, well, totally useless.


I'm not sure what you mean. Autocomplete usually just gives a big list of options for the next word, so even getting the most relevant ones to the top would be helpful in a less familiar codebase, and actually filling out entire lines as well as an average programmer would is not useless at all.


Oh, I agree about filling out entire lines. I use Copilot all the time to write boilerplate based on a template pattern I provide.

It’s not very precise though. Autocomplete will give me a list of valid APIs for the current version of whatever library I’m using, sorted by most recently used with a locality bonus. Copilot, on the other hand, does not have that level of precision. Two different tools that I use for slightly different things.


> If you define a grammar for a new programming language and feed it to an LLM and give it NO EXAMPLES can it write code in your language?

Of course it can. It will experiment and learn just like humans do.

Hacker news people still think LLMs are just some statistical model guessing things.


> Hacker news people still think LLMs are just some statistical model guessing things.

That's exactly what they are. It's the definition of what they are. If you are talking about something that is doing something else, then it's not an LLM.
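For what it's worth, here is what "statistical model guessing the next token" means mechanically, shrunk down to a toy character-level bigram counter (my own illustration, not how any particular model is built). A real LLM replaces the count table with a neural network with billions of parameters, but the generation loop has the same shape: condition on the context, get a probability distribution over the next token, sample from it.

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat. the cat ate the rat."

    # "Training": count how often each character follows each character.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(prev):
        # Turn the counts for `prev` into a distribution and sample from it.
        chars, weights = zip(*counts[prev].items())
        return random.choices(chars, weights=weights)[0]

    def generate(prompt, length=40):
        out = list(prompt)
        for _ in range(length):
            out.append(sample_next(out[-1]))   # autoregressive loop
        return "".join(out)

    print(generate("th"))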


No, it's not. Arrogant developers on HN like to parrot this, but it isn't true.

The power of LLMs lies in emergent properties that were never explicitly taught to the model. That is not something you get from conventional, largely deterministic statistical models.

If you think it's just a giant token-prediction machine, you've ignored the last 5 years.


yawn.

this is so reductive it's almost not even worth talking about. you can prove yourself wrong within 30 minutes but you choose not to.


Sorry, I don't follow. How do you define LLMs?



