Hacker News | Philpax's comments

You are posting in a thread about it finding a novel solution to an unsolved mathematics problem.

> Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art.

This feels like an unfair comparison to me; the objective of the compiler was not to be innovative, it was to prove it can be done at all. That doesn't demonstrate anything with regard to present or future capabilities in innovation.

As others have mentioned, it's not entirely clear to me what the limit of the agentic paradigm is, let alone what future training and evolution can accomplish. AlphaDev and AlphaEvolve demonstrate that it is possible to combine the retained knowledge of LLMs with exploratory abilities to innovate in both programming and mathematics; there's no reason to believe that it'll stop there.


Yeah, it's a bit like taking the output of a student project in a compiler construction class and using it to judge whether said student is capable of innovation, without telling them in advance that they'd be judged on that rather than on the stated requirements of the course.

It'd be interesting to prompt it to do the same job but try to be innovative.

To your point, yeah, I mostly don't want AI to be innovative unless I'm asking for it to be. In fact, I spend much more time asking it "is that a conventional/idiomatic choice?" (usually when I'm working on a platform I'm not super experienced with) than I do saying "hey, be more innovative."


Yeah, I'd love to find time to. But, e.g., I think that is also a "later stage". If you want to come up with novel optimisations, for example, it's better to start with a working but simple compiler, so it can focus on a single improvement. Trying to innovate on every aspect of a compiler from scratch is an easy way of getting yourself into a quagmire that takes ages to get out of, as a human as well.

E.g. the Claude compiler uses SSA because that is what it was directed to use, and that's fine. Following up by getting it to implement a set of the conventional optimisations, then asking it to research novel alternatives to SSA that allow reusing the existing optimisations and adding new ones, and showing that it can get better results or simpler code, would be a really interesting test that might be possible to judge objectively enough (e.g. code complexity metrics vs. benchmarked performance), though validating the correctness of the produced code gets a bit thorny (but the same approach of compiling major existing projects that have good test suites is a good start).

If I had unlimited tokens, this is a project I'd love to do. As it is, I need to prioritise my projects, as I can hit the most expensive Claude plan's subscription limits every week with any of 5+ projects of mine...


Apologies for the obligatory question, but what did you try to do, and which AI did you try to do it with?

Well, following advice from folks on here earlier, I thought I'd start small and try to get it to write some code in Go that would listen on a network socket, wait for a packet with a bunch of messages (in a known format) to come in, and split those messages out from the packet.

I ended up having to type hundreds of lines of description to get thousands of lines of code that doesn't actually work, when the one I wrote myself is about two dozen lines of code and works perfectly.
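
For reference, my version is roughly this shape - a minimal sketch only, assuming a simple two-byte length-prefixed framing, since the real message format doesn't matter for the point:

  package main

  import (
      "encoding/binary"
      "fmt"
      "io"
      "net"
  )

  // handle splits individual messages out of the incoming data,
  // assuming each message is preceded by a 2-byte big-endian length.
  func handle(conn net.Conn) {
      defer conn.Close()
      var hdr [2]byte
      for {
          if _, err := io.ReadFull(conn, hdr[:]); err != nil {
              return
          }
          msg := make([]byte, binary.BigEndian.Uint16(hdr[:]))
          if _, err := io.ReadFull(conn, msg); err != nil {
              return
          }
          fmt.Printf("message: %x\n", msg)
      }
  }

  func main() {
      ln, err := net.Listen("tcp", ":9000")
      if err != nil {
          panic(err)
      }
      for {
          conn, err := ln.Accept()
          if err != nil {
              continue
          }
          go handle(conn)
      }
  }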

It just seems such a slow and inefficient way to work.


Hate to pull the skill issue card here, but that is a trivial problem that can be one-shotted with almost any model.

Okay, tell you what then. Help me learn.

The problem is that I want something that listens on a TCP connection for GD92 packets and, when they arrive, sends appropriate handshaking to the other end and parses them into Go structs that can be stuffed into a channel to be dealt with elsewhere.

And, of course, something to encode them and send them again.
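
To sketch the shape I'm after (sketch only - the Message fields here are invented placeholders; the real ones would come from the GD92 spec):

  package gd92

  import "net"

  // Message is one decoded GD92 message.
  type Message struct {
      Type    byte
      Payload []byte
  }

  // Listen accepts connections on addr and pushes decoded messages
  // onto the returned channel, to be dealt with elsewhere.
  func Listen(addr string) (<-chan Message, error) {
      ln, err := net.Listen("tcp", addr)
      if err != nil {
          return nil, err
      }
      out := make(chan Message)
      go func() {
          for {
              conn, err := ln.Accept()
              if err != nil {
                  close(out)
                  return
              }
              go handle(conn, out)
          }
      }()
      return out, nil
  }

  // handle is where the handshaking and the actual wire-format
  // decoding (and, going the other way, encoding) would live.
  func handle(conn net.Conn, out chan<- Message) {
      defer conn.Close()
  }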

How would I do that with whatever AI you choose?

I'm pretty certain you can't solve this with AI because there is literally no published example of code to do it that it can copy from.


GD92 packets?

No idea what you're talking about, but if it has a spec then it doesn't matter whether it's been trained on it. Break the problem down into small enough chunks. Give it examples of expected input and output, then any LLM can reason about it. Use a planning mode and keep the context small and focused on each segment of the process.

You're describing a basic TCP exchange; learn more about the domain and how packets are structured, and the problem will become easier by itself. LLMs struggle with large codebases that pollute the context, not straightforward apps like this.


One other thing, it might be worthwhile having the spec fresh in the LLM's context by downloading it and pointing the agent at it. I've heard that that's a fruitful way to get it to refresh its memory.

Yep, you can even extract the relevant parts and put them into local files the LLM can scan.

> GD92 packets? No idea what you're talking about, but if it has a spec then it doesn't matter whether it's been trained on it.

Okay, so you're running into the same problem that LLMs are.

> Break the problem down into small enough chunks. Give it examples of expected input and output, then any LLM can reason about it.

So I have to do lots of grunt work?

> You're describing a basic TCP exchange; learn more about the domain and how packets are structured, and the problem will become easier by itself

I've written dozens of things that deal with TCP. I already have a fully-working example of what I want. The idea was to test if I could recreate it using LLMs.

How is it supposed to work? How does it produce the code I already know I want?


>Okay, so you're running into the same problem that LLMs are.

I can't tell if you are a troll or not, but you can't complain that nobody understands the intentionally vague and obtuse way you describe the problem at hand in order to pretend you're superior.

https://www.publiccontractsscotland.gov.uk/NoticeDownload/Do...

You have to rename the file ending to PDF. It's probably the wrong spec, because I'm basing this research on literally four letters that could mean anything since there is zero context given here. I've also found some German documents about chemistry.

If your argument is that LLMs and humans are stupid because they don't know what a "GD92" is, then yeah maybe it's a you problem.

Go and throw the spec into OpenAI Codex inside limactl (get it from GitHub), use Zed (the editor) and an SSH remote project to get inside the VM, and don't forget to enable KVM for performance. The free tier for OpenAI is fine, but make sure to use Codex 5.2.

First, ask questions about what the binary encoding is based on (it's probably X.400); then, once you've asked enough questions, tell it to implement it. You probably won't have to read the spec at all yourself.


He's not a troll, he's just trying way too hard to prove a point that half the people here can see is nonsense.

It's not worth engaging a guy who is adversarial to learning how a tool works, just so he can maintain some air of superiority for his ego.


That is the correct spec.

Remember, I've already written something that does this. I'm trying to understand how and why an LLM would help.

Which part of the job is the LLM supposed to do?


All of it: https://github.com/philpax/gd92-protocol-go-generated

I started the task 56 minutes ago with one prompt, and now I have an implementation I can show you. There's plenty to quibble about - the files splayed over the main directory are quite ugly, and there is no actual test data that we can use on the public internet - but these are all trivially resolvable issues.

I didn't do any additional research for this. I gave it the spec PDF, your instructions upthread, and told it to build a library. You can also consult the transcripts (linked in the README) to see that I have no tricks up my sleeve. I didn't need to decompose the task in any meaningful way: the only input I provided was on minor matters of taste.


tbh that's not a helpful thing to say. I think a more productive thing would be to ask "What model are you using?" "Are you using it in chat mode or as a dedicated agent?" "Do you have an AGENTS.md or CLAUDE.md?"

I've also been underwhelmed with its ability to iterate, as it tends to pile on hacks. So another useful question is "did you try having it write again with what you/it learned?"


> I think a more productive thing would be to ask "What model are you using?" "Are you using it in chat mode or as a dedicated agent?" "Do you have an AGENTS.md or CLAUDE.md?"

In my case I'd have to say "Don't know, whatever VS Code's bot uses", and "no idea what those are or why I have to care".


> Don't know, whatever VS Code's bot uses

The reason I ask about what model is I initially dismissed AI generated code because I was not impressed with the models I was trying. I decided if I was going to evaluate it fairly though, I would need to try a paid product. I ended up using Claude Sonnet 4.5, which is much better than the quick-n-cheap models. I still don't use Claude for large stuff, but it's pretty good at one-off scripts and providing advice. Chances are VS Code is using a crappy model by default.

> no idea what those are or why I have to care

For the difference between chat mode and agent mode, chat mode is the online interface where you can ask it questions, but you have to copy the code back and forth. Agent mode is where it's running an interface layer on your computer, so the LLM can view files, run commands, save files, etc. I use Claude in agent mode via Claude Code, though I still check and approve every command it runs. It also won't change any files without your permission by default.

AGENTS.md and CLAUDE.md are pretty much files that the LLM agent reads every time it starts up. That's where you put your style guide, and also where you add suggestions to correct things it consistently messes up on. It's not as important at the beginning, but it's helpful for me to have it be consistent about its style (well, as consistent as I can get it). Here's an example from a project I'm currently working on: https://github.com/smj-edison/zicl/blob/main/CLAUDE.md

I know there's lots of other things you can do, like create custom tools, things to run every time, subagents, plan mode, etc. I haven't ever really tried using them, because chances are a lot of them will be obsolete/not useful, and I'd rather get stuff done.

I'm still not convinced they speed up most tasks, but it's been really useful to have it track down memory leaks and silly bugs.


>I decided if I was going to evaluate it fairly though, I would need to try a paid product.

Okay. Get me a job and I'll pay for any model of your choosing. Until then, finances are very slim.


> Get me a job

Heh, I'm a college student, so I can't help with that...

You could also try Gemini 3 pro with Gemini's CLI which is free, though it's not as good at using tools. But, it sounds like you're not interested, which is fine!

Just please don't continue to argue with finer points if you're not interested. I've done my best to engage with your points, but I get the sense that it doesn't matter what I say.

I am curious though, why do you feel so strongly about LLM products?


I should note that I'm not the same person you were talking to up the chain, so I hope we're not mixing conversations and people. I don't think I've said that much in this chain, so I can't answer much.

But sure:

>why do you feel so strongly about LLM products?

Personally, I work in games. So pretty much everything in the discourse of LLMs and Gen AI has been amplified 5x for me. The layoffs, the gamers' reaction to stuff utilizing AI, the impact on hardware prices, the politics, etc.

There's a war between consumers and executives, and I'm trapped in the middle taking heat from both. It's tiring, and it's clear who to blame for all of this. I want all of this to pop so the true innovation can rise out of it, instead of the gold rush going on right now.

Also, game code is very performance-sensitive. It's not like a website or app where I can just "add 5 seconds to a load time" or throw more hardware at performance problems, unless I'm working on a simple 2D game. Even if LLMs could code up the game, I'd spend more time optimizing what it makes than it saved. It simply doesn't help for the kind of software I work with.


I have worked in games in the past, and currently work in games-adjacent. I'm sympathetic to the concerns you've mentioned, especially given how controversial it is (the recent reveal of DLSS5, which I find directionally interesting but executed poorly, is but one of many examples.)

From speaking to my friends in the industry, it seems like uptake for code is happening slowly, but unevenly, and the results are largely dependent on the level of documentation, which is often lacking. (I know of a few people using AI for (high-quality!) work on Godot, and their AIs struggle with many of the implicit conventions present in the codebase.)

With that being said, I would say that LLMs have generally been quite the boon for the (limited) gameplay work that I have done of recent. Because the cost of generation is so cheap [0], it is trivial to try something out, experiment with variations, and then polish it up or discard it entirely.

This also applies to performance work: if it's a metric that the AI can see and autonomously work on, it can be optimised. This is, of course, not always possible - it's hard to tell your AI to optimise arbitrary content - but it's often more possible than not, especially if you get creative. (Asking it to extract a particularly hot loop out from the code it resides within, and then optimising that, for example: entirely feasible.)

I think there are still growing pains, but I'm confident that LLMs will rock the world of gamedev, just like they're doing to other more well-attested fields of programming.

[0]: https://simonwillison.net/guides/agentic-engineering-pattern...


>directionally interesting but executed poorly

Yeah, that sums up a lot of my thoughts with AI c. 2026.

I do take some schadenfreude in knowing that AI training also struggles with the utter lack of documentation here. That may be a win in and of itself, if this paradigm forces the games industry to properly care about tech writing.

>Because the cost of generation is so cheap [0], it is trivial to try something out, experiment with variations, and then polish it up or discard it entirely.

Well, that's another thing I'm less confident about. The cost is low, for now. But we also know these companies are in loss-leader mode. It'll probably always be cheap for a company to afford agents, but I fear reliance on these giant server models will quickly price out ICs and smaller work environments.

That might be something China beats us to. They seem to be focusing on optimizing models that work on local machines out of necessity, as opposed to running tens of billions of dollars of compute. My other big bias is wanting to properly own as much of my pipeline as possible (to the point where my eventual indie journey is planned around open-source tools and engines, despite my experience in both Unity and UE), and the current incentives for these companies run against that.


Crap, you're right. I swear, tiny usernames are both a boon and a curse...

> Personally, I work in games. So pretty much everything in the discourse of LLMs and Gen AI has been amplified 5x for me. The layoffs, the gamers' reaction to stuff utilizing AI, the impact on hardware prices, the politics, etc.

> There's a war between consumers and executives, and I'm trapped in the middle taking heat from both. It's tiring, and it's clear who to blame for all of this. I want all of this to pop so the true innovation can rise out of it, instead of the gold rush going on right now.

That makes a lot of sense. I've been pretty fed up with the hyperbole and sliminess, and I can't imagine how difficult it is to be squeezed between angry gamers and naive and dense executives.

When you say "true innovation", is that in terms of non-AI innovation, or non-slimy AI innovation? I guess I personally still believe that LLMs are useful, but only as another tool amongst many others.

I'm also a big believer in human centered UX design, and it's kinda sad that the dominant experience is all textual.

> Also, game code is very performance-sensitive

It does seem like game programming is the last bastion of performance, at least in terms of normal hardware, since the game has to go to the consumer's hardware. The "silver bullet" mentality drives me a little crazy because it clearly doesn't work in all situations.

Anyways, I don't know if this response really has a point, but I wanted to at least acknowledge your experience.


>When you say "true innovation", is that in terms of non-AI innovation, or non-slimy AI innovation?

A bit of both. Similar to other tech investment, all the gaming-centric accelerators are looking for is AI pitches. Makes me wonder what innovations of the past few years have been overlooked in the AI gold rush.

But I can see the long-term (likely 5+ years out) potential of AI as well. Once we stop using it as a means to steal from and remove artists, I can see all kinds of tedious problems with assets that AI can accelerate. Generative fill is a glimpse of a genuinely useful tool that helps artists instead of pretending to be an artist itself.

Can it eventually write performant code? Maybe. The other big issue is that 1) a lot of code isn't online to train on, and 2) a lot of that code is still a mess to process, with few standards to follow. Maybe it can help with graphics code (which is much more structured) in the near future.


Agreed, that was a bit rough. Yes, they are not great at iterating and keeping long contexts, but you look at what he's describing and you have to agree that's exactly the type of problem LLMs excel at.

Shouldn't have to baby-step through the basics when the author is clearly not interested in learning himself.


> Shouldn't have to baby-step through the basics when the author is clearly not interested in learning himself

I'd rather assume good faith, because when I first started using LLMs I was incredibly confused what was going on, and all the tutorials were grating on me because the people making the tutorials were clearly overhyping it.

It was precisely the measured and detailed HN comments that I read that convinced me to finally try out Claude, so I do my best to pay it forward :)


I totally agree, and myself have gone through that cycle.

But the guy is being adversarial and antagonistic. It's a two-way street; sometimes you have to call people out on their BS. I'm not seeing someone argue in good faith, but rather someone pretending to superior knowledge because he's working on an esoteric protocol, as if people here don't know how packet headers work.


I don't read it as superiority, perhaps bitterness would be the closest word to what I'm reading.

> sometimes you have to call people out on their BS

That's true, but I think that it's often much later than what some people would consider enough. Someone can be bitter, and still have good points. It's very dangerous to preemptively dismiss points, because it means that I won't listen to anyone who disagrees with me. I'm willing to put in the work to interpret someone's response in a productive light because there's often something to find.

There's a framework that I work within when I'm in a discussion. There are three elements: arguments, values, and assumptions. An argument is the face-value statements. But those statements come from the values and assumptions of the person.

Values are what people consider most important. In most cases, our values are the same, which is good!

The biggest difference is assumptions. For example, one assumption I have is that free markets are the best method we have to lift individuals out of poverty. This colors how I talk about AI. Another person might assume that free markets have failed, and we need to use a different approach. This colors how they would view AI. So we'll completely talk past each other when arguing about AI, because it's more of a proxy war of our assumptions.


>Shouldn't have to baby-step through the basics when the author is clearly not interested in learning himself

Okay. Whip up your favorite model and report back to us with your prompts. I'm pretty anti-AI, but you're going to attract more bees with honey than smoke.


There is a big performance difference between models.

Trying to trace back the quality of the model to the "skills" of the person sounds extremely manipulative.


Astral's tooling is excellent and almost makes up for Python being a badly designed language. Almost.

I work in Python every day and Astral's tools are really what made it bearable. This acquisition is so disappointing.

Agree. As many others have expressed, uv and ruff have brought some sanity to the Python toolkit.

This is a massive backward step for the Python ecosystem, but it's not like a hundred-billion-dollar company will care about that.


Yes, but the grandparent poster and I would agree that the parse is not that ambiguous/the meaning is easily inferred. The sentence states that the library is overlapped _and_ that overlap is available in better quality: it may seem contrived, but it reads as a rather natural collapse of an implicit conjunction to me.

There's not really an exact science to it, but manually-optimised code is usually more structured/systematic to make it easier for the human author to manage the dependencies and state across the board, while automatically-optimised code is free to arrange things however it would like.

As an example of the kinds of optimisations that the best human programmers were doing before compilers took over, see Michael Abrash's Black Book: https://www.phatcode.net/res/224/files/html/index.html - you can intuit how a human might organise their code to make the most of these while still keeping it maintainable.


If you asked a three-year-old a question that they proceeded to completely flub, would you then assume that all humans are incapable of answering questions correctly?

Nobody is arguing for the quality of the search overviews. The models that impress us are several orders of magnitude larger in scale, and are capable of doing things like assisting preeminent computer scientists (the topic of discussion) and mathematicians (https://github.com/teorth/erdosproblems/wiki/AI-contribution...).


I'm a Rust main, but this argument seems... incorrect? You would not need macros for Rust to remain a usable memory-safe language. They certainly make it easier, but they're not necessary. It would be perfectly possible to design a variant of Rust that gets you to 80-90% of Rust's usability, with the same safety, without macros.


how would you implement https://doc.rust-lang.org/stable/std/pin/macro.pin.html without macros? a macro is used to shadow the original variable so that you can't move it (safely) after you pin it


A regular variable definition shadows just the same. Macros expand to regular Rust code; they could always be replaced by the expanded body.


yes, but the code inside is unsafe. the pin macro is like a safe function.


I'm not sure what that has to do with anything. The macro isn't what makes it safe. The unsafe code being properly written is.


but without macros, how would you expose a safe interface?

  fn pin<T>(x: T) -> Pin<&mut T> { ... }
would move the value


Your macroless variant of Rust would offer a safe builtin that does this. It doesn't need to be implemented with a macro.


Since macros just expand into code, how could you imagine that a macro is ever necessary?


the macro uses unsafe inside, so that's another instance of unsafe you'll need to check, whereas the pin macro is like a safe function


Excellent goalpost moving! Congratulations!


no, you just missed my point. expanding the implementation is not a safe abstraction. show me how you'd implement the functionality of the pin macro as a safe abstraction.


I didn't miss that you totally changed the subject and now you're attacking a strawman. See Steve Klabnik's response to your other comment where you did this. Of course macros are good for encapsulation and abstraction, but that's a different subject - and note that the discussion was about Zig vs. Rust, and Zig has no macros, so there's unencapsulated unsafe code all over the place.

I won't respond further.


i was responding to this claim

> It would be perfectly possible to design a variant of Rust that gets you to 80-90% of Rust's usability, with the same safety, without macros.

i then presented an api that i think relies on macros to expose a safe interface

> Of course macros are good for encapsulation and abstraction, but that's a different subject.

no it's not. exposing safe abstractions is pretty much rust's raison d'être


everything in zig is unsafe and needs to be checked like rust unsafe, so…


It is presented as a Wikipedia article from the future describing a subculture of tomorrow. See also https://qntm.org/mmacevedo for another example of this genre.


Functionality-wise, it's great, but it's a buggy mess, and it seems to be getting worse with each release.


I've been using delegated Claude agents in VS Code and it crashes so much it's insane... I switched to Copilot's local Claude agents and it works much better.

Idk about this whole vibe coding thing though... We'll see what happens


I've been a heavy user for about four months now, and it's definitely getting better for me. How would you say it's getting worse?

