But merely compiling doesn't prove much, and it doesn't solve the core issue of AIs making stuff up. I could hook a random word generator up to a compiler and eventually it would pass that test too!
For example, just yesterday I asked an AI how to approach a specific problem. It gave an answer that "worked" (it compiled!) but in reality made no sense and would have introduced a very nasty bug: it used a FrameUpdate instead of a normal Update, which is wrong at a basic level of how the framework works. A sketch of that class of bug is below.
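To make that failure mode concrete, here's a minimal Python sketch. The Entity, update, and frame_update names are invented stand-ins (the parent doesn't name the actual framework); the point is that both callbacks are perfectly valid code, but putting movement in the per-frame one silently ties game speed to frame rate.

    # Hypothetical sketch: why "it compiles" proves nothing.
    # Both callbacks are valid code; only one is correct.

    class Entity:
        def __init__(self):
            self.x = 0.0
            self.speed = 5.0  # units per second

        def update(self, dt):
            # Correct: fixed-timestep callback, scaled by elapsed time.
            self.x += self.speed * dt

        def frame_update(self):
            # Wrong but perfectly "valid": runs once per rendered frame,
            # so movement speed now depends on the player's frame rate.
            self.x += self.speed

    # Simulate one second of gameplay at two frame rates.
    for fps in (30, 144):
        e = Entity()
        for _ in range(fps):
            e.frame_update()
        print(f"{fps} fps: x = {e.x}")  # 150.0 vs 720.0: the nasty bug

No compiler or type checker catches this; the mistake lives entirely in the framework's semantics.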
I'm not interested in this Calvinball argument. The post we're commenting on makes a clear claim: that the LLM hallucinated entire APIs. Not that it surreptitiously snuck subtly shitty stuff past a compiler.
This is my problem: not that people are cynical about LLM-assisted coding, but that they themselves are hallucinating arguments about it, expecting their readers to nod along. Not happening here.
The AES block cipher core: also grievously insecure if used naively, without understanding what a block cipher can and can't do by itself. The same is true of an LLM call.
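For anyone who hasn't seen why the raw primitive is dangerous, here's a short Python sketch using the real pyca/cryptography package. Raw AES is deterministic per 16-byte block, so equal plaintext blocks produce equal ciphertext blocks, leaking structure to anyone watching. That's the classic ECB failure, and it's exactly what "used naively" means here.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    block = b"ATTACK AT DAWN!!"  # exactly one 16-byte AES block

    # ECB = the bare block cipher applied block by block, no randomization.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct = enc.update(block * 2) + enc.finalize()

    # Identical plaintext blocks produce identical ciphertext blocks:
    # an eavesdropper learns which parts of the message repeat.
    assert ct[:16] == ct[16:32]

The primitive itself is fine; the danger is entirely in how it's composed. Which is the point of the analogy.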