I think LLMs are the wrong solution for this problem.
Why build something that produces low-level code based on existing low-level code, instead of building meaningful abstractions that make development easier and ensure the low-level code is written correctly in the first place?
Basically, React and similar abstractions in other languages did more to take "coding" out of building applications than GPT ever will, IMO.
I've wondered whether there will eventually be an LLM-specific framework, one that's idiomatic to how LLMs actually operate. Would an LLM-optimal framework even be human-readable, or would it work completely differently? The obvious catch is that LLMs work by recombining existing solutions, so producing a genuinely novel framework for LLMs would require humans to design it, which defeats the point a bit.