> Humans learn by looking at existing code and gleaning new ideas, and so does the AI, although it requires much more data.
Actually, I doubt it. There are different ways to learn, the cheapest of which is indeed observation. There's also trial and error, which is how most enthusiasts start, with little more than a description of the grammar if the language is primitive enough (famously so for the BASIC varieties). Of course this quickly leads to local minima and frustration, Turing tarpits for example, but it also leads to independent discovery and rediscovery. At the next stage of learning, I believe, it's differentiation that makes ... all the difference, i.e. learning about the development of languages and eventually of computing machines per se. At a higher order, learning cannot be limited to calculation without a semantic interface (and vice versa).
I mean, etymologies like the ominous calx (Latin for "pebble", cp. chalk, calcium) behind calculation, and symbolic notation, show the kind of "code" that we copy from. That said, I'd think that at the current state of the art, the AI would eventually have to look at the machine code as the representation of its output (perhaps because I found myself wanting to do that), even if it is encoded through macros and programming-language devices, which it could do entirely unsupervised.
However, understanding the task is a more general problem, isn't it?
Belter was probably joking.
A good logic-programming AI could do competitive programming through purely deductive reasoning.
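For concreteness, here is a minimal sketch of what "purely deductive reasoning" could look like mechanically: forward chaining over Horn clauses, the core move of Datalog-style logic programming. The facts, rules, and predicate names below are made-up toy examples, not a real contest problem or anyone's actual system.

```python
from typing import List, Set, Tuple

# Each rule is (body, head): if every atom in `body` is known, derive `head`.
Rule = Tuple[List[str], str]

def forward_chain(facts: Set[str], rules: List[Rule]) -> Set[str]:
    """Apply rules repeatedly until no new facts can be derived (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(atom in derived for atom in body):
                derived.add(head)
                changed = True
    return derived

if __name__ == "__main__":
    # Hypothetical toy knowledge base: deduce compositeness from divisibility facts.
    facts = {"divisible_by_3(9)", "greater_than_3(9)"}
    rules = [
        (["divisible_by_3(9)", "greater_than_3(9)"], "composite(9)"),
        (["composite(9)"], "not_prime(9)"),
    ]
    print(forward_chain(facts, rules))
    # Derives composite(9) and not_prime(9) purely by applying the rules.
```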
I just don’t see good evidence that this will be possible at a world-class level in 10 years.
Give the AI as much data as it wants; if it manages to solve something, that means it's a problem worth automating ("is this a cat?").