Machines are currently at an amateur level, but they are amateurs across the entire knowledge base.
Amateurs at Python, Fortran, C, C++, and every other programming language. Amateurs at car engineering, airplane engineering, submarine engineering, etc. Amateurs at human biology, animal biology, insect biology, and so on.
I don't know anyone who is an amateur at everything.
> Machines are currently at an amateur level, but they are amateurs across the entire knowledge base.
No, and that is one of their limitations. LLMs are human-level or above on some tasks - basically what they were trained to do: generating text, and (at least at some level) grokking what is necessary to do a good job of that. But they are at idiot level on many other tasks (not to overuse the example, but I just beat GPT-4 at tic-tac-toe because it failed to block my 2/3-complete winning line).
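For context, the move GPT-4 missed is a one-line rule. A minimal sketch (the board encoding - a list of 9 cells - is my own, not anything GPT-4 was shown):

```python
# The eight winning lines on a 3x3 board, as index triples.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def blocking_move(board, opponent):
    """Return the empty cell that blocks an opponent's 2/3-complete
    line, or None. board is a list of 9 cells: 'X', 'O', or None."""
    for line in WIN_LINES:
        cells = [board[i] for i in line]
        if cells.count(opponent) == 2 and cells.count(None) == 1:
            return line[cells.index(None)]
    return None
```

That this fits in a dozen lines is the point: the failure is not about difficulty, it is about where the model's competence happens to be patchy.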
Things like translation and summarization are tasks that LLMs are well suited to, but these also expose the danger of their extremely patchy areas of competence (it's not just me saying this - the Anthropic CEO recently acknowledged it too). How do you know that the translation is correct and not affected by one of these areas of incompetence? How do you know that the plausible-looking summary is accurate and not similarly affected?
LLMs are essentially by design ("predict next word" objective - they are statistical language models, not AI) a cargo-cult technology - built to create output that looks like it was created by someone who actually understands it. Like the tribe (the origin of the term cargo cult) that builds a wooden airplane that looks, to them, close enough to the cargo plane that brings gifts from the sky. Looking the same isn't always good enough.
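The "predict next word" objective can be illustrated with a toy sketch (a word-level bigram model of my own construction; real LLMs are neural networks over subword tokens, but the objective has the same shape - pick a likely continuation given what came before):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def generate(follows, start, max_words=10):
    """Greedily emit the most frequent follower of the previous word."""
    out = [start]
    while len(out) < max_words and follows[out[-1]]:
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)
```

Nothing in this loop models the world; it only models what text tends to look like - which is the cargo-cult point in miniature.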
Also take a look at rare diseases and doctors [1], where machines are already better at diagnosing thousands of different rare diseases. Is it fair to say that machines are better at diagnosing diseases in general just because they are better at rare diseases, each of which a given doctor will need to diagnose only once or twice in their career? Not clear at all.
Right now we are constrained by data, but that constraint will go away in 5 years or so. Will AGI be achieved by then effortlessly? I have my doubts. My sentiment is that even if AGI is never achieved, every small advance in reasoning ability, context window, and multimodal sensors and actuators will have a very broad effect on jobs, on the economy, and on the way we currently produce anything.
They cannot make a submarine themselves, or design one, but when they reach 50 percent, they will reach 50 percent at everything.
In submarine engineering, they will be able to design one and construct it in some way, like 3D-printing it, and the submarine will be able to move through the water for some time before it sinks. Granted, for submarines a higher percentage should probably be reached before they are really useful.
Our jobs are safe. I would even expect more "beginners" to try something with AI and then need an actual programmer to help them.
(At least, if they are unwilling to invest the time in development and debugging themselves.)
PS: Probably all the given examples are in the top 3 most popular programming languages.