I know it sucks now, and I agree GPT-4 is not a replacement for coders. However, the leap between GPT-3 and GPT-4 suggests that by the GPT-6 level, if improvements continue, it'll reach the scope and accuracy we expect from highly paid, skilled humans.
It's only a guess that AI improvements will stop at some arbitrary point, and since that point always seems to sit a few steps below the skill level of the person making the prediction, I suspect there's a bit of bias and ego-driven insecurity in those predictions.
> However, the leap between GPT-3 and GPT-4 suggests that by the GPT-6 level, if improvements continue, it'll reach the scope and accuracy we expect from highly paid, skilled humans.
What is the term for prose that is made to sound technical and falsely precise, and therefore meaningful, but is actually gibberish? It's escaping me. I suppose even GPT-3.5 could answer this question, but I am not worried about my job.
Do you honestly think no AI advancement will fix those limitations? That LLMs or their successors will just never reach human level, no matter how much compute or data are thrown at them?
No, we won't. Not in either of our lifetimes. There are problems with vastly smaller problem spaces that we still cannot solve because of their sheer difficulty. LLMs are the equivalent of a brute-force attempt at modeling language, and language is an infinitesimal fraction of the whole body of work devoted to AI.
>> Do you honestly think no AI advancement will fix those limitations? That LLMs or their successors will just never reach human level, no matter how much compute or data are thrown at them?
It has not happened yet.
If it does, how trustworthy would it be? What would it be used for?
In terms of scope, it's already left the most highly skilled people a light year behind. How broad would your knowledge base be if you'd read -- and memorized! -- every book on your shelf?
Plausible, but I also think a highly paid, skilled person would do a lot worse if not allowed to test their code, run a compiler or linter, or consult the reference manual. So GPT-4 can get a lot more effective at this even without getting any smarter.
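To make that concrete, the gain from tool access can be as simple as wrapping the model in a check-and-retry loop, so compiler feedback substitutes for raw smarts. A minimal sketch in Python, where generate_code is a hypothetical stand-in for whatever LLM call you use (everything else is stdlib):

    import subprocess
    import sys
    import tempfile

    def generate_code(prompt: str, feedback: str = "") -> str:
        # Hypothetical placeholder for a real model call.
        raise NotImplementedError

    def generate_with_checks(prompt: str, max_attempts: int = 3) -> str:
        feedback = ""
        for _ in range(max_attempts):
            code = generate_code(prompt, feedback)
            with tempfile.NamedTemporaryFile(
                "w", suffix=".py", delete=False
            ) as f:
                f.write(code)
                path = f.name
            # The same cheap verification a human developer would run.
            result = subprocess.run(
                [sys.executable, "-m", "py_compile", path],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return code
            # Feed the error back rather than hoping draft one is right.
            feedback = result.stderr
        raise RuntimeError("no passing draft after retries:\n" + feedback)

The same pattern extends to running a linter or the project's test suite instead of py_compile; the point is the model doesn't have to be right on the first draft, only correctable.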