It's interesting that everyone is talking about programmers being replaced by AI, but the model did far better on the humanities-type subjects than on the programming tests.
Maybe I’m just old but I don’t quite understand the hype.
As long as it’s vulnerable to hallucination, it can’t be used for anything where there are “wrong answers” - and I don’t think GPT-4 has fixed that issue yet.*
Now if it’s one of those tasks where there are “no wrong answers”, I can see it being somewhat useful. A non-ChatGPT AI example would be those art AIs - art doesn’t have to make sense.
The pessimist in me sees things like ChatGPT as the ideal internet troll - it can be trained to post stuff that maximises karma gain while pushing a narrative that it will hallucinate its way into justifying.
* When they do fix it, everyone is out of a job. Humans will only be used for cheap labor - because we are cheaper than machines.
Humans get things wrong too. A better question is: what error rate is acceptable for this task?
Jobs where higher error rates are acceptable, or where errors are easier to detect, will succumb to automation first. Art and poetry fit both of these criteria.
The claim is that as the model and training data sizes increase, these errors will get more and more rare.
We will see...
I am very optimistic about the far future. However, there will be a transition period where some jobs have been automated away but not others. There will be massive inequality between the remaining knowledge workers and manual laborers. If I were in a role on the early-automation side of the spectrum, I would be retraining ASAP.
Humans can self correct / think critically. AIs like ChatGPT can’t do that at all.
You know sometimes you have a “bright idea” then after thinking about it for a second you realise it’s nonsense. With AI like ChatGPT, the “thinking about it for a second” part never happens.
There are logs where ChatGPT initially gives the wrong answer, but then corrects itself when asked to explain the wrong answer. Is that the second part you're thinking of?
The crucial difference there is the presence of an external agent intelligent enough to spot that the answer is wrong; humans can do that for themselves. ChatGPT doesn't self-reflect.
Interestingly, many (most?) humans don't self-reflect or correct themselves unless challenged by an external agent as well — which doesn't necessarily have to be another human.
Also of note, GPT-4 seems to show huge improvements so far over GPT-3 when it comes to "thinking out loud" to come to a (better) answer to more complex problems. Kind of a front-loaded reflection of correctness for an overall goal before diving into the implementation weeds — something that definitely helps me (as a human) avoid unnecessary mistakes in the first place.
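For concreteness, the "thinking out loud" pattern looks something like this when prompting. This is a hypothetical sketch: ask_llm() is a stand-in for whatever real model API you'd call, and both prompts are made up for illustration.

    def ask_llm(prompt: str) -> str:
        """Placeholder for a real chat-model API call."""
        raise NotImplementedError

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    # Direct prompt: the model commits to an answer immediately, and the
    # tempting-but-wrong $0.10 often slips through.
    direct = ask_llm(question)

    # "Thinking out loud": asking for the reasoning before the answer gives
    # the model a chance to catch the mistake before it finalizes anything.
    reasoned = ask_llm(question + "\nReason through it step by step, "
                                  "then state the final answer.")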
> Interestingly, many (most?) humans don't self-reflect or correct themselves unless challenged by an external agent as well
Disagree with you here - why do you say this? Maybe we don't apply self-reflection consistently (for example when it comes to political beliefs) but even toddlers know when they haven't achieved the goal they were aiming for. ChatGPT has no clue unless you prod it, because it doesn't know anything - it's stringing words together using probability.
You are imagining that overnight we'll just use ChatGPT to decide whether a loan should be granted to a customer, and of course it can't do that reliably. But think about turning that decision into steps, so that we can chip away at the problem. E.g.:
Step 1 would be to use ChatGPT to extract all of the loan inputs from documents; step 2 could be to identify any missing information we'd need to make the decision; step 3 would be making the decision itself. At each step we'd have checks/balances and human feedback. But don't kid yourself: this is coming, and the benefits for those who make the shift first are huge.
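To make those three steps concrete, here's a rough Python sketch. Everything in it is an assumption for illustration: ask_llm() stands in for whatever model API you'd actually call, and the field names and prompts are invented.

    import json

    REQUIRED_FIELDS = ["applicant_name", "annual_income", "loan_amount", "credit_score"]

    def ask_llm(prompt: str) -> str:
        """Placeholder for a call to whatever model API you'd actually use."""
        raise NotImplementedError

    # Step 1: pull structured loan inputs out of free-form documents.
    def extract_inputs(document_text: str) -> dict:
        prompt = ("Extract these fields from the loan application as JSON ("
                  + ", ".join(REQUIRED_FIELDS)
                  + "). Use null for anything not stated.\n\n" + document_text)
        return json.loads(ask_llm(prompt))

    # Step 2: flag missing information rather than letting the model guess.
    def missing_fields(inputs: dict) -> list:
        return [f for f in REQUIRED_FIELDS if inputs.get(f) is None]

    # Step 3: draft a decision, but only as a recommendation for a human.
    def draft_decision(inputs: dict) -> str:
        prompt = ("Given these loan inputs, recommend approve or deny, with a "
                  "short rationale for a human reviewer:\n" + json.dumps(inputs))
        return ask_llm(prompt)

    def review_application(document_text: str) -> str:
        inputs = extract_inputs(document_text)
        gaps = missing_fields(inputs)
        if gaps:  # checks/balances: escalate to a human instead of guessing
            return "Escalate to human: missing " + ", ".join(gaps)
        return "For human sign-off: " + draft_decision(inputs)

The point being: each step is small enough to test and audit on its own, and a human stays in the loop at every gate.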
We are still very, very far away from having robotics overtake human dexterity. Even if AI can replace all knowledge workers, barbers, surgeons, and athletes will have a job for a long time.
The only careers in the future will be ones where people do something an AI can't do, so the EV won't be low compared to non-existent jobs. Obviously “capitalist” is the only job that makes real money.
I mean low EV compared to other careers right now. Obviously automation benefits the ownership class massively. Buy tech stocks; they are discounted at the moment.