It’s worth being clear about what AI risk is. This has nothing to do with “AI may do some harm by putting lots of people out of work”.
The idea is that there is _existential risk_ (ie species extinction) once an AI can self-modify to improve itself, thereby increasing its own power. A sufficiently powerful AI can change the world however it wants, and if that AI is not aligned with human interests it could easily decide to make humans extinct.
Scott said in the OP that he now sees AGI as potentially close enough that one can do meaningful research into alignment, ie it’s plausible that this powerful AI could arrive in our lifetimes.
So he is claiming the opposite of what you are: AGI is more relevant than ever, hence the career change.
I agree with your premise that non-General AI will continue to improve and add lots of value, but I don’t think your conclusion follows from that premise.
I agree that putting lots of people out of work isn't the problem. The problem is that these non-general models will become very powerful, and they can be programmed by humans to do very impactful things, so much so that even if AGI comes into existence just a few years later, its impact on the world will be minimal.