Andrew, in that case, is definitely right that long-term AI safety has almost nothing to do with near-term AI implementation.
More generally, when I see people assert that these people don't know what they're talking about, it almost always looks like a case of reference class confusion. People seem to expect that AI safety research must look exactly like AI implementation research, otherwise it's illegitimate. (Kind of like insisting that biology research must look exactly like chemistry research, otherwise it's illegitimate.)
This is a game theory problem. It might be informed by knowledge of machine learning, much as a broad understanding of chemistry helps in a lot of biology research, but the two aren't at the same level of abstraction. Insisting that anyone who wants to study a problem that merely touches on your own research interests must do it exactly the way you do it, and focus on exactly the same things, or else they're a hack and a crackpot, reeks of narrow-mindedness.