> Folks much smarter than I seem worried, so maybe I should be too, but it just seems like such a long shot.
Honestly? I'm not too worried.
We've seen how the Google employee who was "seeing consciousness" (in what was basically GPT-2 lol) turned out to be a nothingburger.
We've seen other people in "AI Safety" overplay their importance and hype their CVs more than actually do any relevant work. (Usually also playing the diversity card.)
So, no: AI safety is important, but I see it attracting the least helpful and least resourceful people to the area.
I think when you’re jumping to arguments that resolve to “Ilya Sutskever wasn’t doing important work… might’ve played the diversity card,” it’s time to reassess your mental model and inspect it closely for motivated reasoning.
Another person’s interpretation of another person’s interpretation of another person’s interpretation of Jan’s actions doesn’t even answer the question I asked as it pertains to Jan, never mind the other model violations I listed.
I’m pretty sure if Jan came to believe safety research wasn’t needed, he would’ve just said so. Instead he said the exact opposite of that.
Why don’t you just answer the question? It’s a question about how these datapoints fit into your model.