This is what some EAs believe; I don't think there was ever a broad consensus on those latter claims. As such, it doesn't seem like a fair criticism of EA.
You can’t paint a whole movement with a broad brush, but it’s true of the leadership of the major EA organizations. Once they went all-in on “AI x-risk”, there ceased to be a meaningful difference between them and the fringe of the LW ratsphere.
That same link puts AI risk under the "far future" category, basically the same category as "threats to global food security" and asteroid impact risks. What's unreasonable about that?