They're mostly posting incorrect claims like "all it does is make collages out of other people's art", though. It doesn't do that; the model stores roughly 1 byte per original image it's seen.
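For rough scale, here's a back-of-the-envelope sketch of where a figure like that comes from; the parameter count and dataset size below are my own ballpark assumptions, not exact numbers:

```python
# Back-of-the-envelope: bytes of model weights per training image.
# Assumed ballpark figures: a diffusion model with ~860M parameters,
# stored as fp16 (2 bytes each), trained on a ~2.3B-image web-scraped dataset.
params = 860_000_000              # assumed parameter count
bytes_per_param = 2               # fp16 storage
training_images = 2_300_000_000   # assumed training set size

bytes_per_image = params * bytes_per_param / training_images
print(f"~{bytes_per_image:.2f} bytes of weights per training image")
# ~0.75 bytes/image under these assumptions, i.e. on the order of "1 byte".
```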
Think of that 1 byte as a hyper-compressed essence of the knowledge needed to make said collage, and then the claim becomes correct. Those models would not exist without the labor of the artist community - which in this case was used without consent and without pay.
These models would still exist, just without a painterly style. Most of the training data is photos of real things, much of it stock photography that has been ripped off in much the same way as those artists' work. It is a challenge to copyright law, but if artists are allowed to learn by looking at other people's work, why treat an AI differently?
You're kicking the can down the road. Obviously an AI is not a person; that's in the premise of the question. What makes an AI so different from a person that it warrants differential treatment in this case? It's not that there aren't good answers to that question, but yours isn't much of an answer at all.
It's probably okay to learn facts from copyrighted material even if you're an AI - you can't reproduce the text of a novel but you can learn the meaning of a word from context in it. Similarly you can learn to draw a hand from looking at a ton of stock photos with hands, as long as you produce original hands.
AI developers will need some favorable legal precedents to avoid getting banned, though, or else they'll only be able to train on CC0 Flickr/Wikipedia scrapes.
I also think a more obvious problem is that it can reproduce copyrighted characters, e.g. by prompting for "Homer Simpson".