Not really. It contains all kinds of copyrightable data. It's like a dictionary of phrases. Sure, there are lots of generic ones, "a lot of", "this or that", and then there are novel ones: "It is a truth universally acknowledged".
Your writing and artwork will contain these novel bits, and if you accidentally string the right few together, you're suddenly in a lot of trouble.
Do you see that in practice? E.g. after you finetune an LLM/SD model, does it string it together?
Can't speak for LLMs, but I'm an SD enthusiast with 2 YoE, which tells me that most of these threads have nearly no idea what they are talking and theorizing about. I see meaningless reductions to technicalities, similar to "it's just neurons firing", and a general lack of the basic knowledge that comes with even minimal practice beyond just talking to it. Meanwhile, I work hard to actually reproduce concepts from the training sets in a way that resembles them closely enough and stays compatible with other models.
So I apologize if this sounds challenging, but I'm not into bare philosophy around AI. Practice tells me a completely different story than these threads tend to express, and I know that very few people actually tinker with AI any deeper than writing system/instruction prompts into the chat box.