LLMs take fuzzy input and are therefore suited to fuzzy output, mostly things like recommendations and options. I just do not see a scenario where fuzzy input leads to absolute, opinionated output unless the task is extremely simple and has largely been done before. Programming, design, writing, etc. all require opinions and a definite, authored output to be any good.