Hacker News: altopex's comments

LLMs have issues with creative tasks that may not be obvious to light users.

Using them for an RPG campaign could work if the bar is low and it's your first couple of sessions with it. After a while, though, you start to recognize repeated patterns and guardrails.

The weights of the models are static. The model is always predicting the best association between the input prompt and whatever tokens it's spitting out, with some minor variance due to the probabilistic nature of sampling. Humans can reflect on what they've done previously and deliberately de-emphasize an old concept because it's stale, but LLMs can't. The LLM is going to give you bog-standard Gemini/ChatGPT output, which, for a creative task, is a serious defect.
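To make the "static weights, sampling variance" point concrete, here's a toy sketch of temperature sampling over a single frozen next-token distribution. The logit values and the four candidate continuations are made up for illustration; the point is that when one trope dominates the distribution, sampling adds surface-level variety, not new ideas.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from a fixed (static) logit vector.

    The logits never change between calls: the model's weights are
    frozen, so all run-to-run variety comes from this sampling step.
    """
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token index proportionally to its probability.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for four candidate continuations of a fantasy
# prompt, where the "dark forest" trope dominates. Even with sampling
# variance, the dominant option wins the vast majority of draws.
logits = [4.0, 1.0, 0.5, 0.2]
counts = [0, 0, 0, 0]
rng = random.Random(0)
for _ in range(1000):
    counts[sample_next_token(logits, temperature=0.8, rng=rng)] += 1
```

Raising the temperature flattens the distribution and spreads the draws out, but it can only reweight options the frozen model already ranks highly; it can't decide a trope has gone stale and retire it the way a human GM would.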

Personally, I've spent a lot of time testing the capabilities of LLMs for RP and storytelling, and I've concluded I'd rather have a mediocre human than the best LLMs available today.


You're talking about a very different use than the one suggested upthread:

    I use it to criticize my creative writing (poetry, short stories) and no other model understands nuances as much as Gemini.
In that use case, the lack of creativity isn't as severe an issue because the goal is to check if what's being communicated is accessible even to "a person" without strong critical reading skills. All the creativity is still coming from the human.

