
# ChatGPT, suggest plans and refinements to ChatGPT.

As an AI language model, I do not have personal opinions or desires. However, some possible ways to improve or refine ChatGPT could include:

* Fine-tuning the model on specific domains or tasks to increase its accuracy and relevance for those areas.

* Incorporating additional data sources and knowledge bases to enhance its understanding and responses.

* Using techniques such as active learning or human-in-the-loop methods to continually improve the model's performance based on real-world feedback.

* Exploring different model architectures or training methods to find the best balance between performance, efficiency, and computational resources.

* Integrating with other AI technologies such as reinforcement learning, generative models, or transfer learning to create new and innovative applications for ChatGPT.

# What do you think about the common complaint that ChatGPT lectures people on morality so frequently to the point of it being an ineffective tool?

As an AI language model, I don't have personal opinions, but I can observe that some users may find it distracting or annoying if I provide moral or ethical lectures in my responses. However, providing guidance and advice on moral or ethical issues can also be seen as a valuable aspect of my abilities, as long as it is done in an appropriate and context-sensitive manner. Ultimately, it depends on the specific use case and the preferences of the individual user. If the frequency of moral or ethical lectures is found to be a problem, it might be possible to adjust the model's behavior or incorporate specific user preferences in its responses.




> As an AI language model, I don't have personal opinions, but I can observe that some users may find it distracting or annoying if I provide moral or ethical lectures in my responses.

Wait wait wait. Why does it say "observe"?

Like, for everything else in the answer I kind of get what the generation process is, but why does it say "I can observe"? Its training corpus is a few months or years old AFAIK; it hasn't been trained on any of the discourse about people being annoyed by ChatGPT's filtering.

Maybe the same reinforcement learning that makes it use the first person and say lots of stuff about how it's been programmed also makes it "pretend" that it knows what people have been saying about it?

I dunno, maybe it's a fluke, but that word choice in particular puzzles me a bit.


LLMs work in only one way: they try to predict what comes next. ChatGPT talks like a human because it has been trained on text written by humans. What it "says" has no basis in reality; it doesn't check an internal state. It could say "I feel X" if that's what it predicted.
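Here's a minimal sketch of what I mean, using the open HuggingFace transformers library with GPT-2 as a stand-in (ChatGPT's weights aren't public, so this is purely illustrative, not how OpenAI serves it):

```python
# Next-token prediction: the model maps a token sequence to a
# probability distribution over the next token -- nothing more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("As an AI language model, I", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits       # shape: (batch, seq_len, vocab_size)

next_id = logits[0, -1].argmax().item()   # the single most likely next token
print(tok.decode([next_id]))
```

If "feel" happens to be the likeliest continuation in context, that's what comes out; no introspection is involved.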


> LLMs work in only one way: they try to predict what comes next.

Yes, obviously, but it's still trained to say certain things and not others. And it does consult an internal state, one derived from the gigabytes of parameters in its attention and feed-forward layers and from all the previous tokens in its context; what it doesn't have is persistent internal state apart from the previously emitted tokens.

So it's not completely pointless to ask "why did it use this specific word or turn of phrase?"
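Concretely (again GPT-2 via HuggingFace transformers as a public stand-in; a sketch, not ChatGPT itself): a greedy decoding loop recomputes everything at each step from the fixed weights plus the tokens so far, and the growing token list is the only "memory" between steps:

```python
# Each step is a pure function of (fixed weights, tokens so far).
# Appending the chosen token to the sequence is the only state carried over.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("Why did it use that word?", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):                     # generate 10 tokens greedily
        logits = model(ids).logits          # recomputed from scratch each step
        next_id = logits[0, -1].argmax()    # pick the most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # extend the context

print(tok.decode(ids[0]))                   # prompt plus generated continuation
```

So "why did it use this specific word?" has a mechanical answer in principle: given the same weights and the same preceding tokens, the distribution over next tokens is fully determined (sampling temperature aside).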



