Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Any idea how to encode the previous messages when sending a followup question? E.g.:

1. I ask Q1

2. API responds with A1

3. I ask Q2, but want it to preserve Q1 and A1 as context

Does Q2 just prefix the conversation like this?

„I previously asked {Q1}, to which you answered {A1}. {Q2}“



https://platform.openai.com/docs/guides/chat/introduction

"The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either “system”, “user”, or “assistant”) and content (the content of the message). Conversations can be as short as 1 message or fill many pages."

"Including the conversation history helps when user instructions refer to prior messages. In the example above, the user’s final question of “Where was it played?” only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied via the conversation. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way."

So it looks like you pass in the history with each request.


This is explained in the OpenAI docs. There is a chat completion API and you pass in the prior messages from both the user and the assistant.


I used the same trick with the previous GPT3 API (da-vinci) and it worked well, I'd pass as one big prompt:

  User: hello (previous prompt)
  Bot: hi (previous response)
  User: who are you? (new prompt)
  Bot: (here it continues conversation)
I wonder how the new ChatGPT API differs, other than the fact that it's structured (you use JSON to represent the conversation memory separately instead of one large prompt).

I guess I will spend the next day playing around with the new API to figure it out.


Judging by this[0] the new structured format is immune to "injections":

[0] https://github.com/openai/openai-python/blob/main/chatml.md


Probably something like that.

You could try formatting it like

Question 1: ... Answer 1: ...

...

Question n: ... Answer n: ...

It makes you vulnerable to prompt injection, but for most cases this would probably work fine.


In addition to the other comment this type of memory is a feature in LLM frameworks like Langchain


ask chatgpt




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: