I hope this pricing impacts ChatGPT+

$20 is equivalent to what, 10,000,000 tokens? At ~750 words/1k tokens, that’s 7.5 million words per month, or roughly 250,000 words per day, 10,416 words per hour, 173 words per minute, every minute, 24/7.

I, uh, do not have that big of a utilization need. It's kind of weird to vastly overpay.
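For anyone who wants to sanity-check the arithmetic, here's the rough calculation in Python (this assumes the announced gpt-3.5-turbo price of $0.002 per 1K tokens, which isn't stated above, plus OpenAI's ~750 words per 1K tokens rule of thumb):

  # Back-of-the-envelope; assumes $0.002 per 1K tokens
  budget_usd = 20
  tokens = budget_usd / 0.002 * 1000           # 10,000,000 tokens
  words = tokens * 750 / 1000                  # 7,500,000 words per month
  print(f"{words / 30:,.0f} words/day")        # 250,000
  print(f"{words / 30 / 24:,.0f} words/hour")  # ~10,417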



Remember that the previous replies and responses are fed back in. If you’re 20 messages deep in a session, that’s quite a few tokens for each new question. An incredible deal nonetheless!


That's optional according to the docs, but yes, you'd probably add those.


I use ChatGPT infrequently so the $20/month isn't worth it to me for pro.

I stood up an open-source & login-free UI here: https://www.chatwithme.chat/

It accepts your API token and stores it in your browser. It does not have feature parity with ChatGPT but gives you the basics.


I’m impressed by how quickly you built this. Have you considered a "Show HN"?


Is this using the new gpt-3.5-turbo model?


Most of the value for me with ChatGPT+ is getting access when the system is at capacity.


Presumably the paid API will also give you access when the ChatGPT website is at capacity, and for most people it is probably orders of magnitude cheaper.


I wouldn’t mind paying a premium for the convenience (maybe $5 per month, billed monthly, max), but I’m definitely not spending $20.


Same here. That was the sole reason I upgraded. There were a few times where I really needed ChatGPT at a specific time and got the "we're at capacity" message. $20/mo is nothing to have that go away.


There were a few outages that locked me out as a paying subscriber too (obviously). I'm not sure how often I actually got access while the service was ‘at capacity’ for free users; knowing something like that might make me feel better about the value of Premium.


It’s a bummer to hear that outages can lock out paying subscribers. That hasn’t happened to me yet, but if it does, it would make me reconsider the premium subscription.


When you say you "really needed" ChatGPT, what was the use case?


Analysis of customer feedback on behalf of consulting clients (hence the deadline where downtime wasn’t acceptable).


> 10,416 words per hour, 173 words per minute, every minute, 24/7.

Unless I'm misunderstanding something, it does not sound like that much when every query you make carries several hundred words of prompt, context, and "memory". If the input you type is a couple of words but has 1k extra words automatically prepended, the limit turns into ~10 queries per hour, or one every 6 minutes.
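Concretely, the overhead math (a rough sketch; the 1,000-word overhead is an assumption):

  words_per_hour = 10_416       # from the parent comment's estimate
  overhead_per_query = 1_000    # assumed words of prompt/context per request
  print(words_per_hour // overhead_per_query, "queries/hour")  # ~10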


Even with that math, I do not interact with ChatGPT 240 times per day.


Not now, but if it ends up powering next-gen Copilot, email suggestions, search interfaces, etc., you might end up interacting with it a lot more each day without realizing it.


If I’m interacting with it without my knowledge or intent then I am _DEFINITELY_ not paying $20 per month for that.


Well, let's put it differently: all those hypothetical services are using the API in question, so your marginal cost for them taken together adds up to $20/month, which they'll pass on to you, and you'll then happily pay, because you find the services useful.


Maybe. I’m pretty frugal and a big fan of doing things myself. I certainly hope that they can some day provide me with enough value to make spending $20 a no-brainer, but until that’s obvious or unavoidable, I’m not giving them $20 ¯\_(ツ)_/¯


If you think you're overpaying just hit the API yourself.
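Something like this, if I have the (pre-1.0) openai Python package right; the key is a placeholder:

  import openai

  openai.api_key = "sk-..."  # your API key here

  # One-off question against the ChatGPT API
  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[{"role": "user", "content": "Say hello"}],
  )
  print(response["choices"][0]["message"]["content"])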


Any idea how to encode the previous messages when sending a follow-up question? E.g.:

1. I ask Q1

2. API responds with A1

3. I ask Q2, but want it to preserve Q1 and A1 as context

Does Q2 just prefix the conversation like this?

"I previously asked {Q1}, to which you answered {A1}. {Q2}"


https://platform.openai.com/docs/guides/chat/introduction

"The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either “system”, “user”, or “assistant”) and content (the content of the message). Conversations can be as short as 1 message or fill many pages."

"Including the conversation history helps when user instructions refer to prior messages. In the example above, the user’s final question of “Where was it played?” only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied via the conversation. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way."

So it looks like you pass in the history with each request.
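So a follow-up that preserves Q1/A1 would look roughly like this (pre-1.0 openai package; the World Series turns are the docs' own example):

  import openai

  openai.api_key = "sk-..."

  # The model is stateless between calls, so re-send the running
  # history with each request.
  history = [
      {"role": "user", "content": "Who won the World Series in 2020?"},  # Q1
      {"role": "assistant", "content": "The Los Angeles Dodgers."},      # A1
      {"role": "user", "content": "Where was it played?"},               # Q2
  ]
  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=history,
  )
  answer = response["choices"][0]["message"]["content"]
  history.append({"role": "assistant", "content": answer})  # grow the transcript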


This is explained in the OpenAI docs. There is a chat completion API and you pass in the prior messages from both the user and the assistant.


I used the same trick with the previous GPT-3 API (davinci) and it worked well; I'd pass the whole conversation as one big prompt:

  User: hello (previous prompt)
  Bot: hi (previous response)
  User: who are you? (new prompt)
  Bot: (here it continues conversation)
I wonder how the new ChatGPT API differs, other than the fact that it's structured (you use JSON to represent the conversation memory separately instead of one large prompt).

I guess I will spend the next day playing around with the new API to figure it out.
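For reference, the old trick in code form was something like this (text-davinci-003 against the completions endpoint, pre-1.0 openai package):

  import openai

  openai.api_key = "sk-..."

  # Whole conversation serialized into one prompt; the model continues it.
  prompt = "User: hello\nBot: hi\nUser: who are you?\nBot:"
  response = openai.Completion.create(
      model="text-davinci-003",
      prompt=prompt,
      max_tokens=100,
      stop=["User:"],  # stop before the model writes the user's next turn
  )
  print(response["choices"][0]["text"].strip())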


Judging by this[0], the new structured format is more resistant to "injections":

[0] https://github.com/openai/openai-python/blob/main/chatml.md


Probably something like that.

You could try formatting it like:

  Question 1: ... Answer 1: ...
  ...
  Question n: ... Answer n: ...

It makes you vulnerable to prompt injection, but for most cases this would probably work fine.


In addition to the other comment: this type of memory is a built-in feature of LLM frameworks like LangChain.
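E.g., something like this (module paths moved around in early LangChain versions, so treat it as a sketch):

  from langchain.llms import OpenAI
  from langchain.chains import ConversationChain
  from langchain.memory import ConversationBufferMemory

  # The memory object stores prior turns and re-injects them into each
  # prompt, automating the history-passing described above.
  chain = ConversationChain(llm=OpenAI(), memory=ConversationBufferMemory())
  chain.predict(input="Hi, I'm Alice.")
  print(chain.predict(input="What's my name?"))  # answerable thanks to memory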


ask chatgpt


I hope the same! I do wonder though if ChatGPT+ is subsidizing the ChatGPT API cost here.



