sarpdag's comments | Hacker News

Actually, they are charging for 269,000 tokens. It looks like they changed how they charge accounts.

The screenshot for my dashboard: https://ibb.co/2Vy5CRJ

Cursor Version: 1.2.4 (Universal)


There's a Tokens tab on the mid-right with more detailed pricing. Not all tokens are priced the same.

But the Usage section at the top gives a clear view too. On Claude sonnet-thinking you've used $10 worth: 10k input tokens (your prompt and code) and 185k output tokens (presumably the written code, maybe thinking tokens too). The ~900k cache write is probably it indexing the code, and the 14m cache read is it scanning/understanding your code.
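
Back-of-the-envelope, assuming Anthropic's list prices for Claude 3.5 Sonnet (my assumption; Cursor may bill differently or add a markup), those numbers roughly add up to the $10 shown:

    # Rough sanity check of the dashboard numbers.
    # Rates below are Anthropic's published per-million-token prices for
    # Claude 3.5 Sonnet (an assumption; Cursor's actual billing may differ).
    rates = {"input": 3.00, "output": 15.00, "cache_write": 3.75, "cache_read": 0.30}
    usage = {"input": 10_000, "output": 185_000, "cache_write": 900_000, "cache_read": 14_000_000}

    cost = sum(tokens * rates[kind] / 1_000_000 for kind, tokens in usage.items())
    print(f"~${cost:.2f}")  # ~$10.38 -- cache reads are the biggest single line item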


A journey from nonprofit to military contract.


It was fun. I don't know whether I would play again, but I played twice without losing focus.


I really like the multi-armed bandit approach, but it struggles with common scenarios involving delayed rewards or multiple success criteria, such as testing e-commerce search against the number of orders with GMV guardrails.

For simple, immediate-feedback cases like button clicks, the specific implementation becomes less critical.
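
For that simple case, a minimal Thompson-sampling sketch (the variant names and the click/no-click reward model here are just for illustration, not any particular product's implementation):

    import random

    # One Beta(1, 1) prior per variant; successes = clicks, failures = no-clicks.
    arms = {"A": [1, 1], "B": [1, 1]}

    def choose_arm():
        # Sample a plausible click rate from each arm's posterior and pick the best.
        samples = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
        return max(samples, key=samples.get)

    def record(name, clicked):
        # Update the chosen arm's posterior with the observed outcome.
        arms[name][0 if clicked else 1] += 1

    # Usage: serve choose_arm() on each impression, call record() when feedback arrives.
    # Delayed rewards break the assumption that record() can be called right away,
    # which is exactly the limitation mentioned above.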


It’s best for immediate rewards. If you have delayed rewards, there is a paper on sampling from the “delay distribution” that addresses this.


Authentic and nostalgic; I am enjoying watching it.


I tried the "mad cow" joke on o1-mini and it still fails to explain it correctly, but o1-preview correctly states "The joke is funny because the second cow unwittingly demonstrates that she is already affected by mad cow disease."


I got the domain in 2010 to build a different search engine, but sadly I kept postponing it.


I would let the user choose between seeing new content and older but upvoted content. Also, it is important to publish the ranking logic behind it if it is not straightforward.


Search engines not only control what information you see but also how it’s ranked. Today, information is tightly controlled by a few major platforms. AI tools like ChatGPT or Perplexity typically provide a single answer to a question, even if they include sources.

Platforms like Reddit are excellent for complex discussions, but they’re overkill for simple queries like “What’s Michael Jackson’s age?” and not efficient for straightforward information needs. Your profile is tied to the questions you ask.

I am building a proof of concept that puts both the creation and ranking of information in the hands of users, supported by AI for speed.

AI will gather answers from various sources, offering a range of perspectives and articles: one answer per source, rather than unifying them into a single response. The World Wide Web is wonderful. Knowledge sources should remain distributed, and people should visit the original source to get the full context.

Users can contribute their own answers and points of view, allowing others to see multiple perspectives and real-world expertise.

Instead of algorithms deciding what ranks highest, the community will upvote or downvote answers. The best content will rise based on real user feedback, not hidden ranking systems.
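
If the ranking is published, it can be as simple as the lower bound of a confidence interval on the upvote share (the Wilson score used on many voting sites), so that a handful of early votes doesn't permanently outrank better answers. A sketch, with the particular formula being my choice rather than part of the proposal:

    import math

    def ranking_score(upvotes, downvotes, z=1.96):
        # Lower bound of the 95% Wilson confidence interval on the upvote share.
        # Rewards consistently positive feedback rather than raw vote totals.
        n = upvotes + downvotes
        if n == 0:
            return 0.0
        p = upvotes / n
        return (p + z * z / (2 * n)
                - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

    # Answers are sorted by this score, and the formula itself can be shown to users.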

Please let me know what you think about it.


This concept is a refreshing approach to decentralizing information and reducing algorithmic bias. Empowering users to contribute, rank, and upvote content encourages diverse perspectives and real expertise to shine. The emphasis on distributed knowledge sources, rather than relying on unified AI answers, keeps the spirit of the open web alive. Excited to see how this evolves!


Thank you. Please let me know what is missing or could be better from your perspective.


I will try immediately.

