Hacker News

So this is interesting: it's anecdotal (I presume you're a high-token user who believes it's revolutionary), but in principle it's actually a measurable, falsifiable hypothesis.

I'd love to see a survey from a major LLM API provider correlating LLM spend (and/or token usage) with optimism about future transformative potential. Correlating with views of "current utility" would be a tautology, obviously.

I actually have the opposite intuition from you: I suspect the people using the most tokens are using it for very well-defined tasks that it's good at _now_ (entity extraction, classification, etc) and have an uncorrelated position on future potential. Full disclosure, I'm in that camp.



By token usage I mean usage via agentic processes. Essentially every gripe about LLMs over the last few years (hallucinations, lack of real-time data, etc.) was a result of single-shot prompting directly against models. No one is seriously doing that anymore. Yes, you spend ten times more on a task, and it takes much longer. But the results are meaningful and useful at the end, and you can actually begin to engineer systems on top of them now.
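The single-shot vs. agentic distinction above can be sketched as a minimal loop. Everything here is hypothetical: `call_model` is a hard-coded stub standing in for a real LLM API, and the `lookup` tool is a placeholder for real tools (search, code execution, etc.), so the control flow can run end to end without any API.

```python
# Minimal sketch of an agentic loop, assuming a hypothetical model and tool.
# A single-shot call would be just: call_model([{"role": "user", "content": prompt}])
# with no tools, no feedback, and no retries.

def call_model(messages):
    # Stub for an LLM API call: requests one tool call, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "lookup",
                "args": {"query": "population of Paris"}}
    return {"type": "final", "content": "About 2.1 million people live in Paris."}

TOOLS = {
    # Hypothetical tool; real agents wire in search APIs, code runners, etc.
    "lookup": lambda args: "Paris population (2023 est.): ~2.1 million",
}

def agent_loop(prompt, max_steps=5):
    # The model can request tool calls; each result is fed back into the
    # conversation until the model emits a final answer or the budget runs out.
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["type"] == "final":
            return reply["content"]
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")

print(agent_loop("How many people live in Paris?"))
```

The extra cost the comment describes is visible in the structure: every loop iteration is another model call, but the final answer is grounded in tool output rather than a single unverified completion.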




