
I feel like the fact that we're not in that future right now means we won't get there that fast.

What's missing to get all of this now? What revolutionary research or product development that hasn't happened yet will happen in the coming year?

To me it looks like LLM tech is stagnating; after the hype peak, we are close to the trough of disillusionment.



Part of the problem is that GPT-4 inference is too expensive to roll out at scale with current GPU availability and cost. So even basic features (e.g. your word processor writing for you) aren't generally available, or if they are, the model behind them is cheaper and not as good.
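
Roughly what I mean, as a hedged sketch using the OpenAI Python SDK (the complete() helper, the premium flag, and the model names are my own illustrative assumptions, not any real product's routing logic):

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def complete(prompt: str, premium: bool = False) -> str:
      # Illustrative routing only: most traffic goes to a cheaper model,
      # and the expensive one is reserved for a paid tier. Model names
      # are placeholders, not anyone's actual product configuration.
      model = "gpt-4" if premium else "gpt-3.5-turbo"
      resp = client.chat.completions.create(
          model=model,
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  # A "write this for me" feature in a word processor would mostly hit
  # the cheap path:
  print(complete("Continue this paragraph: The quarterly results show..."))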

Partly it just takes time. Based on previous similar shifts like the web, I think it will take about 20 years before the ideas from the current generation of LLMs are built out, integrated into existing products, turned into new products, and it is all done. People and organisations take time to change.


The problem with LLMs, as I see it, is that they make logic errors all the time, and LLMs themselves are somehow not smart enough to reason about their own reasoning.
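
A minimal sketch of the kind of self-check I mean, again with the OpenAI Python SDK (the model name, the prompts, and the ask() helper are illustrative assumptions, not anything these products actually do):

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def ask(prompt: str) -> str:
      # One chat completion; "gpt-4" is a placeholder model name.
      resp = client.chat.completions.create(
          model="gpt-4",
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  answer = ask("If Alice is taller than Bob and Bob is taller than Carol, "
               "who is the shortest?")
  # Second pass: ask the model to audit its own reasoning. In my experience
  # this often just restates the first answer confidently, errors and all.
  critique = ask(f"Check this answer for logic errors and list any you find: {answer}")
  print(answer)
  print(critique)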



