We are currently at the "mainframe" stage of AI. It takes a room-sized computer and millions of dollars to train a SOTA LLM.

Current models are extremely inefficient, insofar as they require vast, internet-sized training datasets, yet clearly we have not gotten fully human-quality reasoning out of them. I don't know about you, but I didn't read the entire Common Crawl in school when I was learning English.

The fundamental bottleneck right now is efficiency. ChatGPT is nice as an existence proof, but we are reaching a limit to how big these things can get. Model size is going to peak and then come down (this may already have happened).

So while we could crowdfund a ChatGPT at great expense right now, it's probably better to wait a few years for the technology to mature further.
