
> Hearing is also well into the terabytes worth of information per year. Add in touch, taste, smell, proprioception, etc and the brain gets a deluge

Is that supposed to be a lot? Only a small fraction of that is committed to permanent storage.

A typical server today processes anywhere from tens of terabytes to hundreds of petabytes annually.



Low-level details like that aren’t relevant to this discussion. Most human processing power is at the cellular level. The amount of processing power in a single finger literally dwarfs a modern data center, but we can’t leverage that to think, only to live.

So it’s not a question of ‘a lot’; it’s a question of orders of magnitude versus the quantities used to train AI.

The Library of Congress has, what, 39 million books? Tokenize every single one and you’re talking terabytes of training data for an LLM. We can toss blog posts etc. onto that pile, but every word ever written by a person isn’t 20 orders of magnitude larger or anything.
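A quick sanity check of that claim. This is a rough back-of-envelope sketch; the book count comes from the comment above, while the average book length, bytes per word, and tokens-per-word ratio are all assumptions chosen as plausible round numbers:

```python
# Back-of-envelope estimate of LoC holdings as LLM training data.
# All constants below except BOOKS are assumptions, not sourced figures.
BOOKS = 39_000_000        # Library of Congress holdings, per the comment above
WORDS_PER_BOOK = 80_000   # assumed average book length
BYTES_PER_WORD = 6        # ~5 chars + whitespace in plain UTF-8 text (assumption)
TOKENS_PER_WORD = 1.3     # rough BPE tokenizer ratio (assumption)

total_words = BOOKS * WORDS_PER_BOOK
plain_text_tb = total_words * BYTES_PER_WORD / 1e12
total_tokens = total_words * TOKENS_PER_WORD

print(f"{plain_text_tb:.0f} TB of plain text")  # ~19 TB
print(f"{total_tokens:.1e} tokens")             # ~4.1e12 tokens
```

So under these assumptions the whole collection is on the order of tens of terabytes and a few trillion tokens, which is indeed the same ballpark as, not orders of magnitude beyond, recent LLM training corpora.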



