The entire neural network architecture for human-level thinking and processing, including vision, speech, emotion, abstract thought, balance, and fine motor skills, was publicly released in April 2003, twenty years ago this month. It's a 700-megabyte tarball that sets up an 80-billion-parameter neural network.
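As a rough sanity check on the file size (my own back-of-envelope arithmetic, not anything stated in the Human Genome Project release): the genome is roughly 3.1 billion base pairs, each base is one of four letters, so it packs into about 2 bits per base, which lands in the same ballpark as a 700 MB tarball.

```python
# Back-of-envelope check of the "700 megabyte tarball" figure.
# Assumptions (mine, not from the genome release): ~3.1 billion base pairs,
# 2 bits per base (A/C/G/T), no compression beyond that bit-packing.

base_pairs = 3.1e9          # approximate length of the human genome
bits_per_base = 2           # 4 possible bases -> log2(4) = 2 bits
size_bytes = base_pairs * bits_per_base / 8

print(f"~{size_bytes / 1e6:.0f} MB")   # roughly 775 MB, same order as 700 MB
```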
What? Huh? Yes, the human genome encodes all human-level thought.[1] Clearly it does, because the only difference between humans, who have abstract thought and language, and other primates, who don't, is slightly different DNA.
In other words: those slight differences matter.
To anyone who has used GPT since ChatGPT's public release in November and who pays to use GPT-4 now, it is clear that GPT-4 is a lot smarter than GPT-3 was.
However, to the select few who see an ocean in a drop of water, the November release already showed glimmers of abstract thought; many other people dismissed it as an illusion.
To that select few, it is apparent that OpenAI has found the magic parameters. Everything after that is just fine-tuning.
Is it any surprise that, without OpenAI releasing its weights, models, or training data, Google can't just come up with its own? Why should we expect it to, when the human neural network architecture itself, despite having been digitized twenty years ago, has never been turned into weights and models and remains unmatched (even by OpenAI)?
No, it's no surprise. OpenAI performed what amounts to a miracle, ten years ahead of schedule, and didn't tell anyone how they did it.
If you work for another company, such as Google, don't be surprised that you are ten years behind. After all, the magic formula had been gathering dust on a CD-ROM for twenty years (human DNA, which encodes the human neural network architecture), and nobody made the slightest tangible progress toward replicating it until OpenAI brute-forced a solution using the $1 billion of Azure GPUs that Microsoft poured into OpenAI in 2019.
Is your team using $1 billion worth of GPUs for three years? If not, don't expect to catch up with OpenAI's November miracle.
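For a sense of what that budget means in hardware terms, here is some rough arithmetic of my own; the hourly rate below is an assumed ballpark for datacenter GPUs rented at scale, not a quoted Azure price or anything disclosed by Microsoft or OpenAI.

```python
# Rough scale of "$1 billion of GPUs for 3 years".
# Assumption (mine): an effective cost of ~$2 per GPU-hour, a ballpark figure,
# not an official rate from any cloud provider.

budget_usd = 1e9
cost_per_gpu_hour = 2.0            # assumed ballpark
hours_per_year = 24 * 365

gpu_hours = budget_usd / cost_per_gpu_hour
gpus_for_3_years = gpu_hours / (3 * hours_per_year)

print(f"~{gpu_hours:.1e} GPU-hours")                     # ~5e8 GPU-hours
print(f"~{gpus_for_3_years:,.0f} GPUs running continuously for 3 years")
```

Under those assumptions, the budget buys on the order of half a billion GPU-hours, or roughly 19,000 GPUs running nonstop for three years. Change the assumed rate and the count shifts, but the order of magnitude is the point.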
P.S. Two months after the November miracle, Microsoft closed a reported $10 billion follow-on investment in OpenAI.
[1] https://en.m.wikipedia.org/wiki/Human_Genome_Project