
Yeah, sorry, this needed a much bigger canvas than a comment to explain. Let me try again. The example I gave was meant to show a mapping from one space to another, and it may have come across as not learning anything. You're right that it was someone else's pretrained LLM, but the new space learned the latent representations of the original embedding space. Instead of the original embedding space, it could just as well have been an image or audio representation. Neural networks likewise take input in an X space and learn a representation in a Y space.

The paper shows that any layer of a neural network can be replaced with a set of planes, that a space can be represented using those planes, and that those planes can be constructed in a non-iterative way. Not sure if I am being clear, but I have written a small blog post showing, for MNIST, how an NN creates the planes (https://gpt3experiments.substack.com/p/understanding-neural-...). I will write more on how, once these planes are found, we can use a bit representation instead of floating point values and get similar prediction accuracy, and then on how we can draw those planes without the iterative training process.
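As a rough sketch of the "layer as a set of planes" and bit-representation idea (my own toy illustration, not the construction from the paper or the blog post): each unit of a dense layer W x + b defines one hyperplane, and recording only which side of each plane an input falls on gives a bit code in place of the floating point activations.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 2))   # 8 hyperplanes in a 2-D input space
    b = rng.normal(size=8)

    def pre_activations(x):
        # standard dense-layer pre-activations: one value per plane
        return W @ x + b

    def bit_code(x):
        # 1 if the point lies on the positive side of a plane, else 0
        return (pre_activations(x) > 0).astype(np.uint8)

    x = np.array([0.3, -1.2])
    print(pre_activations(x))   # floating point view of the layer
    print(bit_code(x))          # e.g. [1 0 1 0 ...], one bit per plane

The open question the blog series is about is how to choose those planes directly for a dataset like MNIST, rather than arriving at them through iterative gradient descent.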


> how we can draw those planes without the iterative training process.

Sounds interesting, but this is the part I would need more explanation on.

Just started reading your linked blog; I see it goes into some detail there.


Will add a lot more details next week. Have been postponing it for a long time.



