Hacker News | mrbonner's comments

Don’t listen to anyone saying it is fine for reading or writing extensively with the xReal. I have one, and it is a PITA to do that over a long period. You’re better off sticking to watching videos or playing games with it.

It’s all for show, I guess. But at this point, why would anyone be surprised by it?

So this is the norm: the quantized version of the SOTA model becomes the previous model, and the full model becomes the latest model. Rinse and repeat.


Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc., instead of fighting the borrow-checker gatekeeper.

That is the style I prefer when using Rust. Coming from Python, TypeScript, and even Java, even this high-level Rust already yields an incredible improvement.
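To make the style concrete, here is a minimal sketch of what that "high-level Rust" looks like. Everything in it (the `Config` struct, the `label` function) is hypothetical, invented for illustration: owned `String`s instead of borrowed `&str` with lifetimes, `Rc` for shared ownership, and liberal `.clone()` when a mutable copy is needed.

```rust
use std::rc::Rc;

// Hypothetical example type: owned value fields, no lifetime parameters.
#[derive(Clone, Debug, PartialEq)]
struct Config {
    name: String,
    tags: Vec<String>,
}

// Return an owned String rather than a borrowed slice, so callers
// never have to reason about how long the result may live.
fn label(cfg: &Config) -> String {
    format!("{}:{}", cfg.name, cfg.tags.join(","))
}

fn main() {
    let cfg = Rc::new(Config {
        name: "service".to_string(),
        tags: vec!["a".to_string(), "b".to_string()],
    });

    // Cloning the Rc is a cheap refcount bump: shared ownership, no lifetimes.
    let shared = Rc::clone(&cfg);

    // Clone the whole struct when a private mutable copy is needed,
    // instead of threading &mut borrows through the call graph.
    let mut local = (*shared).clone();
    local.tags.push("c".to_string());

    println!("{}", label(&cfg));   // service:a,b
    println!("{}", label(&local)); // service:a,b,c
}
```

The trade-off is some extra allocation and copying in exchange for code that reads much like Python or TypeScript, which is exactly the point being made above.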


> Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc., instead of fighting the borrow-checker gatekeeper.

Yeah, that tracks, because the AI is dumb as a bag of bricks. It can apply patterns off Stack Overflow, but it can hardly understand the borrow checker.


An unverifiable software stack, now amplified by LLM nondeterminism. This whole thing is starting to feel like we are building on top of a giant house of cards!


You're talking about the Aladeen or that Aladeen? I don't understand which Aladeen you are talking about.


This is great. I think Apple bought Kuzu, an in-memory graph database, in late 2025 to support RAG in combination with their foundation models like this. Even with such a small model, a comprehensive graph-RAG context over our personal data would be sufficient for a personal-assistant system. Do we know if we can get access to this RAG data?


Godspeed AI-I


Can you at least read the article before criticizing them? They explicitly call out that they use Bayesian optimization (a Gaussian process) for this. It is "AI", but not an "LLM" like you think it is.


I am not sure whether this is an April Fools' joke anymore in the age of AI.

