mtone's comments

> Does privacy of Netflix ratings matter? The issue is not “Does the average Netflix subscriber care about the privacy of his movie viewing history?,” but “Are there any Netflix subscribers whose privacy can be compromised by analyzing the Netflix Prize dataset?”

Well said.


For this type of repetitive application I think it's common to "fine-tune" a model on your business data to reach higher quality/reliability metrics. That might not be possible with this chip.

They say LoRA finetunes work.

Just looked - Microsoft Authenticator doesn't appear to work. I might be able to get off of it but it will take some prep. My banks are supported so that's good.


Why would you use Microsoft Authenticator when there are hundreds of other apps that manage OTPs?

Use aegis https://f-droid.org/packages/com.beemdevelopment.aegis/
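Under the hood all of these apps implement the same open standard (TOTP, RFC 6238), which is why any of them can replace another. A rough sketch of the whole algorithm in bash, using the well-known documentation secret (not a real account key) and assuming coreutils + openssl are available:

```shell
#!/usr/bin/env bash
# TOTP (RFC 6238) by hand. SECRET_B32 is a throwaway example secret.
SECRET_B32="JBSWY3DPEHPK3PXP"

KEY_HEX=$(printf '%s' "$SECRET_B32" | base32 -d | od -An -v -tx1 | tr -d ' \n')
STEP=$(( $(date +%s) / 30 ))             # 30-second time window
STEP_HEX=$(printf '%016x' "$STEP")       # 8-byte big-endian counter

# HMAC-SHA1(key, counter), then RFC 4226 dynamic truncation to 6 digits
HMAC=$(printf "$(printf '%s' "$STEP_HEX" | sed 's/../\\x&/g')" \
  | openssl dgst -sha1 -mac HMAC -macopt hexkey:"$KEY_HEX" -binary \
  | od -An -v -tx1 | tr -d ' \n')
OFFSET=$(( 0x${HMAC: -1} ))
CODE=$(printf '%06d' $(( (0x${HMAC:OFFSET*2:8} & 0x7fffffff) % 1000000 )))
echo "$CODE"
```

Any app holding the same secret produces the same 6-digit code for the same 30-second window, which is the whole reason vendor lock-in to one authenticator app is unnecessary.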


Because many admins are horrible and disable TOTP for "security".

My uni does it, and I've had to use the only alternative option, a cell call. I rigged Tasker to automatically answer and play the needed tone so I don't need to carry it with me.


Good question. That was for my MS account/licenses and some Azure stuff. I use Google Authenticator for most things.

Thanks for the link, I'll take a look. I might just move it to a secondary device first.


Microsoft Authenticator should work on GOS; I can only find a single person saying it doesn't, and there are plenty of reasons it might not work for them (VPN, too-strict exploit protection settings). And there are multiple people mentioning it working fine.


Microsoft Authenticator works on my GrapheneOS (I have the Play Services, not sure if it matters).


> if you could exclude all of the R&D and training costs

LLMs have a short shelf life. They don't know anything past the day they're trained. It's possible to feed them updated data or fine-tune them a bit, but their world knowledge and views are firmly stuck in the past. It's not just news - they'll also trip up on new syntax introduced in the latest version of a programming language.

They could save on R&D but I expect training costs will be recurring regardless of advancements in capability.


Recently llama.cpp made a few common parameters defaults (-ngl 999, -fa on), so it got simpler: --model, --ctx-size, and --jinja generally do it to start.

We end up fiddling with other parameters because they provide better performance for a particular setup, so it's well worth it. One example is the recent --n-cpu-moe switch, which offloads experts to the CPU while filling all available VRAM; it can give a 50% boost on models like gpt-oss-120b.
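For reference, a minimal invocation along those lines might look like this (the model filename, context size, and --n-cpu-moe value are placeholders to tune for your hardware):

```shell
# -ngl 999 and -fa on are defaults now, so a basic start is just:
llama-server --model gpt-oss-120b.gguf --ctx-size 16384 --jinja

# Same, but keep N expert layers on the CPU so the rest fits in VRAM:
llama-server --model gpt-oss-120b.gguf --ctx-size 16384 --jinja --n-cpu-moe 20
```

Raising or lowering the --n-cpu-moe count trades CPU offload against VRAM headroom, so it's worth bisecting until VRAM is just about full.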

After tasting this, not using it is a no-go. Meanwhile on Ollama there's an open issue asking for this: https://github.com/ollama/ollama/issues/11772

Finally, llama-swap separately provides the auto-loading/unloading feature for multiple models.


Do you really need an H200 for this? It seems like something a consumer GPU could do. Smaller models might be ideal [0], as they don't require extensive world knowledge and are much more cost-efficient/faster.

Why can't you build this today?

[0]: https://arxiv.org/pdf/2506.02153 Small Language Models are the Future of Agentic AI (Nvidia)


Related discussion on Anubis: https://news.ycombinator.com/item?id=43427679


I use Readdle Documents to sync PDF folders with my server PC via FTP. The free version supports PDF highlighting & simple annotations and basic file management, and it automatically syncs everything back.


Assuming it's correct, I think this answer explains it well: https://stackoverflow.com/a/31996121/283879

Basically, yes: it may write snapshots per file (never per commit) locally, but there is a separate routine that transparently repacks the whole thing with deltas.
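That repack routine is easy to watch in a throwaway repo (sketch assumes git and a POSIX shell; the repo lives in a temp dir):

```shell
# Two commits touching the same file -> several loose objects,
# each blob stored as a full snapshot.
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo
printf 'one\n' > file.txt && git add file.txt && git commit -qm first
printf 'one\ntwo\n' > file.txt && git commit -qam second
git count-objects -v      # "count:" shows the loose snapshot objects

# gc runs the repack: loose objects move into a single
# delta-compressed packfile under .git/objects/pack
git gc -q
git count-objects -v      # loose count drops to 0
ls .git/objects/pack/
```

The same repacking happens automatically once enough loose objects accumulate (git calls it "auto gc"), which is why you rarely see it.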


There was talk about introducing type syntax as valid but ignored in the JS language, making TS valid JS.

It would take forever to become mainstream, but if Node and major browsers started supporting this tomorrow, then along with ESM modules we could drop TS compilation and bundling entirely during development, safely publish npm packages as TS (even bundled TS), and simplify tooling for monorepos, IDEs, etc.

Unfortunately that wouldn't solve dealing with templates like JSX/TSX or future language syntax/features.

https://devblogs.microsoft.com/typescript/a-proposal-for-typ...


Right, yeah, that solution would be many years out and doesn't work for JSX. As opposed to compiling to JS/JSDoc, which could be done today and should solve our problem of stepping into npm package code without hitting dead ends.

