Comments are marked dead by automatic processes, not through downvotes. They're dead before anyone sees them, and you can't vote on a dead comment. amangsingh's comments have probably triggered some automated moderation. Probably at least partially because they sound LLM-generated.
Spot on regarding the automod. Unfortunately, the way I naturally structure my writing almost always triggers a 50/50 flag on AI content detectors. It is the absolute bane of my existence.
The filter instantly shadowbanned the Show HN post when I submitted it, which is why the link was dead for a while. Thankfully, human mods reviewed it and restored it. It's been fully live for a while now!
Noticed that and was wondering, thanks for the explanation. Does this imply that human people need to go “vouch” for the flagged comments to bring them back into HN’s good graces?
Swappa is great for sellers and bad for buyers. They use a PayPal marketplace product and have no actual authority in a transaction; they just match buyers and sellers. If the phone is not as described, you'll have to pay return shipping yourself.
Yeah. The response to the issue of the LLM cheating should be removing the LLM's access to the ledger. If the architecture allows the LLM access to the ledger, I have zero reason to believe any amount of cryptography will prevent the cheating. Talk about bloat. The general idea seems salvageable, though.
Sibling comment from OP reads very much as LLM-generated.
To clarify the architecture: The LLM doesn't have access to the ledger. That’s the entire point of Castra.
The LLM only has access to the CLI binary. The SQLite database is AES-256-CTR encrypted at rest. If an LLM (or a human) tries to bypass the CLI and query the DB directly, they just get encrypted garbage. The Castra binary holds the device-bound keys. No keys = no read, and absolutely no write.
As for the 'LLM-generated' comment; I’m flattered my incident report triggered your AI detectors, but no prompt required. That’s just how I write (as you can probably tell from my other replies in the thread). Cheers :)
> in many MBs that will halve the bandwidth of the PCIe slots
Not on boards that have 12 channels of DDR5.
But yeah, squeezing an LLM from RAM through the PCIe bus is silly. I'd expect it to be faster to just run a portion of the model on the CPU, llama.cpp-fashion.
It is much faster, yeah. llama.cpp supports swapping between system memory and the GPU, but it's recommended that you don't use that feature: it's rarely the right call versus just doing inference on the CPU for the parts of the model that live in system memory.
Edit: the setting is "GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"... useful if you have unified memory, very slow if you do not.
What key would be a good candidate as a default though? Imagine the memes for exiting vim if you needed a modifier to get into normal mode. Caps lock is truly a useless key and should be escape anyway.
As someone who cut their teeth on a Sun "programmer" layout, I really need control to be in that position. I might try mapping the vestigial control key to escape, though. Or maybe the hack that dtj1123 describes (tap is escape, hold is control), if I can pull that off on macOS.
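On macOS the usual route for tap-vs-hold is Karabiner-Elements; a complex-modification rule along these lines should do it (a sketch, untested, assuming it's the left control key you want to dual-purpose):

```json
{
  "description": "Left control: escape when tapped, control when held",
  "manipulators": [
    {
      "type": "basic",
      "from": { "key_code": "left_control", "modifiers": { "optional": ["any"] } },
      "to": [ { "key_code": "left_control" } ],
      "to_if_alone": [ { "key_code": "escape" } ]
    }
  ]
}
```

Swap in "caps_lock" for the `from` key if you'd rather sacrifice that one instead.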
<ctrl-[> always works out of the box, which is less of a stretch than esc.
I do jk as I always find a roll easier on the fingers than a double-tap jj or kk. You could also use space provided you aren't using one of those distros that bases its identity on the spacebar.
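For anyone who hasn't set this up, the jk escape is a one-line insert-mode mapping in your vimrc:

```vim
" Roll jk to leave insert mode instead of reaching for Esc
inoremap jk <Esc>
```

The only cost is a brief pause after typing a literal "j" in insert mode while vim waits to see if "k" follows.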