reginald78's comments

I'm assuming they're still using the phone in some capacity in what they thought was offline mode. What they really need are phones with hardware switches for all the radios, which of course barely exist as a product. If a Faraday bag worked for them, they'd probably be better off just removing the battery altogether when they don't need the phone (removable batteries also aren't that common anymore).

It speaks to how poorly suited mobile devices are for soldiers on an active modern battlefield. Not only do they require discipline and technical training to avoid leaking positions, but most of them lack the capability to prevent leaking at all, no matter how well trained you are.


The fact that they have guardrails to try and prevent it means OpenAI itself thinks it is at least shady, or outright illegal in some way. Otherwise why bother?


IIRC with XP they late-loaded a lot of things to get the desktop showing faster than 2000. My experience at the time was that while the desktop might have loaded faster, it wasn't actually usable for quite a while after I was looking at it, but that might have had more to do with all the crapware on the XP machines I used back then.

XP definitely needed more RAM than 2000 to function acceptably. I remember 128 MB being slow but tolerable on 2000 and absolutely brutal on XP.


> My experience at the time was that while the desktop might have loaded faster, it wasn't actually usable for quite a while after I was looking at it

This. I think the fast-boot stuff probably seemed good on the development machines used at Microsoft, but on the cheap computers loaded with OEM garbage that were being pushed as capable of running XP, it mostly loaded the desktop and then locked up for several minutes while it finished actually booting.


> IIRC with XP they late-loaded a lot of things to get the desktop showing faster than 2000

https://en.wikipedia.org/wiki/Prefetcher - that's the feature they introduced in XP to speed up loading the system and programs.


Windows 9x suffered from DLL hell: every time a program was installed, it could potentially overwrite shared DLLs with a different version, often an older or incompatible one. Windows 2000/XP redirected the installer's DLLs into a per-program location instead, preventing this, which is a large part of why those versions were so much more stable.

Most people recommended a complete reinstall every 6 months well through the XP era, but I found that was hardly ever necessary after I switched to 2000. Conversely, during my 98 days I never had to schedule reinstalls; by that point Windows had already rotted apart enough to force one on me!


I definitely remember the DLL hell experience: an older 2D game overwrote some DirectX DLLs in the OS with older versions, and suddenly all my FPS games stopped working.

That was a fun one to troubleshoot as a 12 year old kid.


This happened to me with Minecraft. It was amazing what a pain in the ass they made it just to give them some money, and then the gaslighting and hassle I had to endure to get what I paid for.

I only play that game single player.


Anubis (or something similar) is an alternative option: https://github.com/TecharoHQ/anubis

Aside from the obvious disadvantages of a non-anonymous web, I also don't think it will work. How do you deal with identification and punishment of threat actors across the globe? We've been failing at that since the start. When was the internet ever high trust?


>When was the internet ever high trust?

In the 1970s and 1980s.


I thought this as well reading the last discussion. I believe some extra-shady free VPNs have used a browser extension to borrow your endpoint to work around geoblocks, etc. I always thought that was a terrible idea; who wants their home internet IP associated with some random VPN user's traffic? A voracious, mindless bot that slurps up everything it can reach isn't much better.

Microsoft could even build this into Windows; they already use your upload bandwidth to help distribute their updates.


I never understand those charts. To me a 10 is a state that only briefly exists before I pass out from agony. If I were at a 7 on the pain scale you wouldn't need to ask me; it would be obvious.


I’ve never understood that scale. Is a “10” the worst pain I’ve ever experienced in the past or the worst pain I can imagine? Either way, how can my relative approximation to that “10” be enough information for the doctor to decide what to do next?


It’s much easier and more fruitful to ask “mild, moderate, or severe?” regarding pain. It frames the question in terms of how it affects you instead of trying to relate it to other types of pain you may or may not have experienced before.


Of course, relevant xkcd: https://xkcd.com/883/


I’ve had what I was told is a 10; you don’t always pass out, unfortunately.


As a medical doctor friend of mine used to say, if the patient is still screaming they can't be experiencing 10/10 pain.


Live with that 7 for years and it won’t be so obvious


And one of the developers of passkeys threatened to use the specified attestation anti-feature to blackball KeePassXC's implementation when they made something that wasn't locked-in enough.

https://github.com/keepassxreboot/keepassxc/issues/10407

There have been some discussions to create an export standard since then but I remain skeptical. Why was this not part of the original spec but the ban hammer was? Depending upon how this standard is implemented I can easily see it preventing export to anything but Google, Microsoft and Apple's implementations. And it still leaves the attestation badness in place.


I was referring to device bound discoverable credentials and saying all implementations that an average Joe will run across have a sync fabric deliberately. Platform lock-in is a different thing.

AFAIU the attestation referred to here won’t be signed, so any implementation can say anything. It’s just supposed to be used for things like showing the user a logo so they know where their passkey is stored.


Maybe I'm missing something, but doesn't this mean the work has to be done by the client AND the server every time a challenge is issued? I think ideally you'd want work that was easy for the server and difficult for the client. And what stops the server from being DDoS'd by clients that are challenged but never complete the challenge?

Regardless, I think something like this is the way forward if one doesn't want to throw privacy entirely out the window.



The magic of proof of work is that it's something that's really hard to do but easy to validate. Anubis' proof of work works like this:

A sha256 hash is a bunch of bytes like this:

  394d1cc82924c2368d4e34fa450c6b30d5d02f8ae4bb6310e2296593008ff89f
We usually write it out in hex form, but that's literally what the bytes in RAM look like. In a proof of work validation system, you take some base value (the "challenge") and a rapidly incrementing number (the "nonce"), so the thing you end up hashing is this:

  await sha256(`${challenge}${nonce}`);
The "difficulty" is how many leading zeroes the generated hash needs to have. When a client requests to pass the challenge, they include the nonce they used. The server then only has to do one sha256 operation: the one that confirms that the challenge (generated from request metadata) and the nonce (provided by the client) match the difficulty number of leading zeroes.

The other trick is that presenting the challenge page is super cheap. I wrote that page with templ (https://templ.guide) so it compiles to native Go. This makes it as optimized as Go is modulo things like variable replacement. If this becomes a problem I plan to prerender things as much as possible. Rendering the challenge page from binary code or RAM is always always always going to be so much cheaper than your webapp ever will be.

I'm planning on adding things like changing out the hash in use, but right now sha256 is the best option because most CPUs in active deployment have instructions to accelerate sha256 hashing. This combined with webcrypto jumping to heavily optimized C++ and the JIT in JS being shockingly good means that this super naïve approach is probably the most efficient way to do things right now.

I'm shocked that this all works so well and I'm so glad to see it take off like it has.


I am sorry if this question is dumb, but how does proof of work deter bots/scrapers from accessing a website?

I imagine it costs more resources to access the protected website, but would this stop the bots? Wouldn't they be able to pass the challenge and scrape the data afterwards? Or do normal scraper bots usually time out after a small amount of time/resources is used?


There are a few ways in which bots can fail to get past such challenges, but the most durable one (i.e. the one you cannot work around by changing the scraper code) is that it simply makes it much more expensive to make a request.

Like spam, this kind of mass-scraping only works because the cost of sending/requesting is virtually zero. Any cost is going to be a massive increase compared to 'virtually zero', at the kind of scale they operate at, even if it would be small to a normal user.
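
As a purely illustrative back-of-envelope (my numbers, not Anubis'): if a challenge takes on the order of one CPU-second to solve, a scraper firing ten million requests a day suddenly needs about ten million CPU-seconds, roughly 115 CPU-days of compute every day, while a human loading a handful of pages pays a second or two once.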


Put simply, most bots just aren't designed to solve such challenges.


> I think ideally you'd want work that was easy for the server and difficult for the client.

That's exactly how it works (easy for the server, hard for the client). Once the client has completed the Proof-of-Work challenge, the server doesn't need to complete the same work; it only needs to validate that the result checks out.

Similar to Proof-of-Work blockchains, where coming up with the block hashes is difficult but validating them isn't nearly as compute-intensive.

This asymmetric computation requirement is probably the most fundamental property of Proof-of-Work; Wikipedia has more details if you're curious: https://en.wikipedia.org/wiki/Proof_of_work

Fun fact: Proof-of-Work was apparently used as a DoS-prevention technique before it was used in Bitcoin/blockchains, so it seems we've gone full circle :)


I think going full circle would be something like Bitcoin being created on top of DoS-prevention software and then DoS prevention eventually starting to use Bitcoin. A tool being used for one thing, then something else, then the first thing again is just... nothing? Happens all the time?

