"Cookies & Tracking - This site has no cookies and no tracking.
CSS & JavaScript - CSS, JavaScript and HTTPS are not required. This site works in text and vintage browsers."
I just love this attitude from a webmaster, and the Discmaster site works just as expected: straightforward and eminently effective.
Now compare this with a similar archive site that appeared on HN about a day ago—one dedicated to the works of the analytical philosopher Willard Van Orman Quine. That excruciatingly slow, JavaScript-ridden site was so bad that I tried three different browsers before I could get to the archive files, and even then I couldn't have downloaded anything if I'd wanted to. Frankly, it is a horrible site; one wonders why they bother when they make the user experience so utterly bad.
If one ever wants to see the damage bloated JavaScript has done to the web, there's no better example than comparing these two sites side by side.
If I were a hacker, I could try to MITM this site, which serves executables like games and applications, and inject something nefarious. I don't see not having HTTPS as a positive thing.
TLS allows the browser to detect content (like scripts) injected into the original web pages by (potentially) malicious third parties. Or at least by third parties who aren't using certificates issued by roots explicitly trusted by the browser software. Before HTTPS came into use, it was not an exceptional condition for third parties to insert content (ISPs inserted ads, libraries inserted cookies, BGP hijackers just looked at traffic). One of the reasons SHTTP and HTTPS were invented was to allay banks' fears that entering credit card numbers over non-secure links would increase the rate of fraud. But the main reason (I believe) we use HTTPS today on sites that don't take CC#s is that we cannot enumerate every potential risk. Just because we cannot foresee a risk doesn't mean it doesn't exist. In other words... "better safe than sorry."
Do you recall the days when it was common practice for ISPs to MITM all traffic?
The most obvious and profitable use case was ISPs that returned 404 pages riddled with advertisements for domains that weren't registered (e.g. if you mistyped a URL).
Certain ISPs even injected JavaScript into every page.
“I” in the OP’s comment probably means “government” or “ISP” rather than a friendly roommate on the same wifi network.
France isn't listed, but Germany, the Netherlands, and the United States (obviously) are. This was something that commonly happened before TLS became standard.
Too obvious - just give it the same ESSID and password as your local coffee shop's network, and let the "Connect Automatically" setting on Windows do the rest.
Many if not most WiFi clients send the list of SSIDs they want to connect to in cleartext, in the directed probe request frames they emit during active scanning. I expect there are attackers automatically advertising these networks to lure devices in.
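A quick way to see this for yourself is to put a wireless card into monitor mode and watch the probe requests go by. Here's a rough sketch using scapy (the interface name is an assumption, and obviously only monitor traffic you're permitted to):

```python
# Rough sketch: print the SSIDs that nearby clients actively probe for.
# Assumes scapy is installed and "wlan0mon" is a card in monitor mode;
# only use this on hardware/airspace you are allowed to monitor.
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        elt = pkt.getlayer(Dot11Elt)
        # Information element ID 0 is the SSID; directed probes carry it in cleartext.
        if elt is not None and elt.ID == 0 and elt.info:
            print(pkt.addr2, "is looking for", elt.info.decode(errors="replace"))

sniff(iface="wlan0mon", prn=show_probe, store=False)
```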
This is far from obvious to pull off, and hardly a scalable MITM.
These are always the same tricks that work in a lab, but when it comes to doing an actual MITM against a real website, there is no practical way to do it.
If my website is www.example.com, you would need to get very close to the site (network-topologically speaking) and insert yourself into the traffic. In practice that is not doable, unless you hack the local network where the web server lives (specifically some of its equipment) or the server itself.
Or hack the DNS server.
None of these are remotely easy with basic security in place.
My remark was tongue-in-cheek. I'm aware that this would only go as far as appearing as a bridge router, which Windows often designates with a number after the name (e.g. WittyNetworkName 2). It's not very subtle, and you would likely only intercept a fraction of the traffic from a client, as they would default to directly contacting the host where possible.
HTTPS isn't always needed, and the idea that it is needs to die. In most use cases it does make sense to use HTTPS, but there is a small number of use cases where it doesn't.
You want your website to work on vintage computers running super old browsers? Then you probably need an HTTP version of the website without TLS/SSL, as it's not going to be accessible otherwise.
Running a software/package repository/registry where every package is signed and verified locally? No need for TLS/SSL; it would just slow down downloading thousands of packages, since handshaking adds latency to requests.
But again, in probably 99% of cases it's better to have HTTPS than not. Sometimes, though, not having HTTPS is indeed a positive thing.
> You want your website to work on vintage computers running super old browsers? Then you probably need an HTTP version of the website without TLS/SSL, as it's not going to be accessible otherwise.
That's incredibly niche. I don't think it deserves to be an example of "meme needs to die".
> Running a software/package repository/registry where every package is signed and verified locally? No need for TLS/SSL; it would just slow down downloading thousands of packages, since handshaking adds latency to requests.
Need some secure way to pass around the SHA256 (or whatever) hash you're using for verification.
And thinking of "memes that need to die", https://istlsfastyet.com suggests this isn't as bad as I think you think it is.
The very website we're talking about in this submission is specifically made for vintage computers! But I agree, relatively niche.
> Need some secure way to pass around the SHA256 (or whatever) hash you're using for verification.
That gets passed out-of-band before actually downloading anything from the registry. In the case of Arch, every package is signed by developer keys that are securely fetched on initial install, and later fetched as a package.
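In other words, integrity comes from verifying the content against something obtained over a trusted channel, not from the transport. A minimal sketch of the idea in Python (the file name and expected hash are made-up placeholders, not real Arch artifacts):

```python
# Minimal sketch: verify a downloaded package against a SHA256 hash that was
# obtained out-of-band (e.g. from trusted metadata fetched earlier).
import hashlib

EXPECTED_SHA256 = "d2c76e1b..."  # placeholder; the real value comes from the trusted metadata

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("example-pkg-1.0.tar.zst") != EXPECTED_SHA256:
    raise SystemExit("hash mismatch: refusing to install")
```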
Less niche use cases are latency-sensitive things done over HTTP (which overall I wouldn't recommend, but sometimes you're stuck with third-party stuff); adding the handshake can easily double the time to establish the connection.
For example, on one website I have, the TCP connection can be established in ~40ms while the SSL/TLS handshake takes ~100ms, so because of SSL/TLS the connection setup time more than doubles. If I were serving signed packages over this connection, I could easily skip the handshaking part, and when you download thousands of packages that saves some time. I don't think that use case is as niche as you think.
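If anyone wants to reproduce that kind of measurement, here's a rough sketch that times the TCP connect and the TLS handshake separately (www.example.com is just the placeholder host from above; your numbers will depend on distance and server configuration):

```python
# Rough sketch: time the TCP connect and the TLS handshake separately.
import socket
import ssl
import time

host = "www.example.com"  # placeholder from the thread; substitute your own host

t0 = time.monotonic()
sock = socket.create_connection((host, 443), timeout=5)
tcp_ms = (time.monotonic() - t0) * 1000

ctx = ssl.create_default_context()
t1 = time.monotonic()
tls_sock = ctx.wrap_socket(sock, server_hostname=host)  # handshake happens here
tls_ms = (time.monotonic() - t1) * 1000
tls_sock.close()

print(f"TCP connect: {tcp_ms:.0f} ms, TLS handshake: {tls_ms:.0f} ms")
```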
Parkinson's Law, paraphrased: "work expands so as to fill the time and budget available for its completion."
I see this so often in my line of work (software development, consultancy): companies throw millions at a solved problem like e-commerce, the over-enthusiastic engineers jump on it but won't just pick something off the shelf; they re-engineer it in $cool_tech because just solving the problem is boring.
It's basically funded by institutions in and around that department of the university, so lots of Austrian grants and such went into letting people create this system. I checked it out because I happen to be doing research on this type of software for a client, so I was curious whether there was an open-source version (there didn't seem to be) and wanted to see what it was about.
Since there isn't an open-source version, I didn't dive deeper into its slowness or its architecture.
JavaScript has little to do with the awful design of that site. Also, as of today, the Discmaster site is down, likely because its design is so heavily search-focused.
As I said to prox, I took the site at face value (I didn't view the source; couldn't be bothered). However, I'm now mildly curious why sites like this are so slow. Has anyone done a decent, objective analysis of load speed versus actual content versus the other extraneous dross?
Clearly, it would be easy to end up comparing 'apples' with 'oranges', which makes it quite a difficult analysis to get real figures out of.