> John gives up. Concludes never to touch Node, npm, or ES6 modules with a barge pole.
> The End.
> Note that John is a computer scientist that knows a fair bit about the Web: He had Node & npm installed, he knew what MIME types are, he could start a localhost when needed. What hope do actual novices have?
The ones who don't give up on the first night will likely have more luck (try, try again).
As someone with little experience in Rust, I can say that I have had no trouble downloading, compiling, and running binaries from a crate. The only hard part for me is organizing my own crate correctly.
Rust is definitely known for having a sharp learning curve, but I'm not sure what you mean about the package system. I've onboarded several people onto Rust at my job over the past year, and the package system has never been an issue for any of them.
Or, you know, Python, Ruby, Java, Go, C#, Swift, Kotlin, etc., etc., or any number of other languages that are less of a foot-gun full of bizarre issues, poorly maintained packages, and droves of Jr. devs eager to do clever things that make the language and ecosystem psychotic.
Python is good at many things, but packaging is not one of them. The best Python package manager at the moment is Poetry, and it drew most of its inspiration from npm and yarn...
Python packaging has historically been so bad that dotcloud invented docker in an attempt to make it usable.
But at least I can just download a .py file (or a bunch of files) and just import them. Perhaps the biggest frustration/pain point in this article isn't npm as such, but that you can't "just" use some JS module.
Nothing would stop you from doing that (import thing from './downloaded.js') but it's Just Not Done That Way and packages aren't really built with that in mind, so it probably wouldn't work very well in practice.
You definitely can. On the server side, you can literally do that. On the client side, you can "import" one with a script tag, and increasingly you can use modules in browsers too.
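For example (the file and export names here are hypothetical, just to sketch the idea):

```js
// Node with "type": "module" in package.json, or any ES-module-aware setup:
// a downloaded file can be imported directly by relative path, no npm involved.
import thing from './downloaded.js';

// In a browser, the same file can be loaded as a module without any tooling:
//   <script type="module" src="./main.js"></script>
// where main.js contains the same relative import shown above.
console.log(thing);
```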
I've played around with a bunch of languages, and Rust's is unequivocally the best experience I've had so far. Cargo works as intended, literally straight out of the box, and crates.io and Cargo.toml were straightforward to figure out. Certainly way less frustrating than figuring out how to use Python virtualenv when I was first starting.
that's fair, but newer languages like Rust have the benefit of hindsight, and can mandate one way to do things. to be clear, i'm all for that.
i don't think it's possible to do that any more with Python, the genie is out of the bottle. but you can get close with things like Black (auto-formatting) and Poetry (nicer deps mgmt and so much more). of course, how would a beginner know this? hopefully some day tools like that will become the default answers
Exactly. Not once in their diatribe did they provide a reason why they need those permissions. The fact that no one there knew why they were asking for those permissions in the first place is a huge red flag for me.
And why in the world are they asking for the cookies permission? That's a big, fat nope for me. It's as if they don't understand what they are asking for and the potential implications of passing that data around so haphazardly.
These folks need to take another hard look in the mirror before they point the finger, because their own house is way out of order.
> Of course it's worse when the user thinks the connection is encrypted when he actually has no idea who he's talking to.
If a website previously using a self-signed certificate switches to plain HTTP - how will that help me verify the identity of the server the next time I visit?
By removing the self-signed certificate, not only am I still unable to verify the identity of the server, but now my traffic is in plaintext for anyone on the local network to trivially intercept (in addition to whatever stranger I'm sending it to on the other end).
I understand your sentiment, and I know the slippery slope that you are referring to when you say that it's a dangerous mindset to be okay with unverified certificates. Unencrypted communication however, is not a solution to that problem.
> but now my traffic is in plaintext for anyone on the local network to trivially intercept
If they are able to trivially intercept your network traffic, they are probably also able to modify it (=> hijack untrusted HTTPS), or what scenario am I missing here?

Of course unencrypted communication isn't a solution if your goal is to have secure communication. But neither is untrusted communication. Either it's secure or not. You can't have something in-between. The browser would have to display an icon that says "This connection is secure but actually we don't really know so maybe it isn't". What are you supposed to make of such information?
So, a big concern which drove much of the adoption of HTTPS and other security technologies for the Internet is mass public surveillance, often justified as for "national security" purposes.
The NSA for example is known to just suck up all the traffic it can get and put it in a pile for later analysis.
Maybe your mention of "Make a bomb in chem class tomorrow" was just a joke to a close friend about how much you hate school, and maybe an analyst will realise that and move on when they see it, but civil liberties advocates think it'd be better if that analyst couldn't type "bomb" into an NSA search engine and see every mention of the word by anybody in your city in the last six weeks. I agree.
Americans tried just telling the NSA not to collect this data, but the whole point of spooks is to do this stuff; short of terminating the agency, they were always going to collect it - it's in their nature. So the practical way forward is to encrypt everything.
A TLS connection can't be snooped. Only the participants get to see the data. The NSA isn't going to live-MITM every single TLS connection, so even with self-signed certificates the effect is that you prevent mass surveillance.
A targeted attack will MITM you, no doubt, and so that is the reason to insist on certificates, but it's wrong to insist as you do that there's no benefit without them.
> it's wrong to insist as you do that there's no benefit without them.
Ok, that wasn't really my intention. I was stating that a false sense of security is worse than having (knowingly!) no security at all.

So yes, I agree: you're generally better off even with untrusted encryption, but that doesn't help in practical terms with our current situation of HTTPS in web browsers. Maybe it would have been better if browsers had just silently accepted self-signed certificates while still showing the big red warning about an insecure connection. I guess that will be solved with QUIC/HTTP3.
> a false sense of security is worse than having (knowingly!) no security at all.
Agreed. If you know that you are insecure you're less likely to pass sensitive information over the connection.
IMO the culprit is browser behavior. For instance, when visiting unencrypted HTTP sites in Chrome you may or may not notice an unobtrusive, greyed out "Not Secure" label in the URL bar. Visit your own self-signed certificate dev site though, and Chrome will give you an error wall with nothing to click, and you have to type "thisisunsafe" to pass (the page does not tell you that typing "thisisunsafe" will get you through).
Perhaps the reasoning is that if a site is served unencrypted it shouldn't be serving sensitive information, whereas an invalid certificate is an easy indicator of something amiss... but wow, talk about obtuse.
Your concern is definitely valid though, and I'm concerned about it too.
The brick wall is only present if the site has HSTS or similar requiring HTTPS. (It's unfortunate that there have been overrides in Chrome under phrases including 'badidea', because they encourage people to use them; the correct design here is to make the brick wall unpassable, that's why we built it in the first place.)
If the site doesn't seem to require HTTPS but you've gone there with HTTPS and there's no trustworthy certificate, then the browser gives you a different interstitial, which has a button labelled Advanced that reveals a link "Proceed to ... (unsafe)"; that will switch off further interstitials for this site but retain the "Not Secure" labelling.
The HTTPS site (once you reach it) gets access to all modern features; an HTTP site, even if you ignore all the warnings, does not. As an example that's particularly unsubtle, calls to all the WebAuthn APIs just give back an error as if the user had thumped "Cancel".
Edited to add: Also the grey "Not secure" is changed to red if you seem to interact with a form, because that's probably a terrible idea on an HTTP site. Eventually I expect it will just always be red (the change to notice form interactions was in 2018 and this is part of a planned gradual shift by Chrome and other browser vendors).
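To illustrate the secure-context gating mentioned above, here's a minimal sketch (not anyone's production code; the exact error a blocked WebAuthn call surfaces varies by browser):

```js
// On a plain-HTTP origin (other than localhost), isSecureContext is false and
// secure-context-only features such as WebAuthn are unavailable.
if (!window.isSecureContext) {
  console.warn('Not a secure context: WebAuthn and similar APIs will not work here.');
} else if (window.PublicKeyCredential) {
  // Only here is it meaningful to call, e.g.:
  // navigator.credentials.get({ publicKey: /* server-supplied options */ });
  console.log('Secure context: WebAuthn is available.');
}
```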
Everything you're saying is true, but it doesn't change the fact that HTTPS with a self-signed certificate is more secure than HTTP.
It took Letsencrypt to make HTTPS accessible to the majority of the web, because there was no cheap way before: self-signed certs were punished by browsers while unencrypted connections were fine. We could have been well into moving from an encrypted (self-signed) web to a trusted (CA) web by now, instead of moving from a plain-text web to a trusted one.
Also, self-signed certs still prevent a MITM if you have ever connected to the site before, similar to the trust-on-first-connection behavior of SSH. Given the widespread deployment and trust of SSH, I'm surprised people act so differently with HTTPS.
> Unencrypted communication however, is not a solution to that problem.
Could you point out who you are responding to who said that unencrypted communication is a solution to the problem? This strikes me as a straw man argument.
I've played with audio software for a long time and I recently experimented with Active Noise Cancellation. There are a few things to keep in mind:
* A "live" ANC process has no control over the environment from which it receives the audio signals that it acts adversarially against.
* When transmitting audio waves from one medium to another, there will be latency. Perhaps not much, but it will be there.
If you accept these two positions, then consider this:
* What happens when a sound wave that is being combatted (via phase inversion) suddenly stops, or inverts its own phase? That's right, ANC could potentially double the amplitude of the frequency being combatted (see the sketch after this list).
* I imagine that ANC technology takes advantage of latency to ensure that it doesn't damage people's hearing, but the nature of ANC requires low latency in general, otherwise you can't be sure that you are combatting the correct frequency (at which point you risk doubling the amplitude due to abrupt changes) - if someone more familiar with the actual algorithms could chime in and correct me, I will happily stand corrected :)
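A toy numerical sketch of that failure mode (plain JavaScript, a made-up signal, nothing like a real-time ANC DSP path):

```js
// Build one cycle of a sine "noise" signal and its phase-inverted anti-noise.
const N = 8;
const noise = [];
const antiNoise = [];
for (let n = 0; n < N; n++) {
  const s = Math.sin((2 * Math.PI * n) / N);
  noise.push(s);
  antiNoise.push(-s); // ideal anti-noise: same amplitude, inverted phase
}

// Ideal case: noise + anti-noise cancels (every sample sums to ~0).
const cancelled = noise.map((s, i) => s + antiNoise[i]);

// If the source abruptly inverts its own phase while the stale anti-noise
// keeps playing, the two now add constructively: double the amplitude (+6 dB).
const flipped = noise.map((s) => -s);
const reinforced = flipped.map((s, i) => s + antiNoise[i]);

console.log(cancelled.map((x) => x.toFixed(2)));  // ~0.00 everywhere
console.log(reinforced.map((x) => x.toFixed(2))); // 2x the flipped samples
```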
So first off, doubling the amplitude is a 6dB increase in SPL. So not that bad actually.
Second off, instantaneous sound is only a health issue when it's really loud, like a gunshot (130-140 dB SPL), near the point at which the eardrum ruptures. That means that you need to be in an environment where the background noise is dangerously loud to begin with, and because of the way sound is made, this might be unlikely. Which is interesting, because early ANC did have these problems - when it was being used initially for military applications (helicopter/tank pilots iirc).
Lastly, the important thing to remember is that ANC is usually part of a dual-pronged approach to ear protection. Latency is a problem when you need to cancel high frequencies (where you get past about a quarter wavelength and interference can become constructive), but ANC excels at low frequencies (below about 500 Hz-1 kHz it can even be remarkable). This is great because passive reduction strategies (sealing off the ear, thick padding, good fit/headband adjustment) are much more effective at high frequencies.
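For concreteness, both numbers above can be checked directly (assuming the speed of sound in air, c ≈ 343 m/s):

$$\Delta L = 20\log_{10}\!\left(\tfrac{2A}{A}\right) = 20\log_{10} 2 \approx 6\ \mathrm{dB}$$

$$\frac{\lambda}{4} = \frac{c}{4f} \approx \frac{343\ \mathrm{m/s}}{4 \times 1000\ \mathrm{Hz}} \approx 8.6\ \mathrm{cm}\ \text{at 1 kHz}, \qquad \approx 17\ \mathrm{cm}\ \text{at 500 Hz}$$

So at 1 kHz the anti-noise has to be placed and timed to within a few centimetres' worth of path length, which is why ANC is so much more forgiving at low frequencies.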
So TL;DR: it was a problem, it has been fixed, and the situations where it might still happen are pretty rare for a consumer.
Also noise rarely spontaneously inverts phase at a particular frequency. That'd be weird.
Yeah, you're right about ear damage coinciding with exposure time - short bursts would have to be very loud to cause real damage.
And yes, it would be weird if a frequency range spontaneously inverted - the only scenario I can imagine that happening in is some jerk doing it on purpose.
The reason I became interested in ANC was that every night I would hear a terrible frequency being emitted from the air conditioner units above me (top-floor apartment building). During my experiments I quickly realized how hopeless it would be to combat it effectively, due to the varying intensity of the sound throughout my apartment, the dynamic interactions of the sound with itself within my echoey wood-floor studio, and my location at any one time. All valid points though, thanks for chiming in. I learned more about ANC :)
Edit: Btw my goal was ANC via speakers, not headphones. Headphones would be much easier since they only have a single, summed audio source.
If by that you mean it couldn't be louder than what the speaker could produce: no, because you're adding to the original noise entering your ear from the outside.
Will the physical sound waves that I produce with my loud speakers cause Active Noise Control enabled headphones to "engage" with me? Could I adversarially engage with them at that point?
While the noise cancellation is active it will attempt to neutralize (destructively interfere with) sounds from the outside, including those generated by your speaker. You could indeed adversarially engage with it at that point, either through something like a spontaneous phase shift (so the interference becomes constructive, making the resulting signal louder) or by generating a frequency the ANC can't compensate for.
Another possibility for managing this would be to use a puppet agent / master setup, and use puppet directives to pin sensitive packages (i.e. the ones that comprise your application) to specific versions while allowing the rest of the system to update accordingly (assuming the pinned packages don't cause dependency issues - which should be tested before pushing).
So the process might look like this:
1. Manually update a test system and take note of the packages comprising your application and their new versions ('grep -E "<PATTERN>" --color=always' could be helpful here).
2. Run automated tests against the test build to ensure that new packages have not caused issues.
3. If any breaking changes are discovered, pin the offending packages to their unbroken versions (see the manifest sketch after this list). Rinse and repeat.
4. Once a stable build is found, update your puppet manifests to reflect any pinned packages and run it on a single test system (I use an isolated puppet master test server for this).
5. If all goes well on the test system, update the main puppet master server and wait for the agents to call home (don't forget to update the runinterval directive in puppet.conf so the agents don't call home every 30 minutes - even idempotent processes consume resources).
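For the pinning in steps 3-4, a minimal sketch of what the Puppet manifest might look like (package names and version strings here are hypothetical, and this assumes an apt- or yum-managed system):

```puppet
# Pin the packages that make up the application to the last known-good versions.
package { 'myapp-runtime':
  ensure => '2.4.1-1',   # hypothetical known-good version
}
package { 'myapp-worker':
  ensure => '2.4.1-1',
}

# Everything else is free to track the newest available release.
package { 'some-unrelated-lib':
  ensure => latest,
}
```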