Some questions I ask myself when reading random posts with grand and important claims on any subject:
Where is this from? Who originally wrote it? Is this text’s origin really a random Facebook post, from a pseudonymous author with a cartoon profile picture and no claim of any serious credentials in the subject at hand? (Whether epidemiology or anything else)
Regardless of the merits of the post (which I do not claim to be able to judge), all evidence has to be analysed for context as well as content. Simple “common sense” claims (with a couple of big words to impress non-epidemiologists like me) are made to debunk the models: where is the evidence rather than rhetoric — even some basic citations, or examples of or links to counter-modelling? The post doesn’t even link to the original model files from Imperial that it claims to critique.
It’s perfectly _possible_ that the claims made in this Facebook post are correct, but it doesn’t mean anyone should take this post (and its conclusions) remotely seriously without asking some very robust questions of it.
I have a somewhat different view, which comes from reading a paper analyzing pandemic overshoot. The authors covered half a dozen models, from very simple ones built on a few differential equations to complex ones that factored in network effects. My takeaway is that actual models[1] in this space tend to be robust.
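The simplest kind of model in that family can be sketched in a few lines. This is a generic SIR toy with made-up parameters (not any of the paper's actual models, and certainly not the Imperial one), but it illustrates the "overshoot" behaviour: an unmitigated epidemic doesn't stop at the herd-immunity threshold, it sails well past it.

```python
# Toy SIR model, forward-Euler integration. Parameters are illustrative only.
# dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
def sir(beta, gamma, n, i0, days, dt=0.1):
    s, i, r = n - i0, float(i0), 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # new infections this step
        new_rec = gamma * i * dt          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return s, i, r, peak

# R0 = beta/gamma = 3: herd immunity at 1 - 1/R0 ~ 67%, yet the epidemic
# overshoots and infects a substantially larger share before dying out.
s, i, r, peak = sir(beta=0.3, gamma=0.1, n=1_000_000, i0=10, days=365)
```

Even this crude sketch reproduces the qualitative point the robust models agree on: without intervention, most of the population gets infected, and a large fraction is infected simultaneously at the peak.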
The Imperial College model is a complex model designed to answer subtle questions about the spread and containment of an epidemic. But there is nothing subtle about COVID-19: the model predicts catastrophe unless you turn every containment knob to 11, and do it now.
The model is validated by real experience. Italy and New York both blew up, and elsewhere half-hearted measures merely slowed the virus down; they did not stop it.
The truth the deniers can't escape is this: if what's been thrown at the pandemic is unnecessary, why hasn't the pandemic just collapsed?
[1] As opposed to models that fit a predefined curve to data. Those are shit.
> The model is validated by real experience. Italy, New York both blew up.
Do you know where the first clusters in New York were?
Because in Northern Italy, and especially around Bergamo, hospitals and then nursing homes turned into infection centers, and with the population there skewed so heavily toward the most vulnerable (along with imperfect knowledge of the pathology), it was easy for the virus to kill them.
In fact most of the initial clusters were in hospitals, and negligence turned Alzano Lombardo into a prime spreading site.
Would a model tuned on what we know now, taking into account different infection routes and places, work the same way? I don't know, but it is a question worth asking even if in the end the model proves to be absolutely correct.
While those algorithms all have weaknesses, they are not yet completely broken and are still in wide (if declining) use. TLS 1.0 is most affected by BEAST, but all modern clients have mitigations that have proven effective against it. The biggest issue with 3DES is its use of 64-bit blocks, making it vulnerable to the SWEET32 attack; however, that requires a huge amount of traffic under the same key (hundreds of gigabytes). SHA-1 has been shown to be weak, but as far as I know there are no practical attacks against it yet.
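The traffic requirement for SWEET32 follows from the birthday bound on 64-bit blocks; a quick back-of-the-envelope (the demonstrated attack actually needed several times this much data before recovering anything useful):

```python
# Birthday bound for a 64-bit block cipher like 3DES: a ciphertext-block
# collision becomes likely after roughly 2**(64/2) = 2**32 blocks under one key.
block_bits = 64
block_bytes = block_bits // 8                  # 8 bytes per 3DES block
blocks_for_collision = 2 ** (block_bits // 2)  # ~2**32 blocks
traffic_bytes = blocks_for_collision * block_bytes
print(traffic_bytes // 2**30, "GiB")           # 32 GiB minimum, in theory
```

So a single long-lived 3DES connection has to carry tens to hundreds of gigabytes before the attack becomes practical, which is why it is a real but narrow threat.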
These algorithms should obviously only be used in fallback to stronger ones but they are not broken to the point where they should never be used as SSL3, RC4, and MD5 have been.
Sure, I agree -- but that's not what the page claims. It says "insecure protocol versions and choices of algorithms are not supported, by design" -- the protocols and modes that I listed are known to have various insecurities, and it still supports them.
I agree that to be useful it's necessary to support old, less secure or even insecure modes, but this is at odds with the above stated goal.
You're right technically but don't you think that's a little bit pedantic?
If your goal is to truly improve the state of the art in the ecosystem, dropping anything that is even remotely insecure is appealing — I get that, and I do believe the people behind BearSSL would love to do it. However, to truly improve anything you need two things: popularity and improved security.
There is a conflict there because popularity requires, at least some, compatibility to what already exists. You need to balance out security and compatibility. I think there is room for discussion about where precisely that balance is. You could further tilt it towards security by helping users of the library get a sense of what they need to support. Ultimately though you can't just blindly drop everything that's somehow not perfectly secure. Doing so would not improve security at all.
It's a small sacrifice to have one library be a little bit less secure than it could be, if that helps make everything more secure overall.
This. I recently worked on updating an embedded TLS implementation from TLS 1.0 to TLS 1.2. I was told that it didn't need to implement TLS 1.0 or TLS 1.1, but once deployed we found a lot of non-HTTPS servers still using TLS 1.0. In particular, Microsoft's Hotmail/MSN SMTP servers and multiple RADIUS servers on WPA/WPA2 Enterprise networks. It now allows for client connections to TLS 1.0 servers, but will only serve TLS 1.2 itself.
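That asymmetric policy — permissive as a client, strict as a server — can be expressed in most TLS stacks. As an illustration only (using Python's `ssl` module, not the embedded implementation described above):

```python
import ssl

# Client side: still accept legacy TLS 1.0 servers (old SMTP, RADIUS, etc.).
client_ctx = ssl.create_default_context()
client_ctx.minimum_version = ssl.TLSVersion.TLSv1

# Server side: only offer TLS 1.2 and newer ourselves.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The point is that "what we accept from peers" and "what we offer" are separate dials, so you can be compatible outbound without weakening your own endpoint.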
You kind of have to support SHA-1 still. Even with the browsers moving to deprecate it, many of the root certificates valid for another 10-20 years are still using it. (since the root certs ship with the browser, the security risk is lessened.)
If this is to be a general library that validates the entire certificate chain, then you'll need SHA-1.
Now if the library advertises SHA-1 in its ServerHello by default, that is indeed unfortunate.