According to Wikipedia, influenza causes an estimated 41,400 deaths per year in the US. The 2017-2018 season was the worst in 40 years and totalled about 61,000 deaths (though the tally has not been finalised and some have reported it as high as 80,000). The number of hospitalisations for the flu that year was estimated at 800,000 people (clearly not all of those were ICU). The number of medical visits was 21 million. I have no data for how many people were confirmed to have the flu but were not hospitalised.
COVID-19 has killed 114,148 people to date. The number of hospitalisations is only reported up to the end of May 30 and appears to be 82 per 100,000 people in the population (thank you CDC for such an epically terrible statistic!). That works out to about 270,000 people. I'm going to use PCR tests as a proxy for "medical visits" for the flu, but in reality they aren't comparable. There are 22 million tests that have been carried out to date. The reason I use that as a proxy is that I assume there is reason to suspect someone may have COVID-19 if they get a PCR test, just like there is reason to suspect you have the flu if you make a "medical visit". The last piece of information is the number of confirmed cases, which is 2,045,549. So just under 10% of the people tested were confirmed to have the disease.
Of the people who were hospitalised in the worst flu year in 40 years, about 10% died. Of the people who were hospitalised for COVID-19, about 42% died (well, a little higher, because I'm using June 10th data for deaths and May 30th data for hospitalisations). The number of medical visits for the flu was 21 million in the worst year, and the number of PCR tests for COVID-19 so far is 22 million (about the same). The proxy may be a bad assumption, though.
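The back-of-envelope arithmetic above is easy to check. A quick sketch using the figures quoted in this thread (the ~330 million US population is my assumption, and with these numbers the flu ratio actually comes out nearer 8% than 10%):

```python
# Sanity check of the ratios above, using the thread's own numbers.
# The US population figure (~330 million) is my assumption.

us_population = 330_000_000
hosp_rate_per_100k = 82  # CDC COVID-19 rate quoted above, through May 30

covid_hospitalised = us_population / 100_000 * hosp_rate_per_100k
print(f"COVID-19 hospitalisations: ~{covid_hospitalised:,.0f}")  # ~270,600

covid_deaths = 114_148                          # to June 10
flu_deaths, flu_hospitalised = 61_000, 800_000  # 2017-2018 season

print(f"flu deaths / hospitalisations:      {flu_deaths / flu_hospitalised:.0%}")
print(f"COVID-19 deaths / hospitalisations: {covid_deaths / covid_hospitalised:.0%}")
```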
That data does not actually agree with any other data I can get from the CDC web pages. Where did you get the link from? Even the total death rate is nearly half of the actual total death rate reported for those years.
If you scroll down to Table B you can see that ~2,800,000 people died in total, with influenza and pneumonia accounting for 55,000 deaths. This is more than the 33,000 reported influenza cases because not all pneumonia is caused by the flu.
The data in your link is very strange in that it implies that pneumonia accounts for 10% of all deaths, rather than the ~2% in roughly everything else I can find on the topic. I wonder if we are losing some context here?
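For what it's worth, the ~2% figure is straightforward to reproduce from the Table B numbers quoted above:

```python
# Share of all deaths attributed to influenza and pneumonia,
# using the Table B figures quoted above.

total_deaths = 2_800_000
flu_and_pneumonia_deaths = 55_000

share = flu_and_pneumonia_deaths / total_deaths
print(f"{share:.1%}")  # 2.0%
```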
Literally nothing I can find on the CDC's website matches up with the data in that report, so I'm at a loss as to what's going on with it. It's too bad there is no accompanying text, because I think we're missing something.
"Even the total death rate is nearly half of the actual total death rate reported for those years."
Bottom of page:
NOTES: Data presented in this table are based on all complete death records received and processed by the Centers for Disease Control and Prevention’s National Center for Health Statistics (NCHS) as of March 17, 2016. Data for 2008 through 2014 are final data, while 2015 data are provisional. Due to the nature of provisional data, numbers are subject to change as additional death records are received. Influenza season is defined as early October through mid-May. Influenza and pneumonia deaths are defined as deaths with codes J09–J11 (any listed cause) and J12–18 (listed anywhere without influenza also listed), respectively, in the International Classification of Diseases, Tenth Revision. SOURCE: CDC/NCHS, National Vital Statistics System, 2008–2015.
Alright. If you are unable to look at any data other than the one page you have: 7,961 deaths from influenza in 2015-16. Is that the number you think will show that COVID-19 is more deadly than the flu? Because I've got to say, I don't think that number serves you better than the 23,000 deaths in the data I'm pointing to. Oh, and of course there are 131,858 deaths due to pneumonia from other causes. Clearly something pretty powerful that wasn't influenza swept through that year. It's pretty strange that we keep statistics on the piddly old flu when there is something killing over 16 times as many people that is completely unidentified. I'm so glad we agreed to use this data rather than literally any other page on the CDC website!
It's not one year. The document covers 7 years. The pneumonia range is 126k to 138k. It's explicitly listed as "death records" and the source is given.
I'd love to actually read the paper to see if it is reasonable. Anybody have a link? Having said that (and keep in mind that I'm in favour of wearing masks and wear one myself when I go out in public), I think it's pretty clear (as indicated in the article) that masks alone won't bring the R value below 1.
The article says, "He [Richard Stutt, who co-led the study] said the findings showed that if widespread mask use were combined with social distancing and some lockdown measures, this could be 'an acceptable way of managing the pandemic and re-opening economic activity' long before the development and public availability of an effective vaccine against COVID-19, the respiratory illness caused by the coronavirus." (emphasis mine)
Anecdotally, in Japan (where I live) there seems to be a much higher than 50% use rate and there is still need for lockdowns. However, as the article suggests, it appears that the severity and length of the lockdowns might be reduced.
The question I have, though, is if the sheer number of people with the disease in the US and UK may be an issue. In South Korea, Taiwan and Japan, the total number of active cases never really got above 10K -- so the chances of meeting someone with the disease is really quite small. Potentially (as seems to be the case in Japan) you can get away with just dealing with cluster cases and let the stragglers go -- as long as the masks are effective enough to keep transmission rates low in those situations. But if you have millions of people with the disease, many of them in large cities, I wonder if it will be as effective.
Masks alone are probably not enough. But they do seem to help when combined with other preventative measures.
My impression in the US is that people continue not to take this seriously, which causes us to be lax in the basic measures that could get this under control. Very frustrating.
On the positive side, it looks like the Rt for most US states is below 1 - https://rt.live/
TODO lists for which I use org mode, but you could use practically anything. I like a text editor for this rather than an app, per se, just because it keeps me in the flow. All you need is a place to jot down what you are planning to do next and to be able to arrange the order.
Usually I'll start with pretty high level ideas. If I have a story I'm working on, I'll put the description of the story in my TODO list. Then I'll think for about 5 minutes about what general things need to get done. I'll order these by some priority (doesn't really matter usually, to be honest). Then I'll start working on the first one.
Normally I need to poke into the code to really see what I have to do. I'll often add a sub-task to my first one that says, "Figure out what to do" or something like that. Then I'll do some exploratory coding for a few minutes. As I discover what needs to get done, I write it down in my TODO.
It's hard at first to stop yourself from just writing code, but pulling yourself back for the 20 seconds or so it takes to write down what you are just about to do can be surprisingly valuable. Don't censor yourself either. It's fine to guess what you need to do and then delete stuff that you realise is unnecessary later. As you are coding, any time you think, "Oh, I'm going to need X", add it to the TODO (again, difficult to train yourself to do it consistently!)
Once you get good at this, in my experience you become quite interruptible. Any time I get distracted, unfocussed, or unmotivated, I just look at the top thing on the TODO and say, "I'm just going to do that top thing". It always pulls me in.
I don't always code like this, but every time I do I'm dramatically more productive. I should always code like this, but... sometimes you want a relaxed day ;-)
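For the curious, a hypothetical org-mode file for the workflow above might look something like this (the story and tasks are entirely made up):

```org
* TODO Story: let users export reports as CSV
** TODO Figure out what to do          <- exploratory sub-task
** TODO Add export button to report page
** TODO Serialise report rows to CSV
** DONE Check how the PDF export is wired up
```

Reordering is just moving lines (or `M-<up>`/`M-<down>` on headlines in org mode), which is why a plain text editor works so well for this.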
It is incredibly difficult to remember something that you don't understand. In fact, your memory often elides things that you don't understand to the point where you will swear that something never happened when, in fact, it did. Statistically speaking, repetition is required for learning, but understanding is at least as important. In fact, sometimes if you can find no meaning for something, it helps to make up a meaning (for example the use of mnemonics).
On the other hand, true criticisms of Javascript, the language, are boring. Yeah, there are problems. You learn to deal with those problems. They aren't really that horrible. I've worked with a lot of languages less convenient than JS (especially ES6, which is actually not bad). The standard library is pretty bad, but the language itself is easy enough to use that it's pretty trivial to implement what you need yourself. There are also a few pretty well-written third-party libraries with no other dependencies that you can use. The build environment is horrible, but not really any more horrible than some other environments I've had to deal with.
No, the real problem is that a lot of Javascript developers choose to stick hot pokers in their eyes. They don't read the code of their dependencies. They don't care how many ridiculous dependencies of dependencies they use. They refuse (absolutely refuse, to the point of calling you an imbecile if you even suggest it) to write their own tools. They choose the build tools that are the most wonky and are built on the most insane internal code -- because they don't care to ever look at that code. They look at the "box features" and say, "Oh, everyone is using that and it has all the features we want. You are crazy if you want that stupid boring thing that barely does anything (and yet works)". They don't do any planning for configuration management. They don't think about how they want to upgrade their dependencies, and especially don't dare think about inspecting the code in the dependencies. "Latest is best! If it breaks, we'll deal with it then".
Javascript is not really that bad. It really is that the community does not have a particularly good grasp on how to minimise risk in large projects. On the other hand, it's a common refrain on other platforms. While Javascript is not really that bad, other platforms are considerably better and you can get away with really poor practices for a lot longer. Not that they won't absolutely kick your ass eventually -- it's just going to be a couple of years away when you have moved to another company at a higher pay scale.
Yeah, I think you're hitting the nail on the head here. I'd go so far as to say that the language itself is actually better than C (in the sense that it's more intuitive and less error-prone). C has the excuse of age. But the C community has a culture of carefully designing architectures to minimize pain points and bugs, and to avoid other pitfalls -- and that was true 20 years ago, when I started programming and C was much closer to the age JS is now.
As I've said elsewhere, you can write good code in bad ecosystems, and some people are doing amazing things in JS.
I also have a small caveat. If you have a docker setup or a similar style of setup, if it is not obvious how to do this by hand, then you should write documentation on how to do it. Honestly, my standard is that you should have the same level of setup instructions as you would expect from a good open source project of the same complexity.
Quite a lot of Japanese people seem to recognise characters less by their shape and more by the stroke connections, in my experience. No matter what tool is used to write them, legibility really requires writing the characters in the correct stroke order, for the most part.
Actually, Chinese characters did originally have a defined linear order (or, rather several schools existed IIRC). It's been a while since I looked at this, but essentially they were ordered by radical and then pronunciation, with some other tie breaker rules. Japanese word dictionaries, though, are indeed usually ordered by pronunciation. Kanji dictionaries have a variety of different orders, but most of them are ordered by radical and stroke count.
Not the OP, and it's definitely not my style, but some people really like taking classes. They love the environment and energy that a university life gives them. I've known one or two people who have spent decades just taking classes (and accumulating degrees).
This is really weird to me but to each their own. I think it's partially because of ADHD but I really hate doing HW/writing essays that I feel are useless, which tends to be pretty common in uni.
To be fair, it is very common for Forth programmers to redefine the interpreter as they go. You literally change the language in your program. That's a very different expectation for other kinds of languages.
There are similar examples in just about any language out there. People use whatever tools the language ecosystem provides to change the language to fit some problems better. Some languages are easier to change and extend, some are harder, but that doesn't stop people from trying to do this anyway.
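As a toy illustration (nothing to do with Forth specifically, and entirely made up), even a language as ordinary as Python lets you bend its syntax toward a problem -- here, overloading `|` so data flows left to right like a shell pipeline:

```python
# Toy sketch: overloading `|` so values flow through steps left to right,
# shell-pipeline style. Purely illustrative, not a real library.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, value):  # invoked for `value | step`
        return self.fn(value)

double = Step(lambda x: x * 2)
inc = Step(lambda x: x + 1)

result = 5 | double | inc  # (5 * 2) + 1
print(result)  # 11
```

Reading `5 | double | inc` is a long way from idiomatic Python, which is exactly the point: the ecosystem's own tools (here, operator overloading) are enough to reshape the language around a problem.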
I think there's a level of familiarity with the language above which changing it is a natural thing to do. It can take years before you learn a "normal" language well enough to be able to do this, but with Forth, Scheme, Prolog, and the like, you're basically required to do this from the get-go. My intuition is that these languages simply target advanced, already experienced programmers, while completely ignoring the beginners. So it's more of the optimization for a different user-base, IMO. That would also explain how these languages are still alive, despite their communities being very small for the last 50 years.
Just to sum up (and sorry for those on mobile):
Where the asterisk means my estimate, which may be completely wrong. Edit: The first line is the flu and the second line is COVID-19. But any way you slice it, I don't think the numbers work out the way you are portraying them. Corrections to the above are very much welcomed!