Except for some of us who come from the time you had to get permission to get an ID on the Arpanet. In that case, the WHO is not saying mathematically nice things about us. :-(


Ehh, nature, she always drags some series into the mix - the mortality rate is a function of the distribution across age and other health factors. It doesn't seem to drive the immune system bonkers, so the WHO is calling out higher risks to older people, especially those with additional factors such as cardiac or pulmonary impairment - probably because this is fundamentally a respiratory issue. Abstractly, more data always helps in spite of the cost to get it, but for us in the USA, the last official action was to delete the tests_administered count from daily CDC reporting. That makes any projection more guesswork than not.
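To make the age-distribution point concrete, here's a back-of-envelope sketch of the overall fatality rate as a case-weighted average across age bands - the counts and per-band rates below are made-up placeholders, not real data:

    # Overall fatality rate as an age-weighted average.
    # All numbers are illustrative placeholders, not real CFRs.
    cases = {"0-49": 6000, "50-69": 3000, "70+": 1000}
    cfr   = {"0-49": 0.002, "50-69": 0.013, "70+": 0.080}

    total_cases = sum(cases.values())
    overall = sum(cases[a] * cfr[a] for a in cases) / total_cases
    print(f"overall CFR: {overall:.3%}")  # 1.310% - shifts with the age mix

The same arithmetic is why the headline number moves around so much between countries: change the age mix of confirmed cases and the overall rate changes, biology aside.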


Given that my first terminal was an ASR-33, this is of interest to me - yes, I have insurance, and yes, I have friends who got staggering bills because the anesthesiologist wasn't in-network (not to bash anesthesiologists - I have counted some as good friends, and they're the greatest geeks in medicine). Will this keep in-network hospitals using in-network providers for services?


Common LISP could certainly generate assembler for Arduino and other MCUs, but there's no way you're fitting that fat old beast (whom I do still love) on one. :-)


If this were reddit, I'd award gold for that comment. The only thing I can add is that one should only value paid experience - i.e., if a person works a day job on a banking app and a personal project contributing high-quality code to a complex OSS project, well hell, they only know a banking app.


And his banking app was written in Java, so he's clearly unable to learn Scala or Groovy.


The 'problem' with modern science is that a large but finite amount of stuff has been discovered, and new discoveries are most often built on that stuff. I splash around in the backwaters of machine learning, but I have to pick my targets carefully and maintain a very tight focus - the defined knowledge base has already grown to more than anyone can ingest in detail. Given the amount of knowledge that has to be absorbed to achieve mastery in a scientific discipline, and the fact that the knowledge base keeps growing, it takes longer and longer to lay a foundation for just understanding what the hell people are talking about. There are still opportunities, though - for example, the use of genetic algorithms as part of ML solutions, because for any 'real' researcher, GAs aren't the cool kid in the eyes of reviewers or funding agents. It's easier when you are the PI and the funding authority, even if the budget is quite a bit smaller.
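To give a flavor of what I mean by GAs inside an ML solution, here's a minimal sketch that evolves two hyperparameters against a hypothetical train_and_score function (the function, parameter names, and ranges are all stand-ins, not anyone's real pipeline):

    import random

    def train_and_score(lr, hidden):
        # Hypothetical stand-in for "train a model, return validation
        # accuracy"; a toy surface with a known optimum at (0.01, 64).
        return -((lr - 0.01) ** 2) * 1e4 - ((hidden - 64) ** 2) / 100

    def evolve(pop_size=20, generations=30):
        # Each individual is a (learning_rate, hidden_units) pair.
        pop = [(random.uniform(1e-4, 0.1), random.randint(8, 256))
               for _ in range(pop_size)]
        for _ in range(generations):
            # Rank by fitness; the top half survive as parents.
            pop.sort(key=lambda ind: train_and_score(*ind), reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                # Crossover: take each gene from one parent or the other.
                child = (random.choice([a[0], b[0]]),
                         random.choice([a[1], b[1]]))
                # Mutation: small random perturbation, clamped to range.
                child = (min(max(child[0] * random.uniform(0.8, 1.25), 1e-4), 0.1),
                         min(max(child[1] + random.randint(-8, 8), 8), 256))
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda ind: train_and_score(*ind))

    print(evolve())  # converges near (0.01, 64)

No gradients, no differentiability requirements - which is exactly why GAs slot in nicely where the objective is a black box.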


The problem with modern science is that there is no room for the unusual.

Faraday had no formal training, but his natural intuition, interest, and tenacity made him stand out.

Newton was a brilliant, paranoid asshole.

Instead, funding goes to credentialed career scientists whose greatest abilities are self-promotion, fundraising, and stringing along the public.

As an example: the next big particle accelerator sucks up billions, while alternative approaches to QM never get any attention, because pursuing them is a guaranteed way to kill a career and become a pariah.

So nobody is available to even try to create the theoretical framework, at an investment of a few million.


There is still plenty of room for the unusual, it just isn't as easy as it was back in the day when most important discoveries were made by people working by themselves at home with little organizational funding.

It's a nice narrative that overgrown institutions are ruthlessly repressing all creativity. And I do believe there is some truth to that. We should work hard to understand why and then improve the situation. However, unfortunately, reality is usually more complex and also more mundane (in some ways) than any nice narratives we can come up with.


From my experience in the life sciences going through the academic credentialing process (PhD to postdoc), there was room for the unusual, but the way it worked was a little convoluted. Basically, the grant funding agencies give you money for a project that they can understand and whose logic for success they can follow. Then, when they fund you, you cut back on the resources required to get to that success and spend the savings on new ideas. The fun part about the "new idea" spending is that you can look at most anything your instrumentation can look for. The idea is that all the tools in your lab, your department's labs, and even collaborating institutions are available to play around with and probe. You can even build new instrumentation to look at new things with that money. This is how modern life science pushes ahead.


Or, the other option is that you get funding for an idea that falls within the same realm as your unusual idea, then spend the money on the overlapping projects.

A great example is a chemist I knew who loved doing research with selenium (an uncommon element). He was most interested in what role it plays in organic chemistry. That's it.

But when he wrote up grant proposals, it was always about the anti-cancer properties of selenium compounds. Never mind the fact that he had zero plans to actually pursue that end.


So what happens if you get caught? Or someone calls you out?


Nobody's checking. The focus is on what you're going to spend next year's money on, not how you spent last year's money. And if you're savvy, you can use tools that were already bought for some other purpose, or that don't cost much.

Working a day job in industry isn't categorically different, except that someone is probably watching your spending more closely. You have to figure out a way to set aside some time to work on your own interest, whether you do it at work or at home.

As for money, you can get technology made for 1/10 of what it costs your employer, by choosing your battles, cutting out all of the overhead, and using free stuff.


So I moved into industry out of postdoc, and I can tell you that as long as you're getting your day job done and it isn't that expensive, you can test most any idea. I'd say it's even easier to do in industry, because "not that expensive" to industry is like 10-fold more than in academia.


Except that you’re not allowed to be bored and mentally daydream.

Newton and Einstein both hit on some big ideas during lulls (at home avoiding the plague for the former; patent office work for the latter).


In Newton's day a single person could probably learn all the math and science there was to be learned in the entire world.

Now things are much more specialized and require years of formal study just to get a base level of knowledge in one tiny aspect.


I doubt this is the case at all.

We have the luxury of hindsight to know which peculiar schools of thought were rubbish, so we don’t even teach them.

I'm sure there are many, many theorems built up with Euclidean geometry that we don't bother teaching anymore.

Because modern methods make the results trivial.

Today, what are we burdening upcoming scientists with unnecessarily?


Part of it also has to do with science becoming more empirical.


The number of unanswered questions grows even faster as more questions are answered.


Good questions are even harder to find than good answers.

When you come up with a good question, make sure lots of people hear it.


> a large finite amount of stuff has been discovered

https://en.wikipedia.org/wiki/Lists_of_unsolved_problems


Neural networks are a good example of something that was once ignored as an academic curiosity at best.


Neural networks followed a typical hype cycle — before they were ignored they were the next big thing.

Then they got ignored, and then they finally got useful.


Don't get me wrong, I like your cites - but at the end of the day, math is math. Either it works or it doesn't. If it works, apply it where you can! I've got a sneaking suspicion a PCA-based analyzer for lidar-generated point clouds is going to be a lot faster...
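Rough sketch of the kind of thing I have in mind - PCA on each point's local neighborhood, with the covariance eigenvalues acting as cheap shape descriptors (synthetic data, NumPy only; k and the feature definition are illustrative choices, not a tuned pipeline):

    import numpy as np

    def local_pca_features(points, k=16):
        # For each point, run PCA on its k nearest neighbors and return
        # the covariance eigenvalues, sorted descending. Brute-force
        # neighbor search here; a KD-tree would replace it at scale.
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        nbr_idx = np.argsort(d2, axis=1)[:, :k]
        feats = np.empty((len(points), 3))
        for i, idx in enumerate(nbr_idx):
            cov = np.cov(points[idx].T)          # 3x3 local covariance
            feats[i] = np.sort(np.linalg.eigvalsh(cov))[::-1]
        return feats

    # Synthetic cloud: a noisy plane, so expect lambda3 << lambda1, lambda2.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(0, 1, 500),
                           rng.uniform(0, 1, 500),
                           rng.normal(0, 0.01, 500)])
    lam = local_pca_features(pts)
    planarity = (lam[:, 1] - lam[:, 2]) / lam[:, 0]
    print("mean planarity:", planarity.mean())   # close to 1 for a plane

Eigendecompositions of 3x3 matrices are about as cheap as geometry processing gets, which is where the speed hunch comes from.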


Depends on where you live - 20 years of salt nets you a frame that will break if you sneeze.


I live near Boston.


Heh - had to go look up storage protect, aka 'red light errors'. I kept getting in trouble at the beginning of my career when I didn't realize that my snooping about lit up lights on the front panel. :-)


This has pretty broad applicability across a wide range of algorithms. The common failure mode - the machine failing to recognize an otherwise normal real face and body - indicates that the whole face/skeleton relationship has fallen apart. Defeating this is interesting, since we have enough trouble just trying to recognize faces; adding "yes, these are faces too, but they are not faces too" is probably going to drive some researchers to drink. At the end of the day, this is a common flaw in a lot of deep learning systems: they're very brittle.
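For a feel of just how brittle, here's a toy version of a gradient-sign attack on a linear "face vs. not-face" scorer - pure NumPy, made-up weights and input, nobody's actual detector - but real networks fall over to the same kind of uniform, barely-visible perturbation:

    import numpy as np

    rng = np.random.default_rng(1)
    d = 64 * 64  # pretend it's a 64x64 grayscale image, flattened

    # Hypothetical linear detector: score = w.x, "face" if positive.
    w = rng.normal(size=d)

    # An input the detector is very confident about: noise plus a weak
    # component aligned with the weights.
    x = rng.normal(size=d) + 0.1 * np.sign(w)

    def is_face(v):
        return float(w @ v) > 0

    print("before:", is_face(x))   # True

    # FGSM-style step: nudge every pixel slightly against the gradient
    # of the score. For a linear model that gradient is just w itself.
    eps = 0.2  # small next to the typical pixel magnitude of ~1
    x_adv = x - eps * np.sign(w)

    print("after: ", is_face(x_adv))  # False, yet x_adv looks like x

Thousands of tiny per-pixel nudges all pointed the same way in feature space add up to a flipped decision, and that's the whole brittleness story in miniature.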

