halhen's comments

Here's a take I did on the same thing, where I create "stereo plots" (the cross-eyed 3D thing): https://blog.k2h.se/post/stereo-plotting-in-r/ . Quite fascinating.
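A minimal sketch of the idea in base R (illustrative only, not the code from the post): render the same surface twice with the viewpoint rotated a few degrees, then cross your eyes until the two panels fuse into one 3D image.

    # two panels of the built-in volcano surface, ~4 degrees apart
    op <- par(mfrow = c(1, 2), mar = c(0, 0, 0, 0))
    persp(volcano, theta = 34, phi = 25, box = FALSE)  # left panel
    persp(volcano, theta = 30, phi = 25, box = FALSE)  # right panel
    par(op)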


Picking a neighborhood where you are relatively well off seems to usefully calibrate those assumptions. Maybe it has to do with status, which is relative.

Earning well and living in a regular middle-class neighborhood of teachers, carpenters, and office workers, I don't feel stressed about keeping up with the Joneses I'm surrounded by. I can afford enough nice things relative to my surroundings.

My social circle is not full of wealthy people, so living in a regular middle-class area is also not low-status. I also feel that the kids are more secure about the things they do and have than they would be if they were always falling behind friends while trying to do and have exactly the same stuff.


Yes, living below your means is a key to happiness. Too many people push the limits of what they can afford without planning for tougher times.

I think one of the biggest positive impacts on me was my parents' decision to raise me in a developing country until I was 16. They had a middle-class income, but in my country of origin they could afford to put me in the best schools and pay for plenty of out-of-school activities. That changed my mindset about what was possible. When I came back to Europe I went to a free school, but performed much better than my peers, I believe mostly due to my own expectations for my future.


Or to PowerBI, which will cast any UUID to a string, even in joins. That cast, plus string comparisons, plus the loss of indexes, is not conducive to performant queries...


It seems to me that large parts of the world -are- at war with Russia. And, importantly, we manage to put a lot of pressure on the enemy without getting even more people killed.


Agree! That, and (almost) everything is a vector... which makes perfect sense for an analytics language.

Once I grokked that, R became my default language for anything analytics.
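A quick illustration of what I mean (my own toy example, not from any particular source): scalars are just length-1 vectors, and most operations work element-wise across whole vectors.

    x <- c(1, 2, 3, 4)
    x * 2          # 2 4 6 8: arithmetic is vectorized, no loop needed
    x[x > 2]       # 3 4: logical vectors index directly
    x + 10         # 11 12 13 14: the scalar 10 is recycled across x
    length(3.14)   # 1: even a "scalar" is a length-1 vector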


That depends on the encoding, does it not? The binary sequence equal to ASCII "Hello world" might well be PII under many different encodings. By accident, of course, but 33 bits of information would nevertheless be enough: 2^33 is about 8.6 billion, more than the world's population, so 33 bits suffice to single out one person.
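The back-of-envelope arithmetic, in R (my own illustration; the ~8 billion population figure is approximate):

    log2(8e9)                 # ~32.9: about 33 bits single out one of ~8 billion people
    nchar("Hello world") * 8  # 88: the ASCII byte sequence carries far more bits than that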


I believe developers could be particularly good at identifying and communicating all required information from the start. We spend our lives thinking about how to pass information around, and get a lot of immediate feedback about it.

Think of it as a function: what arguments (information) would it need to be able to return a value?
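To make the analogy concrete (a hypothetical sketch; the function and argument names are made up):

    # a bug report modeled as a function: a complete report supplies every
    # argument up front, so no round trips are needed before anyone can act
    handle_bug_report <- function(env_description, steps_to_reproduce,
                                  expected_result, actual_result) {
      list(env = env_description, steps = steps_to_reproduce,
           expected = expected_result, actual = actual_result)
    }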


As a data scientist, I sometimes find myself going over 64 GB. Of course, it all depends on how large the data I'm working with is. 128 GB of RAM helps even with data of "just" 10-15 GB, since I can write quick exploratory transformation pipelines without having to think about keeping the number of copies down.

I could of course chop up the workload earlier, or sample more often. Still, while not strictly necessary, I regularly find I get things done quicker and with less effort thanks to the headroom.
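To illustrate the copy problem (a toy example with synthetic data, scaled down; the "imagine" sizes are hypothetical): each exploratory step keeps its own full copy alive, so peak memory ends up a multiple of the raw size.

    raw   <- data.frame(x = runif(1e7))        # ~80 MB here; imagine ~10 GB
    step1 <- transform(raw,   x2 = x^2)        # second full copy, kept for inspection
    step2 <- transform(step1, x3 = log1p(x2))  # third full copy
    sum(c(object.size(raw), object.size(step1), object.size(step2))) / 1024^2  # total MB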


Somewhat helpful for Shiny stack traces:

    # print full stack traces for errors raised inside a Shiny app
    options(shiny.fullstacktrace = TRUE)
You should be prepared to scroll a bit, but you might get the information you're looking for more often.


Thank you. I will give that a try next time.


I'm not well-rounded enough to draw a clear path from where you are. For me, Gelman's Data Analysis Using Regression and Multilevel/Hierarchical Models [0] drove home many, many points. More recently, I have a sense/hope that Pearl's The Book of Why [1] might take this to yet another level.

[0] http://www.stat.columbia.edu/~gelman/arm/

[1] https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/0465...


Pearl's landmark, rigorous book is Causality.


Gelman & Hill

