kfor's comments | Hacker News

As well as many of the key innovations for OLEDs (patents they later sold to LG)


I like Vega-Lite: https://vega.github.io/vega-lite/

It’s built by folks from the same lab as D3, but designed as “a higher-level visual specification language on top of D3” [https://vega.github.io/vega/about/vega-and-d3/]

My favorite way to prototype a dashboard is to use Streamlit to lay things out and serve it and then use Altair [https://altair-viz.github.io/] to generate the Vega-Lite plots in Python. Then if you need to move to something besides Python to productionize, you can produce the same Vega-Lite definitions using the framework of your choice.
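Since a Vega-Lite definition is just JSON, any language that can emit JSON can drive the same charts. A minimal bar-chart spec, written by hand here as a Python dict (Altair's `Chart.to_dict()` emits this same kind of structure; the data values are illustrative):

```python
import json

# A minimal Vega-Lite bar-chart spec as a plain Python dict.
# This is what you'd hand to vega-embed in the browser, regardless of
# which language produced it.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [{"x": "a", "y": 28}, {"x": "b", "y": 55}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "x", "type": "nominal"},
        "y": {"field": "y", "type": "quantitative"},
    },
}

spec_json = json.dumps(spec)  # serialize for embedding or an API response
```

The portability argument in the comment above falls out of this: the production system only needs to reproduce this JSON, not the Python tooling that prototyped it.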


I build dashboards in JupyterLab. My plotting libraries are Altair, matplotlib, seaborn, and Plotly - all work well in notebooks.

My favorite is Altair. It provides interactivity for charts, so you can pan/zoom your plots and have tooltips. Saved notebooks (.ipynb) are much lighter than with Plotly, and Altair charts look much better than matplotlib's. One drawback is that exporting to PDF doesn't work. To serve a notebook as a dashboard with the code hidden, I use the Mercury framework; you can check an example: https://runmercury.com/tutorials/vega-altair-dashboard/

disclaimer: I'm the author of the Mercury framework: https://github.com/mljar/mercury


We like Vega-Lite. I always find it gets me 95% of the way there very easily. It's the best we've used. You can download charts as SVGs too.

We use it for our online graphing in its native JavaScript form.


They do mention covariates in section 6.1 - specifically how this method doesn’t support them but ideas on how they could in the future such as via stacking:

> In this work, we have focused on univariate time series forecasting since it constitutes the most common of real-world time series use-cases. Nevertheless, practical forecasting tasks often involve additional information that must be taken into account. One example involves covariates, that can be either time-independent (e.g., color of the product) or time-varying (e.g., on which days the product is on sale). Another closely related problem is multivariate forecasting, where historic values of one time series (e.g., interest rates) can influence the forecast for another time series (e.g., housing prices). The number of covariates or multivariate dimensions can vary greatly across tasks, which makes it challenging to train a single model that can handle all possible combinations. A possible solution may involve training task-specific adaptors that inject the covariates into the pretrained forecasting model (Rahman et al., 2020). As another option, we can build stacking ensembles (Ting & Witten, 1997) of Chronos and other light-weight models that excel at handling covariates such as LightGBM (Ke et al., 2017).


Ah. Thank you. The same concept goes under different names, so one needs to search for all of "exogenous variables", "external regressors", "external factors" and "covariates".


Thanks for this! I've been going crazy for months that Twitter threads and replies are totally broken if you are logged out (which I always am since deleting my account in December). And now Nitter has led me to LibRedirect[1] which not only automatically redirects Twitter links but also lots of other common links like TikTok.

[1] https://libredirect.github.io/


Thanks for sharing, this looks great! Shame it is not available on Firefox for Android yet.


Neat! I wish there were a web version of LibRedirect. It would then be easy to make a DuckDuckGo Bang! for it, so we could do "!<bang> <original URL>" in DDG in any browser without installing an extension first.
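The core of such a redirector is just a host swap. A minimal sketch (the frontend mapping below is illustrative; LibRedirect maintains real, user-configurable instance lists):

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical host mapping - example frontends only, not LibRedirect's
# actual instance list.
FRONTENDS = {
    "twitter.com": "nitter.net",
    "www.youtube.com": "piped.video",
}

def redirect(url: str) -> str:
    """Swap the host for a privacy frontend, keeping path/query intact."""
    parts = urlparse(url)
    host = FRONTENDS.get(parts.netloc, parts.netloc)
    return urlunparse(parts._replace(netloc=host))

print(redirect("https://twitter.com/user/status/123"))
# -> https://nitter.net/user/status/123
```

A web version would wrap this in a single GET endpoint, which is exactly the shape a DDG bang can target.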


Lots of great suggestions here, but one I haven't seen is providing deep links. Let users share the exact state of their dashboard with others, ideally without requiring some convoluted system of logging in and sharing things. We implemented it by allowing a json config in the url, then providing a button to copy a shortened URL containing the whole config.
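The scheme described above can be sketched in a few lines (function and parameter names here are illustrative, not the actual implementation): serialize the dashboard state to JSON, base64-encode it, and carry it in a single query parameter.

```python
import base64
import json
from urllib.parse import urlencode, parse_qs, urlparse

def state_to_url(base_url: str, state: dict) -> str:
    """Pack the full dashboard state into one shareable link."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    return f"{base_url}?{urlencode({'config': payload})}"

def url_to_state(url: str) -> dict:
    """Recover the dashboard state from a shared link."""
    payload = parse_qs(urlparse(url).query)["config"][0]
    return json.loads(base64.urlsafe_b64decode(payload))

state = {"metric": "deaths", "year": 2019, "countries": ["KEN", "UGA"]}
link = state_to_url("https://example.com/dashboard", state)
assert url_to_state(link) == state  # round-trips without any server storage
```

The URL-shortener step mentioned in the comment sits on top of this: the shortener stores the long link, so the state itself still never needs an account or a database row per user.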

Original creator of (the now woefully dated-looking) GBD Compare [https://vizhub.healthdata.org/gbd-compare/] here, where we found this super useful since we had so many controls that it could take a lot of clicking (and knowledge of the UI) to recreate a graph someone else was looking at. It really helped with reach, as folks could email/tweet their specific view then others could use that as a starting point to dive in without starting from scratch or having to create an account.


+10, yes, don't make people recreate all the filtering for something you're putting in their hands.


To add to this, there are two kinds of sharing.

Sharing the parameters, filters, etc.

Sharing the results.

They can both be very important.


Yeah, deep links are essential. Datadog's really good at this. They also have a nice preview generator so that when you drop that link into Slack it generates a preview image of the dashboard with all of the filters applied. It's great.


Do you worry that you risk introducing bias into your interview process with this sort of unstructured questioning? There is quite a bit of research [1] demonstrating that structured and standardized interviews across candidates are one of the most crucial ways of preventing various types of bias, conscious or not.

[1] Here's a useful summary article: https://hbr.org/2016/04/how-to-take-the-bias-out-of-intervie...


Good interviewers make the candidate feel like it's open ended by asking open-ended questions that still tick the boxes they need to tick. The interviewer has a structure and a series of things to make sure to get information about, but a casual listener to 3 different interviews might not even be able to pick out what those questions are, because you can make them fully contextual to the person.

With a bit of luck and skill you can get through a whole interview and take all your notes without the candidate even feeling when you switch from one question to the next. Start open ended, make sure you can tick the boxes on your questionnaire, and pull on the threads you need to. Some people aren't good at this kind of conversation and need to be led more by the interviewer; in those cases you can easily "adjust down" and be more explicit. But this way you get the best of both worlds.

You get standardized notes on specific topics as well as the opportunity for people to tell you what _they_ think was interesting about those situations, which is hard to predict from just a questionnaire.


You wouldn't want the entire interview to be unstructured, but because every candidate is different, I think it's more than fair to give people an opportunity to highlight their own strongest areas.

I wouldn't want to overlook a great candidate because I omitted to ask whether they invented UDP, or whatever.


Talking about their strongest areas or inventions can be part of a structured interview.


Not unless you ask. Resumes omit many things due to length constraints, and the candidate might prioritize recent work over their best work if the latter was too long ago.

Legacy skills are important too, but those details don't always make the cut in either the resume or the job description.


> Not unless you ask

Yes, well, that's what the interviewer is doing: asking things.

And the interviewer can have a question on their list: ask the applicant about their strongest skills. (And maybe inform the applicant beforehand so they can think about what to say.)


Doesn't that depend on what they're trying to accomplish?

If one is trying to determine the top-performing candidate (for the position being hired for), does bias actually interfere with that?

Especially considering the sorts of bias that are likely to be introduced (those adjacent to personality preferences): candidates who don't fit well are likely to influence coworkers poorly, to a degree that negatively impacts the total performance of the team.


When I looked into it a couple years ago, it was possible to configure nbdev to use e.g. GitLab instead of GitHub for those of us who can't use GitHub for whatever reason. Is this still possible with the rewrite? Any major things we'd be missing out on?

And thanks for putting together such an awesome resource, I'm excited to try kicking the tires on it again!


Frankly it's not something I've been working on -- the GitLab support in v1 was added and maintained by the community. I certainly want it to work (even though I'm not a GitLab user myself), so if you try it and find it doesn't, please send us PRs/issues if you can.


Thanks, I’ll give it a shot sometime and share whatever I figure out


I highly recommend How to Lie with Statistics (1954) to learn more about these sorts of misleading stats: https://en.wikipedia.org/wiki/How_to_Lie_with_Statistics


Awesome book, Ravin! I’m waiting for my physical copy to arrive (should be here tomorrow!) before really diving in, but what I’ve skimmed in the digital copy so far is great.

Btw I’ve been using PyMC2 since 2010 and contracted a bit with PyMC Labs, so I’m surprised we’ve never bumped into each other!


Thanks for ordering one.

Which company are you with? Asking to see whether we did in fact bump into each other.


And it's not just that the wheel well needs to be in front of the passengers; the front crumple zone is also very important to passenger safety in many of the most common crash scenarios.

