
I am a computational biologist with a heavy emphasis on data analysis. I did try Jupyter a couple of years ago, and here are my concerns with it compared to my usual flow (PyCharm + pure Python + pickle to store the results of heavy processing).
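For context, the pickle part of that flow is basically a poor man's cache: run an expensive step once, dump the result to disk, and load it back on later runs instead of recomputing. A minimal sketch of the idea; the `cached` helper and `run_heavy_alignment` are made-up names for illustration, not anything from my actual code:

    import pickle
    from pathlib import Path

    def cached(path, compute):
        """Load a pickled result if it exists, otherwise compute and store it."""
        path = Path(path)
        if path.exists():
            with path.open("rb") as f:
                return pickle.load(f)
        result = compute()
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("wb") as f:
            pickle.dump(result, f)
        return result

    # The heavy processing lives in a plain function (easy to refactor, test,
    # and document with Sphinx), and its output persists between sessions:
    # counts = cached("results/gene_counts.pkl", lambda: run_heavy_alignment(samples))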

1) Extracting functions is harder.

2) Your git commits become completely borked.

3) Opening some data-heavy notebooks is nigh impossible once they have been shut down.

4) Importing other modules you have locally is pretty non-trivial.

5) Refactoring is pretty hard.

6) Sphinx for autodoc extraction is pretty much out of the picture.

7) Non-deterministic re-runs: depending on the cell execution order you can get very different results (a toy illustration follows below). That's an issue when you come back to your code a couple of months later and try to figure out what you did to get there.
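To make point 7 concrete, here is a toy example (not taken from any real analysis) of how cell order bites you:

    # Cell 1
    threshold = 0.05

    # Cell 2 -- re-running this cell keeps halving the value
    threshold = threshold / 2

    # A script runs top to bottom, so threshold is 0.025 every time.
    # In a notebook, executing cell 2 twice leaves threshold at 0.0125, and
    # executing it before cell 1 raises a NameError, so the outputs saved in
    # the notebook no longer match a clean top-to-bottom re-run months later.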

There are likely work-arounds for most of these problems, but the issue is that with my standard workflow they are non-issues to start with.

In my experience, Jupyter is pretty good when you are just piecing together existing libraries, but once you need to do more involved development work, you are screwed.



