
Hey! I’m a coauthor with this guy :^). Incredibly fascinating (and extremely smart) dude. His interest in origami is what drew me to MIT in the first place, since I was interested in it as well from a young age. A few years ago I took his class on the computability/complexity theory of folding, and we ended up solving an open problem in the field and published a paper out of it [0]. If you had told highschool me something like that would happen, I never would have believed it :)

[0]: https://arxiv.org/abs/1901.08564



Hmm,

Scanning the topics here, what seems really interesting is that basic data structures are, naturally, containers for data, while these advanced data structures are more like "mix-ins" or effects: qualities that can be added to a container. Time travel is the most obvious example, but the various optimizing tree structures fit too.

Not sure how to turn this into one crisp question, but it's interesting. How many effects can we add to a given structure? Can we make these additions automatically? Can we combine all of them, or are some incompatible with each other? What are the cost-benefit trade-offs between them?
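To make the "effect added to a container" idea concrete, here's a minimal sketch (my own illustration, not from the lectures) of layering the "time travel" quality onto an ordinary stack by making every operation return a new version while old versions stay usable:

```python
class PersistentStack:
    """Immutable stack: push/pop return a new version, and every old
    version remains valid -- the "time travel" quality layered onto a
    plain container. Illustrative sketch only."""
    __slots__ = ("top", "rest", "size")

    def __init__(self, top=None, rest=None, size=0):
        self.top, self.rest, self.size = top, rest, size

    def push(self, x):
        # O(1): share the entire old stack as the tail of the new one
        return PersistentStack(x, self, self.size + 1)

    def pop(self):
        # returns (value, previous version); the popped-from version survives
        return self.top, self.rest

EMPTY = PersistentStack()
v1 = EMPTY.push(1)
v2 = v1.push(2)
x, v1_again = v2.pop()
# v1 and v2 still coexist: x == 2, v1_again is the same version as v1
```

The key design point is structural sharing: versions share tails instead of copying, so full persistence costs O(1) extra space per operation here.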


> If you had told highschool me something like that would happen, I never would have believed it :)

I find it interesting that a common story of people who did big things is that nobody would have predicted they'd end up doing the things they did.

Reminds me of my own reflections on things that I have done that I never would have expected I'd be doing. As well as this recent article by the creator of Entrepreneur magazine [0].

"literally nobody, myself included, would have once predicted that I’d be running a magazine called Entrepreneur"

[0]: https://www.entrepreneur.com/article/331221


Someone needs to apply Bayes theorem to determine if not believing in oneself is truly a prerequisite to achieving greatness.


Is it "not believing in oneself" or is it "life takes unexpected turns and you don't always end up where you thought you would"?


I don't think it's about not believing in oneself. I actually think the opposite is a requisite -- that you must believe in yourself, even if you don't know where you're heading. Or even if you know where you're heading, and it's not the path that you had originally charted. You can't see 20 moves ahead, but you know a good next move.


I don't think people would even try something in the first place if deep down they didn't believe it could work out.


Hi there. What would you suggest to a person who loves this kind of stuff (data structures and algorithms) but finds it hard to develop the underlying intuition for coming up with these data structures and algorithms? I personally just hammer away at deliberate practice, trying to recognize patterns and underlying concepts by solving as many problems as I can on my own. I was wondering: apart from general DS and algorithms, what topics in mathematics might supplement this process?


I recently had someone ask me about a problem in machine learning. The algorithm for it turned out to be something I had read about 30 years earlier, where it was being used to accelerate the simulation of the motion of stars in galaxies. I had never expected to use that algorithm, but its existence, if not details, was there in my long term memory, waiting to surface.

So, I suggest you just read everything you can, getting the gist of things so if you encounter something similar you know where to look.

I also suggest looking for little tricks you can apply again and again. For example, some of the data structures in that course use a trick where you break the structure into a top part, using one algorithm, and a bunch of leaf parts, each of size maybe O(log n), that are handled using another algorithm. Often this combination does better than either algorithm by itself.
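As a concrete (hypothetical, simplified) illustration of that top-part/leaf-parts trick, here is a sorted container split into small leaf chunks with a top-level list of chunk minima. Searches binary-search the top part and then one small chunk; inserts only ever shift elements within one chunk:

```python
import bisect

class TwoLevelSortedList:
    """Sorted container decomposed into a "top" structure (list of chunk
    minima) over many small "leaf" chunks. A sketch of the decomposition
    trick, not an implementation from the course."""

    def __init__(self, chunk_size=64):
        self.chunk_size = chunk_size
        self.chunks = []  # leaf parts, each a sorted Python list
        self.mins = []    # top part: mins[i] == chunks[i][0]

    def insert(self, x):
        if not self.chunks:
            self.chunks, self.mins = [[x]], [x]
            return
        # top part: find the chunk whose minimum is the last one <= x
        i = max(bisect.bisect_right(self.mins, x) - 1, 0)
        # leaf part: insert into one small chunk (cheap shifting)
        bisect.insort(self.chunks[i], x)
        self.mins[i] = self.chunks[i][0]
        # keep leaves small by splitting oversized chunks in half
        if len(self.chunks[i]) > self.chunk_size:
            chunk = self.chunks[i]
            mid = len(chunk) // 2
            self.chunks[i:i + 1] = [chunk[:mid], chunk[mid:]]
            self.mins[i:i + 1] = [chunk[0], chunk[mid]]

    def __contains__(self, x):
        if not self.chunks:
            return False
        i = max(bisect.bisect_right(self.mins, x) - 1, 0)
        j = bisect.bisect_left(self.chunks[i], x)
        return j < len(self.chunks[i]) and self.chunks[i][j] == x
```

With chunks of size ~log n, the expensive per-leaf work stays tiny while the top structure stays small; many course structures (e.g. ones using indirection) refine this same shape.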


A few questions:

1. Where can I find the algorithm you are talking about?

2. It sounds a bit more complex than the algorithms taught at the undergrad level, so what prerequisites must I be aware of to understand it?

3. What books or papers would you suggest reading?

4. How the hell do you remember something you read decades ago? :)


The ML problem involved, for a set of points on a line, computing the sum (over all the pairs of distinct points) of a function d(x_i,x_j) that is, in some sense, well behaved. The purpose was to compute the similarity of two sets of real numbers.

It turns out this can be done (to sufficient accuracy) in nearly linear time using something called the fast multipole method.

http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/publications/F...
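The fast multipole method itself is involved, but for the special case d(x, y) = |x - y| a simple sort + prefix-sum sketch (my own illustration, not the linked paper's method) already shows how an all-pairs sum can drop from O(n^2) to near-linear time:

```python
def pairwise_abs_sum(xs):
    """Sum of |x_i - x_j| over all pairs i < j in O(n log n).
    After sorting, each point x at rank k exceeds the k earlier
    points, so its pairs contribute k*x - (sum of earlier points)."""
    xs = sorted(xs)
    total = 0
    prefix = 0  # running sum of the points before position k
    for k, x in enumerate(xs):
        total += k * x - prefix
        prefix += x
    return total

# e.g. pairwise_abs_sum([1, 4, 6]) == |1-4| + |1-6| + |4-6| == 10
```

FMM-style methods generalize this idea to other well-behaved kernels by approximating the far-field contribution of groups of points instead of summing pair by pair.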

I wouldn't consider the "O(log n) chunks" trick all that complex, btw, although Tarjan did have to point it out to me when I should have used it.


> Tarjan did have to point it out to me when I should have used it.

Wow, you worked with Tarjan. How cool!


Not directly, but he did point out the improvement.


Really, to quote the old "How do I get to Carnegie Hall?" joke: "practice, practice, practice." Doing this stuff often enough that the correct data structure becomes obvious is how you get good at this game.


Recognized his face and name instantly! I know him by reputation from his origami work and appearance in a documentary; he seems to be a very cool guy.


The first 17 minutes of the Time Travel lecture is the best introduction to Git I've seen and he didn't even mention Git.


That's amazing. Congrats!



