I wonder how a rope behaves after lots of changes. Take a 1GB file and modify every line multiple times; now the whole file is severely fragmented, and moving around it could be much slower. For me, moving around a file quickly is much more important than big edits. In practice it's probably still good enough.
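One way to test this empirically would be something like the following sketch, assuming the Rust ropey crate (the file path, pass count, and stride are placeholders, not from any real benchmark):

    // Sketch of a rope fragmentation stress test using the "ropey" crate.
    // Loads a big file, touches every line several times, then times line seeks.
    use std::time::Instant;
    use ropey::Rope;

    fn main() {
        let text = std::fs::read_to_string("big.txt").unwrap(); // placeholder path
        let mut rope = Rope::from_str(&text);

        // Touch every line a few times: insert a char at the line start,
        // then remove it again. Net no-op, but it exercises the tree.
        for pass in 0..3 {
            for line in 0..rope.len_lines() {
                let at = rope.line_to_char(line);
                rope.insert(at, "x");
                rope.remove(at..at + 1);
            }
            println!("pass {} done", pass);
        }

        // Time line lookups after heavy editing.
        let t = Instant::now();
        let mut sum = 0usize;
        for i in (0..rope.len_lines()).step_by(9973) {
            sum += rope.line_to_char(i);
        }
        println!("line seeks: {:?} (checksum {})", t.elapsed(), sum);
    }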

I'm developing my own text editor. To test a naive implementation of the text editing routines (an array of line strings), I concatenated all the sources of the then-recent Linux kernel 4.6.3 into one file; from what I remember it was all the .c, .h and .S files. The resulting file was 542MB and 19'726'498 lines. It was fast enough even when adding a line at the front.
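The naive structure is essentially this (a minimal sketch in Rust; the names are illustrative, not the commenter's actual code):

    // Minimal sketch of the naive "array of line strings" buffer described above.
    // Inserting at the front is O(n) in the number of lines, but that O(n) is
    // one contiguous shift of ~20M pointers, which modern hardware does quickly.
    struct LineBuffer {
        lines: Vec<String>,
    }

    impl LineBuffer {
        fn from_text(text: &str) -> Self {
            LineBuffer { lines: text.lines().map(String::from).collect() }
        }

        // Shifts every subsequent element; still fast even for ~19.7M lines.
        fn insert_line(&mut self, at: usize, line: &str) {
            self.lines.insert(at, line.to_owned());
        }

        fn line(&self, at: usize) -> &str {
            &self.lines[at]
        }
    }

    fn main() {
        let mut buf = LineBuffer::from_text("first\nsecond\nthird");
        buf.insert_line(0, "zeroth");
        assert_eq!(buf.line(0), "zeroth");
    }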




The true strength of the rope data structure is its performance in worst-case conditions. Even after a huge series of edits, it recombines leaves and rebalances the tree, so it's almost as efficient as when freshly loaded. It's not like a piece table, which is amazingly efficient at first and then fragments.

[The actual analysis is more subtle, as the tree invariants are a minimum and maximum number of children for non-root internal nodes, and a minimum and maximum leaf size. When freshly loaded, the code is careful to pack both maximally; after extensive editing the distribution will be more varied. However, it's still rigorously O(log n), so the difference ends up being a small constant factor. Might be worth doing some empirical performance testing.]
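Concretely, the invariants amount to something like this (a sketch with made-up constants and a validity check; not the actual values or code of any particular rope implementation):

    // Illustrative B-tree-style rope invariants, as described above.
    // The constants are invented for the example; real editors tune them.
    const MIN_CHILDREN: usize = 4; // applies to non-root internal nodes
    const MAX_CHILDREN: usize = 8;
    const MIN_LEAF: usize = 511;   // bytes of text per leaf
    const MAX_LEAF: usize = 1024;

    enum Node {
        Internal { children: Vec<Node> },
        Leaf { text: String },
    }

    // Checks the invariants for a non-root subtree. A freshly loaded tree
    // packs leaves and children near the maxima; after heavy editing they
    // drift toward the minima, but the height stays O(log n) either way.
    fn is_valid(node: &Node) -> bool {
        match node {
            Node::Leaf { text } => text.len() >= MIN_LEAF && text.len() <= MAX_LEAF,
            Node::Internal { children } => {
                children.len() >= MIN_CHILDREN
                    && children.len() <= MAX_CHILDREN
                    && children.iter().all(is_valid)
            }
        }
    }

    fn main() {
        let leaf = |n| Node::Leaf { text: "x".repeat(n) };
        let node = Node::Internal { children: (0..4).map(|_| leaf(600)).collect() };
        assert!(is_valid(&node));
    }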





