Yes, absolutely. I've since switched to rope-backed buffers, but I don't think the rope itself is adding much from a performance standpoint, even for very large files.

Big-O complexity gets a lot of attention in discussions like this, but modern machines are scarily good at copying around enormous linear buffers of data. Shifting even hundreds of megabytes of text might not even be visible in your profiles, if done right.

When benchmarking, I discovered that the `to_pos`/`to_coord` functions, which translate between buffer byte positions and screen coordinates, were by far the heaviest operation. I could have solved that problem entirely by maintaining a list of line offsets and binary-searching through it.
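To sketch what I mean (names and details are illustrative, not from my actual editor): keep a sorted vector of the byte offset at which each line starts, rebuild or patch it on edits, and then a position-to-coordinate lookup is a single binary search instead of a scan over the buffer.

```rust
/// Byte offsets at which each line starts (line 0 starts at offset 0).
fn line_offsets(text: &str) -> Vec<usize> {
    let mut offsets = vec![0];
    for (i, b) in text.bytes().enumerate() {
        if b == b'\n' {
            // The next line begins one past the newline byte.
            offsets.push(i + 1);
        }
    }
    offsets
}

/// Translate a byte position into (line, column-in-bytes) via binary
/// search: O(log n) per lookup instead of O(n) buffer scanning.
fn to_coord(offsets: &[usize], pos: usize) -> (usize, usize) {
    // partition_point returns the first index whose offset exceeds `pos`,
    // so the line containing `pos` is the one just before it.
    let line = offsets.partition_point(|&off| off <= pos) - 1;
    (line, pos - offsets[line])
}

fn main() {
    let text = "hello\nworld\nfoo";
    let offsets = line_offsets(text); // [0, 6, 12]
    assert_eq!(to_coord(&offsets, 0), (0, 0));
    assert_eq!(to_coord(&offsets, 7), (1, 1)); // the 'o' in "world"
    assert_eq!(to_coord(&offsets, 12), (2, 0));
    println!("ok");
}
```

A real editor would also have to handle tabs, multi-byte characters, and wrapping before these become screen coordinates, but the binary search removes the part that dominated my benchmarks.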

reply
That's true for code editing, but it's nice not to have to reach for a different tool when editing huge files. Sometimes I like to open up big log files, JSON test data, etc.
reply
I'm always surprised that even vim chokes on files with one massive line. Handling that case could be a useful optimization too.
reply
Do you actually edit big log files?
reply