Ask a bunch of coding agents and they will give you these two versions, which means it's likely that the LLMs have seen these way more often than the mmap version. Both Opus and GPT even pushed back when I asked for mmap; both said it would "add complexity".
I would have expected something like this:
- Scan the file serially.
- For each word, find and increment a hash table entry.
- Sort and print.
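The steps above might look something like the following in Python (a sketch of the general approach, not anyone's actual code; the tokenizer regex and function names are my own assumptions):

```python
import collections
import re

def word_counts(path):
    """Serial scan: one pass over the file, counting into a hash table."""
    counts = collections.Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            # Each unique word is copied out of the buffer once,
            # when it becomes a dict key; repeats just bump a counter.
            counts.update(re.findall(r"[a-z']+", line.lower()))
    return counts

def print_sorted(counts):
    # Sort by descending count, then alphabetically, and print.
    for word, n in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])):
        print(n, word)
```

The point is that only the unique words survive past the scan; the file's pages can be dropped as soon as the scan moves past them.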
Technically this does require slightly more memory, but only a tiny amount: a copy of each unique word, and if this is natural language there aren’t very many unique words. Meanwhile, OOP’s approach massively pressures the page cache once you get to the “print” step, which is going to be the bulk of the runtime.
It’s not even a full copy of each unique word, actually, because you’re trading it off against the size of the string references: a pointer plus a length is sixteen bytes minimum on a 64-bit system, and a lot of words are shorter than that.
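To put rough numbers on that tradeoff (assuming the sixteen bytes is an 8-byte pointer plus an 8-byte length, the usual 64-bit slice layout; the helper name is mine):

```python
# Keeping a (pointer, length) reference into the mapped file costs a fixed
# ~16 bytes per unique word; copying the word costs its own bytes instead.
REF_BYTES = 16  # assumed: 8-byte pointer + 8-byte length

def copy_cost(word):
    # Bytes to store a private copy of the word, plus a NUL terminator
    # as a C string would have.
    return len(word.encode("utf-8")) + 1

for w in ["of", "the", "and", "internationalization"]:
    cheaper = "copy" if copy_cost(w) < REF_BYTES else "reference"
    print(f"{w!r}: copy={copy_cost(w)}B vs ref={REF_BYTES}B -> {cheaper}")
```

Common short words come out cheaper as copies; only unusually long words would favor the reference.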
Handling that is, in my opinion, way more complex than letting the kernel figure it out via mmap. The kernel knows far more than you do about the underlying block devices, and you can use madvise with MADV_SEQUENTIAL to indicate that you will read the whole file sequentially. (That might free pages prematurely if you keep references into the mapped data rather than copying the first occurrence of each word, though, so it’s perhaps not ideal in this scenario.)
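For what it’s worth, the mmap-plus-madvise combination is a couple of lines. A sketch in Python, where `mmap.mmap.madvise` exists since 3.8 and `MADV_SEQUENTIAL` is exposed on platforms that support it (the function name and the newline-counting body are just placeholders for whatever scan you actually do):

```python
import mmap

def scan_mapped(path):
    """Map the file read-only and scan it front to back."""
    total = 0
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            if hasattr(mmap, "MADV_SEQUENTIAL"):
                # Tell the kernel we'll read sequentially, so it can
                # read ahead aggressively and drop pages behind us.
                m.madvise(mmap.MADV_SEQUENTIAL)
            chunk = 1 << 20  # scan in 1 MiB slices
            for start in range(0, len(m), chunk):
                total += m[start:start + chunk].count(b"\n")
    return total
```

The `hasattr` guard is there because `MADV_SEQUENTIAL` isn’t available on every platform; the kernel treats the hint as advisory either way.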