tr -s '[:space:]' '\n' < file.txt | sort | uniq -c | sort -rn
I’d like to know the memory profile of this. The bottleneck is obviously `sort`, which buffers everything in memory. So if we replace that with awk, using a hash map to count unique words, the in-memory data set is much smaller:
tr -s '[:space:]' '\n' < file.txt | awk '{c[$0]++} END{for(w in c) print c[w], w}' | sort -rn
I’m guessing this will beat Python and C++?
I look at memory profiles of normal apps and often think "what is burning that memory".
Modern compression works so well, so what's happening? Open your task manager and look through the apps, and you might ask yourself the same.
For example (let's ignore Chrome, MS Teams and all the other bloat): Sublime consumes 200MB. I have 4 text files open. What is it doing?
Chrome alone took YEARS to implement tab suspending, despite everyone being aware of the issue, and despite add-ons already existing that could do this.
I bought more ram just for chrome...
- 100MB 'image' (i.e. executable code: the executable itself plus all the OS libraries loaded)
- 40MB heap
- 50MB "mapped file", mostly fonts opened with mmap() or the windows equivalent
- 45MB stack (each thread gets 2MB)
- 40MB "shareable" (no idea)
- 5MB "unusable" (appears to be address space that's not usable because of fragmentation, not actual RAM)
Generally if something's using a lot of RAM, the answer will be bitmaps of various sorts: draw buffers, decompressed textures, fonts, other graphical assets, and so on. In this case it's just allocated but not yet used heap+stacks, plus 100MB for the code.
Edit: I may be underestimating the role of binary code size. Visual Studio "devenv.exe" is sitting at 2GB of 'image'. Zoom is 500MB. VSCode is 300MB. Much of which is app-specific, not just Windows DLLs.
But isn't it crazy how we throw out so much memory just because of random buffers? It feels wrong to me
There's a common noob complaint about "Linux using all my RAM!" where people are confused about the headline free/buffers numbers. If there's a reasonable chance data could be used again soon it's better to leave it in RAM; if the RAM is needed for something else, the current contents will get paged out. Having a chunk of RAM be genuinely unallocated to anything is doing nothing for you.
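The distinction is visible directly in /proc/meminfo. A minimal sketch (Linux-only, field names as documented in proc(5)):

# MemFree undercounts usable memory because the page cache (Cached) is
# reclaimable on demand; MemAvailable is the kernel's estimate of what
# could actually be handed out without swapping.
fields = {}
with open('/proc/meminfo') as f:
    for line in f:
        key, rest = line.split(':', 1)
        fields[key] = int(rest.split()[0])  # values are in kB

for key in ('MemTotal', 'MemFree', 'Cached', 'MemAvailable'):
    print('%-13s %10d kB' % (key + ':', fields[key]))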
The portions that are allocated but not yet used might just be page table entries with no backing memory, making them free. Except for the memory tracking the page table entries. Almost free....
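This is easy to see for yourself: map a large anonymous region and watch resident memory stay flat until the pages are actually touched. A minimal sketch (Linux-only; reads VmRSS from /proc/self/status):

import mmap

def rss_kb():
    # current resident set size in kB (Linux-specific)
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])

before = rss_kb()
buf = mmap.mmap(-1, 256 * 1024 * 1024)  # 256MB anonymous mapping
print('after mapping:  +%d kB resident' % (rss_kb() - before))   # ~0

for off in range(0, len(buf), 4096):
    buf[off:off + 1] = b'\x01'  # touch one byte per page
print('after touching: +%d kB resident' % (rss_kb() - before))   # ~262144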
A lot of "image" will be mmapped and clean. Anything you don't actually use from that will be similarly freeish. Anything that's constantly needed will use memory. Except if it's mapped into multiple processes, then it's needed but responsibility is spread out. How do you count an app's memory usage when there's a big chunk of code that needs to sit in RAM as long as any of a dozen processes are running? How do you count code that might be used sometime in the next few minutes or might not be depending on what the user does?
ASLR is not an obstacle -- the same exact code can be mapped into different base addresses in different processes, so they can be backed by the same actual memory.
As a corollary to this: I look at CPU utilization graphs. Programs are completely idle. "What is burning all that CPU?!"
I remember using a computer with RAM measured in two-digit amounts of MiB. CPU measured in low hundreds of MHz. It felt just as fast -- sometimes faster -- as modern computers. Where is all of that extra RAM being used?! Where is all of that extra performance going?! There's no need for it!
Yes, so do I. It was limited to 800x600x16 color mode or 320x200x256. A significant amount of memory gets consumed by graphical assets, especially in web browsers which tend to keep uncompressed copies of images around so they can blit them into position.
But a lot is wasted, often by routing things through single bottlenecks in the whole system. Antivirus programs. Global locks. Syncing to the filesystem at the wrong granularity. And so on.
Of course, some software other than desktop environments has seen important innovation, such as LSPs in IDEs, which avoid every IDE having to implement support for every language. And SSDs were truly revolutionary in hardware, in making computers feel faster. Modern GPUs can push a lot more advanced graphics as well in games. And so on. My point above was just about your basic desktop environment. Unless you use a tiling window manager (which I tried but never liked), nothing much has happened for a very long time. So just leave it alone please.
Add to that: unicode handling, support for bigger displays, mixed-DPI, networking and device discovery is much less of a faff, sound mixing is better, power management and sleep modes much improved. And some other things I'm forgetting.
I.e., in a JVM (Java) or .NET (C#) process, the garbage collector allocates some memory from the operating system and keeps reusing it as it finds free memory and the program needs it.
These systems are built with the assumption that RAM is cheap and CPU cycles aren't, so they are highly optimized CPU-wise, but otherwise are RAM inefficient.
> sublime consumes 200mb. I have 4 text files open. What is it doing?
To add to what others have said: depending on the platform, a good amount will be the system itself, various buffers and caches. If you have a folder open in the side bar, Sublime Text will track and index all the files in there. There's also no limit on the undo history, which is kept in RAM.
There's also the possibility that the 200MB includes the subprocesses, meaning the two Python plugin hosts and any processes your plugins spawn, which can include heavy LSP servers.
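One way to check that theory is to sum resident memory over the whole process tree. A hedged sketch using the third-party psutil package (note that summing RSS double-counts pages shared between the processes):

import psutil

def tree_rss_mb(pid):
    # resident set size of a process plus all of its descendants, in MB
    root = psutil.Process(pid)
    procs = [root] + root.children(recursive=True)
    return sum(p.memory_info().rss for p in procs) / (1024 * 1024)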
https://waspdev.com/articles/2025-11-04/some-software-bloat-...
Visual Studio runs the memory profiler in debug mode right from the start; it's the default configuration, and you need to disable it.
https://learn.microsoft.com/en-us/visualstudio/profiling/mem...
Huh? Sublime Text? I have like 100 files open and it uses 12MB. Sublime is extremely lean.
Do you have plugins installed?
Memory statistics say 200MB, with a past peak of 750MB (for whatever reason).
Edit: From what I can tell, Sublime is allocated 100MB of virtual memory even if it's only using about 10MB in practice.
Electron really loves to claim absurd amounts of memory, e.g. slack has claimed just over 1TB of virtual memory, but is only using just north of 200MB.
Contrast this with Rust, which had the benefit of being developed several decades later. Here Option and str (string views) were in the standard library from the beginning, and every library and application uses them as fundamental vocabulary types. Combined with good support for chaining and working with these types (e.g. Option has map() to transform the contained value if present and just pass None along otherwise).
Retrofitting is hard, and I have no doubt there will be new ideas that can't really be retrofitted well into Rust in another decade or two as well. Hopefully at that point something new will come along that learned from the mistakes of the past.
std::optional OTOH is also a bad example, because it is heavily opinionated, and baking it into the API across the standard library would have been the wrong choice.
I don't think I agree that Optional is opinionated. It is better to have an optional of something that can't be null (such as a reference) than to have everything be implicitly nullable (such as raw pointers). This means you have to care about the nullable case when it can happen, and only when it can happen.
There is a caveat for C++ though: optional<T&> is larger in memory than a raw pointer. Rust optimises this case to be the same size (one pointer) by noting that the zero value can never be valid, so it is a "niche" that can be used for something else, such as the None variant of the Option. Such niche optimisation applies widely across the language, to user-defined types as well. That would be impossible to retrofit on C++ without at the very least breaking ABI, and probably impossible even at the language level. Maybe it could be done on a type-by-type basis with an attribute to opt in.
The caveat is that niche optimizations are not perfectly portable; they can have edge cases. Strict portability is likely why the C++ standard makes niche optimization optional.
(And the possibility to implement whatever you want, ofc.)
Memory and storage are not "cheap" anymore. Power may also rise in cost
Under these conditions, memory usage and binary size are irrefutably relevant^1
To some, this might feel like going backwards in time toward the mainframe era. Another current HN item with over 100 points, "Hold on to your hardware", reflects on how consumer hardware may change as a result
To me, the past was a time of greater software efficiency; arguably this was necessitated by cost. Perhaps higher costs in the present and future could lead to better software quality. But whether today's programmers are up for the challenge is debatable. It's like young people in finance whose only experience is in a world with "zero" interest rates. It's easier to whine about lowering rates than to adapt
With the money and political support available to "AI" companies, the incentive for efficiency of any kind is lacking. Perhaps their "no limits" operations, e.g., their effects on supply, may provide an incentive for others' efficiency
1. As an underpowered computer user that compiles own OS and writes own simple programs, I've always rejected large binary size and excessive memory use, even in times of "abundance"
I wonder if frameworks like dotnet or JVM will introduce reference counting as a way to lower the RAM footprint?
This has negligible overhead in most cases. For instance, if the shared counter is already in some cache memory the overhead is smaller than a normal non-atomic access to the main memory. The intrinsic overhead of an atomic instruction is typically about the same as that of a simple memory access to data that is stored in the L3 cache memory, e.g. of the order of 10 nanoseconds at most.
Moreover, many memory allocators use separate per-core memory heaps, so they avoid any accesses to shared memory that need atomic instructions or locking, except on the rare occasions when they interact with the operating system.
This is such a problem that the JVM gives threads their own allocation pools to write to before flushing back to the main heap. All to reduce the number of atomic writes to the pointer tracking memory in the heap.
A lifetime system could possibly eliminate those, but it'd be hard to add to the JVM at this point. The JVM sort of has it in terms of escape analysis, but that's notoriously easy to defeat with pretty typical java code.
Swift routinely optimizes out reference count traffic.
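For a concrete reference point, CPython already uses reference counting as its primary strategy, and it reclaims objects the instant the last reference drops, no GC pause needed. A minimal sketch (CPython-specific behaviour):

import sys, weakref

class Blob:
    pass

b = Blob()
print(sys.getrefcount(b))  # 2: the local name plus getrefcount's argument
probe = weakref.ref(b)     # observe the object without keeping it alive
del b                      # refcount hits zero: freed immediately
print(probe())             # None -- the object is already gone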
It makes more sense for application developers to think about the unnecessary complexity that they add to software.
Go also does M&S and yet uses less memory. Why? Because go isn't compacting, it's instead calling malloc and free based on the results of each GC. This means that go has slower allocation and a bigger risk of memory fragmentation, but also it keeps the go memory usage reduced compared to the JVM.
> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.
I wish I knew the input size when attempting to estimate, but I suppose part of the challenge is also estimating the runtime's startup memory usage.
> Compute the result into a hash table whose keys are string views, not strings
If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM. Is that included in the memory usage figures?
Nonetheless, it's a nice optimization that the kernel chooses which hash table keys to keep hot.
The other perspective on this is that we sought out languages like Python/Ruby because the development cost was high, relative to the hardware. Hardware is now more expensive, but development costs are cheaper too.
The takeaway: expect more of a push towards efficiency!
At this point I'd make two observations:
- how big is the text file? I bet it's a megabyte, isn't it? Because the "naive" way to do it is to read the whole thing into memory.
- all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte. It gets more interesting when the file doesn't fit into RAM at all.
The state of the art here is: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times... , wherein our hero finds the terrible combination of putting the whole file in a single string and then running strlen() on it for every character.
I have to disagree. Bad performance is often a result of death by a thousand cuts. This function might be one among countless similarly inefficient library calls, programs and so on.
The edit in the article says ~1.5kb
Though I believe the “naive” streaming read could very well be superior here.
Not so much, because you only need some fraction of that memory when the program is actually running; the OS is free to evict it as soon as it needs the RAM for something else. Non-file-backed memory can only be evicted by swapping it out, and that's way more expensive.
import re, operator

def count_words(filename):
    with open(filename, 'rb') as fp:
        data = memoryview(fp.read())
    word_counts = {}
    # memoryview slices are zero-copy views into the file's bytes, and
    # views over immutable bytes are hashable, so they work as dict keys
    for match in re.finditer(br'\S+', data):
        word = data[match.start():match.end()]
        try:
            word_counts[word] += 1
        except KeyError:
            word_counts[word] = 1
    word_counts = sorted(word_counts.items(), key=operator.itemgetter(1), reverse=True)
    for word, count in word_counts:
        print(word.tobytes().decode(), count)
We could also use `mmap.mmap`. Note the Unicode caveat:

>>> 'x\u2009   a'.split()
['x', 'a']
# incorrect; in bytes mode, `\S` doesn't know about Unicode whitespace
>>> list(re.finditer(br'\S+', 'x\u2009   a'.encode()))
[<re.Match object; span=(0, 4), match=b'x\xe2\x80\x89'>, <re.Match object; span=(7, 8), match=b'a'>]
# correct, in Unicode mode
>>> list(re.finditer(r'\S+', 'x\u2009   a'))
[<re.Match object; span=(0, 1), match='x'>, <re.Match object; span=(5, 6), match='a'>]

import mmap, codecs
from collections import Counter

def word_count(filepath):
    freq = Counter()
    decode = codecs.getincrementaldecoder('utf-8')().decode
    tail = ''  # a word may straddle a chunk boundary; carry it over
    with open(filepath, 'rb') as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        for chunk in iter(lambda: mm.read(65536), b''):
            text = tail + decode(chunk)
            words = text.split()
            # hold back the last token unless the chunk ended on whitespace
            tail = words.pop() if words and not text[-1].isspace() else ''
            freq.update(words)
    freq.update((tail + decode(b'', final=True)).split())
    return freq

Ah, but I suppose the existing code hasn't avoided that anyway. (It's also creating regex match objects, but those get disposed each time through the loop.) I don't know that there's really a way around that. Given the file is barely a KB, I rather doubt that the illustrated techniques are going to move the needle.
In fact, it looks as though the entire data structure (whether a dict, Counter etc.) should be a relatively small part of the total reported memory usage. The rest seems to be internal Python stuff.
edit: OP's fully native C++ version using Pystd
I don't know if the implementation is written in a "low-level" way to be more accessible to users of other programming languages, but it can certainly be done more simply leveraging the standard library:
from collections import Counter
import sys
with open(sys.argv[1]) as f:
    words = Counter(word for line in f for word in line.split())
for word, count in words.most_common():
    print(count, word)
At the very least, manually creating a (count, word) list from the dict items and then sorting and reversing it in-place is ignoring common idioms. `sorted` creates a copy already, and it can be passed a sort key and an option to sort in reverse order. A pure dict version could be:

import sys

with open(sys.argv[1]) as f:
    counts = {}
    for line in f:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
stats = sorted(counts.items(), key=lambda item: item[1], reverse=True)
for word, count in stats:
    print(count, word)
(No, of course none of this is going to improve memory consumption meaningfully; maybe it's even worse, although intuitively I expect it to make very little difference either way. But I really feel like if you're going to pay the price for Python, you should get this kind of convenience out of it.)

Anyway, none of this is exactly revelatory. I was hoping we'd see some deeper investigation of what is actually being allocated. (Although I guess really the author's goal is to promote this Pystd project. It does look pretty neat.)
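For that deeper look, the stdlib's tracemalloc module is a reasonable starting point. A minimal sketch (word_count here stands in for any of the implementations above):

import tracemalloc

tracemalloc.start()
counts = word_count('file.txt')          # any of the versions above
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    print(stat)                          # top allocation sites by size

Note that it only sees allocations made through Python's allocator, so the constant interpreter overhead mentioned above won't show up in the statistics.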
Rust is high-level enough to still be fun for me (tokio gives me most of the concurrency goodies I like), but the memory usage is often like 1/10th or less compared to what I would write in Clojure.
Even though I love me some lisp, pretty much all my Clojure utilities are in Rust land now.
- since GC languages became prevalent, and maybe high-level programming in general, coders aren't as economical with their designs. Memory isn't something a coder should worry about, apparently.
- far more people code apps in web languages because they don't know anything else. These are anywhere from 5-10 levels of abstraction away from the metal, naturally inefficient.
- increasing scope... I can only describe this one by example: web browsers must implement all manner of standards etc., so it's become a mammoth task, especially compared to the 90s. Same for compilers, OSes; heck, even computers themselves were all one-man jobs at some point, because things were simpler, because we knew less.
But it's not necessarily an apples to apples comparison. It's not unfair to python because of the runtime overhead. It's unfair because it's a different algorithm with fundamentally different memory characteristics.
A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count. For most people that would be the first/naive approach as well when they programmed something like this I think. And it would showcase what the actual overhead of the python version is.
Wouldn't memory mapping the data in Python be the more fair comparison? If the language doesn't support that, then this seems to absolutely be a fair comparison.
> For most people that would be the first/naive approach as well when they programmed something like this I think.
I disagree; my mind immediately goes to mmap when I have to deal with a single file that I have to read in its entirety. I think the non-obvious solution here is rather io_uring (which I would expect to be faster when dealing with lots of small files, as you can load them asynchronously and concurrently from the file system).
Ask a bunch of coding agents and they will give you these two versions, which means it's likely that the LLMs have seen these way more often than the mmap version. Both Opus and GPT even pushed back when I asked for mmap; both said it would "add complexity".
I would have expected something like this:
- Scan the file serially.
- For each word, find and increment a hash table entry.
- Sort and print.
In theory, technically, this does require slightly more memory—but it’s a tiny amount more; just a copy of each unique word, and if this is natural language then there aren’t very many. Meanwhile, OOP’s approach massively pressures the page cache once you get to the “print” step, which is going to be the bulk of the runtime.
It’s not even a full copy of each unique word, actually, because you’re trading it off against the size of the string pointers. That’s… sixteen bytes minimum. A lot of words are smaller than that.
Handling that is in my opinion way more complex than letting the kernel figure it out via mmap. The kernel knows way more than you about the underlying block devices, and you can use madvise with MADV_SEQUENTIAL to indicate that you will read the whole file sequentially. (That might free pages prematurely if you keep references into the data rather than copying the first occurrence of each word, though, so perhaps not ideal in this scenario.)
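In Python the same hint is available via mmap.madvise (3.8+). A minimal sketch, assuming a platform that defines MADV_SEQUENTIAL (Linux/BSD):

import mmap

with open('file.txt', 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # tell the kernel we'll read front-to-back, so it can read
        # ahead aggressively and drop pages behind us
        if hasattr(mmap, 'MADV_SEQUENTIAL'):
            mm.madvise(mmap.MADV_SEQUENTIAL)
        for chunk in iter(lambda: mm.read(1 << 20), b''):
            pass  # ...process each 1MB chunk sequentially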
The C++ code is still building a tally by incrementing keys of a hash map one at a time, and then dumping (reversed) key/value pairs out into a list and sorting. The file is small and the Python code is GCing the `line` each time through the outer loop. At any rate it seems like a big chunk of the Python memory usage is just constant (sort of; stuff also gets lazily loaded) overhead of the Python runtime, so.
Nice post.
(P.S. I'm also Finnish)
import sys
from collections import Counter

stats = Counter(x for l in open(sys.argv[1]) for x in l.split())
The ultimate bittersweet revenge would be to run our algorithms inside the RAM owned by these cloud companies. Should be possible using free accounts.
If you just mean they come across as annoyed by AI, that's true, but that's also way too wide of a category to infer basically anything else about them.
I agree they are stealing it but I also see the benefit of it for society and for myself.
Suckerberg downloaded terabytes of books for training, while people around me got sued to hell 20 years ago for downloading one mp3 file.
And Zuck isn't sued for downloading either; he is sued for the AI's reproduction not being derivative enough, but so far all branches of government support that.
FB and co. are CIA fronts and they can do anything they please. Until they go up against Disney and the lobbying giants; and if some CIA idiot tries to sue/bribe/blackmail them, they can order Hollywood to rot their image to pieces over all the wars they promoted in the Middle East and Latin America just to fill the wallets of CEOs. That, plus some social-critique movie about FB grabbing illegal user data all over the world to deny insurance and whatnot. And OFC with a clear mention of the Epstein case and related people, just in case the Americans forgot about it.
Then the US industrial and military complex would collapse in months, with brainwashed kids running away from the army. Not to mention the Call of Duty franchise and the like. It would be the end of Boeing and several more, of course. To hell with profit-driven wars for nothing.
Ah, yes, AIPAC lobbies and the like. Good luck taming right-wing wackos who hate the MAGA cult more than the 'woke' people themselves. They will be the first ones against you after sinking the US image for decades, even more than the illegal Iraq war with no WMDs and the Bush/Cheney mafia.
The outcome of this? Proper and serious engineering a la Airbus. Profit-driven MBA and war sickos instantly kicked out. And OFC the AI snake-oil sellers too, except for classical AI/NN applied to concrete cases (image detection and the like); those will survive fine, even thrive, because those jobs are highly specific and they are not statistical text parrots. They can deliver guaranteed results, unlike LLMs, which are prone to degrade because the human-generated content feeding them needs to be continuous, while for tumour detection a big enough sample can cover 99% of the cases.
R&D on electric vehicles/energy and nuclear power like nowhere else. And, for sure, the EV equivalent of a Ford Model T for Americans: a cheap and reliable one, good enough for the common Joe/Mary without being a luxury item. A new Golden Age would rise, for sure. But the oil mafia will fight them like crazy.
A fundamentally anti-civilisational mindset.
It's a little bit hypocritical, which often enough ends in realism, aka "okay, we clearly can't fight their copyright infringements because they are too powerful and too rich, but at least we can use the good side of it".
Nothing, btw, forces all of this to happen THAT fast besides capitalism. We could slow down; we could do it better or more right.
What LLMs are NOT is intelligent in the same way as a human, which is to say they are not "AGI". They may be loosely AGI-equivalent for certain tasks, software development being the poster child. LLMs have no equivalent of "judgement", and they lie ("hallucinate") with impunity if they don't know the answer. Even with coding, they'll often do the wrong thing, such as writing tests that don't test anything.
It seems likely that LLMs will be one component of a truly conscious AI (AGI+), in the same way our subconscious facility to form sentences is part of our intelligence. We'll see how quickly the other pieces arrive, if ever.
Now, I don't believe AI will ever amount to enough to be a critical threat to human life, you know, beyond the immense amounts of wasted energy they propose to convert into something more useful, like a market crash or heat and noise, or both.
Not sure how you can call someone opposed to any of that "anti-civilisational" matter-of-factly.
native to what? how is c++ more native than python?
I would consider all of C, C++, Zig, Rust, Fortran etc to produce native binaries. While things like Cython exist, that wasn't what was used here (and for various reasons would likely still have more overhead than those I mentioned).
delaying comp sci differentiation for a few months
I wonder if assembly based solutions will become in vogue