```python
import mmap
import codecs
from collections import Counter

def word_count(filepath):
    freq = Counter()
    decode = codecs.getincrementaldecoder('utf-8')().decode
    tail = ''  # carries a word that may straddle a chunk boundary
    with open(filepath, 'rb') as f, \
            mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        for chunk in iter(lambda: mm.read(65536), b''):
            buf = tail + decode(chunk)
            words = buf.split()
            # if the chunk ends mid-word, hold the last token for the next pass
            if buf and not buf[-1].isspace() and words:
                tail = words.pop()
            else:
                tail = ''
            freq.update(words)
        # flush the decoder and count whatever word was still pending
        freq.update((tail + decode(b'', final=True)).split())
    return freq
```

Ah, but I suppose the existing code hasn't avoided that anyway. (It's also creating regex match objects, but those get disposed on each pass through the loop.) I don't know that there's really a way around that. Given that the file is barely a kilobyte, I rather doubt the illustrated techniques are going to move the needle.
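The incremental decoder matters because a fixed-size `read` can land in the middle of a multi-byte UTF-8 sequence, where a plain `bytes.decode` would raise `UnicodeDecodeError`. A minimal demonstration (the sample string is purely illustrative):

```python
import codecs

data = 'héllo wörld'.encode('utf-8')
# split the byte stream in the middle of the two-byte 'é' sequence
chunks = [data[:2], data[2:]]

decode = codecs.getincrementaldecoder('utf-8')().decode
# the decoder buffers the dangling lead byte until its continuation arrives
out = ''.join(decode(c) for c in chunks) + decode(b'', final=True)
print(out)  # the full text round-trips despite the mid-character split
```

The `final=True` call at the end is what surfaces a truncated sequence at EOF as an error instead of silently dropping bytes.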
In fact, it looks as though the entire data structure (whether a dict, a Counter, etc.) should be a relatively small part of the total reported memory usage. The rest seems to be internal Python overhead.
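One way to check that attribution is `tracemalloc`, which traces only Python-level allocations and so separates the data structure's share from the process's overall footprint. A rough sketch (the sample text is illustrative, and the numbers will vary by platform and Python version):

```python
import tracemalloc
from collections import Counter

tracemalloc.start()
# build the structure under tracing so only its allocations are counted
freq = Counter('the quick brown fox jumps over the lazy dog'.split())
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f'unique words: {len(freq)}, traced bytes: {current} (peak {peak})')
```

Comparing `current` here against the process RSS reported by the OS shows how little of the total is the Counter itself.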