Discussion
Nibble Stew
griffindor: Nice!

> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

I wish I knew the input size when attempting to estimate, but I suppose part of the challenge is also estimating the runtime's startup memory usage.

> Compute the result into a hash table whose keys are string views, not strings

If the file is mmap'd, and the string views point into it, presumably decent performance depends on the page cache keeping those strings in RAM. Is that included in the memory usage figures? Nonetheless, it's a nice optimization that the kernel chooses which hash table keys to keep hot.

The other perspective on this is that we sought out languages like Python/Ruby because development costs were high relative to the hardware. Hardware is now more expensive, but development costs are cheaper too.

The takeaway: expect more push towards efficiency!
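One way to probe the page-cache question empirically (a rough sketch, not from the article; note `ru_maxrss` is reported in KiB on Linux but bytes on macOS): pages of an mmap'd file do count toward the process's resident set once they are touched, so a peak-RSS-based memory figure would include the mapped strings.

```python
import mmap
import resource

def peak_rss():
    # Peak resident set size of this process (KiB on Linux, bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def fault_in(path):
    """Touch every page of an mmap'd file; return peak RSS before and after."""
    before = peak_rss()
    with open(path, 'rb') as fp:
        with mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # Reading one byte per 4 KiB page faults the whole file into RAM.
            checksum = sum(mm[i] for i in range(0, len(mm), 4096))
    return before, peak_rss()
```

On Linux the second number should grow by roughly the file size for a file larger than the interpreter's baseline footprint, which is exactly the accounting question raised above.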
est: I think the py version can be shortened to:

    import sys
    from collections import Counter

    stats = Counter(x for l in open(sys.argv[1]) for x in l.split())
voidUpdate: Would that decrease memory usage though?
biorach: "copyright infringement factories"
tzot: Well, we can use memoryview for the dict generation, avoiding the creation of string objects until the time for the output:

    import re, operator

    def count_words(filename):
        with open(filename, 'rb') as fp:
            data = memoryview(fp.read())
        word_counts = {}
        for match in re.finditer(br'\S+', data):
            word = data[match.start():match.end()]
            try:
                word_counts[word] += 1
            except KeyError:
                word_counts[word] = 1
        word_counts = sorted(word_counts.items(), key=operator.itemgetter(1), reverse=True)
        for word, count in word_counts:
            print(word.tobytes().decode(), count)

We could also use `mmap.mmap`.
fix4fun: Digression: nowadays, when RAM is expensive, good old zram is gaining popularity ;) Check the trend on trends.google.com: since 2025-09, searches for it have doubled ;)