4/12/2026 at 6:40:24 AM
A “simplest” hash function is completely dependent on what you are using the hash function for and the guarantees you want it to make. An optimal permutation of an integer is different from a small string hash, which is different from a block checksum. You are literally optimizing the algorithm for entirely different properties, and no single algorithm can satisfy all of them even approximately.
The full scope of things hash functions are commonly used for requires at least four algorithms if you care about performance and optimality. It is disconcertingly common to see developers using hash algorithms in contexts where they are not fit for purpose. Gotta pick the right tool for the job.
by jandrewrogers
4/12/2026 at 7:52:37 AM
You are right! For example, when you know that your input is uniformly randomly distributed, truncation is a perfectly good hash function. (And a special case of truncation is the identity function.)
The above condition might sound too strong to be practical, but it is satisfied when you are e.g. dealing with UUIDs.
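A minimal sketch of truncation as a hash, assuming input bytes that are (close to) uniformly random, such as a UUIDv4; the function name here is made up for illustration:

```python
import uuid

def trunc_hash(data: bytes, n_bytes: int = 8) -> int:
    # Keep only the first n_bytes and read them as a big-endian
    # integer. This is only a reasonable hash when the input is
    # already well mixed (e.g. random UUID bytes); on structured
    # input the prefix would collide constantly.
    return int.from_bytes(data[:n_bytes], "big")

# A 64-bit hash taken straight from a random UUID's first 8 bytes:
h = trunc_hash(uuid.uuid4().bytes)
```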
Another interesting hash function: length. See https://news.ycombinator.com/item?id=6919216 for a bad example. For a good example: consider rmlint and other file system deduplicators.
These deduplicators scan your filesystem for duplicates (amongst other things). You don't want to compare every file against every other file, so as a first optimisation you compare files only by some hash. But conventional hashes like SHA-256 or CRC take O(n) to compute. So you compute cheaper hashes first, even if they are weaker: truncation, i.e. only looking at the first few bytes, is very cheap, and determining the length of a file is even cheaper.
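The staged filtering described above can be sketched like this (a simplified illustration of the rmlint-style approach; the function and parameter names are made up):

```python
import os
from collections import defaultdict

def find_duplicate_candidates(paths, prefix_len=4096):
    """Group files by progressively more expensive keys:
    1. file length (one stat call, no file reads),
    2. the first prefix_len bytes (one small read),
    so only the surviving small groups need a full-content hash.
    """
    by_size = defaultdict(list)
    for p in paths:
        by_size[os.path.getsize(p)].append(p)

    candidates = []
    for group in by_size.values():
        if len(group) < 2:
            continue  # a unique length cannot be a duplicate
        by_prefix = defaultdict(list)
        for p in group:
            with open(p, "rb") as f:
                by_prefix[f.read(prefix_len)].append(p)
        candidates.extend(g for g in by_prefix.values() if len(g) > 1)
    return candidates
```

Each stage is strictly cheaper than the next, so most non-duplicates are eliminated before any expensive hashing or full reads happen.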
by eru
4/12/2026 at 9:54:39 AM
Now I'm no expert in the matter, but the fs deduplicators I've seen were block-based, not file-based. Those clearly cannot use the file length, as they are blissfully unaware of files (or any structure, for that matter). Those use a rather expensive hash function (you really want to avoid hash collisions), but (at least some ten years ago) memory, not processing speed, was the limiting factor.
by guenthert
4/12/2026 at 10:26:26 AM
https://github.com/sahib/rmlint is the one I had in mind.
> Those use a rather expensive hash function (you really want to avoid hash collisions), [...]
Then we are clearly not thinking of the same kind of software.
> but (at least some ten years ago) memory, not processing speed, was the limiting factor.
In what I described, IO is the limiting factor. You want to avoid having to read the whole file, if you can.
I think you are thinking of block level online deduplicators that are integrated into the file system?
by eru
4/12/2026 at 10:39:32 AM
> https://github.com/sahib/rmlint is the one I had in mind.
Ah, right, thanks. I now dimly recall some old project realizing fs snapshots using hard links, which one could consider a sort of deduplication as well.
> I think you are thinking of block level online deduplicators that are integrated into the file system?
Indeed, I was.
by guenthert
4/12/2026 at 8:27:10 AM
This comment feels about half as long as it ought to be. Can you say more?
by andai