Final edit: This ambiguity is documented at least as far back as 1984, by IBM, the pre-eminent computer company of the time.
In 1972 IBM started selling the IBM 3333 magnetic disk drive. This product catalog [0] from 1979 shows them marketing the corresponding disks as "100 million bytes" or "200 million bytes" (3336 mdl 1 and 3336 mdl 11, respectively). By 1984, those same disks were marketed in the "IBM Input/Output Device Summary" [1] (which was intended for a customer audience) as "100MB" and "200MB".
0: (PDF page 281) "IBM 3330 DISK STORAGE" http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
1: (PDF page 38, labeled page 2-7, Fig 2-4) http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
Also, hats off to http://electronicsandbooks.com/ for keeping such incredible records available for the internet to browse.
-------
Edit: The below is wrong. Older experience has corrected me - there has always been ambiguity (perhaps bifurcated between CPU/OS and storage domains). "And that with such great confidence!", indeed.
-------
The article presents wishful thinking. The wish is for "kilobyte" to have one meaning. For the majority of its existence, it had only one meaning: 1024 bytes. Now it has an ambiguous meaning. People wish for an unambiguous term for 1000 bytes, but that word does not exist. People also might wish that others use kibibyte any time they reference 1024 bytes, but that is also wishful thinking.
The author's wishful thinking is falsely presented as fact.
I think kilobyte was the wrong word to ever use for 1024 bytes, and I'd love to go back in time to tell computer scientists that they needed to invent a new prefix to mean "1,024" / "2^10" of something, which kilo- never meant before kilobit / kilobyte were invented. Kibi- is fine, the phonetics sound slightly silly to native English speakers, but the 'bi' indicates binary and I think that's reasonable.
I'm just not going to fool myself with wishful thinking. If, in arrogance or self-righteousness, one simply assumes that every time they see "kilobyte" it means 1,000 bytes, then they will make many, many mistakes. We will always have to take care to verify whether "kilobyte" means 1,000 or 1,024 bytes before implementing something which relies on that for correctness.
2/3/2026 at 5:51:37 PM
You've got it exactly the wrong way around. And that with such great confidence! There was always confusion about whether a kilobyte was 1000 or 1024 bytes. Early diskettes always used 1000; only when the 8 bit home computer era started was the 1024 convention firmly established.
Before that it made no sense to talk about kilo as 1024. Earlier computers measured space in records and words, and I guess you can see how in 1960, no one would use kilo to mean 1024 for a 13 bit computer with 40 byte records. A kiloword was, naturally, 1000 words, so why would a kilobyte be 1024?
1024 being near ubiquitous was only the case in the 90s or so - except for drive manufacturing and signal processing. Binary prefixes didn't invent the confusion, they were a partial solution. As you point out, while it's possible to clearly indicate binary prefixes, we have no unambiguous notation for decimal bytes.
by cedilla, 2/3/2026 at 6:00:58 PM
> Early diskettes always used 1000
Even worse, the 3.5" HD floppy disk format used a confusing combination of the two. Its true capacity (when formatted as FAT12) is 1,474,560 bytes. Divide that by 1024 and you get 1440KB; divide that again by 1000 and you get the oft-quoted (and often printed on the disk itself) "1.44MB", which is inaccurate no matter how you look at it.
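For reference, here is that arithmetic as a quick sketch (the 80 x 2 x 18 x 512 geometry is the usual layout for this format, not something stated above):

    #include <stdio.h>

    int main(void) {
        /* Formatted capacity of a 3.5" HD floppy:
           80 cylinders x 2 heads x 18 sectors x 512 bytes. */
        long bytes = 80L * 2 * 18 * 512;                       /* 1,474,560 */
        printf("capacity:    %ld bytes\n", bytes);
        printf("binary KB:   %ld\n", bytes / 1024);            /* 1440 */
        printf("marketed MB: %.2f\n", bytes / 1024.0 / 1000);  /* 1.44 */
        printf("true MiB:    %.2f\n", bytes / 1048576.0);      /* ~1.41 */
        printf("decimal MB:  %.2f\n", bytes / 1000000.0);      /* ~1.47 */
        return 0;
    }

The "1.44MB" figure is the only one that mixes a binary divisor with a decimal one.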
by Sophira, 2/3/2026 at 6:28:22 PM
I'm not seeing evidence for a 1970s 1000-byte kilobyte. Wikipedia's floppy disk page mentions the IBM Diskette 1 at 242944 bytes (a multiple of 256), and then 5¼-inch disks at 368640 bytes and 1228800 bytes, both multiples of 1024. These are sector sizes. Nobody had a 1000-byte sector, I'll assert.
by card_zero, 2/3/2026 at 6:49:25 PM
The wiki page agrees with the parent: "The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB or 1.47 MB) disk drive, which would become the most popular, first shipped in 1986"
by dooglius, 2/3/2026 at 7:13:40 PM
To make things even more confusing, the high-density floppy introduced on the Amiga 3000 stored 1760 KiB.
by bcrl, 2/3/2026 at 8:38:39 PM
At least there it stored exactly 3,520 512-byte sectors, or 1,760 KB. They didn't describe them as 1.76MB floppies.
by kstrauser, 2/3/2026 at 6:44:44 PM
Human history is full of cases where silly mistakes became precedent. HTTP "referer" is just another example. I wonder if there's a Wikipedia article listing these...
by publicdebates, 2/3/2026 at 7:33:21 PM
It's "referer" in the HTTP standard, but "referrer" when correctly spelled in English. https://en.wikipedia.org/wiki/HTTP_referer
by nayuki, 2/3/2026 at 6:54:54 PM
It's way older than the 1990s! In computing, "K" always meant 1024, at least from the 1970s. Example: in 1972, the DEC PDP-11/40 handbook [0] said on its first page: "16-bit word (two 8-bit bytes), direct addressing of 32K 16-bit words or 64K 8-bit bytes (K = 1024)". Same with Intel - in 1977 [1], they proudly said "Static 1K RAMs" on the first page.
[0] https://pdos.csail.mit.edu/6.828/2005/readings/pdp11-40.pdf
[1] https://deramp.com/downloads/mfe_archive/050-Component%20Spe...
by theamk, 2/3/2026 at 7:47:05 PM
It was exactly this - and nobody cared until the disks (the only thing that used decimal K) started getting so big that it was noticeable. With a 64K system you're talking 1,536 "extra" bytes of memory - or 1,536 bytes of memory lost when transferring to disk. But once hard drives started hitting about a gigabyte, everyone started noticing and howling.
by bombcar, 2/3/2026 at 6:08:06 PM
It was earlier than the 90s, and came with popular 8-bit CPUs in the 80s. The Z-80 microprocessor could address 64kb (which was 65,536 bytes) on its 16-bit address bus. Similarly, the 4104 chip was a "4kb x 1 bit" RAM chip and stored 4096 bits. You'd see this in the whole 41xx series, and beyond.
by angst_ridden, 2/3/2026 at 6:41:32 PM
> The Z-80 microprocessor could address 64kb (which was 65,536 bytes) on its 16-bit address bus.
I was going to say that what it could address and what they called what it could address is an important distinction, but found this fun ad from 1976 [1].
"16K Bytes of RAM Memory, expandable to 60K Bytes", "4K Bytes of ROM/RAM Monitor software", seems pretty unambiguous that you're correct.
Interestingly, Wikipedia at least implies the IBM System 360 popularized the base-2 prefixes [2], citing their 1964 documentation, but I can't find any such use in the main core storage doc they cite [3]. Amusingly, the only use of "kb" I can find in the PDF is for data rate off magnetic tape, which is explicitly defined as "kb = thousands of bytes per second", and the only reference to "kilo-" is for "kilobaud", which would have again been base-10. If we give them the benefit of the doubt on this, presumably it was from later System 360 publications where they would have had enough storage to need prefixes to describe it.
[1] https://commons.wikimedia.org/wiki/File:Zilog_Z-80_Microproc...
[2] https://en.wikipedia.org/wiki/Byte#Units_based_on_powers_of_...
[3] http://www.bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-...
by magicalist, 2/3/2026 at 6:29:03 PM
Even then it was not universal. For example, that Apple I ad that got posted a few days ago mentioned that "the system is expandable to 65K".
https://upload.wikimedia.org/wikipedia/commons/4/48/Apple_1_...
by pdw, 2/3/2026 at 8:39:59 PM
Someone here the other day said that it could accept 64KB of RAM plus 1KB of ROM, for 65KB total memory. I don't know if that's correct, but at least it'd explain the mismatch.
by kstrauser, 2/3/2026 at 7:04:54 PM
Seems like a typo given that the ad contains many mentions of K (8K, 32K) and they're all of the 1024 variety.
by wvenable, 2/3/2026 at 7:11:03 PM
If you're using base 10, you can still get "8K" and "32K" by dividing by 1000 and rounding down. The 1024/1000 distinction only becomes significant at 65536.
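A tiny sketch of that rounding argument (the byte counts here are just the standard power-of-two sizes, picked for illustration):

    #include <stdio.h>

    int main(void) {
        /* The "K" label each convention produces for common memory sizes. */
        long sizes[] = {8192, 32768, 65536};
        for (int i = 0; i < 3; i++)
            printf("%5ld bytes -> %2ldK binary, %2ldK decimal (rounded down)\n",
                   sizes[i], sizes[i] / 1024, sizes[i] / 1000);
        return 0;
    }

Only the last line differs: 64K under the binary reading, 65K under the decimal one.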
by duskwuff, 2/3/2026 at 7:16:10 PM
Still, the advertisement is filled with details like the number of chips, the number of pins, etc. If you're dealing with chips and pins, it's always going to be base-2.
by wvenable, 2/3/2026 at 7:18:57 PM
> only when the 8 bit home computer era started was the 1024 convention firmly established
That's the microcomputer era that has defined the vast majority of our relationship with computers.
IMO, having lived through this era, the only people pushing 1,000 byte kilobytes were storage manufacturers, because it allows them to bump their numbers up.
https://www.latimes.com/archives/la-xpm-2007-nov-03-fi-seaga...
by snozolli, 2/3/2026 at 7:00:38 PM
> 1024 being near ubiquitous was only the case in the 90s or so
More like late 60s. In fact, in the 70s and 80s, I remember the storage vendors being excoriated for "lying" by following the SI standard.
There were two proposals to fix things in the late 60s, by Donald Morrison and Donald Knuth. Neither was accepted.
Another article suggesting we just roll over and accept the decimal versions is here:
https://cacm.acm.org/opinion/si-and-binary-prefixes-clearing...
This article helpfully explains that decimal KB has been "standard" since the very late 90s.
But when such an august personality as Donald Knuth declares the proposal DOA, I have no heartburn using binary KB.
https://www-cs-faculty.stanford.edu/~knuth/news99.html
by zephen, 2/3/2026 at 8:24:19 PM
It goes back way further than that. The first IBM harddrive was the IBM 350 for the IBM 305 RAMDAC. It was 5 million characters. Not bytes, bytes weren't "a thing" yet. 5,000,000 characters. The very first harddrive was base-10.
Here's my theory. In the beginning, everything was base-10. Because humans.
Binary addressing made sense for RAM, especially since it makes decoding address lines into chip selects (or slabs of core, or whatever) a piece of cake; having chips be a round number in binary made life easier for everyone.
Then early DOS systems (CP/M comes to mind particularly) mapped disk sectors to RAM regions, so to enable this shortcut, disk sectors became RAM-shaped. The 512-byte sector was born. File sizes can be written in bytes, but what actually matters is how many sectors they take up. So file sizing inherited this shortcut.
But these shortcuts never affected "real computers", only the hamstrung crap people were running at home.
So today we have multiple ecosystems. Some born out of real computers, some with a heavy DOS inheritance. Some of us were taught DOS's limitations as truth, and some of us weren't.
by soneil, 2/4/2026 at 12:26:42 AM
RAMAC, not RAMDAC: https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_d...
However, it doesn't seem to be divided into sectors at all; each track is more like a loop of magnetic tape. In that context it makes a bit more sense to use decimal units, measuring in bits per second like for serial comms.
Or maybe there were some extra characters used for ECC? 5 million / 100 / 100 = 500 characters per track, leaves 72 bits over for that purpose if the actual size was 512.
The first floppy disks - also from IBM - had 128-byte sectors. IIRC, that size was chosen because it was the smallest power of two that could store an 80-column line of text (made standard by IBM punched cards).
Disk controllers need to know how many bytes to read for each sector, and the easiest way to do this is by detecting overflow of an n-bit counter. Comparing with 80 or 100 would take more circuitry.
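Very roughly, that counter point in (toy) software form - in a real controller this would be logic gates rather than code:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* 128-byte sector: a 7-bit counter wraps back to zero exactly at the
           end of the sector, so the carry-out is the end-of-sector signal --
           no comparator needed. */
        uint8_t count7 = 0;
        int bytes = 0;
        do {
            bytes++;
            count7 = (count7 + 1) & 0x7F;   /* 7-bit wraparound */
        } while (count7 != 0);
        printf("power-of-two sector: %d bytes\n", bytes);    /* 128 */

        /* 100-byte sector: the count has to be compared against the literal
           value 100, which in hardware means extra circuitry. */
        int count = 0;
        while (count < 100)
            count++;
        printf("decimal sector:      %d bytes\n", count);    /* 100 */
        return 0;
    }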
by rep_lodsb, 2/3/2026 at 9:03:17 PM
Almost all computers have used power-of-2 sized sectors. The alternative would involve wasted bits (e.g. you can't store as much information in 256 1000-byte units as 256 1024-byte units, so you lose address space) or having to write multiplies and divides and modulos in filesystem code running on machines that don't have opcodes for any of those. You can get away with those on machines with 64 bit address spaces and TFLOPs of math capacity. You can't on anything older or smaller.
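A minimal sketch of what that means in filesystem code (the function names are made up for illustration): with 512-byte sectors, turning a byte offset into a (sector, offset) pair is a shift and a mask, while 1000-byte sectors need a real divide and modulo.

    #include <stdint.h>
    #include <stdio.h>

    /* 512-byte sectors: shift and mask only -- cheap on any CPU. */
    static void split_pow2(uint32_t byte_off, uint32_t *sector, uint32_t *off) {
        *sector = byte_off >> 9;       /* divide by 512 */
        *off    = byte_off & 0x1FF;    /* modulo 512 */
    }

    /* 1000-byte sectors: needs divide/modulo instructions, or a slow
       software routine on CPUs that lack them. */
    static void split_dec(uint32_t byte_off, uint32_t *sector, uint32_t *off) {
        *sector = byte_off / 1000;
        *off    = byte_off % 1000;
    }

    int main(void) {
        uint32_t s, o;
        split_pow2(1474560, &s, &o);
        printf("512-byte sectors:  sector %u, offset %u\n", s, o);  /* 2880, 0 */
        split_dec(1474560, &s, &o);
        printf("1000-byte sectors: sector %u, offset %u\n", s, o);  /* 1474, 560 */
        return 0;
    }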
by kstrauser