4/20/2026 at 8:41:05 AM
> In ancient times, floating point numbers were stored in 32 bits.

This was true only for cheap computers, typically after the mid sixties.
Most of the earliest computers with vacuum tubes used longer floating-point number formats, e.g. 48-bit, 60-bit or even weird sizes like 57-bit.
The 32-bit size has never been acceptable in scientific computing with complex computations, where rounding errors accumulate. The early computers with floating-point hardware were oriented toward scientific/technical computing, so larger number sizes were preferred. Computers oriented toward business applications usually preferred fixed-point numbers.
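As a modern illustration of that accumulation (a minimal Python sketch, not period-accurate code; the `to_f32` helper is my own stand-in for 32-bit hardware, rounding each intermediate result through `struct`):

```python
import struct

def to_f32(x):
    # Round a Python float (64-bit) to the nearest 32-bit float
    # by packing and unpacking it as a single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 a million times in both precisions.
n = 1_000_000
s32 = 0.0
s64 = 0.0
for _ in range(n):
    s32 = to_f32(s32 + to_f32(0.1))  # every intermediate rounded to 32 bits
    s64 = s64 + 0.1                  # 64-bit throughout

print(s64)  # stays extremely close to the exact sum, 100000
print(s32)  # drifts visibly away from 100000
```

The exact sum is 100000; the 64-bit accumulator lands within a tiny fraction of it, while the 32-bit accumulator ends up off by a large margin, because once the running sum is large each added 0.1 loses most of its bits to rounding.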
The IBM System/360 family definitively established the 32-bit single-precision and 64-bit double-precision sizes. 32-bit is adequate for input and output data, and it can be sufficient for intermediate values when the input data passes through only a few computations; otherwise double precision must be used.
by adrian_b
4/20/2026 at 12:58:54 PM
You are totally correct but I need you to recognize that "in ancient times" includes the 1990s. I am...very sorry to be the one delivering this news. It was not a pleasant realization for me, either.
by adampunk
4/20/2026 at 1:27:38 PM
A few years after 1980, especially after 1985, computers with coprocessors like the Intel 8087 or Motorola 68881 became the most numerous computers with floating-point hardware, and for them the default FP size was 80-bit. So the 1990s were long after the time when 32-bit FP numbers were normal. FP32 was revived only by GPUs, for graphics applications where precision matters much less.
As early as 1974, the C programming language made double precision the default FP size, not the 32-bit single-precision size, for the same reason that the Intel 8087 later introduced extended precision. Single-precision computations for traditional applications are suitable only for experts, not for ordinary computer users.
While before C the default size in programming languages was single-precision 32-bit, the recommendation was already to use double precision wherever complicated expressions were computed.
I started using computers by punching cards for a mainframe, but that was already at a time when 32-bit FP numbers were not in normal use, only 64-bit FP numbers.
The best chance of seeing 32-bit single-precision numbers in use was in the decade from 1965 to 1975, among users of cheap mainframes or of minicomputers without hardware floating-point units, where floating point was emulated in software and emulating double precision was significantly slower.
Before the mid sixties, you were more likely to see 36-bit floating-point numbers as the smallest FP size.
by adrian_b