2/14/2026 at 6:03:21 PM
Author here for all your 8087 questions...
by kens
2/14/2026 at 7:36:09 PM
Ken,
Way back (circa 1988) I remember a digital logic professor giving a little aside on the 8087, remarking at the time that it used some three-value logic circuits (or maybe four-value logic): instead of being all binary, some parts used base 3 (or 4) to squeeze more onto the chip.
From your microscopic investigations, have you seen any evidence that any part of the chip uses anything other than base 2 logic?
by pwg
2/15/2026 at 12:15:19 AM
The ROM in the 8087 was very unusual: it used four transistor sizes so it could store two bits per transistor, making the storage four-level. Analog comparators converted the output from the ROM back to binary. This was necessary to fit the ROM onto the die. The logic gates on the chip were all binary.
I wrote about this in detail a few years ago: https://www.righto.com/2018/09/two-bits-per-transistor-high-...
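To make the decoding step concrete, here is a rough C sketch (an illustration, not anything taken from the chip): each cell pulls its bit line to one of four levels depending on the transistor size, and a chain of comparators splits that range back into two binary bits. The threshold values are arbitrary placeholders, not measurements from the die.

    /* Decode one four-level ROM cell back into two bits, the way a
       chain of analog comparators might.  Thresholds are placeholders. */
    #include <stdio.h>

    typedef double bitline_mv;   /* hypothetical bit-line voltage in mV */

    static unsigned decode_cell(bitline_mv v)
    {
        /* Three comparator thresholds split the range into four bands,
           each band encoding one of the values 0..3 (two bits). */
        const bitline_mv t1 = 1000.0, t2 = 2000.0, t3 = 3000.0;

        if (v < t1) return 0;   /* bits 00: no (or weakest) transistor */
        if (v < t2) return 1;   /* bits 01 */
        if (v < t3) return 2;   /* bits 10 */
        return 3;               /* bits 11: strongest transistor */
    }

    int main(void)
    {
        bitline_mv samples[] = { 500.0, 1500.0, 2500.0, 3500.0 };
        for (int i = 0; i < 4; i++) {
            unsigned bits = decode_cell(samples[i]);
            printf("%.0f mV -> %u%u\n", samples[i], (bits >> 1) & 1, bits & 1);
        }
        return 0;
    }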
by kens
2/15/2026 at 5:02:41 AM
Thanks, I must have missed that older post somehow.
by pwg
2/15/2026 at 1:53:10 AM
Do you know what other prior systems did for co-processor instructions? The 8086 and 8087 must have been designed together for this approach to work, so presumably there is a reason they didn't choose what other systems did.
It is notable that ARM designed explicit co-processor instructions, allowing for 16 co-processors. They must have taken the 8086/8087 approach into account when doing that.
by rogerbinns
2/15/2026 at 5:47:25 AM
AMD's Am9511 floating-point chip (1977) acted like an I/O device, so you could use it with any processor. You could put it in the address space, write commands to it, and read back results. (Or you could use DMA with it for more performance.) Intel licensed it as the Intel 8231, targeting it at the 8080 and 8085 processors.
Datasheet: https://www.hartetechnologies.com/manuals/AMD/AMD%209511%20F...
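As a rough sketch of what driving an I/O-mapped arithmetic chip like that looks like from software (in C): push the operands through a data port, write a command byte, poll a busy bit, then read the result back. The register addresses, command code, busy bit, and byte order below are placeholders for illustration, not values from the datasheet.

    #include <stdint.h>

    /* Hypothetical memory-mapped register addresses for this sketch. */
    #define APU_DATA   (*(volatile uint8_t *)0x8000)  /* operand/result port */
    #define APU_CMDSTA (*(volatile uint8_t *)0x8001)  /* command (write) / status (read) */

    #define APU_BUSY   0x80   /* assumed "busy" bit in the status register */
    #define CMD_ADD16  0x6C   /* placeholder command code for a 16-bit add */

    static uint16_t apu_add16(uint16_t a, uint16_t b)
    {
        /* Push both operands into the chip, low byte first. */
        APU_DATA = (uint8_t)(a & 0xFF);  APU_DATA = (uint8_t)(a >> 8);
        APU_DATA = (uint8_t)(b & 0xFF);  APU_DATA = (uint8_t)(b >> 8);

        /* Issue the operation and wait for the chip to finish. */
        APU_CMDSTA = CMD_ADD16;
        while (APU_CMDSTA & APU_BUSY)
            ;

        /* Pop the result back, high byte first. */
        uint8_t hi = APU_DATA;
        uint8_t lo = APU_DATA;
        return (uint16_t)((hi << 8) | lo);
    }

Nothing here depends on the host CPU's instruction set, which is why the same chip could hang off essentially any processor's bus.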
by kens
2/15/2026 at 3:04:39 PM
I remembered Weitek as making math co-processors, but it turns out they did an 80287 equivalent, and nobody appears to have done an 8087 equivalent. Wikipedia claims the later co-processors used I/O, so this complicated bus-monitoring approach seems to have been used by only one generation of the architecture.
by rogerbinns
2/16/2026 at 4:39:39 AM
Yes, the 80287 and 387 used some I/O port addresses reserved by Intel to transfer the opcode, and a "DMA controller"-like interface on the main processor for reading/writing operands, using the COREQ/COACK pins.
Instead of simply reading the first word of a memory operand and otherwise ignoring ESC opcodes, the CPU had to be aware of several different groups of FPU opcodes to set up the transfer, with a special register inside its BIU to hold the direction (read or write), address, and segment limit for the operand.
It didn't do all protection checks "up front", since that would have required even more microcode, and they likely wanted to keep the interface flexible enough to support new instructions. At that time I think Intel also had planned other types of coprocessor for things like cryptography or business data processing; those would have used the same interface but with completely different operand lengths.
So the CPU had to check the current address against the segment limit in the background whenever the coprocessor requested to transfer the next word. This is why there was a separate exception for "coprocessor segment overrun". Then of course the 486 integrated the FPU and made it all obsolete again.
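A rough C model of that bookkeeping, with illustrative names only: the BIU-style state holds the direction, current address, and segment limit for the operand, and the limit is re-checked each time the coprocessor requests another word, raising the coprocessor segment overrun exception (interrupt 9) if the access would run past the end of the segment.

    #include <stdint.h>

    enum xfer_dir { XFER_READ, XFER_WRITE };

    /* State the BIU would hold for one pending coprocessor operand. */
    struct biu_xfer {
        enum xfer_dir dir;        /* read operand from memory, or write result back */
        uint32_t      addr;       /* current offset within the segment */
        uint32_t      seg_limit;  /* last valid offset in the segment */
    };

    #define EXC_COPROC_SEG_OVERRUN 9   /* coprocessor segment overrun */

    /* Called each time the coprocessor requests the next 16-bit word of
       the operand.  Returns an exception number, or -1 if the word can
       be transferred. */
    static int biu_next_word(struct biu_xfer *x)
    {
        /* A two-byte access must fit entirely below the segment limit. */
        if (x->addr + 1 > x->seg_limit)
            return EXC_COPROC_SEG_OVERRUN;

        /* (The actual bus cycle would happen here, in direction x->dir.) */
        x->addr += 2;
        return -1;
    }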
by rep_lodsb