Ted Harding writes:
The basic idea was that an 8-bit byte would store two decimal digits, one in each half-byte ("nibble") of 4 bits, of course in binary form "0" = 0000, "1" = 0001, ... , "8" = 1000, "9" = 1001. The only arithmetic available is add or subtract, which operate on full 8-bit or 16-bit binary numbers. However,
This bit I understand (and, in fact, still have to use today; it's still used a lot in industrial kit, although to be honest "misused" would usually be fairer).
A key advantage in pocket calculators and similar is that the raw data as presented on the databus can easily be split out four bits (i.e. four "wires") at a time, each going to a separate 7-seg display, which have internal logic to decode those four bits into the 0-9 digits. Without that a *lot* of extra logic is required.
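As a rough illustration (in C rather than hardware, and with made-up names and a made-up bus value), the nibble split looks like this:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative sketch only: a packed-BCD byte on the databus is split
     * into two 4-bit nibbles, one per 7-segment decoder, so no
     * binary-to-decimal conversion logic is needed anywhere. */
    int main(void)
    {
        uint8_t bus = 0x47;                 /* packed BCD for decimal 47 */

        uint8_t tens = (bus >> 4) & 0x0F;   /* upper nibble -> left display  */
        uint8_t ones = bus & 0x0F;          /* lower nibble -> right display */

        printf("tens digit: %u, ones digit: %u\n", tens, ones);
        return 0;
    }

In hardware each of those nibbles just goes straight to its own decoder chip; there's no arithmetic involved at all.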
These days it is probably the single biggest cause of confusion amongst new industrial programmers, and is rarely needed but often encountered.
However, I'm still not clear on the specific connection with fractional numbers. BCD to me is just a different way of representing decimal integers (and at its simplest means internal instructions which allow incrementing 9 to produce 0x10, i.e. 16(dec), whose two nibbles read as the decimal digits "1" and "0"). Maybe I'm looking too hard...
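To make that increment concrete, here's a minimal C sketch of the decimal-adjust idea. It's a simplification of what the Z80's DAA instruction does after an ADD; the real instruction also uses the carry and half-carry flags, which I've left out:

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch of a packed-BCD increment: do a plain binary add, then
     * apply the decimal-adjust correction so each nibble stays 0-9. */
    static uint8_t bcd_increment(uint8_t bcd)
    {
        uint8_t result = bcd + 1;            /* ordinary binary add        */

        if ((result & 0x0F) > 0x09)          /* low nibble overflowed 9?   */
            result += 0x06;                  /* skip A-F: 0x0A -> 0x10     */
        if ((result & 0xF0) > 0x90)          /* high nibble overflowed 9?  */
            result += 0x60;                  /* real hardware sets carry   */

        return result;
    }

    int main(void)
    {
        printf("BCD 0x09 + 1 = 0x%02X\n", bcd_increment(0x09)); /* 0x10, decimal 10 */
        printf("BCD 0x99 + 1 = 0x%02X\n", bcd_increment(0x99)); /* wraps to 0x00    */
        return 0;
    }

All integer stuff, as far as I can see, which is why the fractional angle puzzles me.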
PS: I also have that Zilog manual, although it's not as readily to hand as yours clearly is :-) One of my first major programming applications, written in my teens and as far as I know still in use today in a pretty high-profile location, was a Z80-based protocol converter allowing two radically different fire alarm systems to talk to each other. I have fond memories of that chip!