It has been noted that numbers in this format may be unnormalized.
One possible use for unnormalized numbers is significance arithmetic. This format, however, comes with a set of rules about the "ideal exponent" of the result of an arithmetic operation, and those rules do not correspond to the rules of significance arithmetic; instead, they follow the IEEE 754 philosophy of producing results that are exact as far as possible. (The term "ideal exponent" acknowledges that the range of exponents is finite, and thus cases will arise where the choice of exponents available for representing a number is limited.)
The basic intent of those rules is that 100 plus 5.25 should be 105.25 and not 105.250000000, and that 2.7 times 8.4 should be 22.68 and not 22.680000000. Thus, it is intended that the routines that read numbers in should create unnormalized values reflecting the form in which those numbers were written, and that the routines that print numbers should omit trailing zeroes to the extent indicated by the degree of unnormalization of the value being printed.
This is a further extension of the reason for using a decimal exponent base in the JOSS system, so that .3 plus .7 might be 1.0 instead of 0.9999999999; the goal is not merely decimal arithmetic, but humanized arithmetic. Doing this within the computations themselves, rather than merely removing trailing zeroes on output, is what is novel about this format.
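Python's decimal module follows the same General Decimal Arithmetic rules for choosing the exponent of a result, so it can serve as a quick illustration of what the ideal-exponent rules produce; this is only a sketch of the behaviour, not of the format or its encoding.

from decimal import Decimal

# Exact results keep the "ideal" exponent: a sum takes the smaller of the
# two operand exponents, and a product takes the sum of the operand
# exponents, so no spurious trailing zeroes appear.
print(Decimal("100") + Decimal("5.25"))   # 105.25, not 105.250000000
print(Decimal("2.7") * Decimal("8.4"))    # 22.68,  not 22.680000000
print(Decimal("0.3") + Decimal("0.7"))    # 1.0,    not 0.9999999999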
Previous attempts at humanizing the arithmetic operations of computers, such as that in JOSS, have tended to be dismissed by the computing community as not worth the trouble; but given the popularity of spreadsheets, for example, it may be that this will prove to be a useful idea.
One thing that occurs to me is that perhaps a decimal floating-point number ought to have a flag bit indicating whether the digits past the end of the number are to be taken as certainly zero or as unknown, so that if either of the numbers in an operation has that bit set, the rules of significance arithmetic are followed instead of those of humanized arithmetic. This would make for a general floating-point arithmetic that is also able to handle the numbers one usually thinks of floating-point as being applicable to: values of physical quantities of limited precision. Actually, this is somewhat of an oversimplification; if a trailing asterisk is used to indicate the flag bit, the rules for addition would work like this:
2.345 + 7.1 = 9.445
2.345* + 7.1 = 9.445*
2.345 + 7.1* = 9.4*
2.345* + 7.1* = 9.4*
If the less precise quantity in an addition has the flag bit set, the rules of significance arithmetic are followed, and the flag is preserved; but if only the more precise one has the flag bit set, then the less precise one is still taken as the exact quantity it claims to be, and the full result is kept, again with the flag set.
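To make that concrete, here is a minimal sketch in Python of how such a flag bit might behave for addition. The Inexact wrapper, the unknown_tail field, and the helper names are all invented for illustration, and Python's Decimal merely stands in for the decimal floating-point format itself.

from dataclasses import dataclass
from decimal import Decimal, ROUND_HALF_EVEN

@dataclass
class Inexact:
    """A decimal value plus the proposed flag: True means the digits past
    the end of the number are unknown rather than certainly zero."""
    value: Decimal
    unknown_tail: bool = False

    def __str__(self):
        return str(self.value) + ("*" if self.unknown_tail else "")

def last_place(d: Decimal) -> int:
    """Exponent of the last retained digit: -3 for 2.345, -1 for 7.1."""
    return d.as_tuple().exponent

def add(a: Inexact, b: Inexact) -> Inexact:
    exact = a.value + b.value      # humanized (ideal-exponent) sum: every digit kept
    coarse = a if last_place(a.value) >= last_place(b.value) else b
    if coarse.unknown_tail:
        # The less precise operand admits unknown trailing digits, so the
        # rules of significance arithmetic apply: round to its last place.
        step = Decimal(1).scaleb(last_place(coarse.value))
        return Inexact(exact.quantize(step, rounding=ROUND_HALF_EVEN), True)
    # Otherwise the less precise operand is exact as written; keep every
    # digit of the sum, and carry the flag along if either operand had it.
    return Inexact(exact, a.unknown_tail or b.unknown_tail)

# The four cases tabulated above:
for fa, fb in [(False, False), (True, False), (False, True), (True, True)]:
    a, b = Inexact(Decimal("2.345"), fa), Inexact(Decimal("7.1"), fb)
    print(a, "+", b, "=", add(a, b))   # 9.445, 9.445*, 9.4*, 9.4*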
Multiplication likewise involves several cases:
26.34 * 1.7 = 44.778
26.34* * 1.7 = 44.77*
26.34 * 1.7* = 45*
26.34* * 1.7* = 45*
Here, it is the number of significant digits, rather than the precision as a magnitude, that is compared.
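A companion sketch for multiplication follows, with the same invented Inexact wrapper repeated so that it stands alone. One assumption is needed to reproduce the table above: when only the operand with more significant digits carries the flag, the extra digits of the product are simply dropped rather than rounded, which is how 26.34* times 1.7 comes out as 44.77* instead of 44.78*.

from dataclasses import dataclass
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

@dataclass
class Inexact:
    value: Decimal
    unknown_tail: bool = False        # the trailing-asterisk flag

    def __str__(self):
        return str(self.value) + ("*" if self.unknown_tail else "")

def sig_digits(d: Decimal) -> int:
    """Number of significant digits: 4 for 26.34, 2 for 1.7."""
    return len(d.as_tuple().digits)

def to_sig(d: Decimal, n: int, rounding) -> Decimal:
    """Keep n significant digits of d, using the given rounding mode."""
    step = Decimal(1).scaleb(d.adjusted() - (n - 1))
    return d.quantize(step, rounding=rounding)

def mul(a: Inexact, b: Inexact) -> Inexact:
    exact = a.value * b.value         # humanized product: 26.34 * 1.7 = 44.778
    less, more = sorted((a, b), key=lambda x: sig_digits(x.value))
    if less.unknown_tail:
        # Significance arithmetic: the result has no more significant
        # digits than the less significant operand.
        return Inexact(to_sig(exact, sig_digits(less.value), ROUND_HALF_EVEN), True)
    if more.unknown_tail:
        # Only the more significant operand is uncertain; its unknown tail
        # makes the product's digits beyond its own length unknown, so drop them.
        return Inexact(to_sig(exact, sig_digits(more.value), ROUND_DOWN), True)
    return Inexact(exact)             # both exact: keep every digit

for fa, fb in [(False, False), (True, False), (False, True), (True, True)]:
    a, b = Inexact(Decimal("26.34"), fa), Inexact(Decimal("1.7"), fb)
    print(a, "*", b, "=", mul(a, b))  # 44.778, 44.77*, 45*, 45*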
When I thought of numbers represented internally in decimal form, I tended to think of COBOL programs, not spreadsheets; and if one is using a program to calculate a payroll, one would normally be using fixed-point numbers as well, so if new rounding rules are needed, inventing a new floating-point format for that purpose seemed wasteful to me. But once it is understood that the idea is to have a general tool that can easily be used for arbitrary calculations, relieving users, as opposed to programmers, of having to specify the range of numbers becomes an obvious necessity.
It may also be noted that IBM intends to license its Densely Packed Decimal patent on a royalty-free basis to implementors of this format as it is about to be specified in the revised IEEE 754 standard.