From: "Justin R. Bendich" <[email protected]>
Sent: Tuesday, 30 August 2011 1:29 PM


Do any of you old-timers (e.g. Fairchild, Cole) know how the EDIT (ED)
instruction came to be the way it is? It's one of the original IBM 360
instructions. Has to be the most complicated of them.

The EDMK instruction is the most complicated.

Did they have microcode back then?

Yes.

The main thing that bugs me about this instruction is that if you want
leading zeroes, you have to add a "fix up" instruction at the end,

No, to get leading zeros, the second character of the pattern must be
a significance start character, viz., X'21'.
Thus, the pattern would begin with something like '402120....'X

or
prepend an extra zero by means of the fill character. This is because
the significance starter does not turn on the S-trigger for that byte.
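That S-trigger behavior can be sketched in a rough Python model of ED pattern editing (this is a simplification for illustration, not the hardware definition: the source is given as a digit string, and packed-decimal sign handling is omitted). DS is the digit select X'20' and SS the significance start X'21'; the first pattern byte is the fill character.

```python
# Simplified model of S/360 ED pattern editing.
DS, SS = '\x20', '\x21'   # digit select, significance start

def edit(pattern, digits):
    fill = pattern[0]          # first pattern byte is the fill character
    out = [fill]               # the fill byte itself stays in the result
    s = False                  # significance trigger (S-trigger), initially off
    it = iter(digits)
    for ch in pattern[1:]:
        if ch in (DS, SS):
            d = next(it)
            if d != '0' or s:
                out.append(d)  # significant digit: store it, S goes on
                s = True
            else:
                out.append(fill)  # leading zero replaced by fill
            if ch == SS:
                s = True       # S turned on only AFTER this byte is handled
        else:
            # message byte (e.g. '.', ','): kept if S on, else filled
            out.append(ch if s else fill)
    return ''.join(out)

# Blank fill, digit selects only: leading zeros are suppressed.
print(repr(edit(' ' + DS * 6, '001234')))       # '   1234'
# SS as the SECOND pattern byte: zeros show from the third position on,
# but the digit at the SS position itself is still blanked -- the point above.
print(repr(edit(' ' + SS + DS * 5, '001234')))  # '  01234'
```

Note how the second example prints '01234' rather than '001234': the zero edited under the significance starter is still replaced by fill, which is why one must either make the fill character '0' or prepend an extra zero to get all the leading zeros.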

Making the fill character a zero '0' is not a nuisance,
unless one wants later characters to be replaced by zero.

A trivial 'solution' is to have some leading zeros in the
decimal number being converted.

Also, the way the fill character is set, where it has to be the first
byte, bugs me.

Where else is it to come from?

Why is it like that? It IS a useful instruction, but it's annoying.
