On 8/30/2011 2:12 PM, Guido van Rossum wrote:
> On Tue, Aug 30, 2011 at 10:50 AM, stefan brunthaler
> <ste...@brunthaler.net> wrote:
>>> Do you really need it to match a machine word? Or is, say, a 16-bit
>>> format sufficient?

>> Hm, technically no, but practically it makes more sense, as (at least
>> for x86 architectures) having opargs and opcodes in half-words can be
>> expressed efficiently in assembly. On 64-bit architectures, I could
>> also inline references to data objects that fit into the upper 32-bit
>> half. It turns out that most constant objects fit nicely into this,
>> and I have used it for a special cache region (again below 2^32) for
>> global objects, too. So, technically it's not necessary, but
>> practically it makes a lot of sense. (Most of these things work on
>> 32-bit systems, too. For architectures with a smaller word size, we
>> can adapt or disable the optimizations.)

> Do I sense that the bytecode format is no longer platform-independent?
> That will need a bit of discussion. I bet there are some things around
> that depend on that.
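
If I understand Stefan's layout correctly, it amounts to something like
the sketch below (the names and field widths are my own guesses, not his
actual code): a 64-bit instruction word with the opcode and oparg in
16-bit half-words, and the upper 32 bits inlining a reference into a
cache region kept below 2^32.

    #include <stdint.h>

    /* Illustrative 64-bit instruction word: opcode and oparg occupy the
     * 16-bit half-words of the low 32 bits; the upper 32 bits hold a
     * reference (address or offset) into a constant/global cache region
     * allocated below 2^32. */
    typedef uint64_t instr_t;

    #define INSTR_OPCODE(i)  ((uint16_t)((i) & 0xFFFF))
    #define INSTR_OPARG(i)   ((uint16_t)(((i) >> 16) & 0xFFFF))
    #define INSTR_REF(i)     ((uint32_t)((i) >> 32))

    static instr_t
    make_instr(uint16_t opcode, uint16_t oparg, uint32_t ref)
    {
        return (instr_t)opcode | ((instr_t)oparg << 16)
                               | ((instr_t)ref << 32);
    }

On a 32-bit build the word would presumably shrink to 32 bits and the
inlined-reference field would go away, which I take to be what "adapt
or disable" means.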

I find myself more comfortable with Cesare Di Mauro's idea of expanding the code unit to 16 bits. His basic idea was to use instructions of 2, 4, or 6 bytes instead of 1, 3, or 6. This actually tended to save space, because many ops with small ints (which are very common) contract from 3 bytes to 2, or from 9(?) bytes (two instructions) to 6. I am sorry he was not able to follow up on the initial promising results. The dis output was probably easier to read than the current output.
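
For concreteness, a decode loop under a 16-bit code-unit scheme might
look roughly like this; the particular encoding (opcode in the low byte,
a short oparg in the high byte, and the extension-word count packed into
the opcode's top bits) is my own guess, not necessarily what his patch
did:

    #include <stdint.h>

    typedef uint16_t codeunit_t;

    /* Guessed encoding: instructions are one, two, or three 16-bit
     * words (2, 4, or 6 bytes).  The first word packs an 8-bit opcode
     * with an 8-bit short oparg, so a load of a small int fits in 2
     * bytes instead of bytecode's 3.  The opcode's two high bits give
     * the number of 16-bit extension words. */
    static uint32_t
    decode(const codeunit_t *code, int *pc, uint8_t *opcode)
    {
        codeunit_t word = code[(*pc)++];
        *opcode = (uint8_t)(word & 0xFF);
        uint32_t oparg = word >> 8;                 /* 2-byte short form */
        int ext = *opcode >> 6;                     /* 0, 1, or 2 */
        if (ext >= 1)
            oparg = code[(*pc)++];                  /* 4-byte form */
        if (ext >= 2)
            oparg |= (uint32_t)code[(*pc)++] << 16; /* 6-byte form */
        return oparg;
    }

A nice property is that every instruction is an even number of bytes, so
the fetch is always 16-bit aligned.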

Perhaps he made a mistake in combining the above idea with a shift from a pure stack design to a hybrid stack+register design.

--
Terry Jan Reedy
