Charles R Harris wrote:
> Well, the common machines are 32 bit and 64 bit, so for instance
> extended precision (usually 80 bits) ends up as 96 bits (3*32) on the
> first and 128 (2*64) bits on the second, with the extra bits ignored.
> The items in C structures will often have empty spaces filling in
> between them unless specific compiler directives are invoked and the
> whole will be aligned on the appropriate boundary. For instance, on my
> machine
>
> struct bar {
>     char a;
>     int b;
> };

So the data buffer in numpy is not really a C array, but more like a
structure? (I am talking about the actual binary data.)
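A quick way to see the padding Charles describes is to print offsetof and
sizeof for that struct. This is only a sketch; the exact numbers (typically
offset 4 and total size 8 on x86/x86_64) are an assumption and depend on the
compiler and ABI:

#include <stdio.h>
#include <stddef.h>

struct bar {
    char a;   /* 1 byte */
    int b;    /* usually 4 bytes, and 4-byte aligned */
};

int main(void)
{
    /* On a typical x86/x86_64 ABI, three padding bytes are inserted
     * after 'a' so that 'b' is aligned, giving offset 4 and size 8. */
    printf("offsetof(struct bar, b) = %zu\n", offsetof(struct bar, b));
    printf("sizeof(struct bar)      = %zu\n", sizeof(struct bar));
    return 0;
}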
In C, I think you always have sizeof(*a) * i bytes between a[0] and a[i];
that is, sizeof(item) and the address offsets always match: we always have
&a[i] == (char *)a + sizeof(*a) * i. For example, in your extended-precision
case the computation may be done in 80 bits, but sizeof(long double) is 12 on
my machine, not 10. There is a match between sizeof and the spacing between
items; otherwise I don't see how pointer arithmetic on arrays would be
possible in C. But this goes quite a bit beyond my usual C knowledge, so I
may be wrong here too.

cheers,

David
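P.S. A small program makes the point concrete. Just a sketch; the size it
prints is an assumption (typically 12 with 32-bit gcc on x86, 16 on x86_64)
and depends on the compiler and ABI:

#include <stdio.h>
#include <assert.h>

int main(void)
{
    long double a[4];

    /* The stored size may be larger than the 80-bit extended value the
     * x87 hardware actually uses, because of alignment padding. */
    printf("sizeof(long double) = %zu\n", sizeof(long double));

    /* Element addresses and pointer arithmetic agree: &a[i] is exactly
     * i * sizeof(*a) bytes past the start of the array. */
    for (int i = 0; i < 4; i++)
        assert((char *)&a[i] == (char *)a + i * sizeof(*a));

    return 0;
}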