While turning algorithms written for the old NumPy modules (Numeric, numarray) into numpy code, I suffer from this:
Upon further processing of the return values of numpy calculations, lots of
data in an app's object tree silently turns into elementary numpy scalar
types. First there is some inefficiency in subsequent calculations, and then
you get data inflation and questionable dependencies - e.g. with pickle,
ZODB, MPI libraries, ...:


>>> from numpy import array
>>> import cPickle
>>> l=array((1.,0))
>>> l.prod()
0.0
>>> cPickle.dumps(_)
"cnumpy.core.multiarray\nscalar\np1\n(cnumpy\ndtype\np2\n(S'f8'\nI0\nI1\ntRp3\n(I2\nS'<'\nNNNI-1\nI-1\ntbS'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\ntRp4\n."
>>> cPickle.dumps(0.0)
'F0\n.'
>>> l=array((1,0))
>>> l.prod()
0
>>> cPickle.dumps(_)
"cnumpy.core.multiarray\nscalar\np1\n(cnumpy\ndtype\np2\n(S'i4'\nI0\nI1\ntRp3\n(I2\nS'<'\nNNNI-1\nI-1\ntbS'\\x00\\x00\\x00\\x00'\ntRp4\n."
>>> cPickle.dumps(0)
'I0\n.'
>>> type(l.prod())
<type 'numpy.int32'>
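
(For comparison - an explicit cast at the boundary restores the compact
standard pickles; continuing the session above, where l is the int array:)

>>> cPickle.dumps(float(array((1., 0)).prod()))
'F0\n.'
>>> cPickle.dumps(int(l.prod()))
'I0\n.'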


To avoid this you'd need a type cast in the Python code everywhere a scalar
passes from numpy into a plain Python variable - an error-prone task. Or
you'd have to check/re-render your whole object tree.
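
A minimal sketch of the re-render option - the helper name pythonify and
the container types handled are just my choice; numpy.generic is the common
base class of the numpy scalar types, and .item() yields the equivalent
Python scalar:

    import numpy

    def pythonify(obj):
        """Recursively replace numpy scalars in a tree of
        dicts/lists/tuples by equivalent Python scalars."""
        if isinstance(obj, numpy.generic):
            return obj.item()        # numpy scalar -> Python scalar
        if isinstance(obj, dict):
            return dict((k, pythonify(v)) for k, v in obj.items())
        if isinstance(obj, (list, tuple)):
            return type(obj)(pythonify(v) for v in obj)
        return obj
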
Wouldn't it be much better if numpy returned Python scalars for float64
(maybe even for float32) and for int32, int64, ... wherever possible, as
numarray and Numeric did?
I suppose numpy internally knows very quickly how to do that cast.
Or is there maybe a config setting to make numpy behave this way?

Robert