Thank you for your replies.
I might just use arrays.
Maybe in the future something like PyPy might help reduce the Python overhead.
Best,
Luca
Hello,
I have noticed a significant speed difference between the array and the matrix
implementation of the dot product, especially for not-so-big matrices.
For example:
In [1]: import numpy as np
In [2]: b = np.random.rand(104,1)
In [3]: bm = np.mat(b)
In [4]: a = np.random.rand(8, 104)
In [5]:
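The session is cut off in the archive; a minimal timing sketch in the same spirit (the repetition count is my choice, and np.mat follows the session above):

import timeit

setup = """
import numpy as np
b = np.random.rand(104, 1)
bm = np.mat(b)
a = np.random.rand(8, 104)
am = np.mat(a)
"""
# plain arrays vs. the matrix class for the same small product
print(timeit.timeit('np.dot(a, b)', setup=setup, number=10000))
print(timeit.timeit('am * bm', setup=setup, number=10000))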
Hello!
> What do people think of applying patch #1085.
Fine with me.
> I'd rename the function ...
Let me know if you want me to make these changes,
or feel free to make them yourself.
> It looks like the routine doesn't try to determine if the
> views actually overlap, just if they might potentially
> sha
Hello David,
Thank you.
I followed your suggestion but I was unable to make it work.
To my surprise, I found that with numpy in a different folder it worked.
I am afraid this is because the first folder is not on a Linux filesystem
and cannot handle permissions and ownership.
This would make
Thanks for your reply.
So, what is the correct way to test a numpy development version without
installing it in /usr/lib/... or /usr/local/lib/.. ?
What do you guys do?
I am sorry.
I followed your suggestion.
I re-checked out the svn folder and then compiled with
$ python setup.py build_src --inplace build_ext --inplace
but I get the same behaviour.
If I am inside the source folder I get the NameError; if I am outside and use
sys.path.insert, it successfully performs zero tests.
I ha
Thank you both for your help!
$ python setup.py build_src --inplace build_ext --inplace
I'll give it a try.
Thank you for your instantaneous reply!
This is what I usually do:
from the numpy folder I run (emptying the build folder if I just fetched svn
updates)
$ python setup.py build
$ cd build/lib-...
$ ipython
In [1]: import numpy as np
In [2]: np.__version__
Out[2]: '1.4.0.dev7417'
Everything works
Hello
I noticed that python's "any" can be faster than numpy's "any" (and
similarly for "all").
Then I wondered why.
I realized that numpy implements "any" as logical_or.reduce (and "all" as
logical_and.reduce).
This means that numpy cannot take advantage of short-circuiting.
Looking at the t
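The message is cut short here; a quick sketch of the effect described (array size and the position of the True are my choices):

import timeit

setup = "import numpy as np; x = np.zeros(1000000, dtype=bool); x[0] = True"
# Python's built-in any() stops at the first True it meets;
# logical_or.reduce scans the whole array regardless.
print(timeit.timeit('any(x)', setup=setup, number=100))
print(timeit.timeit('np.any(x)', setup=setup, number=100))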
> numpy.may_share_memory() should be pretty cheap. It's just arithmetic.
True, but it is in python. Not something that should go in construct_arrays of
ufunc_object.c, I suppose.
But the same approach could probably be translated to C.
I can try if we decide
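For reference, a small illustration of the Python-level check under discussion:

import numpy as np

x = np.arange(10)
print(np.may_share_memory(x[:5], x[3:]))  # True: the address ranges overlap
print(np.may_share_memory(x[:3], x[5:]))  # False: disjoint ranges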
http://projects.scipy.org/numpy/ticket/
> Lacking a robust use case, I would prefer to keep the current
> behavior. It is likely that nothing would break if we changed it, but
> without a use case, I would prefer to be conservative.
Fair enough.
>> When working on
>> http://projects.scipy.org/numpy/ticket/1085
>> I had to walk the chai
My vote (if I am entitled to one) goes to "change the code".
Whether or not the referent of .base is an array, it should be "the object
that has to be kept alive so that the data does not get deallocated" rather
than "one object which will keep alive another object, which will keep alive another
objec
Here it is...
http://projects.scipy.org/numpy/ticket/1229
I think the original bug is due to
line 535 of numpy/core/src/multiarray/ctors.c (svn)
that should be:
intp numcopies, nbytes;
instead of:
int numcopies, nbytes;
To summarize:
in line 535 of numpy/core/src/multiarray/ctors.c
and
in line 209 of numpy/core/src/multiarray/item_selection.c
int should be intp.
I can confirm this bug for the last svn.
Also:
>>> a.put([2*1024*1024*1024 + 100,], 8)
IndexError: index out of range for array
in this case, I think the error is that in
numpy/core/src/multiarray/item_selection.c
in PyArray_PutTo line 209 should be:
intp i, chunk, ni, max_item, nv, tmp;
instead of:
int i, chunk, ni, max_item, nv, tmp;
I think you do not need to do the chain up walk on view creation.
If the assumption is that base is the ultimate base, on view creation
you can do something like (pseudo-code):
view.base = parent if parent.owndata else parent.base
Thanks for your quick answer.
Is there a reason for that?
Am I wrong, or does it only make life harder? For example:
while (PyArray_Check(base) && !PyArray_CHKFLAGS(base, NPY_OWNDATA)) {
    base = PyArray_BASE(base);
}
plus the possible error you pointed out, plus the fact that
this keeps a chain of zo
Hello,
I cannot quite understand whether ndarray.base is the closest base,
the one from which the view was made or the ultimate base, the one
actually containing the data.
I think the documentation and the actual behaviour do not match.
In [1]: import numpy as np
In [2]: x = np.arange(12)
In [3]: y =
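The session is truncated; a reconstruction of the kind of chain that raises the question (my example, and the printed result depends on the numpy version):

import numpy as np

x = np.arange(12)
y = x[2:]   # view of x
z = y[2:]   # view of a view
# "closest base" reading: z.base is y; "ultimate base" reading: z.base is x
print(z.base is y, z.base is x)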
I got the same problem when compiling a new svn revision with some intermediate
files left from the build of a previous revision.
Removing the content of the build folder before compiling the new version
solved the issue.
If I understand your question correctly, np.tile could be what you need.
Python shouldn't be the problem here.
Even on a 32bit machine:
>>> a = 0x80000000
>>> a
2147483648L
>>> a = np.zeros(a, dtype=bool)
ValueError: negative dimensions are not allowed
I just realized that Sebastian posted his 'uname -a' and he has a 64bit machine.
In this case it should work as mine (the 64bit one) does.
Maybe during compilation some flags prevented fully 64-bit code from being
generated?
Hi,
with the standard ubuntu version (1.2.1), I get
>>> a=np.zeros(0x80,dtype='b1') # OK
>>> a=np.zeros(0x80000000,dtype='b1')
TypeError: data type not understood
>>> a=np.zeros(0x80000000,dtype=bool)
ValueError: negative dimensions are not allowed
while with 1.4.0.dev7375, I get
>>> a=np
Hi Ruben,
> In both cases, as the print
> statement shows, offspr is already created.
>>> offspr[...] = r + a[:, None]
means "fill the existing object pointed by offspr with r + a[:, None]" while
>>> offspr = r + a[:,None]
means "create a new array and assign it to the variable offspr (after
dec
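A minimal demonstration of the two statements (the shapes are my assumption):

import numpy as np

r = np.random.rand(3, 4)
a = np.random.rand(3)
offspr = np.zeros((3, 4))
alias = offspr

offspr[...] = r + a[:, None]  # fills the existing array in place
print(alias is offspr)        # True: same object, alias sees the new values

offspr = r + a[:, None]       # rebinds the name to a freshly created array
print(alias is offspr)        # False: the old object is untouched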
Hi Sturla,
> The proper way to speed up "dot(a*b+c*sqrt(d), e)" is to get rid of
> temporary intermediates.
I implemented a patch
http://projects.scipy.org/numpy/ticket/1153
that reduces the number of temporary intermediates.
In your example from 4 to 2.
There is a big improvement in terms of me
I am sorry but it doesn't make much sense.
How do you measure the performance?
Are you sure you include the creation of the "c" output array in the time spent
(which is outside the for loop but should be considered anyway)?
Here are my results...
In [84]: a = np.random.rand(8,26)
In [85]: b = n
Hi Ruben
One-dimensional arrays can be thought of as rows. If you want a column, you
need to append a dimension.
>>> d = a + b[:,None]
which is equivalent to
>>> d = a + b[:,np.newaxis]
Best,
Luca
I experience the same problem.
A few additional test cases:
In [1]: import numpy
In [2]: numpy.lexsort([numpy.arange(5)[::-1].copy(), numpy.arange(5)])
Out[2]: array([0, 1, 2, 3, 4])
In [3]: numpy.lexsort([numpy.arange(5)[::-1].copy(), numpy.arange(5.)])
Out[3]: array([0, 1, 2, 3, 4])
In [
Thank you, Robert, for the quick reply.
I just saw the line
#define PyUFunc_None -1
in the ufuncobject.h file.
It is always the same story: you choose a sentinel thinking
it will not conflict with any possible value, and
later you find a case where it does.
As I said, it is not a big deal.
I wouldn't
If I understand the problem...
if you are 100% sure that "', '" only occurs between fields
and never within them, you can use the 'split' method of the string,
which should be faster than a regexp in this simple case.
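For instance (the sample line is my invention):

line = "one', 'two', 'three"
print(line.split("', '"))  # ['one', 'two', 'three'] -- no regexp machinery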
Hello,
I know I am splitting hairs, but should not
np.bitwise_and.identity be -1 instead of 1?
I mean, something with all the bits set?
I am checking whether all elements of a vector 'v'
have a certain bit 'b' set:
if np.bitwise_and.reduce(v) & (1 << b):
# do something
If v is empty, the ex
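A sketch of that corner case (the dtype is my choice; newer numpy versions did eventually adopt -1 as this identity):

import numpy as np

v = np.array([], dtype=np.int64)
b = 3
# With identity 1, the empty reduction yields 1, so every bit except
# bit 0 reads as "not set"; an all-ones identity (-1) would make the
# empty case report every bit as set.
print(np.bitwise_and.reduce(v) & (1 << b))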
As Gaël pointed out, you cannot create A, B and then C
as the concatenation of A and B without duplicating
the vectors.
> I am looking for a way to have a non contiguous array C in which the
> "left" (1, 2000) elements point to A and the "right" (1, 4000)
> elements point to B.
But you
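The usual workaround is to allocate C first and let A and B be views into it; a sketch with the shapes from the thread:

import numpy as np

C = np.empty((1, 6000))
A = C[:, :2000]  # view into C, no copy
B = C[:, 2000:]  # view into C, no copy
# fill A and B in place; C is their concatenation for free
A[:] = 1.0
B[:] = 2.0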
Hello Sturla,
I had a quick look at your code.
Looks fine.
A few notes...
In "select" you should replace numpy with np.
In "_median" how can you, if n==2, use s[] if s is not defined?
What if n==1?
Also, I think when returning an empty array, it should be of
the same type you would get in the ot
import sys
f = open(sys.argv[1], 'rt')
for l in f:
    if len(l.split('|')) != 12:
        print(l)
I have tried
$ awk -F '|' '{if(NF != 12) print NR;}' /tmp/pp.txt
and besides the first 23 lines and the last 3 lines of the file,
also the following have a number of '|' different from 11:
1635
2851
5538
i.e. BIKIN, BENGUERIR and TERESINA AIRPORT.
But I completely agree with you, genfromtxt could p
> The main issue is probably just choosing an appropriate float return
> type, and personally I believe this should be same as numpy's default
> float.
I completely agree.
Maybe we could let the user decide whether to use a different type.
It is already somehow possible through the "out" argumen
Or
a[:,:,None] + b[:,None,:]
One solution I can think of still requires one loop (instead of three):
import numpy as np
a = np.arange(12).reshape(3,4)
b = np.arange(15).reshape(3,5)
z = np.empty(a.shape + (b.shape[-1],))
for i in range(len(z)):
    z[i] = np.add.outer(a[i], b[i])
Another solution (elegant?? readable??):
>> x[slice(-n or None)] # with n == 0, 1, ...
The problem is that n is an integer, and integers do not have distinct
representations for 0 and -0 (while floats do).
Therefore it is impossible to disambiguate the following two cases:
>> b[:n] # take the first n
>> b[:-n] # take all but the last n
when n == 0.
One possible solution (you
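A quick check of the "-n or None" trick shown above (my reconstruction of the truncated suggestion; the loop values are mine):

b = list(range(10))
for n in (0, 1, 3):
    print(b[:-n or None])  # all of b when n == 0, otherwise drop the last n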
As you stress "repeat the array ... rather than repeat each element",
you may want to consider tile as well:
>>> np.tile(a, [10,1])
array([[0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2]])
You can do something a bit tricky but workable.
I made the assumption of a C-ordered 1d vector.
import numpy as np
import numpy.lib.format as fmt
# example of chunks
chunks = [np.arange(l) for l in range(5,10)]
# at the beginning
fp = open('myfile.npy', 'wb')
d = dict(
desc
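The message is cut off; a hedged reconstruction of the idea using the public helpers of numpy.lib.format (the header fields and dtype handling are my assumptions):

import numpy as np
import numpy.lib.format as fmt

chunks = [np.arange(l) for l in range(5, 10)]
total = sum(len(c) for c in chunks)

with open('myfile.npy', 'wb') as fp:
    # write a v1.0 header describing the final, concatenated array
    fmt.write_array_header_1_0(fp, dict(
        descr=fmt.dtype_to_descr(chunks[0].dtype),
        fortran_order=False,
        shape=(total,),
    ))
    # then append each C-ordered 1d chunk's raw bytes in order
    for c in chunks:
        fp.write(c.tobytes())

print(np.load('myfile.npy'))  # the chunks, concatenated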
Hello!
I have access to both a 32bit and a 64bit linux machine.
I had to change your code (appended) because I got an error about
not being able to create a mmap larger than the file.
Here are the results...
On the 32bit machine:
lc...@xps2:~$ python /tmp/ppp.py
Created 1 writeable mmaps contain
Hi Hans,
You can follow this thread on approximately the same topic
http://news.gmane.org/find-root.php?group=gmane.comp.python.numeric.general&article=31551
It should have been fixed in
http://projects.scipy.org/numpy/changeset/7133
Best,
Luca
I am afraid I misunderstood your question,
because I do not get the results you expected.
import numpy as np

def pdyn(a, p):
    a = np.sort(a)
    n = round((1 - p) * len(a))
    # a[-int(n/2)] would not work if n <= 1
    return a[int((n + 1) / 2)], a[len(a) - 1 - int(n / 2)]
>>> pdyn([0, 0, 0, 0, 1, 2, 3, 4, 5, 2000], 1)
(0, 2000)
You can do it "by hand" by sorting the array and taking the
corresponding elements or you can use
scipy.stats.scoreatpercentile
that also interpolates.
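Usage sketch (assumes scipy is installed; the percentile is my example):

import numpy as np
from scipy.stats import scoreatpercentile

data = np.random.rand(1000)
print(scoreatpercentile(data, 95))  # interpolated 95th percentile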
Best,
Luca
Hello Hans,
> Although it should be noted that in C/C++,
> the result of uint8+uint8 is int.
But C/C++ works with scalars and often temporary results
are kept in registers.
On the contrary, numpy works with arrays. We cannot expect
(a+b)*c
to grow from uint8 to uint16 and then uint32 :-D
> For e
Hi Hans!
> Ideally, I'd like numpy to be "fixed"
what do you mean by "fixed"?
Are you referring to Out[2] or Out[3]?
In [1]: a = numpy.array([200], numpy.uint8)
In [2]: a + a
Out[2]: array([144], dtype=uint8)
Please do not "fix" this, that IS the correct output.
What should numpy do? Promote eve
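If wider results are wanted, the upcast can be made explicit; a one-line sketch:

import numpy

a = numpy.array([200], numpy.uint8)
print(a.astype(numpy.uint16) + a)  # array([400], dtype=uint16)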
Just my 2 cents.
It is duplicated code.
But it is only 3 lines.
"identity" does not need to handle rectangular matrices and non-principal
diagonals,
therefore it can be reasonably faster (especially for small matrices, I guess).
Hello,
all these alternative mechanisms for initializing
arrays risk breaking current code, don't they?
Then one would need to specify the data type
with a keyword argument, while with the current
implementation the second argument is the data type
regardless of whether or not it is given with
the dtype
Hi Pauli,
on my PC I have tried this and some of the regressions disappear;
maybe you can give it a try.
In its present state it is compiler- and architecture-dependent,
therefore not the best choice, but it may be worth trying.
Best,
Luca
/* My additions are unindented */
/*
> That is what I thought at first, but then what is the difference between
> array_types and scalar_types? Function signature is:
> *find_common_type(array_types, scalar_types)*
As I understand it, the difference is that in the following case:
np.choose(range(5), [np.arange(1,6), np.zeros(5, dtyp
Hi,
I am not very confident with types but I will try to give you
my opinion.
As for the first part of the question
> >>> np.find_common_type([np.ndarray, np.ma.MaskedArray, np.recarray], [])
> dtype('object')
I think that the first argument of np.find_common_type should be
the type of an _element_
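That reading matches what happens when element dtypes are passed; a small check (the dtypes are my example, on versions that still provide find_common_type):

import numpy as np

# element dtypes, not array subclasses, as the first argument
print(np.find_common_type([np.float32, np.int64], []))  # float64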
I have submitted Ticket #1167 with a patch
to speed up diag and eye.
On average the code is 3 times faster (but
up to 9!).
>> if v.flags.f_contiguous:
>> v, k, s = v.T, -k, s[::-1]
>Is this correct? The .flat iterator always traverses the array in virtual
>C-order, not in the order it's laid out in memory.
The code could work (and gives the same results) even
without the two lines above which in
Hello,
I happened to have a look at the code for np.diag
and found it more complicated than necessary.
I think it can be rewritten more cleanly and
efficiently.
Appended you can find both versions.
The speed improvement is significant:
In [145]: x = S.rand(1000,1300)
In [146]: assert all(diag(x,-1
I tried to implement what you suggest.
The patch is in the ticket page.
Let me see if I understand correctly...
what you suggest is something like:
1) adding an argument flag to construct_arrays
that enables/disables the feature
2) adding the same argument flag to construct_loop which
is passed untouched to construct_arrays
3) setting the flag to "disable" in the const
Hello Gaël,
I think it might be an option.
Also one could have an internal flag which says whether or not it is safe
to overwrite inputs with ref_count=1.
Then import_array() sets this flag to "unsafe" (i.e. current behaviour).
If the user of the numpy C-api is aware of how the new feature works,
he
Hi,
you can have a look at the method interp2d of scipy.interpolate.
I think it is what you are looking for.
Best,
Luca
Hello Pauli,
excuse me if I insist, PyArray_Conjugate is not the problem.
If, when using the numpy API, something like this is accepted:
obj1 = PyArray_CreateSomehowAnArray();
obj2 = PyArray_DoSomethingWithArray(obj1,...);
obj3 = PyArray_DoSomethingElseWithArray(obj1,...);
Py_DECREF(obj1);
then
Hello
The problem is not PyArray_Conjugate itself.
The problem is that whenever you call a function from the C side
and one of the inputs has ref_count 1, it can be overwritten.
This is not a problem from the python side because if the
ufunc sees a ref_count=1 it means that no python object is ref
> On thing to keep in mind is that the inputs might be different views of the
> same array so the elements might accessed in an unexpected order.
Only inputs owning their own data and with refcount 1 (i.e. no other array can
be a view of it)
are re-used as outputs.
@Charles R Harris
>> For example 'sqrt(a**2 + b**2)' can be performed...
> I think this particular function is already available as a ufunc.
I am not sure it is implemented as a ufunc.
But in any case it was just an example.
Another example is
sin(2*pi*w+phi)
that is currently implemented allocat
Hi Stefan,
I am afraid I did not explain myself clearly enough.
Of course
c = a + b + d
leaves a, b, and d unchanged.
The only array that is overwritten is (a+b) which is a temporary
array that would be destroyed anyway.
Normally the operation above is performed like this:
1) allocation of a temporary array for (a + b)
2) allocation of a second array for the result of (a + b) + d
With the patch, the second allocation is avoided by reusing the temporary.
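The same saving can be spelled out by hand with the ufunc "out" argument (a sketch):

import numpy as np

a, b, d = (np.random.rand(1000) for _ in range(3))
t = np.add(a, b)         # the one unavoidable temporary
c = np.add(t, d, out=t)  # reuse it instead of allocating a second array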
Hello guys,
I made a patch for numpy which allows performing
operations in-place to save memory allocations.
For example 'sqrt(a**2 + b**2)' can be performed
allocating only two arrays instead of four.
You find the details in ticket 1153 of numpy-core.
I thought maybe you could be interested.
I am