nges between 0.6 and 0.7 should produce different
numerical results (beyond standard floating point margins).
--
Nathan Bell wnb...@gmail.com
http://www.wnbell.com/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
y = np.random.random(n)
z = np.random.random(n)
npix = 100
bins = np.linspace(0, 1.0, npix + 1)
image = np.histogram2d(x, y, bins=bins, weights=z)[0]
ducts will ultimately map to sparse matrix
multiplication, so I'd imagine your best bet is to use A.T * B (for
column matrices A and B in csc_matrix format).
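For column vectors stored as n-by-1 matrices in CSC format, this looks like the following sketch (the data here is made up for illustration):

```python
import numpy as np
from scipy.sparse import csc_matrix

# two column "matrices" (n x 1) in CSC format (made-up data)
A = csc_matrix(np.array([[1.0], [0.0], [2.0], [0.0], [3.0]]))
B = csc_matrix(np.array([[0.0], [4.0], [1.0], [0.0], [2.0]]))

# A.T * B is sparse matrix multiplication: a 1x1 result
# holding the inner product of the two columns
result = (A.T * B).toarray()[0, 0]
print(result)  # 2*1 + 3*2 = 8.0
```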
parse.coo_matrix(Z)
As of SciPy 0.7, all the sparse matrix constructors accept dense
matrices and array-like objects.
The problem with the matrix case is that Z[i] is rank-2 when a rank-1
array is expected.
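For instance (a small sketch; `Z` here is a made-up matrix):

```python
import numpy as np
from scipy.sparse import coo_matrix

Z = np.matrix([[1, 0], [0, 2]])

# dense matrices and array-like objects are accepted directly
A = coo_matrix(Z)
print(A.toarray())

# but a row of an np.matrix is still rank-2 ...
print(Z[0].shape)              # (1, 2)
# ... whereas a row of a plain ndarray is rank-1
print(np.asarray(Z)[0].shape)  # (2,)
```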
al Python
2.6.1 msi installer and blindly clicked 'next' a few times. I then
installed nose from the source .tar.gz.
Python was installed to C:\Python26\, which I assume means all users.
8.916s
OK (KNOWNFAIL=6, SKIP=1)
Mission accomplished?
ge).
We should have this resolved in the next scipy release (either 0.7.x or 0.8).
ointer to a device array. Of course this requires that the other
expensive parts of your algorithm also execute on the GPU so you're
not shuttling data over the PCIe bus all the time.
Full Disclosure: I'm a researcher at NVIDIA
In [3]: A
Out[3]:
array([[[0, 1],
        [2, 3]],

       [[4, 5],
        [6, 7]]])

In [4]: B = array([0,1,0])

In [5]: A[tuple(B)]
Out[5]: 2
d it be appropriate to merge it into the
Sphinx documentation for scipy.sparse [2], or should the Sphinx docs
be more concise?
[1] http://www.scipy.org/SciPyPackages/Sparse
[2] http://docs.scipy.org/doc/scipy/reference/sparse.html
On Fri, Oct 3, 2008 at 11:21 AM, Michel Dupront
<[EMAIL PROTECTED]> wrote:
>
> I was using swig 1.3.24.
> I installed the last swig version 1.3.36 and now it is working fine !
> and it makes me very very happy !!!
>
SWIG often has that effect on people :)
's useful nevertheless.
o we
definitely want to support it.
ve semantics.
Users are more likely to remember that "NaNs always propagate" than
"as stated in the C99 standard...".
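As an illustration of the two semantics, np.minimum propagates NaNs while np.fmin follows the C99 fmin rule of ignoring them:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
b = np.array([2.0, 2.0, 2.0])

# np.minimum propagates NaNs
print(np.minimum(a, b))  # [ 1. nan  2.]

# np.fmin follows C99 fmin: NaN is ignored when the other operand is valid
print(np.fmin(a, b))     # [1. 2. 2.]
```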
faster than flattening the
arrays yourself.
nything.
>
If you're familiar with MATLAB, look here:
http://www.scipy.org/NumPy_for_Matlab_Users
In the table you'll find the following equivalence:
find(a>0.5) <-> where(a>0.5)
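A quick sketch of the equivalence:

```python
import numpy as np

a = np.array([0.2, 0.7, 0.9, 0.1])

# MATLAB: find(a > 0.5)  <->  NumPy: where(a > 0.5)
idx = np.where(a > 0.5)[0]
print(idx)  # [1 2]
```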
scipy you could use a sparse
matrix to perform the operation. I think the following does what you
want.
from scipy.sparse import coo_matrix
X += coo_matrix((Y, (K, zeros(m, dtype=int))), shape=(n, 1)).sum(axis=1)
This reduces to a simple C++ loop, so speed should be good:
http://projects.scipy.org/sc
in the STL makes left-balancing fairly
straightforward.
FWIW I also have a pure python implementation here:
http://code.google.com/p/pydec/source/browse/trunk/pydec/math/kd_tree.py
On Sun, Sep 21, 2008 at 1:28 PM, Dinesh B Vadhia
<[EMAIL PROTECTED]> wrote:
>
> But, I want to pick up the column index of non-zero elements per row.
>
http://www.scipy.org/Numpy_Example_List_With_Doc#nonzero
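For example, with a plain ndarray, nonzero() gives parallel (row, column) index arrays that can be grouped by row:

```python
import numpy as np

A = np.array([[0, 3, 0],
              [4, 0, 5]])

# nonzero() returns parallel (row, column) index arrays
rows, cols = np.nonzero(A)
for i in range(A.shape[0]):
    print(i, cols[rows == i])  # column indices of nonzeros in row i
```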
Could someone with a better knowledge of distutils look over the
following SciPy ticket:
http://scipy.org/scipy/scipy/ticket/738
Short version: distutils compiles with -march=pentium-m on a machine
that can't execute SSE2 instructions.
--
Ran 1726 tests in 8.813s
OK (KNOWNFAIL=1)
y be overkill. OTOH you may be able
to get everything to need from the sparsetools source code. Feel free
to pillage it as you require :)
Should you go the SWIG path, I can help explain some of the more cryptic parts.
be able to do the following:
>
> arange(0, 100, 0.1)
>
It appears to be great already :)
In other words, arange(0, 100, 0.1) does exactly what you want.
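For example (with the usual caveat that a non-integer step is subject to floating-point rounding, so linspace is sometimes preferable when the endpoint matters):

```python
import numpy as np

# float steps work directly; the endpoint 100 is excluded
x = np.arange(0, 100, 0.1)
print(x.size)  # 1000
print(x[:3])   # [0.  0.1 0.2]
```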
sing something ?
Could you trap it in __getattr__ instead? For instance:
http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/csr.py#L87
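A minimal sketch of the idea (a hypothetical class, not the scipy.sparse code itself): __getattr__ is called only when normal attribute lookup fails, so it works as a trap for "missing" attributes.

```python
class Wrapper:
    """Hypothetical example of trapping attribute access."""

    def __init__(self, data):
        self.data = data

    def __getattr__(self, name):
        # invoked only when normal lookup fails
        if name == 'size':
            return len(self.data)  # computed on demand
        raise AttributeError(name)

w = Wrapper([1, 2, 3])
print(w.size)  # 3
```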
and the benefits continue to grow.
>
Just think of the savings that could be achieved if all 2.1 million
Walmart employees were outfitted with colostomy bags.
0.5 hours / day for bathroom breaks * 2,100,000 employees * 365
days/year * $7/hour = $2,682,750,000/year
Granted, I'm probably n
ks in any given year and every little
> helps!
>
There are other components of NumPy/SciPy that are more worthy of
optimization. Given that programmer time is a scarce resource, it's
more sensible to direct our efforts towards making the other 98.5% of
the computation faster.
hem? Or are they only necessary for making certain routines faster?
>
I don't know about OSX specifically, but my understanding is that you
can build NumPy and SciPy without those libraries. Performance in
certain dense linear algebra operations will be slower, but
scipy.sparse will be
alg/eigen
If you're familiar with MATLAB's eigs(), then you'll find ARPACK easy to use.
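In current SciPy the ARPACK wrappers are exposed through scipy.sparse.linalg; a small sketch (the test matrix here is made up):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# symmetric tridiagonal matrix (1-D Laplacian), a standard test case
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')

# like MATLAB's eigs(A, 3): a few eigenvalues of a large sparse matrix
vals, vecs = eigsh(A, k=3)  # default: largest magnitude
print(np.sort(vals))
```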
interface that takes a tuple as the first argument
use numpy.random.random_sample(shape_tuple).
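For example:

```python
import numpy as np

# random_sample takes a shape tuple, unlike random.random(n)
x = np.random.random_sample((2, 3))
print(x.shape)  # (2, 3)
```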
ed to use nose?
>
When making frequent changes to test_foo.py, it's often nice to run
test_foo.py directly, rather than installing the whole package and
then testing via nose.
I would leave the decision up to the maintainers of the individual
submodules. Personally, I find
>
> Check, the answer is 4, as you got for the 32-bit. What would the answer
> be on a 64-bit architecture? Why is this diagnostic?
It would be 8 on a 64-bit architecture (with a 64-bit binary): 8
bytes = 64 bits, 4 bytes = 32 bits.
ax_value_of(np.uint8):
>x = max_value_of(np.uint8)
That kind of information is available via numpy.finfo() and numpy.iinfo():
In [12]: finfo('d').max
Out[12]: 1.7976931348623157e+308
In [13]: iinfo('i').max
Out[13]: 2147483647
In [14]: iinfo(uint8).max
Out[14]: 255
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.exp/
tic of integers near max and minimum values
> is fraught with danger.
>
It would be a mistake to assume that many/most NumPy users know the
oddities of two's complement signed integer representations.
s on integer arrays are somewhat
dangerous, and best left to more sophisticated users anyway.
Interestingly, MATLAB (v7.5.0) takes a different approach:
>> A = int8([ -128, 1])
A =
  -128     1
>> abs(A)
ans =
   127     1
>> -A
ans =
   127    -1
>
I think he was advocating using the corresponding unsigned type, not a
larger type.
e.g.
abs(int8) -> uint8
abs(int64) -> uint64
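For comparison, NumPy wraps on overflow rather than saturating, which is what makes abs() on int8 surprising; casting through a wider type shows the magnitude the unsigned-result proposal would preserve:

```python
import numpy as np

a = np.array([-128, 1], dtype=np.int8)

# NumPy wraps on overflow: abs(-128) is not representable in int8
print(np.abs(a))  # [-128    1]

# going through a wider type recovers the magnitude, which then
# fits in the corresponding unsigned type uint8
print(np.abs(a.astype(np.int16)).astype(np.uint8))  # [128   1]
```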
]: -2
>
I would call that an overflow.
Have you considered that other people might have a different notion of
"how numpy is supposed to work"?
ses is more important than
preserving the dtype when summing.
Anyway, the point is moot. There's no way to change x.sum() without
breaking lots of code.
ize is effectively upcast.
IMO this is desirable since small types are very likely to overflow.
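For example, summing a uint8 array accumulates in a wider integer type, so results that would overflow the element type come out correct:

```python
import numpy as np

x = np.full(300, 255, dtype=np.uint8)

# 300 * 255 = 76500, far beyond uint8's maximum of 255,
# but sum() upcasts the accumulator
s = x.sum()
print(s, s.dtype)
```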
than (x.sum() ->
bool) since x.any() already exists.
t; can make it any faster
A C implementation would certainly be faster, perhaps 5x faster, due
to short-circuiting the AND operations and the fact that you'd only
pass over the data once.
OTOH I'd be very surprised if this is the slowest part of your application.
On Sat, May 17, 2008 at 9:30 PM, Brian Granger <[EMAIL PROTECTED]> wrote:
>
> Please correct any new errors I have introduced.
>
Thanks Brian, I think that's a fair representation.
Minor typo "course grained" -> "coarse-grained"
ontrived scenarios like
the above don't inspire my confidence either. I have yet to see a
benchmark that reveals the claimed benefits of Cython.
cal C/C++ libraries.
More disingenuous FUD here: http://www.sagemath.org/doc/html/prog/node36.html
* arr)
then you can use SWIG typemaps pick the correct function to use.
You can also make SWIG upcast to the appropriate types. For example,
if in Python you passed an int array and a double array to:
foo(double * arr1, double * arr2)
then you can have SWIG automatically upcast the int array t
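With the numpy.i typemaps that ship in the NumPy source tree, this looks roughly like the following interface fragment (the function foo and its argument names are hypothetical):

```swig
/* hypothetical module; numpy.i ships with NumPy */
%module example
%{
#define SWIG_FILE_WITH_INIT
void foo(double* arr, int n);
%}

%include "numpy.i"
%init %{
import_array();
%}

/* map a (pointer, length) pair onto one NumPy array argument;
   an int array passed from Python is converted/upcast to double */
%apply (double* IN_ARRAY1, int DIM1) {(double* arr, int n)};
void foo(double* arr, int n);
```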
err = scipy.weave.inline(code,
                         ['a', 'v', 'N_a', 'N_v', 'indices'],
                         type_converters=scipy.weave.converters.blitz,
                         compiler='gcc',
                         support_code='#include
raising a general Warning with a message like the following?
"matrix indexing of the form x[0] is ambiguous, consider the explicit
format x[0,:]"
plug one hole while creating others, especially in a
minor release. I suspect that if we surveyed end-users we'd find
that "my code still works" is a much higher priority than "A[0][0] now
does what I expect".
IMO scalar indexing should raise a warning
On Fri, May 9, 2008 at 9:56 AM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> Of course, if Nathan has already made the changes we will drive him crazy if
> we back them out now
This shouldn't be a problem; scipy.sparse should work with either.
Thanks for your concern, though.
cking out.
That should be true.
re needed.)
>
That's correct, the necessary changes to scipy.sparse were not very substantial.