On Fri, Nov 19, 2010 at 8:33 PM, wrote:
> >>> -np.inf > -np.inf
> False
>
> If the only value is -np.inf, you will return nan, I guess.
>
> >>> np.nanmax([-np.inf, np.nan])
> -inf
That's a great corner case. Thanks, Josef. This looks like it would fix it:
change
    if ai > amax:
        amax = ai
to
i
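The message is cut off before the replacement code, but the corner case is clear from Josef's example: with amax initialized to -inf, an input whose only value is -inf never satisfies "ai > amax", so the allnan flag is never cleared and NaN comes back. A minimal pure-Python sketch of a loop that handles this, assuming the fix is to compare with ">=" (illustration only, not the code from the truncated message):

    import numpy as np

    def nanmax_1d(a):
        # amax starts at -inf; allnan records whether any non-NaN was seen.
        amax = -np.inf
        allnan = True
        for ai in a:
            # NaN fails every comparison, so NaN entries are skipped.  Using
            # ">=" (rather than ">") means an element equal to -inf still
            # clears the allnan flag.
            if ai >= amax:
                amax = ai
                allnan = False
        return np.nan if allnan else amax

    print(nanmax_1d(np.array([-np.inf, np.nan])))   # -inf, matching np.nanmax
    print(nanmax_1d(np.array([np.nan, np.nan])))    # nan
    print(nanmax_1d(np.array([1.0, np.nan, 3.0])))  # 3.0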
On Fri, Nov 19, 2010 at 10:59 PM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 7:51 PM, wrote:
>>
>> does this give you the correct answer?
>>
>> >>> 1 > np.nan
>> False
>>
>> What's the starting value for amax? -inf?
>
> Because "1 > np.nan" is False, the current running max does not get
> updated, which is what we want.
On Fri, Nov 19, 2010 at 8:05 PM, Charles R Harris
wrote:
> This doesn't look right:
>
> @cython.boundscheck(False)
> @cython.wraparound(False)
> def nanmax_2d_float64_axisNone(np.ndarray[np.float64_t, ndim=2] a):
> "nanmax of 2d numpy array with dtype=np.float64 along axis=None."
> cdef P
On Fri, Nov 19, 2010 at 8:42 PM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 7:19 PM, Charles R Harris
> wrote:
> >
> >
> > On Fri, Nov 19, 2010 at 1:50 PM, Keith Goodman
> wrote:
> >>
> >> On Fri, Nov 19, 2010 at 12:29 PM, Keith Goodman
> >> wrote:
> >> > On Fri, Nov 19, 2010 at 12:19 PM,
On Fri, Nov 19, 2010 at 7:51 PM, wrote:
>
> does this give you the correct answer?
>
> >>> 1 > np.nan
> False
>
> What's the starting value for amax? -inf?
Because "1 > np.nan" is False, the current running max does not get
updated, which is what we want.
>> import nanny as ny
>> np.nanmax([1, np.
On Fri, Nov 19, 2010 at 7:51 PM, wrote:
> On Fri, Nov 19, 2010 at 10:42 PM, Keith Goodman wrote:
>> It basically loops through the data and does:
>>
>> allnan = 1
>> ai = a[i,k]
>> if ai > amax:
>>     amax = ai
>>     allnan = 0
>
> does this give you the correct answer?
>
> >>> 1 > np.nan
> False
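The exchange hinges on NaN comparison semantics; a quick interactive check (standard NumPy behaviour, not output copied from the thread):

    >>> import numpy as np
    >>> 1 > np.nan          # any ordered comparison with NaN is False,
    False
    >>> np.nan > 1          # so "if ai > amax" skips NaN entries entirely
    False
    >>> -np.inf > -np.inf   # and why ">" misses an input that is all -inf
    False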
On Fri, Nov 19, 2010 at 10:42 PM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 7:19 PM, Charles R Harris
> wrote:
>>
>>
>> On Fri, Nov 19, 2010 at 1:50 PM, Keith Goodman wrote:
>>>
>>> On Fri, Nov 19, 2010 at 12:29 PM, Keith Goodman
>>> wrote:
>>> > On Fri, Nov 19, 2010 at 12:19 PM, Pauli Vir
On Fri, Nov 19, 2010 at 7:19 PM, Charles R Harris
wrote:
>
>
> On Fri, Nov 19, 2010 at 1:50 PM, Keith Goodman wrote:
>>
>> On Fri, Nov 19, 2010 at 12:29 PM, Keith Goodman
>> wrote:
>> > On Fri, Nov 19, 2010 at 12:19 PM, Pauli Virtanen wrote:
>> >> Fri, 19 Nov 2010 11:19:57 -0800, Keith Goodman
On Fri, Nov 19, 2010 at 8:19 PM, Charles R Harris wrote:
>
>
> On Fri, Nov 19, 2010 at 1:50 PM, Keith Goodman wrote:
>
>> On Fri, Nov 19, 2010 at 12:29 PM, Keith Goodman
>> wrote:
>> > On Fri, Nov 19, 2010 at 12:19 PM, Pauli Virtanen wrote:
>> >> Fri, 19 Nov 2010 11:19:57 -0800, Keith Goodman w
On Fri, Nov 19, 2010 at 1:50 PM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 12:29 PM, Keith Goodman
> wrote:
> > On Fri, Nov 19, 2010 at 12:19 PM, Pauli Virtanen wrote:
> >> Fri, 19 Nov 2010 11:19:57 -0800, Keith Goodman wrote:
> >> [clip]
> >>> My guess is that having separate underlying f
I am wrapping up a small package to parse a particular ascii-encoded
file format generated by a program we use heavily here at the lab. (In
the unlikely event that you work at a synchrotron, and use Certified
Scientific's "spec" program, and are actually interested, the code is
currently available
Hello everybody,
I was wondering if there is an elegant way of overriding the 'nan' string
representation with 'NaN' when saving a numpy array containing np.nan values
with the numpy.savetxt() function. The numpy.set_printoptions(nanstr='NaN')
setting (which has the default value of 'NaN' already) does not seem to have
any effect on the savetxt output.
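A small sketch of the behaviour being described, plus one possible workaround; the workaround is an illustration, not a solution proposed in the thread:

    import numpy as np

    a = np.array([1.0, np.nan, 3.0])

    np.set_printoptions(nanstr='NaN')  # affects repr/str of arrays only
    np.savetxt('out.txt', a)           # the file still contains 'nan'

    # Possible workaround: format the values to strings yourself and let
    # savetxt write them verbatim with fmt='%s'.
    as_text = ['NaN' if np.isnan(x) else '%.18e' % x for x in a]
    np.savetxt('out.txt', as_text, fmt='%s')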
Apologies, I accidentally hit send...
On Tue, Nov 16, 2010 at 9:20 AM, Darren Dale wrote:
> I am wrapping up a small package to parse a particular ascii-encoded
> file format generated by a program we use heavily here at the lab. (In
> the unlikely event that you work at a synchrotron, and use Ce
On Fri, Nov 19, 2010 at 3:18 PM, Christopher Barker
wrote:
> On 11/19/10 11:19 AM, Keith Goodman wrote:
> > On Fri, Nov 19, 2010 at 10:55 AM, Nathaniel Smith wrote:
> >> Why not make this a patch to numpy/scipy instead?
> >
> > My guess is that having separate underlying functions for each dtype,
Tue, 16 Nov 2010 09:20:29 -0500, Darren Dale wrote:
[clip]
> module. So I am wondering about the performance of np.fromstring:
Fromstring is slow, probably because it must work around
locale-dependence of the underlying C parsing functions. Moreover, the Numpy
parsing mechanism generates many in
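The quoted explanation is about overhead inside np.fromstring itself; one way to see it is to time fromstring against a plain split-and-convert on the same text (a sketch; the numbers you get are machine-dependent and none appear in the thread):

    import timeit

    setup = (
        "import numpy as np; "
        "text = ' '.join(str(float(x)) for x in range(10000))"
    )
    print(timeit.timeit("np.fromstring(text, dtype=float, sep=' ')",
                        setup=setup, number=100))
    print(timeit.timeit("np.array(text.split(), dtype=float)",
                        setup=setup, number=100))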
On 11/19/10 11:19 AM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 10:55 AM, Nathaniel Smith wrote:
>> Why not make this a patch to numpy/scipy instead?
>
> My guess is that having separate underlying functions for each dtype,
> ndim, and axis would be a nightmare for a large project like Numpy.
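To make the design point concrete, here is an illustration in plain Python of what dispatching to a separate function per (dtype, ndim, axis) combination looks like; the names are invented and this is not nanny's actual code:

    import numpy as np

    def _nanmax_1d_float64_axisNone(a):
        # Specialized kernel for one combination: 1d, float64, axis=None.
        return max((x for x in a if x == x), default=np.nan)

    _dispatch = {
        (np.dtype(np.float64), 1, None): _nanmax_1d_float64_axisNone,
        # ...one entry per supported (dtype, ndim, axis) combination
    }

    def nanmax(a, axis=None):
        a = np.asarray(a)
        func = _dispatch.get((a.dtype, a.ndim, axis))
        if func is None:
            return np.nanmax(a, axis=axis)  # fall back for unsupported cases
        return func(a)

    print(nanmax(np.array([1.0, np.nan, 3.0])))  # 3.0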
Tue, 16 Nov 2010 09:41:04 -0500, Darren Dale wrote:
[clip]
> That loop takes 0.33 seconds to execute, which is a good start. I need
> some help converting this example to return an actual numpy array. Could
> anyone please offer a suggestion?
Easiest way is probably to use ndarray buffers and resi
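The reply is cut off, but the "ndarray buffer and resize" idea can be sketched in plain Python; the thread's version would live in Cython for speed, and the function below is only an illustration, not Pauli's code:

    import numpy as np

    def parse_floats(text):
        # Grow a float64 buffer by resizing in place instead of building a
        # Python list, then trim it to the number of values actually parsed.
        out = np.empty(1024, dtype=np.float64)
        n = 0
        for token in text.split():
            if n == out.shape[0]:
                out.resize(2 * n, refcheck=False)  # double the buffer
            out[n] = float(token)
            n += 1
        out.resize(n, refcheck=False)              # trim to final length
        return out

    print(parse_floats("1.0 2.5 nan 4.0"))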
On Fri, Nov 19, 2010 at 12:29 PM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 12:19 PM, Pauli Virtanen wrote:
>> Fri, 19 Nov 2010 11:19:57 -0800, Keith Goodman wrote:
>> [clip]
>>> My guess is that having separate underlying functions for each dtype,
>>> ndim, and axis would be a nightmare for
On Fri, Nov 19, 2010 at 12:19 PM, Pauli Virtanen wrote:
> Fri, 19 Nov 2010 11:19:57 -0800, Keith Goodman wrote:
> [clip]
>> My guess is that having separate underlying functions for each dtype,
>> ndim, and axis would be a nightmare for a large project like Numpy. But
>> manageable for a focused p
On Fri, Nov 19, 2010 at 12:10 PM, wrote:
> What's the speed advantage of nanny compared to np.nansum that you
> have if the arrays are larger, say (1000,10) or (1,100) axis=0 ?
Good point. In the small examples I showed so far maybe the speed up
was all in overhead. Fortunately, that's not
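The reply is truncated before the numbers, but the comparison Josef asks about can be run like this, assuming nanny is installed and exposes nansum as a drop-in replacement per the README; the timings are whatever your machine reports, not figures from the thread:

    import timeit

    setup = ("import numpy as np; import nanny as ny; "
             "a = np.random.rand(1000, 10)")
    print(timeit.timeit("np.nansum(a, axis=0)", setup=setup, number=10000))
    print(timeit.timeit("ny.nansum(a, axis=0)", setup=setup, number=10000))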
Fri, 19 Nov 2010 11:19:57 -0800, Keith Goodman wrote:
[clip]
> My guess is that having separate underlying functions for each dtype,
> ndim, and axis would be a nightmare for a large project like Numpy. But
> manageable for a focused project like nanny.
Might be easier to migrate the nan* function
On Fri, Nov 19, 2010 at 2:35 PM, Keith Goodman wrote:
> On Fri, Nov 19, 2010 at 11:12 AM, Benjamin Root wrote:
>
>> That's why I use masked arrays. It is dtype agnostic.
>>
>> I am curious if there are any lessons that were learned in making Nanny that
>> could be applied to the masked array fun
On Fri, Nov 19, 2010 at 11:12 AM, Benjamin Root wrote:
> That's why I use masked arrays. It is dtype agnostic.
>
> I am curious if there are any lessons that were learned in making Nanny that
> could be applied to the masked array functions?
I suppose you could write a cython function that oper
On Fri, Nov 19, 2010 at 10:55 AM, Nathaniel Smith wrote:
> On Fri, Nov 19, 2010 at 10:33 AM, Keith Goodman wrote:
>> Nanny uses the magic of Cython to give you a faster, drop-in replacement for
>> the NaN functions in NumPy and SciPy.
>
> Neat!
>
> Why not make this a patch to numpy/scipy instead
On Fri, Nov 19, 2010 at 12:55 PM, Nathaniel Smith wrote:
> On Fri, Nov 19, 2010 at 10:33 AM, Keith Goodman
> wrote:
> > Nanny uses the magic of Cython to give you a faster, drop-in replacement
> for
> > the NaN functions in NumPy and SciPy.
>
> Neat!
>
> Why not make this a patch to numpy/scipy
On Fri, Nov 19, 2010 at 10:33 AM, Keith Goodman wrote:
> Nanny uses the magic of Cython to give you a faster, drop-in replacement for
> the NaN functions in NumPy and SciPy.
Neat!
Why not make this a patch to numpy/scipy instead?
> Nanny uses a separate Cython function for each combination of n
=====
Nanny
=====
Nanny uses the magic of Cython to give you a faster, drop-in replacement for
the NaN functions in NumPy and SciPy.
For example::

    >> import nanny as ny
    >> import numpy as np
    >> arr = np.random.rand(100, 100)
    >> timeit np.nansum(arr)
    1 loops, best of 3: 6
On Tue, Nov 16, 2010 at 10:31 AM, Darren Dale wrote:
> On Tue, Nov 16, 2010 at 9:55 AM, Pauli Virtanen wrote:
>> Tue, 16 Nov 2010 09:41:04 -0500, Darren Dale wrote:
>> [clip]
>>> That loop takes 0.33 seconds to execute, which is a good start. I need
>>> some help converting this example to return
Hi,
On 19 November 2010 03:48, Sachin Kumar Sharma wrote:
> Is graphic output like maps, histograms, crossplots, and tornado charts good
> enough with the basic installation, or does it need some additional packages?
You might want to ask this question on the scipy mailing list.
For maps, you need basemap or
Sorry, I accidentally hit send long before I was finished writing. But
to answer your question, they contain many 2048-element multi-channel
analyzer spectra.
Darren
On Tue, Nov 16, 2010 at 9:26 AM, william ratcliff
wrote:
> Actually,
> I do use spec when I have synchrotron experiments. But why