Hi all,
There are some new test errors
==
ERROR: Test with missing and filling values
--
Traceback (most recent call last):
File "/data/home/nwagner/local/li
2010/11/22 Gael Varoquaux :
> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>> It seems that a simplex is what you need. It uses the barycenter (more
>> or less) to find a new point in the simplex. And it works well only in
>> convex functions (but in fact almost all functions have an issue with
>> this :D)
> It's not an error but a harmless (although confusing) warning message.
> You should be able to filter it by adding the following to
> scipy/__init__.py:
>
> import warnings
> warnings.filterwarnings(action='ignore',
>                         message='.*__builtin__.file size changed.*')
>
> Can you check if that works
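The suggested filter can be sanity-checked in isolation (a standalone sketch, not scipy's actual __init__.py; the warning text is reproduced only as an example message):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    # The filter Gael suggests: filterwarnings() prepends it,
    # so it is consulted before the 'always' rule above
    warnings.filterwarnings(action='ignore',
                            message='.*__builtin__.file size changed.*')
    warnings.warn('__builtin__.file size changed, may indicate '
                  'binary incompatibility')
    warnings.warn('an unrelated warning')

# Only the unrelated warning is recorded; the noisy one is filtered out
print([str(w.message) for w in caught])
```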
On Mon, Nov 22, 2010 at 5:27 PM, Matthieu Brucher wrote:
> 2010/11/22 Gael Varoquaux :
>> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>>> It seems that a simplex is what you need.
>>
>> Ha! I am learning new fancy words. Now I can start looking clever.
>>
>>> > I realize that
On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
> It seems that a simplex is what you need. It uses the barycenter (more
> or less) to find a new point in the simplex. And it works well only in
> convex functions (but in fact almost all functions have an issue with
> this :D)
One
2010/11/22 Gael Varoquaux :
> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>> It seems that a simplex is what you need.
>
> Ha! I am learning new fancy words. Now I can start looking clever.
>
>> > I realize that maybe I should rephrase my question to try and draw more
>> > out
On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
> It seems that a simplex is what you need.
Ha! I am learning new fancy words. Now I can start looking clever.
> > I realize that maybe I should rephrase my question to try and draw more
> > out of the common wealth of knowledge on
2010/11/22 Gael Varoquaux :
> On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
>> Hi ;)
>
> Hi bro
>
>> > does anybody have, or knows where I can find some N dimensional
>> > dichotomy optimization code in Python (BSD licensed, or equivalent)?
>
>> I don't know any code, but it shouldn't be too difficult by going
>> through a KdTree.
On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
> Hi ;)
Hi bro
> > does anybody have, or knows where I can find some N dimensional
> > dichotomy optimization code in Python (BSD licensed, or equivalent)?
> I don't know any code, but it shouldn't be too difficult by going
> through a KdTree.
I think that the only speedup you will get is defining an index only
once and reusing it.
2010/11/22 Ernest Adrogué :
> 22/11/10 @ 14:04 (-0600), thus spake Robert Kern:
>> > This way, I get the elements (0,1) and (1,1) which is what
>> > I wanted. The question is: is it possible to omit the [0,1]
22/11/10 @ 14:04 (-0600), thus spake Robert Kern:
> > This way, I get the elements (0,1) and (1,1) which is what
> > I wanted. The question is: is it possible to omit the [0,1]
> > in the index?
>
> No, but you can write generic code for it:
>
> t[np.arange(t.shape[0]), x, y]
Thank you. This i
2010/11/22 Gael Varoquaux :
> Hi list,
Hi ;)
> does anybody have, or knows where I can find some N dimensional dichotomy
> optimization code in Python (BSD licensed, or equivalent)?
I don't know any code, but it shouldn't be too difficult by going
through a KdTree.
> Worst case, it does not look
22/11/10 @ 11:20 (-0800), thus spake John Salvatier:
> I didn't realize the x's and y's were varying the first time around.
> There's probably a way to omit it, but I think the conceptually
> simplest way is probably what you had to begin with. Build an index by
> saying i = numpy.arange(0, t.shape[0])
2010/11/21 Ernest Adrogué :
> Hi,
>
> Suppose an array of shape (N,2,2), that is N arrays of
> shape (2,2). I want to select an element (x,y) from each one
> of the subarrays, so I get a 1-dimensional array of length
> N. For instance:
>
> In [228]: t=np.arange(8).reshape(2,2,2)
>
> In [229]: t
> O
22/11/10 @ 11:08 (-0800), thus spake Christopher Barker:
> On 11/21/10 11:37 AM, Ernest Adrogué wrote:
> >>so you want
> >>
> >>t[:,x,y]
> >
> >I tried that, but it's not the same:
> >
> >In [307]: t[[0,1],x,y]
> >Out[307]: array([1, 7])
> >
> >In [308]: t[:,x,y]
> >Out[308]:
> >array([[1, 3],
> >
Hi list,
does anybody have, or knows where I can find some N dimensional dichotomy
optimization code in Python (BSD licensed, or equivalent)?
Worst case, it does not look too bad to code, but I am interested by any
advice. I haven't done my reading yet, and I don't know how ill-posed a problem
Hi,
On Mon, Nov 22, 2010 at 11:35 AM, Christopher Barker wrote:
> On 11/20/10 11:04 PM, Ralf Gommers wrote:
>> I am pleased to announce the availability of NumPy 1.5.1.
>
>> Binaries, sources and release notes can be found at
>> https://sourceforge.net/projects/numpy/files/.
>>
>> Thank you to ev
On 11/20/10 11:04 PM, Ralf Gommers wrote:
> I am pleased to announce the availability of NumPy 1.5.1.
> Binaries, sources and release notes can be found at
> https://sourceforge.net/projects/numpy/files/.
>
> Thank you to everyone who contributed to this release.
Yes, thanks so much -- in particu
I didn't realize the x's and y's were varying the first time around.
There's probably a way to omit it, but I think the conceptually
simplest way is probably what you had to begin with. Build an index by
saying i = numpy.arange(0, t.shape[0])
then you can do t[i, x,y]
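The index trick described above can be checked with a small standalone example (a minimal sketch; the array follows Ernest's t = np.arange(8).reshape(2,2,2), and x, y are chosen to pick elements (0,1) and (1,1) as in the thread):

```python
import numpy as np

# N subarrays of shape (2, 2); pick element (x[k], y[k]) from subarray k
t = np.arange(8).reshape(2, 2, 2)
x = np.array([0, 1])
y = np.array([1, 1])

# Build the first-axis index once and reuse it
i = np.arange(0, t.shape[0])
print(t[i, x, y])  # array([1, 7]), same as t[[0, 1], x, y]
```

Note that t[:, x, y] is different: the slice broadcasts x and y against every subarray, producing a (2, 2) result instead of pairing index k with subarray k.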
On Mon, Nov 22, 2010 at 11:0
On Mon, Nov 22, 2010 at 2:04 PM, Keith Goodman wrote:
> On Mon, Nov 22, 2010 at 11:00 AM, wrote:
>
>> I don't think that works for complex numbers.
>> (statsmodels has now a preference that calculations work also for
>> complex numbers)
>
> I'm only supporting int32, int64, float64 for now. Getting the other
> ints and floats should be easy.
On 11/21/10 11:37 AM, Ernest Adrogué wrote:
>> so you want
>>
>> t[:,x,y]
>
> I tried that, but it's not the same:
>
> In [307]: t[[0,1],x,y]
> Out[307]: array([1, 7])
>
> In [308]: t[:,x,y]
> Out[308]:
> array([[1, 3],
> [5, 7]])
what is your t? Here's my example, which I think matches wh
On Mon, Nov 22, 2010 at 1:59 PM, Keith Goodman wrote:
> On Mon, Nov 22, 2010 at 10:51 AM, wrote:
>> On Mon, Nov 22, 2010 at 1:39 PM, Keith Goodman wrote:
>>> On Mon, Nov 22, 2010 at 10:32 AM, wrote:
>>>> On Mon, Nov 22, 2010 at 1:26 PM, Keith Goodman wrote:
>>>>> On Mon, Nov 22, 2010 at 9:03
On Mon, Nov 22, 2010 at 11:00 AM, wrote:
> I don't think that works for complex numbers.
> (statsmodels has now a preference that calculations work also for
> complex numbers)
I'm only supporting int32, int64, float64 for now. Getting the other
ints and floats should be easy. I don't have plans
On Mon, Nov 22, 2010 at 1:51 PM, wrote:
> On Mon, Nov 22, 2010 at 1:39 PM, Keith Goodman wrote:
>> On Mon, Nov 22, 2010 at 10:32 AM, wrote:
>>> On Mon, Nov 22, 2010 at 1:26 PM, Keith Goodman wrote:
>>>> On Mon, Nov 22, 2010 at 9:03 AM, Keith Goodman wrote:
>>>>> @cython.boundscheck(False)
On Mon, Nov 22, 2010 at 10:51 AM, wrote:
> On Mon, Nov 22, 2010 at 1:39 PM, Keith Goodman wrote:
>> On Mon, Nov 22, 2010 at 10:32 AM, wrote:
>>> On Mon, Nov 22, 2010 at 1:26 PM, Keith Goodman wrote:
>>>> On Mon, Nov 22, 2010 at 9:03 AM, Keith Goodman wrote:
>>>>> @cython.boundscheck(False)
On Mon, Nov 22, 2010 at 1:39 PM, Keith Goodman wrote:
> On Mon, Nov 22, 2010 at 10:32 AM, wrote:
>> On Mon, Nov 22, 2010 at 1:26 PM, Keith Goodman wrote:
>>> On Mon, Nov 22, 2010 at 9:03 AM, Keith Goodman wrote:
>>>
>>>> @cython.boundscheck(False)
>>>> @cython.wraparound(False)
>>>> def nanstd
On Mon, Nov 22, 2010 at 10:32 AM, wrote:
> On Mon, Nov 22, 2010 at 1:26 PM, Keith Goodman wrote:
>> On Mon, Nov 22, 2010 at 9:03 AM, Keith Goodman wrote:
>>
>>> @cython.boundscheck(False)
>>> @cython.wraparound(False)
>>> def nanstd_twopass(np.ndarray[np.float64_t, ndim=1] a, int ddof):
>>>
On Mon, Nov 22, 2010 at 1:26 PM, Keith Goodman wrote:
> On Mon, Nov 22, 2010 at 9:03 AM, Keith Goodman wrote:
>
>> @cython.boundscheck(False)
>> @cython.wraparound(False)
>> def nanstd_twopass(np.ndarray[np.float64_t, ndim=1] a, int ddof):
>> "nanstd of 1d numpy array with dtype=np.float64 along axis=0."
On Mon, Nov 22, 2010 at 9:03 AM, Keith Goodman wrote:
> @cython.boundscheck(False)
> @cython.wraparound(False)
> def nanstd_twopass(np.ndarray[np.float64_t, ndim=1] a, int ddof):
> "nanstd of 1d numpy array with dtype=np.float64 along axis=0."
> cdef Py_ssize_t i
> cdef int a0 = a.shape[0]
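Since the Cython source is cut off in the archive, here is a plain-NumPy sketch of the same two-pass idea (illustrative only, not Keith's actual code; the function name mirrors his):

```python
import numpy as np

def nanstd_twopass(a, ddof=0):
    """Two-pass nanstd of a 1d array: the first pass computes the mean,
    the second sums squared deviations, NaNs excluded throughout."""
    mask = ~np.isnan(a)
    n = mask.sum()
    mean = a[mask].sum() / n                     # pass 1: mean of non-NaN values
    d = a[mask] - mean
    return np.sqrt((d * d).sum() / (n - ddof))   # pass 2: variance, then sqrt

a = np.array([1.0, 2.0, np.nan, 4.0])
print(nanstd_twopass(a))  # agrees with np.nanstd(a)
```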
On Mon, Nov 22, 2010 at 12:28 PM, Keith Goodman wrote:
> On Mon, Nov 22, 2010 at 9:13 AM, wrote:
>
>> Two pass would provide precision that we would expect in numpy, but I
>> don't know if anyone ever tested the NIST problems for basic
>> statistics.
>
> Here are the results for their most diffi
Basically, indexing in Python is a little slower, the number of things indexing
can do is more varied, and more to the point, the objects returned from arrays
are NumPy scalars (which have math which is not optimized).
If you do element-by-element indexing, it's generally best to use Python lists
On Mon, Nov 22, 2010 at 9:13 AM, wrote:
> Two pass would provide precision that we would expect in numpy, but I
> don't know if anyone ever tested the NIST problems for basic
> statistics.
Here are the results for their most difficult dataset. But I guess
running one test doesn't mean anything.
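The precision issue under discussion can be reproduced without the NIST data: the textbook one-pass formula E[x^2] - E[x]^2 loses digits to cancellation when the data have a large common offset, while the two-pass version does not (a hedged sketch; the function names here are made up for illustration):

```python
import numpy as np

def var_onepass(a):
    # Textbook one-pass formula E[x^2] - E[x]^2: cancellation-prone
    n = a.size
    return (a * a).sum() / n - (a.sum() / n) ** 2

def var_twopass(a):
    # Two-pass formula: subtract the mean first, then average the squares
    d = a - a.mean()
    return (d * d).sum() / a.size

# A large common offset makes the cancellation in the one-pass formula visible
rng = np.random.default_rng(0)
a = rng.random(100) + 1e8
print(var_onepass(a), var_twopass(a))
```

With values near 1e8, both terms of the one-pass formula are near 1e16, so their difference retains almost none of the true variance (~0.08).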
On Mon, Nov 22, 2010 at 12:07 PM, Benjamin Root wrote:
> On Mon, Nov 22, 2010 at 11:03 AM, Keith Goodman wrote:
>>
>> On Sun, Nov 21, 2010 at 5:56 PM, Robert Kern wrote:
>> > On Sun, Nov 21, 2010 at 19:49, Keith Goodman wrote:
>> >
>> >> But this sample gives a difference:
>> >>
>>
On Mon, Nov 22, 2010 at 11:03 AM, Keith Goodman wrote:
> On Sun, Nov 21, 2010 at 5:56 PM, Robert Kern wrote:
> > On Sun, Nov 21, 2010 at 19:49, Keith Goodman wrote:
> >
> >> But this sample gives a difference:
> >>
> >> a = np.random.rand(100)
> >> a.var()
> >> 0.080232196646619805
>
On Sun, Nov 21, 2010 at 5:56 PM, Robert Kern wrote:
> On Sun, Nov 21, 2010 at 19:49, Keith Goodman wrote:
>
>> But this sample gives a difference:
>>
>> a = np.random.rand(100)
>> a.var()
>> 0.080232196646619805
>> var(a)
>> 0.080232196646619791
>>
>> As you know, I'm trying to make a d
On Mon, Nov 22, 2010 at 2:30 AM, Pauli Virtanen wrote:
> Sun, 21 Nov 2010 23:26:37 -0700, Charles R Harris wrote:
> [clip]
> > Yes, indexing is known to be slow, although I don't recall the precise
> > reason for that. Something to do with way integers are handled or some
> > such. There was some
On Mon, Nov 22, 2010 at 6:05 AM, Hagen Fürstenau wrote:
>> ISTM that this elementary functionality deserves an implementation
>> that's as fast as it can be.
>
> To substantiate this, I just wrote a simple implementation of
> "categorical" in "numpy/random/mtrand.pyx" and it's more than 8x faster
> ISTM that this elementary functionality deserves an implementation
> that's as fast as it can be.
To substantiate this, I just wrote a simple implementation of
"categorical" in "numpy/random/mtrand.pyx" and it's more than 8x faster
than your version for multiple samples of the same distribution
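Hagen's mtrand.pyx patch isn't reproduced in the archive, but the usual fast approach for this problem can be sketched in pure NumPy: precompute the CDF once, then draw all samples with a vectorized binary search (the function name `categorical` and its signature are ours, not the proposed API):

```python
import numpy as np

def categorical(p, size, rng=None):
    """Draw `size` samples from a categorical distribution with
    probabilities `p`, via inverse-CDF lookup (O(log k) per sample)."""
    rng = np.random.default_rng() if rng is None else rng
    cdf = np.cumsum(p)
    u = rng.random(size) * cdf[-1]   # scaling guards against rounding in the sum
    return np.searchsorted(cdf, u, side='right')

samples = categorical([0.2, 0.5, 0.3], size=1000)
```

Because u lies in [0, cdf[-1]) and side='right' is used, every returned index falls in range(len(p)); the per-sample cost is a binary search rather than a full multinomial draw.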
Sun, 21 Nov 2010 23:26:37 -0700, Charles R Harris wrote:
[clip]
> Yes, indexing is known to be slow, although I don't recall the precise
> reason for that. Something to do with way integers are handled or some
> such. There was some discussion on the list many years ago...
It could be useful if so
>> but this is bound to be inefficient as soon as the vector of
>> probabilities gets large, especially if you want to draw multiple samples.
>>
>> Have I overlooked something or should this be added?
>
> I think you misunderstand the point of multinomial distributions.
I'm afraid the multiple sa
On Sun, Nov 21, 2010 at 09:03:22PM -0500, Wes McKinney wrote:
> Maybe let's have the next thread on SciPy-user-- I think what we're
> talking about is general enough to be discussed there.
Yes, a lot of this is of general interest.
I'd be particularly interested in having the NaN work land in scipy.
On 2010-11-22, at 2:51 AM, Hagen Fürstenau wrote:
> but this is bound to be inefficient as soon as the vector of
> probabilities gets large, especially if you want to draw multiple samples.
>
> Have I overlooked something or should this be added?
I think you misunderstand the point of multinomial distributions.