On Tue, 03 Jan 2017, Stephan Hoyer wrote:
> >> testing on a stable Debian box with an elderly numpy, where it does behave
> >> sensibly:
> >> $> python -c "import numpy; print('numpy version: ', numpy.__version__);
> >> a=2; b=-2; print(pow(a,b)); print(pow(numpy.array(a), b))"
> >> ('numpy version:
On Tue, 03 Jan 2017, Stephan Hoyer wrote:
>On Tue, Jan 3, 2017 at 9:00 AM, Yaroslav Halchenko
>wrote:
> Sorry for coming too late to the discussion, after the PR "addressing"
> the issue by issuing an error was merged [1]. I got burnt by new
>
On Tue, 11 Oct 2016, Peter Creasey wrote:
> >> I agree with Sebastian and Nathaniel. I don't think we can deviate from
> >> the existing behavior (int ** int -> int) without breaking lots of existing
> >> code, and if we did, yes, we would need a new integer power function.
> >> I think it's be
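For anyone replaying the above on a current box, here is a minimal probe in the
spirit of the one-liner quoted earlier (variable names are illustrative): plain
Python pow() returns a float, while recent numpy (1.12+, per the merged PR
discussed above) raises ValueError for integer arrays raised to negative
integer powers, and older versions returned a number instead.

  import numpy as np

  a, b = 2, -2
  print(pow(a, b))                  # plain Python ints: 0.25
  try:
      print(pow(np.array(a), b))    # integer array ** negative integer
  except ValueError as exc:
      # numpy >= 1.12 refuses; older versions returned a number instead
      print("numpy", np.__version__, "raises:", exc)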
I have no stats on whether anyone is looking at
http://yarikoptic.github.io/numpy-vbench besides me at times, so I
might just be crying into the wild:
I have moved the numpy-vbench runs to a somewhat newer/more powerful box,
and that is why benchmark results are being re-estimated (thus you might
still
On Mon, 07 Apr 2014, Sturla Molden wrote:
> > so I would assume that the devil is indeed in R post-processing and would
> > look
> > into it (if/when I get a chance).
> I tried to look into the R source code. It's the worst mess I have ever
> seen. I couldn't even find their Mersenne twister.
it
On Sun, 06 Apr 2014, Sturla Molden wrote:
> > R, Python std library, numpy all have Mersenne Twister RNG implementation.
> > But
> > all of them generate different numbers. This issue was previously
> > discussed in
> > https://github.com/numpy/numpy/issues/4530 : In Python, and numpy generat
Hi NumPy gurus,
We wanted to test some of our code by comparing against the results of an R
implementation which provides bootstrapped results.
R, Python std library, numpy all have Mersenne Twister RNG implementation. But
all of them generate different numbers. This issue was previously discussed in
https
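A quick side-by-side check of the point above, for the Python and numpy halves
(R left out here); the seed value is arbitrary and the sketch only prints the
two streams so you can compare them yourself:

  import random
  import numpy as np

  seed = 12345
  random.seed(seed)
  rng = np.random.RandomState(seed)
  # same nominal seed, but seeding and output generation differ between the
  # two Mersenne Twister wrappers, so the streams generally do not match
  print("stdlib random:", [random.random() for _ in range(3)])
  print("numpy        :", list(rng.random_sample(3)))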
On Sun, 24 Nov 2013, Nathaniel Smith wrote:
> > On this positive note (it is boring to start a new thread, isn't it?) --
> > would you be interested in me transferring numpy-vbench over to
> > github.com/numpy ?
> If you mean just moving the existing git repo under the numpy
> organization, like g
On Mon, 25 Nov 2013, Fernando Perez wrote:
> ok -- since no negative feedback received -- submitted as is. I will
> let you know when it gets rejected or accepted.
>Let me know if it's accepted: I'll be keynoting at PyCon'14, and since my
>focus will obviously be scientific comp
On Tue, 15 Oct 2013, Nathaniel Smith wrote:
> What do you have to lose?
> > btw -- fresh results are here http://yarikoptic.github.io/numpy-vbench/ .
> > I have tuned benchmarking so it now reflects the best performance across
> > multiple executions of the whole battery, thus eliminating spurio
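For reference, the "best performance across multiple executions" tuning
mentioned above is the usual best-of-N timing pattern; a stand-alone sketch
(the timed statement here is just a placeholder):

  import timeit

  # repeat the whole measurement and keep the minimum, which suppresses
  # spurious slowdowns caused by unrelated load on the machine
  times = timeit.repeat(stmt="sum(range(1000))", repeat=5, number=1000)
  print("best of 5:", min(times))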
ok -- since no negative feedback received -- submitted as is. I will
let you know when it gets rejected or accepted.
cheers,
On Tue, 15 Oct 2013, Yaroslav Halchenko wrote:
> Hi Guys,
> PyCon 2014 will be just around the corner from where I am, so I decided
> to attend. Being lazy (o
On Tue, 15 Oct 2013, Nathaniel Smith wrote:
> > and I think it might be worth letting people know about my little project.
> > I
> > would really appreciate your sincere feedback (e.g. "not worth it" would be
> > valuable too). Here is the title/abstract
> > numpy-vbench -- speed benchmark
Hi Guys,
PyCon 2014 will be just around the corner from where I am, so I decided
to attend. Being lazy (or busy) I haven't submitted any big talk, but I am
thinking of submitting a few lightning talks (just 5 min and a 400-character
abstract limit),
and I think it might be worth letting people know about my l
On Fri, 06 Sep 2013, josef.p...@gmail.com wrote:
> On Fri, Sep 6, 2013 at 3:21 PM, Yaroslav Halchenko
> wrote:
> > FWIW -- updated runs of the benchmarks are available at
> > http://yarikoptic.github.io/numpy-vbench which now include also
> > maintenance/1.8.x bra
On Fri, 06 Sep 2013, Daπid wrote:
> some old ones are
> still there, some might be specific to my CPU here
>How long does one run take? Maybe I can run it on my machine (Intel i5)
>for comparison.
In the current configuration, where I "target" each benchmark run to around
200ms (thus pos
FWIW -- updated runs of the benchmarks are available at
http://yarikoptic.github.io/numpy-vbench which now also include the
maintenance/1.8.x branch (no divergences detected yet). As far as I can
see there are only recent improvements and no new ones (but some old ones
are still there, some might be specific to
I am glad to announce that now you can see benchmark timing plots for
multiple branches, thus being able to spot regressions in maintenance
branches and compare enhancements in relation to previous releases.
e.g.
* improving upon 1.7.x but still lagging behind 1.6.x
http://www.onerussian.com/tmp
On Wed, 24 Jul 2013, Pauli Virtanen wrote:
> How about splitting doc/sphinxext out from the main Numpy repository to
> a separate `numpydoc` repo under Numpy project?
+1
> It's a separate Python package, after all. Moreover, this would make it
> easier to use it as a git submodule (e.g. in Sci
://www.onerussian.com/tmp/numpy-vbench/vb_vb_core.html#numpy-ones-100
Cheers,
On Fri, 19 Jul 2013, Yaroslav Halchenko wrote:
> I have just added a few more benchmarks, and here they come
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_linalg.html#numpy-linalg-pinv-a-float32
> it seems to be ve
On Mon, 22 Jul 2013, Benjamin Root wrote:
> At some point I hope to tune up the report with an option of viewing the
> plot using e.g. nvd3 JS so it could be easier to pinpoint/analyze
> interactively.
>shameless plug... the soon-to-be-finalized matplotlib-1.3 has a WebAgg
>
At some point I hope to tune up the report with an option of viewing the
plot using e.g. nvd3 JS so it could be easier to pinpoint/analyze
interactively.
On Sat, 20 Jul 2013, Pauli Virtanen wrote:
> 20.07.2013 01:38, Nathaniel Smith wrote:
> > The biggest ~recent change in master's linalg wa
On Fri, 19 Jul 2013, Warren Weckesser wrote:
> Well, this is embarrassing: https://github.com/numpy/numpy/pull/3539
> Thanks for benchmarks! I'm now an even bigger fan. :)
Great to see that those were of help! I thought to provide a more
detailed picture (benchmarking all recent commits) to provid
ll be related to 80% faster det()? ;)
norm was hit as well a bit earlier, might well be within these commits:
https://github.com/numpy/numpy/compare/24a0aa5...29dcc54
I will now rerun benchmarking for the rest of the commits (it was last
running during the day iirc)
Cheers,
On Tue, 16 Jul 2013, Yaroslav Halc
On Thu, 18 Jul 2013, Charles R Harris wrote:
> yeah... That is how I thought it was working, but I guess it was left
> without asanyarray'ing for additional flexibility/performance, so any
> array-like object could be used, not just ndarray-derived classes.
>Speaking of which,
On Thu, 18 Jul 2013, Skipper Seabold wrote:
> Not sure anyway if my direct numpy.mean application to a pandas DataFrame
> is
> "kosher" -- initially I just assumed that any argument is asanyarray'ed
> first
> -- but I think here catching TypeError for those incompatible .m
Hi everyone,
Some of my elderly code stopped working upon upgrades of numpy and
upcoming pandas: https://github.com/pydata/pandas/issues/4290 so I have
looked at the code of
2481 def mean(a, axis=None, dtype=None, out=None, keepdims=False):
2482 """
...
2489 Parameters
249
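A toy illustration of the delegation being discussed here (the class is made up
for the example): when the argument is not a plain ndarray, numpy.mean tries
the object's own .mean method, which is why an incompatible signature surfaces
as a TypeError rather than the input being asanyarray'ed first.

  import numpy as np

  class WithMean(object):
      # signature compatible with what numpy.mean forwards
      def mean(self, axis=None, dtype=None, out=None, **kwargs):
          return 42.0

  print(np.mean(WithMean()))         # delegates to WithMean.mean -> 42.0
  print(np.mean([[1, 2], [3, 4]]))   # plain sequences are converted -> 2.5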
o with the initially detected performance hit, but in some cases it
still seems to locate the commits hitting performance reasonably well.
Enjoy,
On Tue, 09 Jul 2013, Yaroslav Halchenko wrote:
> Julian Taylor contributed some benchmarks he was "concerned" about, so
> now the collection is e
ch/vb_vb_reduce.html#numpy-all-fast
http://www.onerussian.com/tmp/numpy-vbench/vb_vb_reduce.html#numpy-any-fast
Enjoy
On Mon, 01 Jul 2013, Yaroslav Halchenko wrote:
> FWIW -- updated plots with contribution from Julian Taylor
> http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_indexi
FWIW -- updated plots with contribution from Julian Taylor
http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_indexing.html#mmap-slicing
;-)
On Mon, 01 Jul 2013, Yaroslav Halchenko wrote:
> Hi Guys,
> not quite the recommendations you expressed, but here is my ugly
> attempt t
Hi Guys,
not quite the recommendations you expressed, but here is my ugly
attempt to improve benchmark coverage:
http://www.onerussian.com/tmp/numpy-vbench-20130701/index.html
initially I also ran those ufunc benchmarks for each dtype separately,
but then the resulting webpage gets loong, which bring
On Mon, 06 May 2013, Sebastian Berg wrote:
> > if you care to tune it up/extend it, then I could fire it up again on
> > that box (which doesn't do anything else ATM AFAIK). Since the majority
> > of the time is spent actually building it (I did it with ccache though),
> > it would be neat if you come up
On Wed, 01 May 2013, Sebastian Berg wrote:
> > btw -- is there something like pandas' vbench for numpy? i.e. where
> > it would be possible to track/visualize such performance
> > improvements/hits?
> Sorry if it seemed harsh, but I only skimmed the mails and it seemed a bit
> like an obvious pie
On Wed, 01 May 2013, Sebastian Berg wrote:
> There really is no point discussing here, this has to do with numpy
> doing iteration order optimization, and you actually *want* this. Let's
> for a second assume that the old behavior was better, then the next guy
> is going to ask: "Why is np.add.red
On Wed, 01 May 2013, Matthew Brett wrote:
> > There really is no point discussing here, this has to do with numpy
> > doing iteration order optimization, and you actually *want* this. Let's
> > for a second assume that the old behavior was better, then the next guy
> > is going to ask: "Why is np.
1, 2013 at 6:24 PM, Matthew Brett wrote:
> > HI,
> > On Wed, May 1, 2013 at 9:09 AM, Yaroslav Halchenko
> > wrote:
> >> 3. they are identical on other architectures (e.g. amd64)
> > To me that is surprising. I would have guessed that the order is the
> > sa
On Wed, 01 May 2013, Nathaniel Smith wrote:
> > not sure there is anything to fix here. Third-party code relying on a
> > certain outcome of rounding error is likely incorrect anyway.
> Yeah, seems to just be the standard floating point indeterminism.
> Using Matthew's numbers and pure Python flo
On Wed, 01 May 2013, Nathaniel Smith wrote:
>> Thanks everyone for the feedback.
>> Is it worth me starting a bisection to catch where it was introduced?
>Is it a bug, or just typical fp rounding issues? Do we know which answer
>is correct?
to ignorant me, even without considerin
Thanks everyone for the feedback.
Is it worth me starting a bisection to catch where it was introduced?
On Wed, 01 May 2013, Sebastian Berg wrote:
> > so might be wrong). One simple try hinting that this may be going on
> > would be to make the data fortran order.
> Well I guess it is optimized and the
could anyone on a 32-bit system with a fresh numpy (1.7.1) test the following:
> wget -nc http://www.onerussian.com/tmp/data.npy ; python -c 'import numpy as
> np; data1 = np.load("/tmp/data.npy"); print np.sum(data1[1,:,0,1]) -
> np.sum(data1, axis=1)[1,0,1]'
0.0
because unfortunately it seems on fr
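The effect being chased above can be reproduced in spirit (not bit-for-bit)
with any float32 array: the same values summed through a direct slice versus
through an axis reduction go through different iteration orders, so the
difference need not be exactly zero on every platform/version. The shapes and
data below are made up, not data.npy.

  import numpy as np

  data = np.random.RandomState(0).rand(4, 1000, 3, 2).astype(np.float32)
  d1 = np.sum(data[1, :, 0, 1])         # sum the column directly
  d2 = np.sum(data, axis=1)[1, 0, 1]    # same numbers via an axis reduction
  print(d1 - d2)                        # may or may not be exactly 0.0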
e been a 1d (original definition of s is s=(c_double*min(x,y))())
ok -- added printing of dtype of that array -- with 1.6.2 it is reported as
object while with 1.7.0b1 -- float64...
from here I pass it onto experts! ;)
On Thu, 06 Sep 2012, Yaroslav Halchenko wrote:
> On Thu, 06 Sep 2012,
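Since the dtype seen for such a ctypes array apparently differs between the
numpy versions mentioned above, the simplest hand-off to the experts is a
self-contained probe (the length here is chosen arbitrarily instead of
min(x, y)):

  import numpy as np
  from ctypes import c_double

  s = (c_double * 5)()       # 1-d ctypes array of doubles, as in the report
  arr = np.asarray(s)
  print(np.__version__, arr.dtype, arr.shape)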
On Thu, 06 Sep 2012, Aron Ahmadia wrote:
>Are you running the valgrind test with the Python suppression
>
> file: [1]http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp
yes -- on Debian there is /usr/lib/valgrind/python.supp which comes
with the python package and I belie
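For reference, a typical invocation pointing valgrind at that Debian-shipped
suppressions file would look roughly like this (the python command being
checked is only an example):

  $> valgrind --tool=memcheck --suppressions=/usr/lib/valgrind/python.supp \
       python -c "import numpy; numpy.test()"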
On Wed, 05 Sep 2012, Nathaniel Smith wrote:
> It is an intentional change:
> https://github.com/numpy/numpy/commit/b7cc20ad#L5R77
> but the benefits aren't necessarily *that* compelling, so it could
> certainly be revisited if there are unforeseen downsides. (Mostly it
> means that intermediate
val_EvalFrameEx (in
/home/yoh/python-env/numpy/bin/python)
==10281==    by 0x4F1DAF: PyEval_EvalCodeEx (in
/home/yoh/python-env/numpy/bin/python)
On Wed, 05 Sep 2012, Yaroslav Halchenko wrote:
> Recently Sandro uploaded 1.7.0b1 into Debian experimental so I decided to see
> if this b
:3].base is a, a[:4][:3].base.base is a'
1.6.2
True False True
1.7.0rc1.dev-ea23de8
True True False
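The beginning of that one-liner is cut off in the archive; a check along the
same lines (array name chosen here) shows the change: numpy 1.7 collapses
chains of views so .base points straight at the owning array, whereas 1.6.x
kept the intermediate slice as the base, matching the True/False patterns
above.

  import numpy as np

  a = np.arange(10)
  b = a[:4][:3]              # a view of a view
  print(np.__version__)
  print(b.base is a, b.base.base is a)
  # 1.7.x: True False   (base collapsed to the owner, whose own base is None)
  # 1.6.x: False True   (base is the intermediate a[:4], whose base is a)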
On Wed, 05 Sep 2012, Yaroslav Halchenko wrote:
> pymvpa2_2.1.0-1.dsc  ok  FAILED
> http://www.onerussian.com/Linux/deb/logs/python-numpy_1.7.0~b1-1_amd64.test
Recently Sandro uploaded 1.7.0b1 into Debian experimental so I decided to see
whether this bleeding-edge version breaks any of its dependees... Below is
a copy of
http://www.onerussian.com/Linux/deb/logs/python-numpy_1.7.0~b1-1_amd64.testrdepends.debian-sid/python-numpy_1.7.0~b1-1_amd64.testrde
ndeed I should just put logic in place
to treat those cases separately.
vance ;-)
olumns sharing the
same attribute (e.g. a label for a sample in classification problems).
Hi Warren,
> The "problem" is that the tuple is converted to an array in the
> statement that does the comparison, not in the construction of the
> array. Numpy attempts
> to convert the right hand side of the == operator into an array.
> It then does the comparison using the two arrays.
Thanks
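Warren's explanation spelled out in code (values arbitrary): the tuple on the
right-hand side is turned into an array before the comparison, so == ends up
element-wise rather than a single "is this tuple equal" answer.

  import numpy as np

  a = np.array([1, 2, 3])
  print(a == (1, 2, 3))          # element-wise: [ True  True  True]
  print(np.asarray((1, 2, 3)))   # the conversion that happens implicitly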
On Thu, 14 Jan 2010, josef.p...@gmail.com wrote:
> It looks difficult to construct an object array with only 1 element,
> since a tuple is interpreted as different array elements.
yeap
> It looks like some convention is necessary for interpreting a tuple in
> the array construction, but it doesn'
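In practice the workaround for the construction problem above is to create the
object array first and assign the tuple into it; passing the tuple straight to
np.array spreads it over elements (values arbitrary):

  import numpy as np

  direct = np.array([(3, 4)], dtype=object)
  print(direct.shape)            # (1, 2) -- the tuple got unpacked

  a = np.empty(1, dtype=object)
  a[0] = (3, 4)                  # stored as a single element
  print(a.shape, a[0])           # (1,) (3, 4)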
Dear NumPy People,
First I want to apologize if I misbehaved on NumPy Trac by reopening the
closed ticket
http://projects.scipy.org/numpy/ticket/1362
but I still feel strongly that there is a misunderstanding
and the bug/defect is valid. I would appreciate it if someone would waste
more of his time t