Sebastian Berg sipsolutions.net> writes:
>
> On Mo, 2015-03-16 at 15:53 +0000, Dave Hirschfeld wrote:
> > I have a number of large arrays for which I want to compute the mean
and
> > standard deviation over a particular axis - e.g. I want to compute
the
> > stat
I have a number of large arrays for which I want to compute the mean and
standard deviation over a particular axis - e.g. I want to compute the
statistics for axis=1 as if the other axes were combined so that in the
example below I get two values back
In [1]: a = randn(30, 2, 1)
For the me
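One way to get the two per-axis-1 statistics is to reduce over a tuple of axes (supported since NumPy 1.7); a minimal sketch, with random data standing in for the real arrays:

```python
import numpy as np

a = np.random.randn(30, 2, 1)
# Reduce over every axis except axis=1 (i.e. axes 0 and 2 here), so the
# other axes are treated as one combined sample dimension.
mean = a.mean(axis=(0, 2))
std = a.std(axis=(0, 2))
# mean and std each hold one value per entry along axis 1
```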
Daniel Smith icloud.com> writes:
>
> Hello everyone, I originally brought an optimized einsum routine
forward a few weeks back that attempts to contract numpy arrays together
in an optimal way. This can greatly reduce the scaling and overall cost
of the einsum expression for the cost of a few
Andrew Nelson writes:
>
> Dear list, I have a 4D array, A, that has the shape (NX, NY, 2, 2). I
wish to perform matrix multiplication of the 'NY' 2x2 matrices, resulting
in the matrix B. B would have shape (NX, 2, 2). I believe that np.einsum
would be up to the task, but I'm not quite sure o
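If the intent is to chain-multiply the NY 2x2 matrices for each of the NX slots, einsum alone can't reduce over a product chain of variable length; a hedged sketch using matmul's batch broadcasting (the NX, NY values are illustrative):

```python
import numpy as np
from functools import reduce

NX, NY = 4, 3
A = np.random.randn(NX, NY, 2, 2)
# B[i] = A[i, 0] @ A[i, 1] @ ... @ A[i, NY-1];
# np.matmul broadcasts over the leading NX axis, so each reduce step
# multiplies all NX matrix pairs at once.
B = reduce(np.matmul, (A[:, k] for k in range(NY)))
```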
Julian Taylor googlemail.com> writes:
>
> On 23.10.2014 19:21, Dave Hirschfeld wrote:
> > Hi,
> > I accidentally passed a pandas DatetimeIndex to `np.arange` which
caused
> > it to segfault. It's a pretty dumb thing to do but I don't think it
> > s
Hi,
I accidentally passed a pandas DatetimeIndex to `np.arange` which caused
it to segfault. It's a pretty dumb thing to do but I don't think it
should cause a segfault!
Python 2.7.5 |Continuum Analytics, Inc.| (default, Jul 1 2013,
12:37:52)
[MSC v.1500 64 bit (AMD64)] on win32
Type "help",
It seems that the docs website is down?
http://docs.scipy.org/doc/
-Dave
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Chao YUE gmail.com> writes:
>
>
>
> Dear all,
> I have a simple question. Is there a way to denote the unchanged dimension
in the reshape function? like suppose I have an array named "arr" having
three dims with the first dimension length as 48, I want to reshape the
first dim into 12*4, bu
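There's no special "keep this dimension" token, but the existing shape tuple can be reused; a small sketch assuming a 3-D array whose trailing dims are arbitrary:

```python
import numpy as np

arr = np.random.randn(48, 5, 6)
# Split the first axis into 12*4 while leaving the trailing axes untouched:
out = arr.reshape((12, 4) + arr.shape[1:])
# Alternatively, -1 lets reshape infer one remaining dimension:
flat = out.reshape(48, -1)  # back to 48 rows, trailing axes flattened
```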
Julian Taylor googlemail.com> writes:
>
> On 16.05.2014 10:59, Dave Hirschfeld wrote:
> > Julian Taylor googlemail.com> writes:
> >
> > Yes, I'd heard about the improvements and am very excited to try them out
> > since indexing is one of the bottlene
Julian Taylor googlemail.com> writes:
>
>
> if ~50% faster is fast enough a simple improvement would be to replace
> the use of PyArg_ParseTuple with manual tuple unpacking.
> The PyArg functions are incredibly slow and is not required in
> VOID_copyswap which just extracts "Oi".
>
> This 50%
Sebastian Berg sipsolutions.net> writes:
>
> On Do, 2014-05-15 at 12:31 +0000, Dave Hirschfeld wrote:
> > As can be seen from the code below (or in the notebook linked beneath)
fancy
> > indexing of a structured array is twice as slow as indexing both fields
> > i
As can be seen from the code below (or in the notebook linked beneath) fancy
indexing of a structured array is twice as slow as indexing both fields
independently - making it 4x slower?
I found that fancy indexing was a bottleneck in my application so I was
hoping to reduce the overhead by comb
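For reference, the two approaches being compared look roughly like this (field names and sizes are made up):

```python
import numpy as np

a = np.zeros(100000, dtype=[('x', 'f8'), ('y', 'f8')])
idx = np.random.randint(0, a.size, 1000)

# fancy-indexing the structured array in one go...
r1 = a[idx]
# ...versus fancy-indexing each field independently
x = a['x'][idx]
y = a['y'][idx]
```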
Jeff Reback gmail.com> writes:
>
> Dave,
>
> your example is not a problem with numpy per se, rather that the default
generation is in local timezone (same as what python datetime does).
> If you localize to UTC you get the results that you expect.
>
The problem is that the default datetime
Sankarshan Mudkavi uwaterloo.ca> writes:
>
> Hey all,
> It's been a while since the last datetime and timezones discussion thread
was visited (linked below):
>
> http://thread.gmane.org/gmane.comp.python.numeric.general/53805
>
> It looks like the best approach to follow is the UTC only appro
Sturla Molden gmail.com> writes:
>
> gmail.com> wrote:
>
> > I use official numpy release for development, Windows, 32bit python,
> > i.e. MingW 3.5 and whatever old ATLAS the release includes.
> >
> > a constant 13% cpu usage is 1/8th of my 8 virtual cores.
>
> Based on this and Alex' mess
alex ncsu.edu> writes:
>
> Hello list,
>
> Here's another idea resurrection from numpy github comments that I've
> been advised could be posted here for re-discussion.
>
> The proposal would be to make np.linalg.svd more like scipy.linalg.svd
> with respect to input checking. The argument aga
Ralf Gommers gmail.com> writes:
>
> On Fri, Nov 8, 2013 at 8:22 PM, Charles R Harris
gmail.com> wrote:
>
>
> and think that the main thing missing at this point is fixing the datetime
problems.
>
>
> Is anyone planning to work on this? If yes, you need a rough estimate of
when this is r
Nathaniel Smith pobox.com> writes:
>
>
> As soon as you talk about attributes "returning" things you've already
> broken Python's mental model... attributes are things that sit there,
> not things that execute arbitrary code. Of course this is not how the
> actual implementation works, attribut
gmail.com> writes:
>
> I think a .H is feature creep and too specialized
>
> What's .H of an int, a str, a bool?
>
> It's just .T and a view, so you cannot rely that conj() makes a copy
> if you don't work with complex.
>
> .T is just a reshape function and has **nothing** to do with matrix
al
Alan G Isaac gmail.com> writes:
>
> On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
> > Having .T but not .H is an example of this split.
>
> Hate to do this but ...
>
> Readability counts.
+10!
A.conjugate().transpose() is unspeakably horrible IMHO. Since there's no way
to avoid a copy you
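For comparison, the shortest current spelling chains the existing attribute and method:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
# The conjugate transpose today: .conj() (which copies) followed by .T (a view)
H = A.conj().T
```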
The example below demonstrates the fact that the datetime64 constructor
ignores the dtype argument if passed in. Is this conscious design decision or
a bug/oversight?
In [55]: from datetime import datetime
...: d = datetime.now()
...:
In [56]: d
Out[56]: datetime.datetime(2013, 6, 12,
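For what it's worth, passing the unit as the second positional argument (rather than a dtype keyword) does coerce the precision; a small sketch with a fixed timestamp:

```python
import numpy as np
from datetime import datetime

d = datetime(2013, 6, 12, 10, 30, 15)
# The unit argument truncates to the requested resolution...
day = np.datetime64(d, 'D')
# ...whereas np.datetime64(d) alone keeps the datetime's microsecond unit.
```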
Charles R Harris gmail.com> writes:
>
> Hi All, I think it is time to start the runup to the 1.8 release. I don't
know of any outstanding blockers but if anyone has a PR/issue that they feel
needs to be in the next Numpy release now is the time to make it known. Chuck
>
It would be good to get
>
>
Sorry, having trouble keeping up with this thread!
Comments, specific to my (limited) use-cases are inline:
Chris Barker - NOAA Federal noaa.gov> writes:
>
>
> I thought about that -- but if you have timedelta without datetime,
> you really just have an integer -- we haven't bought anythin
Charles R Harris gmail.com> writes:
>
> Hi All, There is a PR that adds some blas and lapack functions to numpy. I'm
thinking that if that PR is merged it would be good to move all of the blas
and lapack functions, including the current ones in numpy/linalg into a single
directory somewhere in
Travis Oliphant continuum.io> writes:
>
>
> Mark Wiebe and I are both still tracking NumPy development and can provide
context and even help when needed. Apologies if we've left a different
impression. We have to be prudent about the time we spend as we have other
projects we are pursui
Nathaniel Smith pobox.com> writes:
>
> On Wed, Apr 3, 2013 at 2:26 PM, Dave Hirschfeld
> gmail.com> wrote:
> >
> > This isn't acceptable for my use case (in a multinational company) and I
found
> > no reasonable way around it other than bypassing the num
Andreas Hilboll hilboll.de> writes:
>
> >
> > I think your point about using current timezone in interpreting user
> > input being dangerous is probably correct --- perhaps UTC all the way
> > would be a safer (and simpler) choice?
>
> +1
>
+10 from me!
I've recently come across a bug due t
I have two NxMx3 arrays and I want to reduce over the last dimension of the
first array by selecting those elements corresponding to the index of the
maximum value of each 3-vector of the second array to give an NxM result.
Hopefully that makes sense? If not hopefully the example below will shed
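One way to do the selection with plain fancy indexing (the N, M values are illustrative):

```python
import numpy as np

N, M = 4, 5
a = np.random.randn(N, M, 3)
b = np.random.randn(N, M, 3)

# Index of the largest element of each 3-vector in b...
idx = b.argmax(axis=-1)            # shape (N, M)
# ...used to pick the corresponding element of a via broadcast indexing
i, j = np.ogrid[:N, :M]
result = a[i, j, idx]              # shape (N, M)
```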
Robert Kern gmail.com> writes:
>
> >>> >
> >>> > One alternative that does not expand the API with two-liners is to let
> >>> > the ndarray.fill() method return self:
> >>> >
> >>> > a = np.empty(...).fill(20.0)
> >>>
> >>> This violates the convention that in-place operations never return
> >
Mark Bakker gmail.com> writes:
>
> I think there is a problem with assigning a 1D complex array of length one
> to a position in another complex array.
> Example:
> a = ones(1,'D')
> b = ones(1,'D')
> a[0] = b
> ---
> TypeEr
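One workaround (depending on the NumPy version) is to assign the scalar element rather than the length-1 array:

```python
import numpy as np

a = np.ones(1, 'D')   # 'D' == complex128
b = np.ones(1, 'D')
# Assigning b[0] (a complex scalar) instead of b (a length-1 array)
# sidesteps the array-to-element assignment path that raises.
a[0] = b[0]
```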
Sebastian Berg sipsolutions.net> writes:
>
> Hello,
>
> looking at the code, when only adding/removing dimensions with size 1,
> numpy takes a small shortcut, however it uses 0 stride lengths as value
> for the new one element dimensions temporarily, then replacing it again
> to ensure the new
Dave Hirschfeld gmail.com> writes:
>
> It seems that reshape doesn't work correctly on an array which has been
> resized using the 0-stride trick e.g.
>
> In [73]: x = array([5])
>
> In [74]: y = as_strided(x, shape=(10,), strides=(0,))
>
> In [75]: y
>
It seems that reshape doesn't work correctly on an array which has been
resized using the 0-stride trick e.g.
In [73]: x = array([5])
In [74]: y = as_strided(x, shape=(10,), strides=(0,))
In [75]: y
Out[75]: array([5, 5, 5, 5, 5, 5, 5, 5, 5, 5])
In [76]: y.reshape([10,1])
Out[76]:
array([[
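One workaround, assuming a copy is acceptable, is to materialize the broadcast view before reshaping:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.array([5])
y = as_strided(x, shape=(10,), strides=(0,))
# Copying gives the array real (non-zero) strides, so reshape behaves normally
z = np.ascontiguousarray(y).reshape(10, 1)
```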
Paul Anton Letnes gmail.com> writes:
> I would prefer:
> IndexError: index 3 is out of bounds for axis 0: [-3,2]
> as I find the 3) notation a bit weird - after all, indices are not floats, so
2.999 or 2.3 doesn't make sense as
> an index.
>
> An alternative is to not refer to negative indices
Pierre GM gmail.com> writes:
>
>
> Hello,
> The idea behind having a lib.recfunctions and not a rec.recfunctions or
whatever was to illustrate that the
> functions of this package are more generic than they appear. They work with
regular structured ndarrays
> and don't need recarrays. Methinks we
Mark Wiebe gmail.com> writes:
>
> Here are some current behaviors that are inconsistent with the microsecond
default, but consistent with the "generic time unit" idea:
>
> >>> np.timedelta64(10, 's') + 10
> numpy.timedelta64(20,'s')
>
>
That is what I would expect (and hope) would happen. IM
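i.e. the bare integer is interpreted as carrying the same unit as the other operand:

```python
import numpy as np

# The plain 10 is treated as 10 of the timedelta's own unit ('s')
t = np.timedelta64(10, 's') + 10
```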
Mark Wiebe gmail.com> writes:
>
>
> It appears to me that a structured dtype with some further NumPy extensions
> could entirely replace the 'events' metadata fairly cleanly. If the ufuncs
> are extended to operate on structured arrays, and integers modulo n are
> added as a new dtype, a dtyp
Wes McKinney gmail.com> writes:
>
> >
> > - Fundamental need to be able to work with multiple time series,
> > especially performing operations involving cross-sectional data
> > - I think it's a bit hard for lay people to use (read: ex-MATLAB/R
> > users). This is just my opinion, but a few yea
Mark Wiebe gmail.com> writes:
>
> >>> a = np.datetime64('today')
>
> >>> a - a.astype('M8[Y]')
>
> numpy.timedelta64(157,'D')
>
> vs
>
>
> >>> a = np.datetime64('today')
> >>> a - a.astype('M8[Y]')
> Traceback (most recent call last):
> File "", line 1, in
> TypeError: ufunc subtract can
Christopher Barker noaa.gov> writes:
>
> Dave Hirschfeld wrote:
> > That would be one way of dealing with irregularly spaced data. I would argue
> > that the example is somewhat back-to-front though. If something happens
> > twice a month it's not occurring
Robert Kern gmail.com> writes:
>
> On Tue, Jun 7, 2011 at 07:34, Dave Hirschfeld gmail.com>
wrote:
>
> > I'm not convinced about the events concept - it seems to add complexity
> > for something which could be accomplished better in other ways. A [Y]//4
>
As a user of numpy/scipy in finance I thought I would put in my 2p worth as
it's something which is of great importance in this area.
I'm currently a heavy user of the scikits.timeseries package by Matt & Pierre
and I'm also following the development of statsmodels and pandas should we
require m
Jean-Luc Menut free.fr> writes:
>
> I have a little question about the speed of numpy vs IDL 7.0.
>
> Here the IDL result:
> % Compiled module: $MAIN$.
> 2.837
>
> The python code:
> from numpy import *
> from time import time
> time1 = time()
> for j in range(1):
> for i i
Venkat gmail.com> writes:
>
> Hi All, I am new to Numpy (also Scipy). I am trying to reshape my text data
which is in one single column (10,000 rows). I want the data to be in 100x100
array form.I have many files to convert like this. All of them are having file
names like 0, 1, 2, 500. with ou
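A minimal sketch of the per-file conversion (np.loadtxt would read the real column; an arange stands in for it here):

```python
import numpy as np

# data = np.loadtxt('0')          # one column of 10,000 values
data = np.arange(10000.0)          # stand-in for the file contents
# reshape fills row by row (C order): the first 100 values become row 0
grid = data.reshape(100, 100)
```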
math.duke.edu> writes:
>
> Hi, what is the best way to print (to a file or to stdout) formatted
> numerical values? Analogously to C's printf("%d %g",x,y) etc?
>
For stdout you can simply do:
In [26]: w, x, y, z = np.random.randint(0, 100, 4)
In [27]: type(w)
Out[27]:
In [28]: print("%f %g %e %d"
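A complete version of that idea, with made-up values:

```python
import numpy as np

w, x, y, z = 3, 14.159, 2.71828, 42
# C-style conversion specifiers work with Python's % operator...
line = "%d %g %e %d" % (w, x, y, z)
print(line)
# ...and np.savetxt(fname, arr, fmt='%g') does the same for whole arrays
```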
Charles R Harris gmail.com> writes:
> I was also thinking that someone might want to provide a better display at
> some point, drawing on a canvas, for instance. And what happens when the
> degree gets up over 100, which is quite reasonable with the Chebyshev
> polynomials?
There may well be be