On Wed, Aug 27, 2014 at 3:52 PM, Orion Poplawski
wrote:
> On 08/27/2014 11:07 AM, Julian Taylor wrote:
> > Hello,
> >
> > Almost punctually for EuroScipy we have finally managed to release the
> > first release candidate of NumPy 1.9.
> > We intend to only fix bugs until the final release which we plan to do
> > in the next 1-2 weeks.
I just checked the docs on ufuncs, and it appears that's a solved problem
now, since ufunc.reduceat now comes with an axis argument. Or maybe it
already did when I wrote that, but I simply wasn't paying attention. Either
way, the code is fully vectorized now, in both grouped and non-grouped
axes. I
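For readers following along, a minimal sketch of what reducing contiguous groups along a chosen axis with `np.add.reduceat` looks like (the array and group boundaries here are made up for illustration):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
# sum columns [0:2] and [2:4] of every row in one vectorized call;
# each index in the list marks where a new group starts along axis=1
out = np.add.reduceat(a, [0, 2], axis=1)
```

Each output column is the sum of one contiguous slice of input columns, with no Python-level loop over the long axis.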
On 27 August 2014 19:02, Jaime Fernández del Río
wrote:
>
> Since there is at least one other person out there that likes it, is there
> any more interest in such a function? If yes, any comments on what the
> proper interface for extra output should be? Although perhaps the best is
> to leave th
On 08/27/2014 11:07 AM, Julian Taylor wrote:
> Hello,
>
> Almost punctually for EuroScipy we have finally managed to release the
> first release candidate of NumPy 1.9.
> We intend to only fix bugs until the final release which we plan to do
> in the next 1-2 weeks.
I'm seeing the following error
Yes, I was aware of that. But the point would be to provide true
vectorization on those operations.
The way I see it, numpy may not have to have a GroupBy implementation, but
it should at least enable implementing one that is fast and efficient over
any axis.
On Wed, Aug 27, 2014 at 12:38 PM, Eelco Hoogendoorn wrote:
i.e., if the grouped axis is small but the other axes are not, you could
write this, which avoids the python loop over the long axis that
np.vectorize would otherwise perform.
import numpy as np
from grouping import group_by
keys = np.random.randint(0,4,10)
values = np.random.rand(10,2000)
for k, g in zip(*group_by(keys)(values)):  # assumed iteration API of group_by
f.i., this works as expected as well (100 keys of 1d int arrays and 100
values of 1d float arrays):
group_by(randint(0,4,(100,2))).mean(rand(100,2))
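For comparison, a grouped mean over any leading axis can be built from plain NumPy primitives (a stable argsort plus one reduceat call); this is a sketch of the general idea, not the grouping module's actual implementation:

```python
import numpy as np

keys = np.array([1, 0, 1, 2, 0, 1])
values = np.arange(12, dtype=float).reshape(6, 2)

order = np.argsort(keys, kind='mergesort')        # stable sort by key
unique, starts = np.unique(keys[order], return_index=True)
counts = np.diff(np.append(starts, len(keys)))    # group sizes
# one reduceat call sums every group at once, over all trailing axes
means = np.add.reduceat(values[order], starts, axis=0) / counts[:, None]
```

The Python-level work is O(number of groups) at most; all per-element work stays vectorized.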
On Wed, Aug 27, 2014 at 9:27 PM, Eelco Hoogendoorn <
hoogendoorn.ee...@gmail.com> wrote:
> If I understand you correctly, the current implementation supports these
> operations.
If I understand you correctly, the current implementation supports these
operations. All reductions over groups (except for median) are performed
through the corresponding ufunc (see GroupBy.reduce). This works on
multidimensional arrays as well, although this broadcasting over the
non-grouping axes
Hi Eelco,
I took a deeper look into your code a couple of weeks back. I don't think I
have fully grasped what it allows completely, but I agree that some form of
what you have there is highly desirable. Along the same lines, for sometime
I have been thinking that the right place for a `groupby` in
It wouldn't hurt to have this function, but my intuition is that its use
will be minimal. If you are already working with sorted arrays, you already
have a flop cost on that order of magnitude, and the optimized merge saves
you a factor two at the very most. Using numpy means you are sacrificing
fa
On Wed, Aug 27, 2014 at 10:01 AM, Robert Kern wrote:
> On Wed, Aug 27, 2014 at 5:44 PM, Jaime Fernández del Río
> wrote:
> > After reading this stackoverflow question:
> >
> >
> http://stackoverflow.com/questions/25530223/append-a-list-at-the-end-of-each-row-of-2d-array
> >
> > I was reminded that the `np.concatenate` family of functions do not
> > broadcast the shapes of their inputs.
Hello,
Almost punctually for EuroScipy we have finally managed to release the
first release candidate of NumPy 1.9.
We intend to only fix bugs until the final release which we plan to do
in the next 1-2 weeks.
In this release numerous performance improvements have been added, most
significantly t
A request was open in github to add a `merge` function to numpy that would
merge two sorted 1d arrays into a single sorted 1d array. I have been
playing around with that idea for a while, and have a branch in my numpy
fork that adds a `mergesorted` function to `numpy.lib`:
https://github.com/jaime
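Not having seen the branch itself, a vectorized O(n) merge of two sorted 1d arrays can be sketched with `np.searchsorted`; the function name and implementation here are only an illustration, not the proposed `mergesorted`:

```python
import numpy as np

def merge_sorted(a, b):
    """Merge two sorted 1d arrays into one sorted array without re-sorting."""
    out = np.empty(len(a) + len(b), dtype=np.result_type(a, b))
    # position each element of b will occupy in the merged result:
    # its insertion point in a, shifted by the b elements placed before it
    idx = np.searchsorted(a, b) + np.arange(len(b))
    mask = np.zeros(len(out), dtype=bool)
    mask[idx] = True
    out[mask] = b
    out[~mask] = a
    return out
```

Compared with concatenating and re-sorting, this does a single binary-search pass over the shorter array instead of an O(n log n) sort.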
On Wed, Aug 27, 2014 at 5:44 PM, Jaime Fernández del Río
wrote:
> After reading this stackoverflow question:
>
> http://stackoverflow.com/questions/25530223/append-a-list-at-the-end-of-each-row-of-2d-array
>
> I was reminded that the `np.concatenate` family of functions do not
> broadcast the shapes of their inputs.
After reading this stackoverflow question:
http://stackoverflow.com/questions/25530223/append-a-list-at-the-end-of-each-row-of-2d-array
I was reminded that the `np.concatenate` family of functions do not
broadcast the shapes of their inputs:
>>> import numpy as np
>>> a = np.arange(6).reshape(3, 2)
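As a concrete sketch of the status quo being discussed: the smaller input has to be tiled to a matching shape by hand before concatenating, since `np.concatenate` will not broadcast it for you (the arrays here are made up for illustration):

```python
import numpy as np

a = np.arange(6).reshape(3, 2)
b = np.array([10, 11])
# np.concatenate([a, b], axis=1) raises: shapes (3, 2) and (2,) don't match.
# Replicate b along the first axis first:
out = np.concatenate([a, np.tile(b, (3, 1))], axis=1)
```

A broadcasting `concatenate` would make the `np.tile` step unnecessary.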
On Do, 2014-08-28 at 00:08 +0900, phinn stuart wrote:
> Hi everyone, how can I convert (1L, 480L, 1440L) shaped numpy array
> into (480L, 1440L)?
>
Just slice it: arr[0, ...] will do the trick. If you are daring,
np.squeeze also works, or of course np.reshape.
- Sebastian
>
> Thanks in advance.
On 27.08.2014 17:08, phinn stuart wrote:
> Hi everyone, how can I convert (1L, 480L, 1440L) shaped numpy array into
> (480L, 1440L)?
>
> Thanks in advance.
np.squeeze removes singleton (length-1) dimensions:
In [2]: np.squeeze(np.ones((1,23,232))).shape
Out[2]: (23, 232)
There is also np.squeeze(), which will eliminate any singleton dimensions
(but I personally hate using it because it can accidentally squeeze out
dimensions that you didn't intend to squeeze when you have arbitrary input
data).
Ben Root
On Wed, Aug 27, 2014 at 11:12 AM, Wagner Sebastian <
sebast
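That pitfall can be avoided with squeeze's axis argument, which raises an error rather than silently dropping an unexpected dimension, e.g.:

```python
import numpy as np

a = np.ones((1, 480, 1440))
out = np.squeeze(a, axis=0)   # drops only the first axis
# np.squeeze(a, axis=1) would raise ValueError, since axis 1 has length 480
```

With an explicit axis, arbitrary input data can never lose a dimension you meant to keep.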
Hi,
Our short example-data:
>>> data = np.arange(10).reshape(1,5,2)
>>> data
array([[[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]]])
Shape is (1,5,2)
Two possibilies:
>>> data.reshape(5,2)
array([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
Or just:
>>> data[0]
Hi everyone, how can I convert (1L, 480L, 1440L) shaped numpy array into
(480L, 1440L)?
Thanks in advance.
phinn
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On 26 Aug 2014, at 09:05 pm, Adrian Altenhoff
wrote:
>> But you are right that the problem with using the first_values, which should
>> of course be valid,
>> somehow stems from the use of usecols. It seems that in that loop
>>
>>for (i, conv) in user_converters.items():
>>
>> i in user_c
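A small illustration of what that loop translates (the data here is made up): with usecols, the keys of the converters dict refer to the original column indices, which the quoted loop remaps onto the selected columns.

```python
import numpy as np
from io import StringIO

data = StringIO("a,1,2\nb,3,4\n")
# keep only columns 1 and 2; the converter key 1 names the *original* column
out = np.genfromtxt(data, delimiter=',', usecols=(1, 2),
                    converters={1: lambda s: float(s) * 10},
                    encoding='utf-8')
```

Original column 1 (values 1 and 3) passes through the converter; column 2 is parsed normally.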