Is there a function that operates like 'take' but does assignment?
Specifically, one that takes indices and an axis? As far as I can tell, no
such function exists. Is there any particular reason?
One can fake such a thing by doing (code untested):
s = len(a.shape)*[np.s_[:]]
s[axis] = np.s_[1::2]
a[tuple(s)] = values  # 'values' stands for whatever is being assigned
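Wrapped up as a helper, a minimal sketch of the take-like assignment (the name
assign_take and the sample values are mine, not an existing NumPy function):

import numpy as np

def assign_take(a, indices, axis, values):
    # Assignment counterpart of np.take: write `values` into `a`
    # at `indices` along `axis`.
    index = a.ndim * [np.s_[:]]
    index[axis] = indices
    a[tuple(index)] = values

a = np.zeros((3, 4))
assign_take(a, [1, 3], axis=1, values=7)   # columns 1 and 3 of `a` are now 7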
James Adams wrote:
> I would like to create an array object and initialize the array's
> values with an arbitrary fill value, like you can do using the ones()
> and zeros() creation routines to create and initialize arrays with
> ones or zeros. Is there an easy way to do this? If this isn't
> pos
> African or European?
>
>
> Why on earth would you ask that?
>
>
It's a Monty Python and the Holy Grail reference.
Eric
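For the underlying question about an arbitrary fill value, a minimal sketch
(np.full exists as of NumPy 1.8; on older versions, empty() followed by
fill() does the same thing):

import numpy as np

a = np.full((3, 4), 7.5)   # NumPy 1.8+

b = np.empty((3, 4))       # older NumPy: allocate, then fill in place
b.fill(7.5)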
On Saturday, January 4, 2014, Ralf Gommers wrote:
>
> On Mon, Dec 23, 2013 at 12:14 AM, Matti Picus wrote:
>
>> Hi. I started to port the stdlib cmath C99 compatible complex number
>> tests to numpy, after noticing that numpy seems to have different
>> complex number routines than cmat
On Thursday, February 20, 2014, Eelco Hoogendoorn <hoogendoorn.ee...@gmail.com> wrote:
> If the standard semantics are not affected, and the most common
> two-argument scenario does not take more than a single if-statement
> overhead, I don't see why it couldn't be a replacement for the existing
On Friday, March 14, 2014, Gregor Thalhammer wrote:
>
> On 13.03.2014 at 18:35, Leo Mao wrote:
>
> > Hi,
> >
> > Thanks a lot for your advice, Chuck.
> > Following your advice, I have modified my draft of proposal. (attachment)
> > I think it still needs more comments so that I can make it
On Sunday, March 16, 2014, wrote:
>
> On Sun, Mar 16, 2014 at 10:54 AM, Nathaniel Smith wrote:
>
>> On Sun, Mar 16, 2014 at 2:39 PM, Eelco Hoogendoorn wrote:
>> > Note that I am not opposed to extra operators in python, and only mildly
>> > opposed to a matrix multiplication o
On Monday, April 14, 2014, Matthew Brett wrote:
> Hi,
>
> On Mon, Apr 14, 2014 at 12:12 PM, Warren Weckesser wrote:
> >
> > On Mon, Apr 14, 2014 at 2:59 PM, Matthew Brett wrote:
> >>
> >> Hi,
> >>
> >> With Carl Kleffner, I am trying to build a numpy 1.8.1 wheel for
> >> Windows
On Tuesday, September 16, 2014, Jaime Fernández del Río <jaime.f...@gmail.com> wrote:
> On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris <charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith wrote:
>>
>>> On Tue, Sep 16, 2014 at 4:31 PM, Jaime F
On Tuesday, September 23, 2014, Todd wrote:
> On Mon, Sep 22, 2014 at 5:31 AM, Nathaniel Smith wrote:
>
>> On Sun, Sep 21, 2014 at 7:50 PM, Stephan Hoyer wrote:
>> My feeling though is that in most of the cases you mention,
>> implementing a new array-like type is huge overkill. ndarray's
The second argument is named `refcheck` rather than check_refs.
Eric
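Assuming the function in question is ndarray.resize (where refcheck is indeed
the second argument), a minimal sketch of the spelling:

import numpy as np

a = np.arange(10)
# `refcheck` controls whether NumPy verifies that no other objects
# reference the buffer before resizing it in place.
a.resize(20, refcheck=False)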
On Wed, Dec 10, 2014 at 2:36 PM, Chris Barker wrote:
> On Tue, Dec 9, 2014 at 11:03 PM, Sturla Molden wrote:
>
>> Nathaniel Smith wrote:
>>
>> > @contextmanager
>> > def tmp_zeros(*args, **kwargs):
>> > arr = np.zeros(
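The quoted snippet is cut off above; a minimal sketch of what such a context
manager could look like (the cleanup details are an assumption, not
necessarily the original code):

import numpy as np
from contextlib import contextmanager

@contextmanager
def tmp_zeros(*args, **kwargs):
    # Hand out a zeroed array and drop the reference when the block exits.
    arr = np.zeros(*args, **kwargs)
    try:
        yield arr
    finally:
        del arr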
On Thu, Dec 11, 2014 at 10:41 AM, Todd wrote:
> On Tue, Oct 28, 2014 at 5:28 AM, Nathaniel Smith wrote:
>
>> On 28 Oct 2014 04:07, "Matthew Brett" wrote:
>> >
>> > Hi,
>> >
>> > On Mon, Oct 27, 2014 at 8:07 PM, Sturla Molden wrote:
>> > > Sturla Molden wrote:
>> > >
>> > >> If we really ne
On Monday, December 29, 2014, Valentin Haenel wrote:
> Hi,
>
> how do I access the kind of the data from cython, i.e. the single
> character string:
>
> 'b' boolean
> 'i' (signed) integer
> 'u' unsigned integer
> 'f' floating-point
> 'c' complex-floating point
> 'O' (Python) objects
> 'S', 'a' (b
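From Cython the simplest route is the same Python-level attribute,
dtype.kind; at the C level, the `kind` field of the PyArray_Descr struct
holds the same character. A minimal sketch of the attribute access:

import numpy as np

a = np.zeros(3, dtype=np.complex128)
print(a.dtype.kind)   # 'c'  (complex floating point)
print(a.dtype.char)   # 'D'  (the more specific type character)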
`low` and `high` can be arrays, so you received one draw from (-0.5, 201) and
one draw from (0.5, 201).
Eric
On Fri, Mar 13, 2015 at 11:57 AM, Alan G Isaac wrote:
> Today I accidentally wrote `uni = np.random.uniform((-0.5,0.5),201)`,
> supplying a tuple instead of separate low and high values. This
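A quick illustration of the broadcasting that happens here (example of mine,
not from the thread):

import numpy as np

np.random.seed(0)
# low=(-0.5, 0.5) broadcasts against high=201, so this returns two draws:
# one from uniform(-0.5, 201) and one from uniform(0.5, 201).
uni = np.random.uniform((-0.5, 0.5), 201)
print(uni.shape)   # (2,)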
On Mon, Mar 16, 2015 at 11:53 AM, Dave Hirschfeld wrote:
> I have a number of large arrays for which I want to compute the mean and
> standard deviation over a particular axis - e.g. I want to compute the
> statistics for axis=1 as if the other axes were combined so that in the
> example below I
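One reading of the question, sketched with illustrative shapes and axis
numbers: reduce over every axis except the one of interest, either with a
tuple of axes or by moving that axis to the front and flattening the rest.

import numpy as np

a = np.random.rand(3, 4, 5)

# Statistics "for axis=1", treating the other axes as if they were combined:
mean = a.mean(axis=(0, 2))
std = a.std(axis=(0, 2))

# Equivalent via reshaping:
b = np.swapaxes(a, 0, 1).reshape(a.shape[1], -1)
assert np.allclose(mean, b.mean(axis=1))
assert np.allclose(std, b.std(axis=1))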
This blog post, and the links within, also seem relevant. It appears to have
Python code available to try things out as well.
https://dataorigami.net/blogs/napkin-folding/19055451-percentile-and-quantile-estimation-of-big-data-the-t-digest
-Eric
On Wed, Apr 15, 2015 at 11:24 AM, Benjamin Root wrot
You have to do it by hand in numpy 1.6. For example see
https://github.com/scipy/scipy/blob/master/scipy/signal/lfilter.c.src#L285-L292
-Eric
On Sun, Jun 14, 2015 at 10:33 PM, Sturla Molden wrote:
> What would be the best alternative to PyArray_SetBaseObject in NumPy 1.6?
>
> Purpose: Keep ali
It looks like all of the numpy failures there are due to a poor
implementation of hypot. One solution would be to force the use of the
hypot code in npymath for this tool chain. Currently this is done in
numpy/core/src/private/npy_config.h for both MSVC and mingw32.
-Eric
On Fri, Jul 10, 2015 a
On Thu, Nov 5, 2015 at 1:37 AM, Ralf Gommers wrote:
>
>
> On Thu, Nov 5, 2015 at 5:11 AM, Nathaniel Smith wrote:
>
>> On Wed, Nov 4, 2015 at 4:40 PM, Stefan Seefeld wrote:
>> > Hello,
>> >
>> > is there a way to query Numpy for information about backends (BLAS,
>> > LAPACK, etc.) that it was
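At the Python level, the build-time BLAS/LAPACK configuration is available
via np.show_config(); note that it reports what NumPy was built against, not
what is loaded at runtime. A minimal sketch:

import numpy as np

# Print the BLAS/LAPACK configuration NumPy was compiled against.
np.show_config()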
This fails because numpy uses the function `cacosh` from libm, and on RHEL6
that function has a bug. As long as you don't care about getting the
sign right at the branch cut in this function, then it's harmless. If you
do care, the easiest solution will be to install something like anaconda
th
I have a mostly complete wrapping of the double-double type from the QD
library (http://crd-legacy.lbl.gov/~dhbailey/mpdist/) into a numpy dtype.
The real problem is, as David pointed out, user dtypes aren't quite full
equivalents of the builtin dtypes. I can post the code if there is
interest.
S
Try just calling np.array_split on the full 2D array. It splits along a
particular axis, selected with the axis argument of np.array_split. The axis
to split along defaults to the first, so the two calls to np.array_split
below are exactly equivalent.
In [16]: a = np.c_[:10,10:20,20:30]
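The session above is cut off; a minimal sketch of the equivalence being
described (array contents are arbitrary):

import numpy as np

a = np.c_[:10, 10:20, 20:30]            # shape (10, 3)

# axis defaults to 0, so these two calls are exactly equivalent:
parts_a = np.array_split(a, 3)
parts_b = np.array_split(a, 3, axis=0)
assert all((x == y).all() for x, y in zip(parts_a, parts_b))

# To split the columns instead, pass axis=1:
cols = np.array_split(a, 3, axis=1)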
/* obj[ind] */
PyObject* DoIndex(PyObject* obj, int ind)
{
    PyObject *oind, *ret;

    oind = PyLong_FromLong(ind);
    if (!oind) {
        return NULL;
    }
    ret = PyObject_GetItem(obj, oind);
    Py_DECREF(oind);
    return ret;
}

/* obj[inds[0], inds[1], ... inds[n_ind-1]] */
PyObject* D
Yes, PySlice_New(NULL, NULL, NULL) is the same as ':'. Depending on what
exactly you want to do with the column once you've extracted it, this may
not be the best way to do it. Are you absolutely certain that you actually
need a PyArrayObject that points to the column?
Eric
On Mon, Apr 4, 2016
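For reference, the Python-level spelling of the same construction:
PySlice_New(NULL, NULL, NULL) builds the object that slice(None) builds, so
the C code is assembling the index that a[:, j] would use. A small
illustrative sketch (example values are arbitrary):

import numpy as np

a = np.arange(12).reshape(3, 4)
j = 1

# The tuple form is exactly what gets built at the C level from
# PySlice_New(NULL, NULL, NULL) plus an integer index.
col_a = a[:, j]
col_b = a[(slice(None), j)]
assert (col_a == col_b).all()
assert np.shares_memory(col_a, a)   # the column is a view, not a copy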
It's difficult to say why your code is slow without seeing it. For example,
are you generating large temporaries? Or doing loops in Python that could be
pushed down to C via vectorizing? It may or may not be necessary to leave
Python to get things to run fast enough.
-Eric
On Tue, Apr 5, 2016 at 11:39 AM
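As a toy illustration of the kind of rewrite meant in the message above (not
the original poster's code):

import numpy as np

x = np.random.rand(1000000)

# Python-level loop: every element passes through the interpreter.
total = 0.0
for v in x:
    total += v * v

# Vectorized: one call, the loop runs in C.
total_fast = np.dot(x, x)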
def reduce_data(buffer, resolution):
    thinned_buffer = np.zeros((resolution**3, 3))

    min_xyz = buffer.min(axis=0)
    max_xyz = buffer.max(axis=0)
    delta_xyz = max_xyz - min_xyz

    inds_xyz = np.floor(resolution * (buffer - min_xyz) / delta_xyz).astype(int)
    # handle values right at
Eh. The order of the outputs will be different from your code's, if that
makes a difference.
On Tue, Apr 5, 2016 at 3:31 PM, Eric Moore wrote:
> def reduce_data(buffer, resolution):
>     thinned_buffer = np.zeros((resolution**3, 3))
>
>     min_xyz = buffer.min(axis=0)
>     max_x
On Tue, May 17, 2016 at 9:40 AM, Sturla Molden wrote:
> Stephan Hoyer wrote:
> > I have recently encountered several use cases for randomly generated
> > random number seeds:
> >
> > 1. When writing a library of stochastic functions that take a seed as an
> > input argument, and some of these f
I'd say the most compelling case for it is that that's how it has always
worked. How much code will break if we make that change? (Or, if not break,
at least see a change in dtype?) Is that worth it?
Eric
On Tue, May 24, 2016 at 1:31 PM, Alan Isaac wrote:
> On 5/24/2016 1:19 PM, Stephan Hoyer wro
ors can be controlled via np.seterr and some cannot should also be addressed
longer term.
Eric
On Tue, May 24, 2016 at 3:05 PM, Nathaniel Smith wrote:
> On Tue, May 24, 2016 at 10:36 AM, Eric Moore wrote:
> > I'd say the most compelling case for it is that is how it has al