This makes me ask something that I have always wanted to know: why is
weave not the preferred or encouraged way?
Is it because no developer has an interest in maintaining it, or is it too
onerous to maintain? I do not know enough of its internals to guess
an answer. I think it would be fair to say that w
>> I think the story is that Cython overlaps enough with Weave that Weave
>> doesn't get any new users or developers.
>
> One big issue that I had with weave is that it compiles on the fly. As a
> result, it makes for very non-distributable software (requires a compiler
> and the development headers
>> I do not know much Cython, except for the fact that it is out there
>> and what it is supposed to do, but wouldn't Cython need a compiler
>> too?
>
> Yes, but at build-time, not run time.
Ah! I see what you mean, or so I think. So the first time a weave-based
code runs, it builds, stores the c
tually kind of a guy, thus long
political/philosophical/epistemic threads distance me. I know there
are legitimate reasons to have these discussions, but it seems to me
that they get a bit too wordy here sometimes.
My 10E-2.
-- srean
> Patches languishing on Trac is a real problem. The issue here is not at all
> about not wanting those patches,
Oh yes, I am sure of that; in the past it had not been clear what more
was necessary to get them pulled in, or how to go about satisfying the
requirements. The document you mailed on the
Hi Wolfgang,
I think you are looking for reduceat(), in particular add.reduceat();
a small sketch below.
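Assuming your stacked arrays live in one ndarray and you know the
offset at which each one starts (the offsets here are made up),
something like:

>>> import numpy as np
>>> a = np.arange(8)
>>> np.add.reduceat(a, [0, 3, 6])  # sums a[0:3], a[3:6], a[6:]
array([ 3, 12, 13])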
-- srean
On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf
wrote:
> Dear all,
>
> I have an ndarray which consists of many arrays stacked behind each other
> (only conceptually, in truth it
y the scipy module on optimization
already has a function to do such sanity checks. Of course it cannot
guarantee correctness, but it usually goes a long way.
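If I remember right, it is check_grad; a minimal sketch, assuming a
scalar function of a vector:

from scipy.optimize import check_grad
import numpy as np

f = lambda x: (x ** 2).sum()   # objective
g = lambda x: 2 * x            # hand-written gradient
# returns a small number (~1e-6) when g is consistent with a
# finite-difference estimate of f's gradient
err = check_grad(f, g, np.array([1.0, 2.0]))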
-- srean
>
> You're right - there is definitely a difference between a correct
> gradient and a gradient that is both correct and fast to compute.
>
> The current quick implementation of pyautodiff is naive in this
> regard.
Oh and by no means was I criticizing your implementation. It is a very
hard problem to
> Of course, maybe you were pointing out that if your derivative
> calculation depends in some intrinsic way on the topology of some
> graph, then your best bet is to have an automatic way to recompute it
> from scratch for each new graph you see. In that case, fair enough!
That is indeed what I h
> Hi,
>
> I second James here, Theano does many of those optimizations. Only an
> advanced coder can do better than Theano in most cases, but that will
> take them much more time. If you find some optimization that you do
> and Theano doesn't, tell us. We want to add them :)
>
> Fred
I am sure Theano does
n the other hand is very clear, as is
Travis's book, which I am glad to say I actually bought a long time
ago.
Thanks,
srean
Hi All,
my question may have been lost in the intense activity around
the 1.7 release. Now that it has quietened down, I would appreciate any
help regarding my confusion about how index arrays work
(especially when broadcast).
-- srean
On Mon, Jun 25, 2012 at 5:29 PM, srean wrote:
> Yes it does. If you want to avoid this extra copy, and have a
> pre-existing output array, you can do:
>
> np.add(a, b, out=c)
>
> ('+' on numpy arrays is just a synonym for np.add; np.add is a ufunc,
> and all ufuncs accept this syntax:
> http://docs.scipy.org/doc/numpy/reference/ufuncs.html
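For completeness, a tiny sketch of the difference (the names are made up):

import numpy as np

a, b = np.ones(3), np.ones(3)
c = np.empty(3)        # pre-existing output buffer
np.add(a, b, out=c)    # a + b written directly into c, no temporary
d = a + b              # same result, but allocates a fresh array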
shirt could you get some
water" you get instant results. So pardon me for taking the
presumptuous liberty of requesting Travis to please set it up or
delegate.
Splitting the lists shouldn't be hard work; setting up overflow might
be more work in c
If I remember correctly there used to be a stackexchange-style site at
ask.scipy.org. It might be good to learn from that experience. I think
handling spam was a significant problem, but I am not sure whether
that is the reason why it got discontinued.
Best
srean
On Thu, Jun 28, 2012 at 11:36 AM
ho are not so interested in devel
issues and vice versa. I take an interest in devel-related issues (apart
from the distracting and at times petty flamewars) and like
reading the numpy source, but I don't think every user has similar
tastes, nor should they.
Best
Srean
On Thu, Jun 28, 20
list than here and I think that is a good
thing.
Best
srean
Could not have said this better even if I tried, so thank you for your
long answer.
-- srean
On Thu, Jun 28, 2012 at 4:57 PM, Fernando Perez wrote:
> Long answer, I know...
those lists
Best,
srean
abandon here. Not sure what is
preferred, top or bottom.
Best
srean
On Thu, Jun 28, 2012 at 8:52 PM, T J wrote:
> On Thu, Jun 28, 2012 at 3:23 PM, Fernando Perez
> wrote:
> I'm okay with having two lists as it does filtering for me, but this seems
> like a sub
On Sat, Jun 30, 2012 at 2:29 PM, John Hunter wrote:
> This thread is a perfect example of why another list is needed.
+1
On Sat, Jun 30, 2012 at 2:37 PM, Matthew Brett wrote:
> Oh - dear. I think the point that most of us agreed on was that
> having a different from: address wasn't a perfec
> Isn't that what the various sections are for?
Indeed they are, but it still needs active "pulling" by
those who want to answer questions, and even then a question can
sink deep in the well, deeper than what one typically monitors.
Sometimes questions are not appropriately tagged. S
id, mails him/her the response and a password with which he/she
can take control of the id. It is more polite and may be a good way
for the SO site to collect more users.
Best
--srean
ld be asking for too much, but even ccs should help. I don't want to
clutter this thread with the sparsity issues though; any solution to the
original question, or pointers to solutions, would be appreciated.
Thanks
--srean
On Sat, Mar 26, 2011 at 12:10 PM, Hugo Gagnon <
sourceforge.nu...@user.f
length into
account. But for sparse arrays I think it's a hopeless situation. That is a
bummer, because sparse is what I need. Oh well, I will probably do it in C++
-- srean
p.s. I hope top posting is not frowned upon here. If so, I will keep that in
mind in my future posts.
On Sat, Mar 26, 2011 at 1
Ah! Very nice. I did not know that numpy-1.6.1 supports in-place 'dot',
nor that you could access the underlying BLAS functions like
that. This is pretty neat, thanks. Now I at least have an idea of how the
sparse version might work.
If I get time I will probably give numpy-1.6.1 a sho
On Sat, Mar 26, 2011 at 3:16 PM, srean wrote:
>
> Ah! very nice. I did not know that numpy-1.6.1 supports in place 'dot',
>
In place is perhaps not the right word; I meant "in a specified location".
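For the dense case, a sketch of what I mean, assuming a NumPy recent
enough that dot accepts an out argument (the shapes are made up):

import numpy as np

a = np.random.rand(100, 50)
b = np.random.rand(50, 20)
c = np.empty((100, 20))   # preallocated destination
np.dot(a, b, out=c)       # product written into c, no fresh allocation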
correct assessment, and
if so, is there any advantage in using multiprocessing.Array(...) over
simple numpy mmapped arrays?
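To make the comparison concrete, a sketch of the two options as I
understand them (the path and sizes are made up):

import numpy as np
import multiprocessing as mp

# option 1: a memory-mapped numpy array (the file could live on tmpfs,
# e.g. /dev/shm, so it never needs to touch a disk)
a = np.memmap('/dev/shm/shared.dat', dtype=np.float64, mode='w+',
              shape=(1000,))

# option 2: shared ctypes storage wrapped as an ndarray (lock=False
# returns the raw array, which exposes the buffer protocol)
raw = mp.Array('d', 1000, lock=False)
b = np.frombuffer(raw, dtype=np.float64)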
Regards
srean
indeed resident in memory (subject to swapping of course), that would still
be advantageous compared to a file mapped from an on-disk filesystem.
On Mon, Apr 11, 2011 at 12:42 PM, srean wrote:
> Hi everyone,
>
> I was looking up the options that are available for shared memory arrays
&
Got you, and thanks a lot for the explanation. I am not using Queues, so I
think I am safe for the time being. Given that you have worked a lot on
these issues, would you recommend plain mmapped numpy arrays over
multiprocessing.Array?
Thanks again
-- srean
On Mon, Apr 11, 2011 at 1:36 PM, Sturla
Hi,
is there a guarantee that ufuncs will execute left to right and in
sequential order? For instance, is the following code standards-compliant?
>>> import numpy as n
>>> a = n.arange(0, 5)
>>> a
array([0, 1, 2, 3, 4])
>>> n.add(a[0:-1], a[1:], a[0:-1])
array([1, 3, 5, 7])
The idea was to reuse and h
> It is possible that we can make an exception for inputs and outputs
> that overlap each other and pick a standard traversal. In those cases,
> the order of traversal can affect the semantics,
Exactly. If there is no overlap then it does not matter, and it can
potentially be done in parallel. On the
should be doing? Different invocations of this function have a different
number of arrays, so I cannot pre-compile this away into a numexpr expression.
Thanks and regards
srean
Bumping my question tentatively. I am fairly sure there is a good answer and
for some reason it got overlooked.
Regards
srean
-- Forwarded message --
From: srean
Date: Fri, May 27, 2011 at 10:36 AM
Subject: Adding the arrays in an array iterator
To: Discussion of Numerical
> If they are in a list, then I would do something like
Apologies if it wasn't clear in my previous mail. The arrays are in a
lazy iterator, they are non-contiguous, and there are several thousands
of them. I was hoping there was a way to get at a "+=" operator for
arrays to use in a reduce. Seems l
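Something along these lines is what I was after (a sketch; the shapes
and the iterator are stand-ins):

import numpy as np

def accumulate(arrays, shape, dtype=np.float64):
    # sum a lazy stream of equal-shaped arrays without stacking them
    acc = np.zeros(shape, dtype=dtype)
    for a in arrays:
        np.add(acc, a, out=acc)   # in-place +=, no per-step temporaries
    return acc

total = accumulate((np.ones((3, 4)) for _ in range(1000)), (3, 4))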
BLAS context the better.
--srean
On Fri, Jun 10, 2011 at 10:01 AM, Brandt Belson wrote:
> Unfortunately I can't flatten the arrays. I'm writing a library where the
> user supplies an inner product function for two generic objects, and almost
> always the inner product functi
I make heavy use of transcendental
functions and was hoping to exploit the VML library.
Thanks for the help
-- srean
Apologies, intended to send this to the scipy list.
On Tue, Jun 21, 2011 at 2:35 PM, srean wrote:
> Hi All,
>
> is there a fast way to do cumsum with numexpr? I could not find it,
> but the functions available in numexpr do not seem to be
> exhaustively documented, so it is
ay to achieve what I am trying? Efficiency is
important because potentially millions of objects would be yielded.
-- srean
To answer my own question: I guess I can keep appending to an array.array()
object and get a numpy.array from its buffer if possible. Is that the
efficient way?
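Concretely, something like this sketch, assuming the iterator ultimately
yields doubles (the generator here is a stand-in for the real one):

import array
import numpy as np

buf = array.array('d')
for x in (float(i) for i in range(10)):   # stand-in for the real iterator
    buf.append(x)
a = np.frombuffer(buf, dtype=np.float64)  # zero-copy view of the buffer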
On Fri, Jun 24, 2011 at 2:35 AM, srean wrote:
> Hi,
>
> I have an iterator that yields a complex object. I want to make an ar
On Fri, Jun 24, 2011 at 9:12 AM, Robert Kern wrote:
> On Fri, Jun 24, 2011 at 04:03, srean wrote:
> > To answer my own question, I guess I can keep appending to an
> > array.array() object and get a numpy.array from its buffer if
> > possible. Is that the efficient
this indeed needs to be done
efficiently
Thanks again for your gracious help
-- srean
On Fri, Jun 24, 2011 at 9:12 AM, Robert Kern wrote:
> On Fri, Jun 24, 2011 at 04:03, srean wrote:
> > To answer my own question, I guess I can keep appending to an
> > array.array() object and
>> I think this is essential to speed up numpy. Maybe numexpr could handle this
>> in the future? Right now the general use of numexpr is result =
>> numexpr.evaluate("whatever"), so the same problem seems to be there.
>>
>> With this I am not saying that numpy is not worth it, just that for many
>> This is a slight digression: is there a way to have out-parameter-like
>> semantics with numexpr? I have always used it as
>>
>> a[:] = numexpr.evaluate(expression)
> In order to make sure the 1.6 nditer supports multithreading, I adapted
> numexpr to use it. The branch which does this is here:
> htt
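For reference, numexpr's evaluate does accept an output argument these
days, which avoids the extra copy that a[:] = ... implies; a small sketch:

import numpy as np
import numexpr as ne

a = np.random.rand(1000)
b = np.random.rand(1000)
out = np.empty(1000)
ne.evaluate("2*a + b", out=out)   # result written straight into out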
else and not linked? Or is it that the
c-info.ufunc-tutorial.rst document is incomplete and the examples have not
been written? I suspect the former. In that case, could anyone point to the
code examples and maybe also update the c-info.ufunc-tutorial.rst document.
Thanks
-- srean
Following up on my own question: I can see the code in the commit, so it
appears that
code-block::
directives are not being rendered correctly. Could anyone confirm that it
is not just my browser? I did try after disabling NoScript.
On Wed, Aug 24, 2011 at 6:53 PM, srean wrote:
> Hi,
>
&
Thanks Anthony and Mark, this is good to know.
So what would be the advised way of looking at freshly baked documentation?
Just look at the raw files? Or is there somewhere else where the correctly
Sphinx-rendered docs are hosted?
On Wed, Aug 24, 2011 at 7:19 PM, Anthony Scopatz wrote:
> code-
g,
and the current install document will be very welcome.
English is not my native language, but if there is any way I can help, I
would do so gladly.
-- srean
This is great news; I hope it gets included in the EPD distribution soon.
I had mailed a few questions about numexpr some time ago. I am still
curious about those. I have included the relevant parts below. In
addition, I have another question. There was a numexpr branch that
allows an "out=blah" p
As one lurker to another, thanks for calling it out.
Over-argumentative, personality-centric threads like these have
actually led me to distance myself from the numpy community. I do not know
how common it is now, because I do not follow it closely anymore. It used to
be quite common at one poi
Hi all,
is there an efficient way to do the following without allocating A, where
A = np.repeat(x, [4, 2, 1, 3], axis=0)
c = A.dot(b)  # b.shape
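One way that seems to avoid allocating A (a sketch with made-up shapes):
since every row of A is a repeated row of x, the small product can be
formed first and then repeated:

import numpy as np

x = np.random.rand(4, 5)
b = np.random.rand(5, 3)
r = [4, 2, 1, 3]

c1 = np.repeat(x, r, axis=0).dot(b)   # allocates the big A
c2 = np.repeat(x.dot(b), r, axis=0)   # small product first, then repeat
assert np.allclose(c1, c2)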
thanks
-- srean
only way?
On Mon, May 5, 2014 at 1:20 AM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> On Sun, May 4, 2014 at 9:34 PM, srean wrote:
>
>> Hi all,
>>
>> is there an efficient way to do the following without allocating A where
>>
>> A
Wait, when assignments and slicing mix, wasn't the behavior supposed to be
equivalent to copying the RHS to a temporary and then assigning using the
temporary? Is that a false memory? Or has the behavior changed? As long
as the behavior is well defined and succinct it should be ok.
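The behavior I remember, concretely (a sketch; my understanding is that
recent NumPy detects the overlap and behaves as if a temporary were used):

import numpy as np

a = np.arange(5)
a[1:] = a[:-1]    # RHS overlaps the LHS
print(a)          # [0 0 1 2 3], as if a[:-1] had been copied out first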
On Tuesday, Ju
stian Berg
wrote:
> On Fr, 2015-08-07 at 13:14 +0530, srean wrote:
> > Wait, when assignments and slicing mix wasn't the behavior supposed to
> > be equivalent to copying the RHS to a temporary and then assigning
> > using the temporary. Is that a false memory ? Or has the b
Thanks, Francesc and Robert, for giving me a broader picture of where this
fits in. I believe numexpr does not handle slicing, so that might be another
thing to look at.
On Wed, Oct 5, 2016 at 4:26 PM, Robert McLeod wrote:
>
> As Francesc said, Numexpr is going to get most of its power through
> gr
On Wed, Oct 5, 2016 at 5:36 PM, Robert McLeod wrote:
>
> It's certainly true that numexpr doesn't create a lot of OP_COPY
> operations; rather, it's optimized to minimize them, so it's probably fewer
> ops than naive successive calls to numpy within python, but I'm unsure if
> there's any differen