>
> What happens to the buffer API/persistence with all those additions?
I understand the desire to keep things simple, which is why I am only proposing
a rather small change to the array object with *huge* implications ---
encompassing the very cool deferred arrays that Mark Wiebe is proposing
On Fri, Jan 28, 2011 at 12:46 PM, Charles R Harris
wrote:
> Hi All,
>
> Mark Wiebe has proposed making the master branch backward compatible with
> 1.5. The argument for doing this is that 1) removing the new bits for new
> releases is a chore as the refactor schedule slips and 2) the new ABI isn't
Hi All,
Mark Wiebe has proposed making the master branch backward compatible with
1.5. The argument for doing this is that 1) removing the new bits for new
releases is a chore as the refactor schedule slips and 2) the new ABI isn't
settled and keeping the current code in won't help with the merge.
On Thu, Jan 27, 2011 at 9:02 PM, Benjamin Root wrote:
> On Thursday, January 27, 2011, Christopher Barker
> wrote:
>> On 1/27/11 3:35 PM, Travis Oliphant wrote:
>>
> What is the thought about having two separate NumPy lists (one for
> development discussions and one for user discussions
On Thursday, January 27, 2011, Christopher Barker wrote:
> On 1/27/11 3:35 PM, Travis Oliphant wrote:
>
What is the thought about having two separate NumPy lists (one for
development discussions and one for user discussions)?
>
> Speaking as someone who hasn't contributed code to numpy
On 1/27/11 3:54 PM, Sturla Molden wrote:
> Lists allocate empty slots at their back, proportional to their size. So
> as lists grows, re-allocations become rarer and rarer. Then on average
> the complexity per append becomes O(1), which is the "amortised"
> complexity. Appending N items to a list t
On 1/27/11 3:35 PM, Travis Oliphant wrote:
>>> What is the thought about having two separate NumPy lists (one for
>>> development discussions and one for user discussions)?
Speaking as someone who hasn't contributed code to numpy itself, I still
really like to follow the development discussion,
On Thu, Jan 27, 2011 at 5:01 PM, Travis Oliphant wrote:
>
> Just to start the conversation, and to find out who is interested, I would
> like to informally propose generator arrays for NumPy 2.0. This concept
> has as one use-case, the deferred arrays that Mark Wiebe has proposed. But,
> it also
On 1/27/2011 5:35 PM, Travis Oliphant wrote:
> I think for me, the trouble is I don't have time to read all the
> messages, but I want to see developer-centric discussions. Sometimes, I
> can tell that from the subject (but I miss it).
>
> I agree that traffic is probably not too heavy at this point
On 28/01/2011 1:07 p.m., Sturla Molden wrote:
> Den 28.01.2011 00:23, skrev Robert Kern:
>> We've resisted it for years. I don't think the split has done scipy
>> much good.
> The scope of NumPy is narrower development-wise and wider user-wise.
> While SciPy does not benefit, as use and development
Den 28.01.2011 00:23, skrev Robert Kern:
> We've resisted it for years. I don't think the split has done scipy
> much good.
The scope of NumPy is narrower development-wise and wider user-wise.
While SciPy does not benefit, as use and development are still quite
entangled, this is not the case
Just to start the conversation, and to find out who is interested, I would like
to informally propose generator arrays for NumPy 2.0. This concept has as
one use-case, the deferred arrays that Mark Wiebe has proposed. But, it also
allows for "compressed arrays", on-the-fly computed arrays,
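One minimal way to sketch the "computed on-the-fly" idea with today's NumPy is the `__array__` hook (a toy illustration, not the proposed API; the class name is hypothetical):

```python
import numpy as np

class DeferredSquare:
    """Hypothetical sketch: an array whose values exist only on demand."""

    def __init__(self, n):
        self.n = n  # nothing is computed at construction time

    def __array__(self, dtype=None, copy=None):
        # The values are produced only when something asks for a real ndarray.
        a = np.arange(self.n) ** 2
        return a if dtype is None else a.astype(dtype)

d = DeferredSquare(5)    # no computation yet
print(np.asarray(d))     # materializes [0, 1, 4, 9, 16]
```

A real generator-array design would of course push this much further (slicing, broadcasting, fusion of deferred expressions); this only shows where the deferral hook can live.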
Den 28.01.2011 00:33, skrev Christopher Barker:
>
> hmmm - that doesn't seem quite right -- lists still have to
> re-allocate and copy, they just do it every n times (where n grows
> with the list), so I wouldn't expect exactly O(N).
Lists allocate empty slots at their back, proportional to their size.
I think for me, the trouble is I don't have time to read all the messages, but
I want to see developer-centric discussions. Sometimes, I can tell that from
the subject (but I miss it).
I agree that traffic is probably not too heavy at this point (but it does
create some difficulty in keeping
On 1/27/11 1:53 PM, Sturla Molden wrote:
But N appends are O(N) for lists and O(N*N) for arrays.
hmmm - that doesn't seem quite right -- lists still have to re-allocate
and copy, they just do it every n times (where n grows with the list),
so I wouldn't expect exactly O(N).
But you never know.
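The over-allocation Sturla describes can be observed directly (a sketch; the exact capacity steps vary by interpreter version):

```python
import sys

# CPython over-allocates list storage: capacity grows geometrically, so a
# reallocation-plus-copy happens only occasionally, not on every append.
# Watching sys.getsizeof, the reported size jumps in steps.
sizes = []
lst = []
for _ in range(100):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# Far fewer distinct allocations than appends -- hence amortized O(1):
print(len(set(sizes)), "distinct sizes for", len(sizes), "appends")
```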
On Thu, Jan 27, 2011 at 4:23 PM, Robert Kern wrote:
> On Thu, Jan 27, 2011 at 17:17, Travis Oliphant
> wrote:
> >
> > Hey all,
> >
> > What is the thought about having two separate NumPy lists (one for
> development discussions and one for user discussions)?
>
> We've resisted it for years. I don't think the split has done scipy much good.
On Thu, Jan 27, 2011 at 6:23 PM, Robert Kern wrote:
> On Thu, Jan 27, 2011 at 17:17, Travis Oliphant
> wrote:
> >
> > Hey all,
> >
> > What is the thought about having two separate NumPy lists (one for
> development discussions and one for user discussions)?
>
> We've resisted it for years. I don't think the split has done scipy much good.
On Thu, Jan 27, 2011 at 17:17, Travis Oliphant wrote:
>
> Hey all,
>
> What is the thought about having two separate NumPy lists (one for
> development discussions and one for user discussions)?
We've resisted it for years. I don't think the split has done scipy
much good. But that may just be m
Hey all,
What is the thought about having two separate NumPy lists (one for development
discussions and one for user discussions)?
-Travis
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-dis
My $0.02 on the NumPy 2.0 schedule:
NumPy 2.0 is for ABI-incompatible changes like datetime support and .NET
support. It would be ideal if, at the same time, we could future-proof the
ABI somewhat so that future changes can be made in an ABI-compatible way.
I also think it would be a g
Den 27.01.2011 23:47, skrev Sturla Molden:
> The F90 version is meant to be read in conjunction with the F77 version,
> not alone. It is very useful for NumPy programmers, as it is one of few
> text books that deals with vectorisation of algorithms. (F90 is an
> array-oriented language like Matlab
Den 25.01.2011 23:21, skrev Jonathan Rocher:
> Actually I believe the version does matter: I have seen a C version of
> num rec that doesn't contain all the algorithmic part but only the
> codes. I cannot remember exactly which ones are the light versions. If
> I had to guess, the F90 is also a
Den 27.01.2011 22:47, skrev Sturla Molden:
>
> Please observe that appending to a Python list is amortized O(1),
> whereas appending to a numpy array is O(N**2).
>
Sorry, one append to a numpy array is O(N).
But N appends are O(N) for lists and O(N*N) for arrays.
S.M.
Den 27.01.2011 22:03, skrev Dewald Pieterse:
> Is numpy.append so slow? or is the culprit numpy.where?
Please observe that appending to a Python list is amortized O(1),
whereas appending to a numpy array is O(N**2).
Sturla
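The complexity gap Sturla points out can be sketched like this (illustrative values only): each `np.append` allocates a fresh array and copies every existing element, so N appends cost O(N**2) in total, while `list.append` is amortized O(1). The usual fix is to collect in a list and convert once at the end.

```python
import numpy as np

# Preferred pattern: accumulate in a list, convert once.
rows = []
for i in range(1000):
    rows.append(i * 0.5)      # amortized O(1) per append
a = np.array(rows)            # a single O(N) conversion

# The quadratic anti-pattern being discussed in the thread:
b = np.empty(0)
for i in range(10):
    b = np.append(b, i * 0.5)  # reallocates and copies the whole array
```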
On Thu, Jan 27, 2011 at 4:33 PM, Dewald Pieterse
wrote:
>
>
> On Thu, Jan 27, 2011 at 4:19 PM, Christopher Barker wrote:
>
>> On 1/27/11 1:03 PM, Dewald Pieterse wrote:
>>
>>> I am processing two csv files against another, my first implementation
>>> used python list of lists and list.append to
On Thu, Jan 27, 2011 at 4:19 PM, Christopher Barker
wrote:
> On 1/27/11 1:03 PM, Dewald Pieterse wrote:
>
>> I am processing two csv files against another, my first implementation
>> used python list of lists and list.append to generate a new list while
>> looping all the data including the non-re
On Thu, Jan 27, 2011 at 9:36 AM, Charles R Harris wrote:
>
>> All tests pass for me now, maybe it's a good time to merge the branch into
>> the trunk so we can run it on the buildbot?
>>
>>
> Might be better to merge your unadulterated stuff into master, make a 1.6
> branch, and add the compatib
I am processing two csv files against another, my first implementation used
python list of lists and list.append to generate a new list while looping
all the data including the non-relevant data (can't determine location of
specific data element in a list of lists). So I re-implemented the exact same
On Thu, Jan 27, 2011 at 9:17 AM, Mark Wiebe wrote:
> On Thu, Jan 27, 2011 at 7:09 AM, Ralf Gommers wrote:
>
>>
>> The PIL test can still be fixed before the final 0.9.0 release, it looks
>> like we will need another RC anyway. Does anyone have time for this in the
>> next few days?
>>
>
> I've
On Thu, Jan 27, 2011 at 10:10 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
>
> On Thu, Jan 27, 2011 at 9:50 AM, Fabrizio Pollastri
> wrote:
>
>> Hello,
>>
>> when one has to find a given number of highest values in an array
>> containing
>> NaNs, the sort function (always ascending)
On Thu, Jan 27, 2011 at 9:50 AM, Fabrizio Pollastri wrote:
> Hello,
>
> when one has to find a given number of highest values in an array
> containing
> NaNs, the sort function (always ascending) is uncomfortable.
>
> Since numpy >= 1.4.0 NaNs are sorted to the end, so the searched values are
> just
Hello,
when one has to find a given number of highest values in an array containing
NaNs, the sort function (always ascending) is uncomfortable.
Since numpy >= 1.4.0 NaNs are sorted to the end, so the searched values are just
before the first NaN, in an unpredictable position, and one has to do another
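One way to sidestep the NaN tail is to drop the NaNs first and then select the top k (a sketch; it uses `np.partition`, which was added to NumPy well after this thread, and the function name is hypothetical):

```python
import numpy as np

def k_largest(a, k):
    """Return the k largest finite values of a, descending, ignoring NaNs."""
    valid = a[~np.isnan(a)]
    if k >= valid.size:
        return np.sort(valid)[::-1]
    # partition is O(n); only the top-k tail then needs an actual sort
    tail = np.partition(valid, valid.size - k)[valid.size - k:]
    return np.sort(tail)[::-1]

vals = k_largest(np.array([3.0, np.nan, 1.0, 5.0, np.nan, 2.0]), 2)
# vals is [5.0, 3.0]
```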
On Thu, Jan 27, 2011 at 7:09 AM, Ralf Gommers
wrote:
>
> The PIL test can still be fixed before the final 0.9.0 release, it looks
> like we will need another RC anyway. Does anyone have time for this in the
> next few days?
>
I've attached a patch which fixes it for me.
> I took a shot at fix
On Thu, Jan 27, 2011 at 11:09 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
>
> On Wed, Jan 26, 2011 at 1:10 PM, Mark Wiebe wrote:
>
>> On Wed, Jan 26, 2011 at 2:23 AM, Ralf Gommers <
>> ralf.gomm...@googlemail.com> wrote:
>>
>>> On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe wrote:
>
On Thu, Jan 27, 2011 at 8:37 PM, Nadav Horesh wrote:
> The C code returns the right result with glibc 2.12.2 (Linux 64 + gcc 4.5.2).
Same for me on mac os x (not sure which C library it is using, the
freebsd one ?) for ppc, i386 and amd64,
cheers,
David
The C code returns the right result with glibc 2.12.2 (Linux 64 + gcc 4.5.2).
However, I get the same nan+nan*j with Python.
Nadav
From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org]
On Behalf Of Pauli Virtanen [p...@iki.fi]
Sent:
Thu, 27 Jan 2011 11:40:00 +0100, Mark Bakker wrote:
[clip]
> Not for large complex values:
>
> In [85]: tanh(1000+0j)
> Out[85]: (nan+nan*j)
Yep, it's a bug. Care to file a ticket?
The implementation is just sinh/cosh, which overflows.
The fix is to provide an asymptotic expansion (sgn Re z),
a
On Mon, Jan 24, 2011 at 3:23 PM, Ralf Gommers
wrote:
>
>
> On Mon, Jan 24, 2011 at 8:22 PM, cool-RR wrote:
>
>> Hello folks,
>>
>> I have Ubuntu 10.10 server on EC2. I installed Python 3.1, and now I want
>> to install NumPy on it. How do I do it? I tried `easy_install-3.1 numpy` but
>> got this
Hello list,
When computing tanh for large complex argument I get unexpected nans:
tanh works fine for large real values:
In [84]: tanh(1000)
Out[84]: 1.0
Not for large complex values:
In [85]: tanh(1000+0j)
Out[85]: (nan+nan*j)
While the correct answer is:
In [86]: (1.0-exp(-2.0*(1000+0j)))/
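The fix Pauli describes can be sketched as follows (a hypothetical helper, not the actual patch): sinh and cosh both overflow for |Re z| around 710, but after reflecting z to Re z >= 0, the equivalent form tanh(z) = (1 - exp(-2z)) / (1 + exp(-2z)) only ever exponentiates a non-positive real part, so nothing overflows.

```python
import numpy as np

def stable_tanh(z):
    """Overflow-free complex tanh (sketch; not NumPy's internal routine)."""
    z = np.asarray(z, dtype=complex)
    sign = np.where(z.real >= 0, 1.0, -1.0)   # reflect to Re z >= 0
    e = np.exp(-2.0 * sign * z)               # |e| <= 1, never overflows
    return sign * (1.0 - e) / (1.0 + e)

print(stable_tanh(1000 + 0j))                 # 1 instead of nan+nan*j
```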
Hi Mark,
I was very interested to see that you had written an implementation of
the Einstein summation convention for numpy.
I'd thought about this last year, and wrote some notes on what I
thought might be a reasonable interface. Unfortunately I was not in a
position to actually implement it myself.
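For readers unfamiliar with the notation: `np.einsum` (which shipped in NumPy 1.6, a few months after this thread) sums over any index repeated across operands, and over any index absent from the output spec. A small illustration:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

# Matrix product: j appears in both inputs but not the output, so it is summed.
C = np.einsum('ij,jk->ik', A, B)

# Trace: the repeated index i walks the diagonal, then is summed away.
t = np.einsum('ii->', np.eye(3))
```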
Hi Paul,
thanks for your answer! I was not aware of numpy.show_config().
However, it does not say anything about libamd.a and libumfpack.a, right?
How do I know if they were successfully linked (statically)?
Does anybody have a clue?
greetings
Samuel