On 24 January 2014 23:09, Dinesh Vadhia wrote:
> Francesc: Thanks. I looked at numexpr a few years back but it didn't
> support array slicing/indexing. Has that changed?
>
No, but you can do it yourself:

import numpy as np
import numexpr as ne

big_array = np.empty(int(2e8))   # size illustrative; any large 1-D array
piece = big_array[30:-50]        # slicing creates a view, not a copy
ne.evaluate('sqrt(piece)')       # numexpr operates directly on the view
Here, creating `piece` costs essentially nothing: the slice is just a view on
big_array, so numexpr evaluates the expression without making any copy.
On Fri, Jan 24, 2014 at 8:25 AM, Nathaniel Smith wrote:
> If your arrays are big enough that you're worried that making a stray copy
> will ENOMEM, then you *shouldn't* have to worry about fragmentation -
> malloc will give each array its own virtual mapping, which can be backed by
> discontinuous physical memory.
Yes.
On 24 Jan 2014 17:19, "Dinesh Vadhia" wrote:
> So, with the example case, the approximate memory cost for an in-place
> operation would be:
>
> A *= B : 2N
>
> But, if the original A or B is to remain unchanged then it will be:
>
> C = A * B : 3N ?
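In code, the difference looks like this (a quick sketch; the array size is
illustrative):

import numpy as np

N = 10**7
A = np.ones(N)
B = np.ones(N)

A *= B       # in-place: only A and B exist -> ~2N elements at peak
C = A * B    # out-of-place: A, B and the new C -> ~3N elements at peak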
On 24 Jan 2014 15:57, "Chris Barker - NOAA Federal" wrote:
>> c = a + b: 3N
>> c = a + 2*b: 4N
>
> Does Python garbage-collect mid-expression? I.e.:
>
> C = (a + 2*b) + b
>
> 4 or 5 N?
It should be collected as soon as the reference gets dropped, so 4N. (This
is the advantage of a greedy refcounting scheme: a temporary is freed the
moment its reference count drops to zero.)
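And if the extra temporary matters, the same result can be built with a 3N
peak by reusing one buffer (a minimal sketch):

import numpy as np

a = np.ones(10**7)
b = np.ones(10**7)

# C = (a + 2*b) + b without letting temporaries pile up:
C = 2 * b    # the only new allocation
C += a       # in-place add: no new buffer
C += b       # still in place -> peak is a, b, C = 3N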
Also note that when memory gets tight, fragmentation can be a problem. I.e.,
if two size-n arrays were just freed, you still may not be able to allocate
a size-2n array.
Yeah, numexpr is pretty cool for avoiding temporaries in an easy way:
https://github.com/pydata/numexpr
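For instance, something like the following should evaluate the whole
expression blockwise, without materializing full-size temporaries for 2*b
or the intermediate sum (a quick sketch):

import numpy as np
import numexpr as ne

a = np.ones(10**7)
b = np.ones(10**7)

c1 = (a + 2*b) + b                   # plain numpy: allocates temporaries
c2 = ne.evaluate('(a + 2*b) + b')    # numexpr: only the output array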
Francesc
On 24/01/14 16:30, Nathaniel Smith wrote:
There is no reliable way to predict how much memory an arbitrary numpy
operation will need, no. However, in most cases the main memory cost will
be simply the need to store the input and output arrays; for large arrays,
all other allocations should be negligible.
The most effective way to avoid running out of memory, therefore, is to
avoid creating temporary arrays and to work in place on existing arrays
whenever possible.
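A rough upper bound can therefore be computed from the operands themselves
(a sketch; estimated_bytes is a hypothetical helper, not a numpy function):

import numpy as np

def estimated_bytes(*arrays, result_dtype=np.float64):
    # inputs plus one output of the broadcast shape
    out_elems = np.broadcast(*arrays).size
    return (sum(a.nbytes for a in arrays)
            + out_elems * np.dtype(result_dtype).itemsize)

A = np.ones((1000, 1000))
B = np.ones((1000, 1000))
print(estimated_bytes(A, B))   # 24000000: A, B and the result, 8 MB each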
I want to write a general exception handler that warns when more data is
being loaded than the machine's RAM can accommodate for a numpy array
operation to succeed. For example, the program multiplies two floating-point
arrays A and B that are populated with loadtxt. While the data is being
loaded, I'd like to check whether the arrays (and the result) will still
fit in memory.
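One rough way to approximate such a check (a sketch using the third-party
psutil package; the helper name, file names, and 80% margin are made up):

import numpy as np
import psutil   # third-party; pip install psutil

def check_fits_in_ram(required_bytes, safety=0.8):
    # Hypothetical helper: refuse allocations beyond 80% of available RAM.
    available = psutil.virtual_memory().available
    if required_bytes > safety * available:
        raise MemoryError("need ~%d bytes but only %d are available"
                          % (required_bytes, available))

A = np.loadtxt('A.txt')   # 'A.txt'/'B.txt' are hypothetical input files
B = np.loadtxt('B.txt')
check_fits_in_ram(max(A.nbytes, B.nbytes))   # room for the result of A * B
C = A * B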