James Snyder wrote:
> b = np.zeros((1,30)) # allocates new memory and disconnects the view
This is really about how Python works, not how NumPy works:
np.zeros() -- creates a new array with all zeros in it -- that's the
whole point.
b = Something -- binds the name "b" to the Something object.
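A minimal sketch of that binding rule (the arrays here are made up, just to illustrate):

```python
import numpy as np

a = np.zeros((1, 30))
b = a[0, :]          # b is a view into a's memory
b = np.zeros(30)     # rebinds the name "b" to a brand-new array; a is untouched
a[0, 0] = 7
print(b[0])          # b no longer shares a's memory

# To overwrite a view's contents in place, assign INTO it instead:
c = a[0, :]
c[:] = 0             # writes through the view back into a
print(a[0, 0])
```

The difference is between rebinding a name (`b = ...`) and mutating an object (`c[:] = ...`); only the latter touches shared memory.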
James Snyder wrote:
>> Well, if you do f = a[n, :], you would get a view, another object that
>> shares the data in memory with a but is a separate object.
>
> OK, so it is a new object, with the properties of the slice it
> references, but if I write anything to it, it will consistently go
> back to the same spot in the original array?
> The time has now been shaved down to ~9 seconds with this suggestion
> from the original 13-14 s, with the inclusion of Eric Firing's
> suggestions. This is without scipy.weave, which at the moment I can't
> get to work for all lines, and when I just replace one of them
> successfully it seems to ...
I've done a little profiling with cProfile as well as with dtrace,
since the bindings exist in Mac OS X and you can use a lot of the D
scripts that apply to Python, so previously I've found that the
np.random call and the where (in the original code) were heavy hitters
as far as amount of time consumed.
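For reference, a cProfile run over a stand-in loop surfaces the same hot spots (the `step` function below is a hypothetical reduction of the simulation, not the original code):

```python
import cProfile
import io
import pstats
import numpy as np

def step(n=200, size=13857):
    # hypothetical stand-in for the inner loop: random draws + thresholding
    for _ in range(n):
        noise = np.random.standard_normal(size)
        noise[noise > 0.5] = 0.0   # boolean mask in place of np.where

prof = cProfile.Profile()
prof.enable()
step()
prof.disable()

buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats()
print(buf.getvalue())
```

The `standard_normal` entries dominate the cumulative-time column, matching the observation above.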
Robert Kern wrote:
> On Mon, May 19, 2008 at 6:55 PM, James Snyder <[EMAIL PROTECTED]> wrote:
>> Also note, I'm not asking to match MATLAB performance. It'd be nice,
>> but again I'm just trying to put together decent, fairly efficient
>> numpy code.
>
> I can cut the time by about a quarter by just using the boolean mask ...
On Mon, May 19, 2008 at 6:55 PM, James Snyder <[EMAIL PROTECTED]> wrote:
> Also note, I'm not asking to match MATLAB performance. It'd be nice,
> but again I'm just trying to put together decent, fairly efficient
> numpy code.
I can cut the time by about a quarter by just using the boolean mask ...
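One way to read that suggestion (a sketch of the technique, not necessarily Robert Kern's exact change): index with the boolean array itself rather than converting it through np.where first.

```python
import numpy as np

v = np.random.standard_normal(13857)
theta = np.zeros_like(v)

# Instead of:  idx = np.where(v > theta); v[idx] = 0.0
mask = v > theta     # a boolean array; no intermediate index arrays built
v[mask] = 0.0        # assign through the mask directly
```

Skipping the index-array construction inside a hot loop is where the quarter or so of runtime goes.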
On to the code: here's a current implementation, attached. I make no
claims about it being great code; I've modified it so that there is a
weave version and a sans-weave version.
Many of the suggestions make things a bit faster. The weave version
bombs out with a rather long log, which can be found ...
On Mon, May 19, 2008 at 4:36 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> > The latest versions of Matlab use the ziggurat method to generate random
> > normals and it is faster than the method used in numpy. I have ziggurat code
> > at hand, but IIRC, Robert doesn't trust the method ;)
Separating the response into 2 emails, here's the aspect that comes
from implementations of random:
In short, that's part of the difference. I ran these a few times to
check for consistency.
MATLAB (R2008a):
tic
for i = 1:2000
a = randn(1,13857);
end
toc
Runtime: ~0.733489 s
NumPy (1.1.0rc1)
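The NumPy side of the comparison was cut off in the preview; the equivalent loop would look something like this (a reconstruction of the obvious analogue, with the original runtime figure not preserved here):

```python
import time
import numpy as np

tic = time.time()
for i in range(2000):
    # same shape and count as the MATLAB randn(1, 13857) loop above
    a = np.random.standard_normal(size=(1, 13857))
toc = time.time() - tic
print("Runtime: ~%f s" % toc)
```

The gap the thread goes on to discuss comes from the normal-variate generator itself (ziggurat vs. polar method), not from the loop overhead.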
On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> The latest versions of Matlab use the ziggurat method to generate random
> normals and it is faster than the method used in numpy. I have ziggurat code
> at hand, but IIRC, Robert doesn't trust the method ;)
Well, I out ...
Anne Archibald wrote:
> 2008/5/19 James Snyder <[EMAIL PROTECTED]>:
>> I can provide the rest of the code if needed, but it's basically just
>> filling some vectors with random and empty data and initializing a few
>> things.
>
> It would kind of help, since it would make it clearer what's a scalar ...
Robin wrote:
> Also you could use xrange instead of range...
>
> Again, not sure of the size of the effect but it seems to be
> recommended by the docstring.
No, it is going away in Python 3.0, and its only real benefit is a
memory saving in extreme cases.
From the Python library docs:
"The advantage of xrange() over range() is minimal (since xrange() still
has to create the values when asked for them) ..."
2008/5/19 James Snyder <[EMAIL PROTECTED]>:
> First off, I know that optimization is evil, and I should make sure
> that everything works as expected prior to bothering with squeezing
> out extra performance, but the situation is that this particular block
> of code works, but it is about half as fast with numpy as in matlab,
> and I'm wondering ...
Hi,
I think my understanding is somehow incomplete... It's not clear to me
why (simplified case)
a[curidx,:] = scalar * a[2-curidx,:]
should be faster than
a = scalar * b
In both cases I thought the scalar multiplication results in a new
array (new memory allocated) and then the difference between ...
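A sketch of what each statement actually does (array shapes and the scalar are made up; the temporary-array behavior is the point):

```python
import numpy as np

scalar = 0.5
a = np.ones((3, 100))
curidx = 0

# Both forms first allocate a temporary for the product (scalar * ...):
a[curidx, :] = scalar * a[2 - curidx, :]  # ...then copies it into a's row
b = scalar * a[2 - curidx, :]             # ...then merely binds the name b

# To genuinely skip the temporary, write into the destination directly:
np.multiply(a[2 - curidx, :], scalar, out=a[curidx, :])
```

So the two statements do comparable work per element; the slice-assignment form mainly saves by reusing preallocated rows instead of creating a fresh array object each iteration.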
> for n in range(0, time_milliseconds):
>     self.u = self.expfac_m * self.prev_u + (1 - self.expfac_m) * self.aff_input[n, :]
>     self.v = self.u + self.sigma * np.random.standard_normal(size=(1, self.naff))
>     self.theta = self.expfac_theta * self.prev_theta - ...
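A self-contained sketch of that update loop, stripped of the class plumbing (all parameter values are invented, and the `theta` update is truncated in the original, so its second term is omitted here):

```python
import numpy as np

naff = 13857
time_milliseconds = 100
expfac_m, expfac_theta, sigma = 0.9, 0.95, 0.1  # made-up constants

aff_input = np.random.standard_normal((time_milliseconds, naff))
prev_u = np.zeros(naff)
prev_theta = np.zeros(naff)

for n in range(time_milliseconds):
    # leaky integration of the afferent input
    u = expfac_m * prev_u + (1 - expfac_m) * aff_input[n, :]
    # noisy membrane variable: fresh normal draw every step (the hot spot)
    v = u + sigma * np.random.standard_normal(naff)
    # threshold decay; the original subtracts a further term cut off above
    theta = expfac_theta * prev_theta
    prev_u, prev_theta = u, theta
```

Written this way, the per-step cost is dominated by the `standard_normal` call, which is consistent with the profiling reported earlier in the thread.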
Also you could use xrange instead of range...
Again, not sure of the size of the effect but it seems to be
recommended by the docstring.
Robin
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-
On Mon, May 19, 2008 at 7:08 PM, James Snyder <[EMAIL PROTECTED]> wrote:
>
> for n in range(0, time_milliseconds):
>     self.u = self.expfac_m * self.prev_u + (1 - self.expfac_m) * self.aff_input[n, :]
>     self.v = self.u + self.sigma * np.random.standard_normal(size=(1, self.naff))
Hi -
First off, I know that optimization is evil, and I should make sure
that everything works as expected prior to bothering with squeezing
out extra performance, but the situation is that this particular block
of code works, but it is about half as fast with numpy as in matlab,
and I'm wondering ...