OK, Peter Otten's code works (very fast),
and this is the profile:
Sat Apr 12 11:15:39 2014    restats
92834776 function calls in 6218.782 seconds
Ordered by: internal time
List reduced from 41 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno
OK, I just ran Peter's code and it seems really fast... I hope I haven't made
a mistake this time!
Thanks
Gabriele
sent from Samsung Mobile
On 12 Apr 2014 08:22, "Gabriele Brambilla" <
gb.gabrielebrambi...@gmail.com> wrote:
> OK guys,
> I'm not an expert on profiling, but help me look at it.
OK guys,
I'm not an expert on profiling, but help me look at it.
This one is for 715853 elements (to be multiplied by 5, and for each of these N*5
there is a loop of 200 iterations).
Sat Apr 12 04:58:50 2014    restats
9636507991 function calls in 66809.764 seconds
Ordered by: internal time
Gabriele Brambilla wrote:
> OK guys, when I wrote that email I was excited about the apparent speed
> increase (it was skipping the bottleneck for loop for the reason Peter
> Otten outlined).
> Now, despite the changes, the speed is not improved (the code has still
> been running since this morning and it's
OK guys, when I wrote that email I was excited about the apparent speed
increase (it was skipping the bottleneck for loop for the reason Peter
Otten outlined).
Now, despite the changes, the speed is not improved (the code has still been
running since this morning and it's at one fourth of the dataset).
What c
On Fri, Apr 11, 2014 at 1:01 PM, Gabriele Brambilla
wrote:
> Yes,
> but I want to make a C extension to run faster a function from
> scipy.interpolate (interp1d)
>
> Wouldn't it change anything?
This is precisely why you want to drive your optimization based on
what the profiler is telling you.
On Fri, Apr 11, 2014 at 1:01 PM, Gabriele Brambilla
wrote:
> Yes,
> but I want to make a C extension to run faster a function from
> scipy.interpolate (interp1d)
Just to emphasize: I believe your goal should be: "I want to make my
program fast."
Your goal should probably not be: "I want to write
Yes,
but I want to make a C extension to run faster a function from
scipy.interpolate (interp1d)
Wouldn't it change anything?
thanks
Gabriele
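For what it's worth, interp1d already accepts whole arrays, so the usual speed-up is to call it once per array rather than once per point, with no C extension needed. A minimal sketch of that idea using NumPy's np.interp as a stand-in (the grid and data here are made up):

```python
import numpy as np

# Hypothetical tabulated function standing in for the real data.
x = np.linspace(0.0, 10.0, 11)
y = x ** 2

targets = np.linspace(0.5, 9.5, 200)

# Slow pattern: one Python-level interpolation call per point.
slow = [float(np.interp(t, x, y)) for t in targets]

# Fast pattern: a single vectorized call over all 200 points.
fast = np.interp(targets, x, y)
```

A scipy.interpolate.interp1d object can be called the same way, f(targets), which moves the inner loop into compiled code.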
2014-04-11 14:47 GMT-04:00 Alan Gauld :
> On 11/04/14 09:59, Peter Otten wrote:
>
>> Gabriele Brambilla wrote:
>>
>> Anyway I would like to try to sp
Gabriele Brambilla wrote:
> ok, it seems that the code doesn't enter this for loop
>
> for gammar, MYMAP in zip(gmlis, MYMAPS):
>
> I don't understand why.
You have two variables with similar names, gmlis and gmils:
>> gmlis = []
>> gmils=[my_parts[7], my_part
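The failure mode is easy to reproduce: zip() stops at its shortest argument, so pairing anything with an empty list yields zero pairs and the loop body silently never runs. A tiny sketch (names reused from the thread, values made up):

```python
gmlis = []                       # built but never filled
gmils = [1.5, 2.5, 3.5]          # the list that actually holds the data
MYMAPS = ["map1", "map2", "map3"]

# zip stops at the shortest input, so zipping with the empty
# gmlis produces no pairs at all -- the loop body never executes.
pairs = list(zip(gmlis, MYMAPS))

ran = False
for gammar, MYMAP in zip(gmlis, MYMAPS):
    ran = True
```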
On 11/04/14 09:59, Peter Otten wrote:
Gabriele Brambilla wrote:
Anyway I would like to try to speed it up using C functions
...
posted looks like it has great potential for speed-up by replacing the inner
loops with numpy array operations.
And in case it's not obvious, much (most?) of NumPy con
this is the profile for a sample of 1000 elements
Fri Apr 11 10:21:21 2014    restats
31594963 function calls in 103.708 seconds
Ordered by: internal time
List reduced from 47 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
ok
Modifying the for loop in this way (zipping an array of matrices drove it crazy),
it works:
dko = 0
for gammar in gmils:
    omC = (1.5)*(gammar**3)*c/(rho*rlc)
    gig = omC*hcut/eVtoErg
    #check the single emission
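If gmils is a NumPy array, the loop above can collapse into whole-array operations, which is the speed-up Peter and Alan were pointing at. A sketch with made-up constants (the real c, rho, rlc, hcut, eVtoErg come from the script):

```python
import numpy as np

# Hypothetical constants standing in for the script's real values.
c, rho, rlc, hcut, eVtoErg = 2.0, 1.0, 1.0, 2.0, 4.0

gmils = np.array([1.0, 2.0])

# One vectorized expression per quantity replaces the per-element loop.
omC = 1.5 * gmils**3 * c / (rho * rlc)
gig = omC * hcut / eVtoErg
```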
ok, it seems that the code doesn't enter this for loop
for gammar, MYMAP in zip(gmlis, MYMAPS):
I don't understand why.
Thanks
Gabriele
2014-04-11 9:56 GMT-04:00 Gabriele Brambilla :
> Hi, I'm sorry but there is a big problem.
> the code is producing an empty file.dat.
>
> I think it's because
Hi, I'm sorry but there is a big problem.
the code is producing an empty file.dat.
I think that's why I previously did that strange trick
with myinternet...
So:
for my_line in open('data.dat'):
    myinternet = []
    gmlis = []
    print('r
Hi Danny,
I'm quite impressed:
the program takes nearly 30 minutes instead of more than 8 hours!
this is the profile:
Fri Apr 11 09:14:04 2014    restats
19532732 function calls in 2105.024 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno
> From: Gabriele Brambilla
>To: Danny Yoo
>Cc: python tutor
>Sent: Friday, April 11, 2014 5:30 AM
>Subject: Re: [Tutor] improving speed using and recalling C functions
>
>
>
>Hi Danny,
>I followed your suggestion.
>Tom
Gabriele Brambilla wrote:
> Anyway I would like to try to speed it up using C functions (and maybe
> comparing the results of the two profiles in the end)
I can't help you on your chosen path, but let me emphasise that the code you
posted looks like it has great potential for speed-up by replacing
Hi Danny,
I followed your suggestion.
Tomorrow morning I will run this new version of the code.
Now using a sample of 81 elements (instead of 60) the profile returns:
Thu Apr 10 23:25:59 2014    restats
18101188 function calls in 1218.626 seconds
Ordered by: internal time
> Comment: You are looping over your sliced eel five times. Do you
>need to? I like eel salad a great deal, as well, but, how about:
>
>
>for k in eel:
>    MYMAP1[i, j, k] = MYMAP1[i, j, k] + myinternet[oo]
>    MYMAP2[i, j, k] = MYMAP2[i, j, k] + myinternet[oo]
>
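Danny's point — one pass over eel updating every map, instead of five separate passes over the same slice — can be sketched with plain nested lists standing in for the real arrays (all names, sizes, and values here are made up):

```python
# Hypothetical 3-D tables standing in for the MYMAP arrays.
def zeros3d(ni, nj, nk):
    return [[[0.0] * nk for _ in range(nj)] for _ in range(ni)]

MYMAP1 = zeros3d(2, 2, 4)
MYMAP2 = zeros3d(2, 2, 4)

myinternet = [10.0, 20.0, 30.0]
i, j = 0, 1
eel = [0, 1, 2]

# A single pass over eel updates every map at once, instead of
# walking the same slice once per map.
for oo, k in enumerate(eel):
    MYMAP1[i][j][k] += myinternet[oo]
    MYMAP2[i][j][k] += myinternet[oo]
```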
Ok, good.
There's a few things you'll want to fix in your mymain() in order for
the profiler to work more effectively in pinpointing issues.
1. Move functionality outside of "if __name__ == '__main__':"
At the moment, you've put the entire functionality of your program in
the body of that if
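A minimal sketch of that restructuring (the workload shown is a made-up stand-in for the real program body):

```python
import cProfile

def mymain():
    # All of the program's real work lives in a function, so the
    # profiler can attribute time to it rather than to the module body.
    total = 0
    for k in range(1000):
        total += k * k
    return total

if __name__ == "__main__":
    # Only a thin driver remains under the __main__ guard.
    cProfile.run("mymain()", sort="time")
```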
On Fri, Apr 11, 2014 at 10:59:05AM +1000, Steven D'Aprano wrote:
> It might help if you show us your code.
Oops, never mind, I see you have done so.
--
Steven
___
Tutor maillist - Tutor@python.org
To unsubscribe or change subscription options:
http
On Thu, Apr 10, 2014 at 11:58:30AM -0400, Gabriele Brambilla wrote:
> Hi,
>
> I have a program that is reading nearly 60 elements from a file.
> For each element it performs a particular mathematical operation 200 times
> (a numerical interpolation of a function).
> Now this process takes near
Gabriele,
but main is the program that contains everything.
And that is precisely the point of profiling the thing that
contains 'everything': because the bottleneck is almost always
somewhere inside of 'everything'. But you have to keep digging
until you find it.
I saw that you repli
sure.
def mymain():
    def LEstep(n):
        Emin = 10**6
        Emax = 5*(10**10)
        Lemin = log10(Emin)
        Lemax = log10(Emax)
        stepE = (Lemax - Lemin)/n
        return (stepE, n, Lemin, Lemax)
if __name__ == "__main__":
but main is the program that contains everything.
I used the profiler in this way:
import cProfile
import pstats
def mymain():
    #all the code
#end of main indentation
cProfile.run('mymain()', 'restats', 'time')
p = pstats.Stats('restats')
p.strip_dirs().sort_stats('name')
p.sort_stats('time').print_stats(10)
>   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
>        1  149.479  149.479  199.851  199.851  skymaps5.py:16(mymain)
> 18101000   28.682    0.000   28.682    0.000  {method 'write' of 'file' objects}
>
>    33044    5.470    0.000    6.444    0.000  interpolate.py:394(_c
Gabriele,
21071736 function calls in 199.883 seconds
The 21 million function calls aren't really a surprise to me, given
18 million calls to file.write(). Given that the majority of the
time is still spent in skymaps5.py, I think you'll need to
instrument that a bit more to figure
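One likely target those numbers suggest: 18 million separate write() calls. Batching the output — building each chunk of text in memory and writing it with one call — removes most of that method-call overhead. A sketch using an in-memory file (io.StringIO) as a stand-in for the real output file:

```python
import io

values = [str(k) for k in range(5)]

f = io.StringIO()  # stands in for the real open('file.dat', 'w')

# Instead of one write per value...
#   for v in values:
#       f.write(v + "\n")
# ...join the chunk once and make a single write call:
f.write("\n".join(values) + "\n")
```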
Hi,
I get this result:
Thu Apr 10 17:35:53 2014    restats
21071736 function calls in 199.883 seconds
Ordered by: internal time
List reduced from 188 to 10 due to restriction <10>
ncalls tottime percall cumtime percall filename:lineno(function)
1 149.479 149.479
Hi Gabriele,
I should probably have pointed you to:
https://docs.python.org/2/library/profile.html#instant-user-s-manual
instead.
Here is an example that uses the cProfile module. Let's say that I'm
trying to pinpoint where something is going slow in some_program():
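The example that followed is cut off in the archive; a minimal sketch in the spirit of that manual section (some_program and its workload are placeholders):

```python
import cProfile
import io
import pstats

def some_program():
    # Deliberately busy stand-in workload.
    return sum(k * k for k in range(100000))

# Profile just the call, then ask pstats for the top entries.
profiler = cProfile.Profile()
profiler.enable()
result = some_program()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
```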
I'm trying to profile it adding this code:
import cProfile
import re
import pstats
cProfile.run('re.compile("foo|bar")', 'restats')
p = pstats.Stats('restats')
p.strip_dirs().sort_stats('name')
p.sort_stats('time').print_stats(10)
but where do I have to add this in my code?
because I obtain
Thu
Hi Gabriele,
Have you profiled your program? Please look at:
https://docs.python.org/2/library/profile.html
If you can, avoid guessing what is causing performance to drop.
Rather, use the tools in the profiling libraries to perform
measurements.
It may be that your program is taking a lon
On 10/04/2014 18:29, Gabriele Brambilla wrote:
(I'm sorry but I don't know very well what profiling is)
Take a look at these for some tips
http://www.huyng.com/posts/python-performance-analysis/ and
https://wiki.python.org/moin/PythonSpeed/PerformanceTips
--
My fellow Pythonistas, ask not
Hi,
2014-04-10 13:05 GMT-04:00 Martin A. Brown :
>
> Hi there Gabriele,
>
> : I have a program that is reading near 60 elements from a
> : file. For each element it performs 200 times a particular
> : mathematical operation (a numerical interpolation of a function).
> : Now this process
On 10/04/14 16:58, Gabriele Brambilla wrote:
For each element it performs 200 times a particular mathematical
operation (a numerical interpolation of a function).
Now this process takes nearly 8 hours.
The first thing to do in such cases is check that the time
is going where you think it is. Ru
Hi there Gabriele,
: I have a program that is reading near 60 elements from a
: file. For each element it performs 200 times a particular
: mathematical operation (a numerical interpolation of a function).
: Now this process takes nearly 8 hours.
Sounds fun! Here are some thoughts (th
Hi,
I have a program that is reading nearly 60 elements from a file.
For each element it performs a particular mathematical operation 200 times
(a numerical interpolation of a function).
Now this process takes nearly 8 hours.
Creating a C function and calling it from the code could improve the s
37 matches