Attached here is code for a bilateral filter:
1. The code is composed of a Cython back end (bilateral_base.pyx) and a Python
front end (bilateral.py).
2. I do not have the tools to make Windows binaries (I run it on Gentoo Linux).
3. It is not hard to strip the Cython code to get a pure Python (and slow) version.
On Thu, Feb 12, 2009 at 6:04 PM, Keith Goodman wrote:
> On Thu, Feb 12, 2009 at 5:52 PM, Keith Goodman wrote:
>> On Thu, Feb 12, 2009 at 5:22 PM, A B wrote:
>>> Are there any routines to fill in the gaps in an array. The simplest
>>> would be by carrying the last known observation forward.
>>> 0,0,10,8,0,0,7,0
On Thu, Feb 12, 2009 at 5:52 PM, Keith Goodman wrote:
> On Thu, Feb 12, 2009 at 5:22 PM, A B wrote:
>> Are there any routines to fill in the gaps in an array. The simplest
>> would be by carrying the last known observation forward.
>> 0,0,10,8,0,0,7,0
>> 0,0,10,8,8,8,7,7
>
> Here's an obvious hack for 1d arrays:
On Thu, Feb 12, 2009 at 5:22 PM, A B wrote:
> Are there any routines to fill in the gaps in an array. The simplest
> would be by carrying the last known observation forward.
> 0,0,10,8,0,0,7,0
> 0,0,10,8,8,8,7,7
Here's an obvious hack for 1d arrays:
def fill_forward(x, miss=0):
y = x.copy()
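The snippet above is cut off by the archive. A minimal sketch of how such a forward-fill hack might be finished (the loop body is a guess at one straightforward completion, not necessarily Keith's actual code):

```python
import numpy as np

def fill_forward(x, miss=0):
    # carry the last non-missing observation forward (1d arrays only)
    y = x.copy()
    for i in range(1, len(y)):
        if y[i] == miss:
            y[i] = y[i - 1]
    return y

x = np.array([0, 0, 10, 8, 0, 0, 7, 0])
print(fill_forward(x).tolist())  # → [0, 0, 10, 8, 8, 8, 7, 7]
```

This matches the expected output given earlier in the thread; leading missing values stay as-is since there is nothing to carry forward.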
On Feb 12, 2009, at 8:22 PM, A B wrote:
> Hi,
> Are there any routines to fill in the gaps in an array. The simplest
> would be by carrying the last known observation forward.
> 0,0,10,8,0,0,7,0
> 0,0,10,8,8,8,7,7
> Or by somehow interpolating the missing values based on the previous
> and next k
Hi,
Are there any routines to fill in the gaps in an array. The simplest
would be by carrying the last known observation forward.
0,0,10,8,0,0,7,0
0,0,10,8,8,8,7,7
Or by somehow interpolating the missing values based on the previous
and next known observations (mean).
Thanks.
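For the interpolation variant of the question, np.interp can fill the gaps linearly from the surrounding known observations. A sketch, assuming 0 is the missing-value marker as in the example data:

```python
import numpy as np

x = np.array([0, 0, 10, 8, 0, 0, 7, 0], dtype=float)
missing = x == 0                  # assume 0 marks a gap
idx = np.arange(len(x))

# linear interpolation at the missing positions from the known ones;
# gaps before the first / after the last known value get the edge value
filled = x.copy()
filled[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
print(filled.tolist())
```

Here the two leading gaps take the first known value (10), the gaps between 8 and 7 are filled linearly, and the trailing gap takes the last known value (7).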
On Fri, Feb 13, 2009 at 6:39 AM, Charles R Harris
wrote:
>
>
> On Thu, Feb 12, 2009 at 2:23 PM, Pauli Virtanen wrote:
>>
>> Hi,
>>
>> The Buildbot (up once again) is showing build failures on some platforms:
>>
>>
>>
>> http://buildbot.scipy.org/builders/Windows_XP_x86_64_MSVC/builds/875/steps/
> Sturla Molden wrote:
> IMO there's a problem with using literal variable names here, because
> Python syntax implies that the value is passed. One shouldn't make
> syntax where private=(i,) is legal but private=(f(),) isn't.
The latter would be illegal in OpenMP as well. OpenMP pragmas only take variable names, not expressions.
On Thu, Feb 12, 2009 at 2:23 PM, Pauli Virtanen wrote:
> Hi,
>
> The Buildbot (up once again) is showing build failures on some platforms:
>
>
> http://buildbot.scipy.org/builders/Windows_XP_x86_64_MSVC/builds/875/steps/shell/logs/stdio
>
> http://buildbot.scipy.org/builders/Linux_SPARC_64_Debian
Hi,
The Buildbot (up once again) is showing build failures on some platforms:
http://buildbot.scipy.org/builders/Windows_XP_x86_64_MSVC/builds/875/steps/shell/logs/stdio
http://buildbot.scipy.org/builders/Linux_SPARC_64_Debian/builds/423/steps/shell/logs/stdio
Are these a sign
> You know, I thought of the exact same thing when reading your post. No,
> you need the GIL currently, but that's something I'd like to fix.
>
> Ideally, it would be something like this:
>
> cdef int i, s = 0, n = ...
> cdef np.ndarray[int] arr = ... # will require the GIL
> with nogil:
> for i
On Thu, 12 Feb 2009 14:32:02 -0600 Robert Kern wrote:
>> You could also think of it the other way (in terms of generating 64-bit
>> ints). Instead of generating two 32-bit rints and concatenating them
>> for a 64-bit int, you can just directly generate the 64-bit int. Since
>> the 64-bit int requir
On Thu, Feb 12, 2009 at 14:17, Michael S. Gilbert
wrote:
> On Thu, 12 Feb 2009 13:18:26 -0600 Robert Kern wrote:
>> > I did some testing with this 64-bit implementation (mt19937-64). I've
>> > found that it is actually slower than the 32-bit reference (mt19937ar)
>> > on 64-bit systems (2.15s vs 2
Brian Granger wrote:
> And a question:
>
> With the new Numpy support in Cython, does Cython release the GIL if
> it can when running through through loops over numpy arrays? Does
> Cython call into the C API during these sections?
You know, I thought of the exact same thing when reading your post.
Wow, interesting thread. Thanks everyone for the ideas. A few more comments:
GPUs/CUDA:
* Even though there is a bottleneck between main memory and GPU
memory, as Nathan mentioned, the much larger memory bandwidth on a GPU
often makes GPUs great for memory bound computations...as long as you
ca
On Thu, 12 Feb 2009 13:18:26 -0600 Robert Kern wrote:
> > I did some testing with this 64-bit implementation (mt19937-64). I've
> > found that it is actually slower than the 32-bit reference (mt19937ar)
> > on 64-bit systems (2.15s vs 2.25s to generate 1 ints). This is
> > likely because it
> At any rate, I really like the OpenMP approach and prefer to have
> support for it in Cython much better than threading, MPI or whatever.
> But the thing is: is OpenMP stable, mature enough for allow using it in
> most of common platforms? I think that recent GCC compilers support
> the latest i
> Recent Matlab versions use Intel's Math Kernel Library, which performs
> automatic multi-threading - also for mathematical functions like sin
> etc, but not for addition, multiplication etc. It seems to me Matlab
> itself does not take care of multi-threading. On
> http://www.intel.com/software/p
> If your problem is evaluating vector expressions just like the above
> (i.e. without using transcendental functions like sin, exp, etc...),
> usually the bottleneck is on memory access, so using several threads is
> simply not going to help you achieving better performance, but rather
> the contr
Dag Sverre Seljebotn wrote:
> Hmm... yes. Care would need to be taken though because Cython might in
> the future very well generate a "while" loop instead for such a
> statement under some circumstances, and that won't work with OpenMP. One
> should be careful with assuming what the C result wi
Sturla Molden wrote:
> On 2/12/2009 12:34 PM, Dag Sverre Seljebotn wrote:
>
>> FYI, I am one of the core Cython developers and can make such
>> modifications in Cython itself as long as there's consensus on how it
>> should look on the Cython mailing list. My problem is that I don't
>> really
On Thu, Feb 12, 2009 at 10:58, Michael S. Gilbert
wrote:
> On Fri, 6 Feb 2009 18:18:44 -0500 "Michael S. Gilbert" wrote:
>> BTW, there is a 64-bit version of the reference mersenne twister
>> implementation available [1].
>
> I did some testing with this 64-bit implementation (mt19937-64). I've
>
On Thu, Feb 12, 2009 at 9:21 AM, Ralph Kube wrote:
> The same happens on the ipython prompt:
>
> 0.145 / 0.005 = 28.999999999999996
> N.int32(0.145 / 0.005) = 28
>
> Any ideas how to deal with this?
Do you want the answer to be 29? N.int32 truncates. If you want to
round instead, you could use th
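The reply is truncated, but the truncate-versus-round distinction it points at can be sketched like this (a guess at the kind of fix being suggested, not necessarily Keith's exact wording):

```python
import numpy as np

x = 0.145 / 0.005     # slightly less than 29 in double precision

print(np.int32(x))             # 28 -- int conversion truncates toward zero
print(np.int32(np.round(x)))   # 29 -- round to the nearest integer first
```

Rounding before converting gives the intuitively expected 29, while direct conversion silently drops everything after the decimal point.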
Ralph Kube wrote:
> Hi there,
> I have a little problem here with array indexing, hope you see the problem.
> I use the following loop to calculate some integrals
>
> ...
> 0.145 / 0.005 = 28.999999999999996
> N.int32(0.145 / 0.005) = 28
Conversion to int truncates; it doesn't round. Try
N.int32(
On Thu, Feb 12, 2009 at 11:21 AM, Ralph Kube wrote:
> Hi there,
> I have a little problem here with array indexing, hope you see the problem.
> I use the following loop to calculate some integrals
>
> import numpy as N
> from scipy.integrate import quad
> T = 1
> dt = 0.005
> L = 3
> n = 2
> ints
Hi there,
I have a little problem here with array indexing, hope you see the problem.
I use the following loop to calculate some integrals
import numpy as N
from scipy.integrate import quad
T = 1
dt = 0.005
L = 3
n = 2
ints = N.zeros([T/dt])
for t in N.arange(0, T, dt):
a = quad(lambda
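The listing is cut off, but the pitfall it runs into is the same truncation issue discussed in this thread: T/dt is a float that can land just under an integer, so using it for the array size or as an index goes wrong. A hedged sketch of a safer pattern (the quad call is replaced by a placeholder integrand, since the original lambda is truncated):

```python
import numpy as np

T = 1
dt = 0.005

# round before converting: T/dt may come out as 199.999... rather than 200
n = int(round(T / dt))
ints = np.zeros(n)

# index with the loop counter instead of computing int(t/dt),
# which truncates (e.g. 0.145/0.005 -> 28, not 29)
for i, t in enumerate(np.arange(0, T, dt)):
    ints[i] = t ** 2          # placeholder for the quad(...) result

print(n)   # 200
```

Using enumerate sidesteps the float-to-index conversion entirely, so no slot is skipped or overwritten twice.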
On 2/12/2009 5:24 PM, Gael Varoquaux wrote:
> My two cents: go for Cython objects/statements. Not only does code in
> comments look weird and like a hack, but it also means you have to hack
> the parser.
I agree with this. Particularly because Cython uses indentation as
syntax. With comments you
On Fri, 6 Feb 2009 18:18:44 -0500 "Michael S. Gilbert" wrote:
> BTW, there is a 64-bit version of the reference mersenne twister
> implementation available [1].
I did some testing with this 64-bit implementation (mt19937-64). I've
found that it is actually slower than the 32-bit reference (mt1993
On Wed, Feb 11, 2009 at 10:47 PM, Pierre GM wrote:
>
> On Feb 11, 2009, at 11:38 PM, Ryan May wrote:
>
> > Pierre,
> >
> > I noticed that using dtype=None with a heterogeneous set of data,
> > trying to use unpack=True to get the columns into separate arrays
> > (instead of a structured array) do
On Thu, Feb 12, 2009 at 03:27:51PM +0100, Sturla Molden wrote:
> The question is: Should OpenMP be comments in the Cython code (as they
> are in C and Fortran), or should OpenMP be special objects?
My two cents: go for Cython objects/statements. Not only does code in
comments look weird and like a hack, but it also means you have to hack the parser.
On 2/11/09, Robert Kern wrote:
> On Wed, Feb 11, 2009 at 23:24, A B wrote:
>> Hi,
>>
>> I have the following data structure:
>>
>> col1 | col2 | col3
>>
>> 20080101|key1|4
>> 20080201|key1|6
>> 20080301|key1|5
>> 20080301|key2|3.4
>> 20080601|key2|5.6
>>
>> For each key in the second column, I wo
Nathan Bell wrote:
> On Thu, Feb 12, 2009 at 8:19 AM, Michael Abshoff
> wrote:
Hi,
>> No even close. The current generation peaks at around 1.2 TFlops single
>> precision, 280 GFlops double precision for ATI's hardware. The main
>> problem with those numbers is that the memory on the graphics ca
This is probably more than I need but I will definitely keep it as
reference. Thank you.
On 2/12/09, bernhard.vo...@gmail.com wrote:
> You might consider the groupby from the itertools module.
>
> Do you have two keys only? I would prefer grouping on the first
> column. For groupby you need to sort
On 2/12/2009 4:03 PM, Matthieu Brucher wrote:
> In C89, you will have absolutely no benefit (because there
> is no way you can tell the compiler that there is no aliasing), in
> Fortran, it will be optimized correctly.
In ANSI C (aka C89) the effect is achieved using compiler pragmas.
In ISO C99, the same effect is achieved with the 'restrict' qualifier.
On Thu, Feb 12, 2009 at 8:19 AM, Michael Abshoff
wrote:
>
> No even close. The current generation peaks at around 1.2 TFlops single
> precision, 280 GFlops double precision for ATI's hardware. The main
> problem with those numbers is that the memory on the graphics card
> cannot feed the data fast
2009/2/12 David Cournapeau :
> Matthieu Brucher wrote:
>>
>> Sorry, I was referring to my last mail, but I sent so many in 5 minutes ;)
>> In C, if you have two arrays (two pointers), the compiler can't make
>> aggressive optimizations because they may intersect. With Fortran,
>> this is not possible.
2009/2/12 David Cournapeau :
> Matthieu Brucher wrote:
>>> No - I have never seen deep explanation of the matlab model. The C api
>>> is so small that it is hard to deduce anything from it (except that the
>>> memory handling is not ref-counting-based, I don't know if it matters
>>> for our discuss
David Cournapeau wrote:
> Matthieu Brucher wrote:
>> For BLAS level 3, the MKL is parallelized (so matrix multiplication is).
>>
Hi David,
> Same for ATLAS: thread support is one focus in the 3.9 series, currently
> in development.
ATLAS has had thread support for a long, long time. The 3.9 s
On 2/12/2009 1:44 PM, Sturla Molden wrote:
Here is an example of SciPy's ckdtree.pyx modified to use OpenMP.
It seems I managed to post an erroneous C file. :(
S.M.
/*
* Parallel query for faster kd-tree searches on SMP computers.
* This function will relea
Matthieu Brucher wrote:
>
> Sorry, I was referring to my last mail, but I sent so many in 5 minutes ;)
> In C, if you have two arrays (two pointers), the compiler can't make
> aggressive optimizations because they may intersect. With Fortran,
> this is not possible. In this matter, Numpy behaves like
On 2/12/2009 12:34 PM, Dag Sverre Seljebotn wrote:
> FYI, I am one of the core Cython developers and can make such
> modifications in Cython itself as long as there's consensus on how it
> should look on the Cython mailing list. My problem is that I don't
> really know OpenMP and have little e
Matthieu Brucher wrote:
>
> For BLAS level 3, the MKL is parallelized (so matrix multiplication is).
>
Same for ATLAS: thread support is one focus in the 3.9 series, currently
in development. I have never used it, I don't know how it compares to the
MKL,
David
You might consider the groupby from the itertools module.
Do you have two keys only? I would prefer grouping on the first
column. For groupby you need to sort the array after the first column
then.
from itertools import groupby
a.sort(order='col1')
# target array: first col are unique dates, seco
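The suggestion above is truncated; here is one way it might be finished, grouping on the first column and summing col3 per date. The sample data mirrors the table earlier in the thread, but the structured dtype and the choice of aggregation are assumptions:

```python
import numpy as np
from itertools import groupby

a = np.array([(20080101, 'key1', 4.0),
              (20080201, 'key1', 6.0),
              (20080301, 'key1', 5.0),
              (20080301, 'key2', 3.4),
              (20080601, 'key2', 5.6)],
             dtype=[('col1', 'i8'), ('col2', 'U4'), ('col3', 'f8')])

# groupby only merges *adjacent* equal keys, hence the sort first
a.sort(order='col1')
totals = {date: sum(row['col3'] for row in rows)
          for date, rows in groupby(a, key=lambda row: row['col1'])}
print(totals)  # one entry per unique date
```

The sort-then-groupby pattern is the key point: without the sort, the two 20080301 rows would only be merged if they happened to be adjacent.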
Matthieu Brucher wrote:
>> No - I have never seen deep explanation of the matlab model. The C api
>> is so small that it is hard to deduce anything from it (except that the
>> memory handling is not ref-counting-based, I don't know if it matters
>> for our discussion of speeding up ufunc). I would
2009/2/12 Gregor Thalhammer :
> Brian Granger wrote:
>>> I am curious: would you know what would be different in numpy's case
>>> compared to matlab array model concerning locks ? Matlab, up to
>>> recently, only spreads BLAS/LAPACK on multi-cores, but since matlab 7.3
>>> (or 7.4), it also uses
2009/2/12 Sturla Molden :
> On 2/12/2009 1:50 PM, Francesc Alted wrote:
>
>> Hey! That's very nice to know. We already have OpenMP support in
>> Cython for free (or apparently it seems so :-)
>
> No, we don't, as variable names are different in C and Cython. But
> adding support for OpenMP would
> No - I have never seen deep explanation of the matlab model. The C api
> is so small that it is hard to deduce anything from it (except that the
> memory handling is not ref-counting-based, I don't know if it matters
> for our discussion of speeding up ufunc). I would guess that since two
> array
Yes, it is. You have to link against pthread (at least with Linux ;))
You have to write a single parallel region if you don't want this
overhead (which is not possible with Python).
Matthieu
2009/2/12 Gael Varoquaux :
> On Wed, Feb 11, 2009 at 11:52:40PM -0600, Robert Kern wrote:
>> > This seem
On Thursday 12 February 2009, Sturla Molden wrote:
> On 2/12/2009 1:50 PM, Francesc Alted wrote:
> > Hey! That's very nice to know. We already have OpenMP support in
> > Cython for free (or apparently it seems so :-)
>
> No, we don't, as variable names are different in C and Cython. But
> adding support for OpenMP would not bloat the Cython language.
> I am curious: would you know what would be different in numpy's case
> compared to matlab array model concerning locks ? Matlab, up to
> recently, only spreads BLAS/LAPACK on multi-cores, but since matlab 7.3
> (or 7.4), it also uses multicore for mathematical functions (cos,
> etc...). So at lea
Sturla Molden wrote:
> On 2/12/2009 1:50 PM, Francesc Alted wrote:
>
>
>> Hey! That's very nice to know. We already have OpenMP support in
>> Cython for free (or apparently it seems so :-)
>>
>
> No, we don't, as variable names are different in C and Cython. But
> adding support for OpenMP would not bloat the Cython language.
Sturla Molden wrote:
> On 2/12/2009 12:20 PM, David Cournapeau wrote:
Hi,
>> It does if you have access to the parallel toolbox I mentioned earlier
>> in this thread (again, no experience with it, but I think it is
>> specially popular on clusters; in that case, though, it is not limited
>> to th
Francesc Alted wrote:
> I don't know OpenMP enough neither, but I'd say that in this list there
> could be some people that could help.
>
> At any rate, I really like the OpenMP approach and prefer to have
> support for it in Cython much better than threading, MPI or whatever.
> But the thing i
On 2/12/2009 1:50 PM, Francesc Alted wrote:
> Hey! That's very nice to know. We already have OpenMP support in
> Cython for free (or apparently it seems so :-)
No, we don't, as variable names are different in C and Cython. But
adding support for OpenMP would not bloat the Cython language.
Cy
Sturla Molden wrote:
> On 2/12/2009 12:20 PM, David Cournapeau wrote:
>
>
>> It does if you have access to the parallel toolbox I mentioned earlier
>> in this thread (again, no experience with it, but I think it is
>> specially popular on clusters; in that case, though, it is not limited
>> to t
On 2/12/2009 12:20 PM, David Cournapeau wrote:
> It does if you have access to the parallel toolbox I mentioned earlier
> in this thread (again, no experience with it, but I think it is
> specially popular on clusters; in that case, though, it is not limited
> to thread-based implementation).
As
On Thursday 12 February 2009, Sturla Molden wrote:
> OpenMP does not need to be a part of the Cython language. It can be
> special comments in the code as in Fortran. After all, "#pragma omp
> parallel" is a comment in Cython.
Hey! That's very nice to know. We already have OpenMP support in
Cython for free (or apparently it seems so :-)
On Thursday 12 February 2009, Dag Sverre Seljebotn wrote:
> FYI, I am one of the core Cython developers and can make such
> modifications in Cython itself as long as there's consensus on how it
> should look on the Cython mailing list. My problem is that I don't
> really know OpenMP and have lit
On 2/12/2009 11:30 AM, Dag Sverre Seljebotn wrote:
It would be interesting to see how a spec would look for integrating
OpenMP natively into Cython for these kinds of purposes. Cython is still
flexible as a language after all. Avoiding language bloat is also
important, but it is difficult to k
On 2/12/2009 7:15 AM, David Cournapeau wrote:
> Since openmp also exists on windows, I doubt that it is required that
> openmp uses pthread :)
On Windows, MSVC uses Win32 threads and GCC (Cygwin and MinGW) uses
pthreads. If you use OpenMP with MinGW, the executable becomes dependent
on pthreadG
Hi all,
as Francesc announced, the latest release of Numexpr 1.2 can be built
with Intel's Math Kernel Library, which gives a big increase in
performance. Now the questions: Could somebody provide binaries for
Windows of Numexpr, linked with Intel's MKL? I know, there is the license
problem.
Thu, 12 Feb 2009 03:39:59 -0800, Andrew Straw wrote:
[clip]
> ...except for one last question: If Hardy uses the g77 ABI but I'm
> building scipy with gfortran, shouldn't there be an ABI issue with my
> ATLAS? Shouldn't I get lots of test failures with scipy? I don't.
On Debian Etch you get myster
Andrew Straw wrote:
> OK, I think you're concerned about compatibility of Python extensions
> using fortran. We don't use any (that I know of), so I'm going to stop
> worrying about this and upload .debs from your .dsc (or very close) to
> my repository...
>
> ...except for one last question: If Ha
OK, I think you're concerned about compatibility of Python extensions
using fortran. We don't use any (that I know of), so I'm going to stop
worrying about this and upload .debs from your .dsc (or very close) to
my repository...
...except for one last question: If Hardy uses the g77 ABI but I'm
bu
Gregor Thalhammer wrote:
> Recent Matlab versions use Intel's Math Kernel Library, which performs
> automatic multi-threading - also for mathematical functions like sin
> etc, but not for addition, multiplication etc.
It does if you have access to the parallel toolbox I mentioned earlier
in this
Francesc Alted wrote:
> On Thursday 12 February 2009, Dag Sverre Seljebotn wrote:
>
>> A quick digression:
>>
>> It would be interesting to see how a spec would look for integrating
>> OpenMP natively into Cython for these kinds of purposes. Cython is
>> still flexible as a language after all.
Andrew Straw wrote:
> (Warning: this email is a little over-detailed on the packaging details
> front. Believe it or not, I'm not discussing the details of Debian
> packaging for fun, but rather my questions have practical importance to
> me -- I don't want to break all my lab's scipy installations
On Thursday 12 February 2009, Dag Sverre Seljebotn wrote:
> A quick digression:
>
> It would be interesting to see how a spec would look for integrating
> OpenMP natively into Cython for these kinds of purposes. Cython is
> still flexible as a language after all.
That would be really nice indeed
Brian Granger wrote:
>> I am curious: would you know what would be different in numpy's case
>> compared to matlab array model concerning locks ? Matlab, up to
>> recently, only spreads BLAS/LAPACK on multi-cores, but since matlab 7.3
>> (or 7.4), it also uses multicore for mathematical functions
David Cournapeau wrote:
> Andrew Straw wrote:
>
>> Fernando Perez wrote:
>>
>>
>>> On Wed, Feb 11, 2009 at 6:17 PM, David Cournapeau
>>> wrote:
>>>
>>>
>>>
Unfortunately, it does require some work, because hardy uses g77
instead of gfortran, so the source package
Brian Granger wrote:
> Hi,
>
> This is relevant for anyone who would like to speed up array based
> codes using threads.
>
> I have a simple loop that I have implemented using Cython:
>
> def backstep(np.ndarray opti, np.ndarray optf,
> int istart, int iend, double p, double q):
>
Hi Brian,
On Thursday 12 February 2009, Brian Granger wrote:
> Hi,
>
> This is relevant for anyone who would like to speed up array based
> codes using threads.
>
> I have a simple loop that I have implemented using Cython:
>
> def backstep(np.ndarray opti, np.ndarray optf,
> int is
Wes McKinney wrote:
>
> The general problem here is an indexed array (by dates or strings, for
> example), that you want to conform to a new index. The arrays most of
> the time contain floats but occasionally PyObjects. For some reason
> the access and assignment is slow (this function can be f
Paul Rudin writes:
> def compute_voxels2(depth_buffers):
> dim = depth_buffers[0].shape[0]
> znear, zfar, ynear, yfar, xnear, xfar = depth_buffers
> z = numpy.arange(dim)
> y = numpy.arange(dim)[:, None]
> x = numpy.arange(dim)[:, None, None]
>
> return ((xnear[y,z] < xf
Andrew Straw wrote:
> Fernando Perez wrote:
>
>> On Wed, Feb 11, 2009 at 6:17 PM, David Cournapeau wrote:
>>
>>
>>> Unfortunately, it does require some work, because hardy uses g77
>>> instead of gfortran, so the source package has to be different (once
>>> hardy is done, all the one below
On Thu, Feb 12, 2009 at 12:42:37AM -0600, Robert Kern wrote:
> It is implemented using threads, with Windows native threads on
> Windows. I think Gaël really just meant "threads" there.
I guess so :). Once you reformulate my remark in proper terms, this is
indeed what comes out.
I guess all what
Fernando Perez wrote:
> On Wed, Feb 11, 2009 at 6:17 PM, David Cournapeau wrote:
>
>> Unfortunately, it does require some work, because hardy uses g77
>> instead of gfortran, so the source package has to be different (once
>> hardy is done, all the one below would be easy, though). I am not sure