After, I agree with you.
2015-09-30 18:14 GMT+01:00 Robert Kern :
> On Wed, Sep 30, 2015 at 10:35 AM, Matthieu Brucher
> wrote:
>>
>> Yes, obviously, the code has NR parts, so it can't be licensed as BSD
>> as it is...
>
> It's not obvious to me, especi
Yes, obviously, the code has NR parts, so it can't be licensed as BSD
as it is...
Matthieu
2015-09-30 2:37 GMT+01:00 Charles R Harris :
>
>
> On Tue, Sep 29, 2015 at 6:48 PM, Chris Barker - NOAA Federal
> wrote:
>>
>> This sounds pretty cool -- and I've had a use case. So it would be
>> nice to
Hi,
These functions are defined in the C standard library!
Cheers,
Matthieu
2015-03-17 18:00 GMT+00:00 Shubhankar Mohapatra :
> Hello all,
> I am an undergraduate and I am trying to do a project this time on numpy in
> gsoc. This project is about integrating vector math library classes of SLEEF
2014-05-12 14:23 GMT+02:00 Matthieu Brucher :
>
> >> Yes, they seem to be focused on HPC clusters with sometimes old rules
> >> (such as no shared libraries).
> >> Also, they don't use a portable Makefile generator, not even autoconf,
> >> this may also play a role in Windows suppor
Yes, they seem to be focused on HPC clusters with sometimes old rules
(such as no shared libraries).
Also, they don't use a portable Makefile generator, not even autoconf,
this may also play a role in Windows support.
2014-05-12 12:52 GMT+01:00 Olivier Grisel :
> BLIS looks interesting. Besides threading
OK, I may end up doing it, as it can be quite interesting!
Cheers,
Matthieu
2014-04-22 15:45 GMT+01:00 Andrew Collette :
> Hi,
>
>> Good work!
> >> Small question: do you now have the interface to set alignment?
>
> Unfortunately this didn't make it in to 2.3. Pull requests are
> welcome for thi
Good work!
Small question: do you now have the interface to set alignment?
Cheers,
Matthieu
2014-04-22 14:25 GMT+01:00 Andrew Collette :
> Announcing HDF5 for Python (h5py) 2.3.0
> ===
>
> The h5py team is happy to announce the availability of h5py 2.3.0 (fin
> know in that field (Pierre L'Ecuyer) and he recommended this one for
> our problem. For the GPU, we don't want an RNG that uses too many
> registers either.
>
> Robert K. commented that this would need refactoring of numpy.random
> and then it would be easy to have many rng.
Hi,
The main issue with PRNG and MT is that you don't know how to
initialize all MT generators properly. A hash-based PRNG is much more
efficient in that regard (see Random123 for a more detailed
explanation).
From what I heard, if MT is indeed chosen for RNG in numerical world,
in parallel world
Yes, but these will be scipy.sparse matrices, nothing to do with numpy
(dense) matrices.
Cheers,
Matthieu
2014-02-10 Dinesh Vadhia :
> Scipy sparse uses matrices - I was under the impression that scipy sparse
> only works with matrices or have things moved on?
>
>
>
> ___
According to the discussions on the ML, they switched from GPL to MPL
to enable the kind of distribution numpy/scipy is looking for. They
had some hesitations between BSD and MPL, but IIRC their official
stand is to allow inclusion inside BSD-licensed code.
Cheers,
Matthieu
2014-02-06 20:09 GMT+
Hi,
Don't forget that np.where is not smart. First np.sin(x)/x is computed
for the array, which is why you see the warning, and then np.where
selects the proper final results.
Cheers,
Matthieu
2013/11/16 David Pine :
> The program at the bottom of this message returns the following runtime
> w
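A minimal sketch of that behavior (array values invented for the example): both branches of np.where are evaluated over the whole array before any selection happens, so the 0/0 warning fires even though the final values are fine.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])

# np.sin(x) / x is computed for the WHOLE array first (emitting the
# 0/0 RuntimeWarning at x == 0); only then does np.where select.
with np.errstate(invalid="ignore", divide="ignore"):
    y = np.where(x == 0.0, 1.0, np.sin(x) / x)

# Warning-free alternative: only divide where it is safe to do so.
y2 = np.ones_like(x)
np.divide(np.sin(x), x, out=y2, where=(x != 0.0))
```

The second form avoids the warning entirely because the division is only carried out on the masked elements.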
From my point of view, you should never use an output argument equal to an
input argument. It can impede a lot of optimizations.
Matthieu
2013/5/23 Nicolas Rougier
>
> >
> > Sure, that's clearly what's going on, but numpy shouldn't let you
> > silently shoot yourself in the foot like that. Re-us
Hi,
It's to be expected. You are overwriting one of your input vectors while it
is still being used.
So not a numpy bug ;)
Matthieu
2013/5/23 Pierre Haessig
> Hi Nicolas,
>
> On 23/05/2013 15:45, Nicolas Rougier wrote:
> > if I use either a or b as output, results are wrong (and nothing in
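A small sketch of why writing a dot product into one of its own operands goes wrong (a plain Python loop standing in for the internal computation, values invented for the example): by the time later elements are computed, the inputs they need have already been overwritten.

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[0., 1.], [1., 0.]])   # permutation: swaps the two columns

expected = a.dot(b)                  # fresh output array: correct result

# Naive "in-place" matrix product, overwriting its own input:
d = a.copy()
for i in range(2):
    for j in range(2):
        # d[i, 0] has already been replaced with a result when we
        # compute d[i, 1], so the second column comes out wrong.
        d[i, j] = sum(d[i, k] * b[k, j] for k in range(2))
```

Here `expected` is [[2, 1], [4, 3]] but `d` ends up as [[2, 2], [4, 4]]: silent corruption, exactly the failure mode discussed above.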
e MKL routines are called (directly, or via a numpy/scipy interface).
>
>
>
> Tom
>
>
>
> *From:* numpy-discussion-boun...@scipy.org [mailto:
> numpy-discussion-boun...@scipy.org] *On Behalf Of *Matthieu Brucher
> *Sent:* Friday, April 19, 2013 9:50 AM
>
> *To:* D
For the matrix multiplication or array dot, you use BLAS3 functions as they
are more or less the same. For the rest, nothing inside Numpy uses BLAS or
LAPACK explicitly IIRC. You have to do the calls yourself.
2013/4/19 Neal Becker
> KACVINSKY Tom wrote:
>
> > You also get highly optimized BL
Hi,
I think you have at least linear algebra (lapack) and dot. Basic
arithmetics will not benefit, for expm, logm... I don't know.
Matthieu
2013/4/19 Neal Becker
> What sorts of functions take advantage of MKL?
>
> Linear Algebra (equation solving)?
>
> Something like dot product?
>
> exp, lo
message ---
> From: "Matthieu Brucher"
> Date: 16 March 2013, 11:33:39
>
> Hi,
>
> Different objects can have the same hash, so it compares to find the
> actual correct object.
> Usually when you store something in a dict and later you can't find it
> anym
Hi,
Different objects can have the same hash, so it compares to find the actual
correct object.
Usually when you store something in a dict and later you can't find it
anymore, it is that the internal state changed and that the hash is not the
same anymore.
Matthieu
2013/3/16 Dmitrey
>
>
> ---
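A short illustration of that failure mode (the class is hypothetical, written only to make the hash depend on mutable state):

```python
class Key:
    """A dict key whose hash depends on mutable state -- a recipe
    for losing entries."""
    def __init__(self, x):
        self.x = x
    def __hash__(self):
        return hash(self.x)
    def __eq__(self, other):
        return isinstance(other, Key) and self.x == other.x

k = Key(1)
d = {k: "value"}
k.x = 2            # internal state (and therefore the hash) changed
# d still physically holds k, but lookups now probe the wrong bucket:
found = k in d     # False
```

The entry is not gone; the dict simply looks for it under the new hash and never finds the slot where it was originally stored.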
Hi,
Actually, this behavior is already present in other languages, so I'm -1 on
additional verbosity.
Of course a += b is not the same as a = a + b. The first one modifies the
object a in place; the second one creates a new object and rebinds the name
a to it. The behavior IS consistent.
Cheers,
Matthieu
2013/
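The distinction is easy to see by keeping a second name on the same array (values invented for the example):

```python
import numpy as np

a = np.arange(3)
view = a                     # second name for the SAME object
a += 10                      # in-place: the shared object changes
in_place_shared = view[0]    # the alias sees the update: 10

b = np.arange(3)
view_b = b
b = b + 10                   # new object; the name b is rebound
rebound_original = view_b[0] # the alias still holds the old data: 0
```

After the in-place form, `view is a` still holds; after the rebinding form, `view_b is not b`.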
> Does anyone have an informed opinion on the quality of these books:
>
> "NumPy 1.5 Beginner's Guide", Ivan Idris,
> http://www.packtpub.com/numpy-1-5-using-real-world-examples-beginners-guide/book
>
> "NumPy Cookbook", Ivan Idris,
> http://www.packtpub.com/numpy-for-python-cookbook/book
>
Packt
Oh, about the differences. If there is something like cache blocking inside
ATLAS (which would make sense), the MAD are not in exactly the same order
and this would lead to some differences. You need to compare the MSE/sum of
values squared to the machine precision.
Cheers,
2012/11/9 Matthieu
Hi,
A.A slower than A.A' is not a surprise for me. The latter is far more cache
friendly than the former. Everything follows cache lines, so it is faster
than something that will use one element from each cache line. In fact it
is exactly what "proves" that the new version is correct.
Good job (if
Does ACML now provide a CBLAS interface?
Matthieu
2012/5/12 Thomas Unterthiner
>
> On 05/12/2012 03:27 PM, numpy-discussion-requ...@scipy.org wrote:
> > On 12.05.2012 00:54, Thomas Unterthiner wrote:
> > [clip]
> >> > The process will have 100% CPU usage and will not show any activity
> >> >
2012/3/6 Sturla Molden
> On 06.03.2012 21:45, Matthieu Brucher wrote:
>
> > This is your opinion, but there is a lot of numerical code now in C++
> > and it is far more maintainable than in Fortran. And it is faster
> > for exactly this reason.
>
> That is
Using either for
> numerical programming usually a mistake.
>
This is your opinion, but there is a lot of numerical code now in C++ and
it is far more maintainable than in Fortran. And it is faster for
exactly this reason.
Matthieu
--
Information System Engineer, Ph.D.
Blog: http://matt.e
2012/2/20 Daniele Nicolodi
> On 18/02/12 04:54, Sturla Molden wrote:
> > This is not true. C++ can be much easier, particularly for those who
> > already know Python. The problem: C++ textbooks teach C++ as a subset
> > of C. Writing C in C++ just adds the complexity of C++ on top of C,
> > for n
2012/2/19 Sturla Molden
> Den 19.02.2012 10:28, skrev Mark Wiebe:
> >
> > Particular styles of using templates can cause this, yes. To properly
> > do this kind of advanced C++ library work, it's important to think
> > about the big-O notation behavior of your template instantiations, not
> > jus
2012/2/19 Nathaniel Smith
> On Sun, Feb 19, 2012 at 9:16 AM, David Cournapeau
> wrote:
> > On Sun, Feb 19, 2012 at 8:08 AM, Mark Wiebe wrote:
> >> Is there a specific
> >> target platform/compiler combination you're thinking of where we can do
> >> tests on this? I don't believe the compile tim
2012/2/19 Matthew Brett
> Hi,
>
> On Sat, Feb 18, 2012 at 8:38 PM, Travis Oliphant
> wrote:
>
> > We will need to see examples of what Mark is talking about and clarify
> some
> > of the compiler issues. Certainly there is some risk that once code is
> > written that it will be tempting to jus
> Would it be fair to say then, that you are expecting the discussion
> about C++ will mainly arise after the Mark has written the code? I
> can see that it will be easier to specific at that point, but there
> must be a serious risk that it will be too late to seriously consider
> an alternative
> C++11 has this option:
>
> for (auto& item : container) {
> // iterate over the container object,
> // get a reference to each item
> //
> // "container" can be an STL class or
> // A C-style array with known size.
> }
>
> Which does this:
>
> for item in container:
> pass
Hi Sturla,
AMP has been around for several months now; I wouldn't worry about it.
You also forgot about OpenAcc, the accelerator sister of OpenMP, Intel's
PBB (with TBB, IPP, ArBB that will soon make a step in Numpy's world),
OmpSS, and so many others.
I wouldn't blame MS for this, IMHO Int
Hi,
If I remember correctly, Python's float is a double (precision float). The
precision is higher in doubles (float64) than in usual floats (float32), and
20091231 cannot be represented exactly in 32-bit floats.
Matthieu
2011/12/17 Alex van Houten
> Try this:
> $ python
> Python 2.7.1 (r271:86832, A
Hi David,
Is every GPL part GCC related? If yes, GCC has a licence that allows
redistributing its runtime in any program (meaning the program's licence is
not relevant).
Cheers,
Matthieu
2011/10/30 David Cournapeau
> Hi,
>
> While testing the mingw gcc 3.x -> 4.x migration, I realized that s
It seems you are missing libiomp5.so, which makes sense if you're using the
whole Composer package: the needed libs are split in two different
locations, and unfortunately, Numpy cannot cope with this last time I
checked (I think it was one of the reasons David Cournapeau created numscons
and bento).
Indeed, it is not. In the first case, you keep your original object and each
(integer) element is multiplied by 1.0. In the second example, you are
creating a temporary object a*x, and as x is a float and a an array of
integers, the result will be an array of floats, which will be assigned to
a.
M
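A sketch of that dtype difference with current NumPy (where the silent in-place truncation described above has since become an error under the default casting rules):

```python
import numpy as np

a = np.arange(3)            # integer dtype
b = a * 2.5                 # temporary a*x: upcast to a new float64 array

# In-place, the result must fit back into a's integer dtype; recent
# NumPy refuses the silent downcast instead of truncating:
try:
    a *= 2.5
    in_place_ok = True
except TypeError:           # numpy's UFuncTypeError is a TypeError
    in_place_ok = False
```

So `a = a * x` changes the dtype (by rebinding `a` to a new array), while `a *= x` keeps the original object and its dtype, and errors out when that cast would lose information.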
Hi,
I don't think this is a bug. You are playing with C integers, not Python
integers, and the former are limited. It's a common "feature" in all
processors (even DSPs).
Matthieu
2011/3/23 Dmitrey
> >>> 2**64
> 18446744073709551616L
> >>> 2**array(64)
> -9223372036854775808
> >>> 2**100
> 1267
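The fixed-width wraparound is easy to demonstrate without relying on a particular power-operator implementation (values chosen for the example):

```python
import numpy as np

a = np.array([2 ** 62], dtype=np.int64)
wrapped = (a * 2)[0]       # 2**63 does not fit in int64: wraps to -2**63

python_int = 2 ** 63       # Python ints are arbitrary precision: exact
```

NumPy's integer arithmetic is modular C arithmetic, so the overflow is silent; Python's own integers grow as needed.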
Hi,
Did you try np.where(res[:, 4] == 2)?
Matthieu
2011/3/17 santhu kumar
> Hello all,
>
> I am new to Numpy. I used to program before in matlab and am getting used
> to Numpy.
>
> I have a array like:
> res
> array([[ 33.35053669, 49.4615004 , 44.27631299, 1., 2.
> ],
>[ 3
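For completeness, a sketch of both ways to pull out the matching rows (the second row of `res` is invented for the example):

```python
import numpy as np

res = np.array([[33.35, 49.46, 44.28, 1.0, 2.0],
                [12.30,  8.51, 15.90, 1.0, 3.0]])

rows = np.where(res[:, 4] == 2)[0]    # indices of the matching rows
subset = res[res[:, 4] == 2]          # or boolean masking directly
```

`np.where` gives you the row indices; indexing with the boolean mask gives you the rows themselves in one step.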
>
> C++ templates make binaries almost impossible to debug.
>
Never had an issue with this and all my number crunching code is done
through metaprogramming (with vectorization, cache blocking...) So I have a
lot of complex template structures, and debugging them is easy.
Then, if someone doesn't w
Hi,
Intel Fortran is an excellent Fortran compiler. Why is Fortran still better
than C and C++?
- some rules are different, like arrays passed to functions are ALWAYS
supposed to be independent in Fortran, whereas in C, you have to add a
restrict keyword
- due to the last fact, Fortran is a langua
Hi,
I'm sorry I didn't file a bug, I have some troubles getting my old trac
account back :|
In lib/npyio.py, there is a mistake at line 1029.
Instead of fh.close(), it should have been file.close(). If fromregex opens
the file, it will crash because the name of the file is not correct.
Matthieu
--
> Thanks,
> Sebastian
>
> On Sat, Feb 19, 2011 at 12:40 AM, Sturla Molden wrote:
> > Den 17.02.2011 16:31, skrev Matthieu Brucher:
> >
> > It may also be the sizes of the chunk OMP uses. You can/should specify
> > them.
> >
> > Matthieu
> >
of the OpenMP overhead when
> run with 1 thread,
> and found that - if measured correctly, using same compiler settings
> and so on - the speedup is so small that there no point in doing
> OpenMP - again.
> (For my case, having (only) 4 cores)
>
>
> Cheers,
> Sebastia
> Then, where does the overhead come from ? --
> The call toomp_set_dynamic(dynamic);
> Or the
> #pragma omp parallel for private(j, i,ax,ay, dif_x, dif_y)
>
It may be this. You initialize a thread pool, even if it has only one
thread, and there is the dynamic part, so OpenMP may create severa
> Do you think, one could get even better ?
> And, where does the 7% slow-down (for single thread) come from ?
> Is it possible to have the OpenMP option in a code, without _any_
> penalty for 1 core machines ?
>
There will always be a penalty for parallel code that runs on one core. You
have at l
nd with C extensions?
> I don't know what "PAPI profil" is ...?
> -Sebastian
>
>
> On Tue, Feb 15, 2011 at 4:54 PM, Matthieu Brucher
> wrote:
> > Hi,
> > My first move would be to add a restrict keyword to dist (i.e. dist is
> the
> > only pointer to
Hi,
My first move would be to add a restrict keyword to dist (i.e. dist is the
only pointer to the specific memory location), and then declare dist_ inside
the first loop also with a restrict.
Then, I would run valgrind or a PAPI profil on your code to see what causes
the issue (false sharing, ...
Hi,
This pops up regularly here, you can search with Google and find this page:
http://matt.eifelle.com/2008/11/03/i-used-the-latest-mkl-with-numpy-and/
Matthieu
2011/2/13 Andrzej Giniewicz
> Hello,
>
> I'd like to ask if anyone got around the undefined symbol i_free
> issue? What I did was th
I think the main issue is that ACML didn't have an official CBLAS interface,
so you have to check if they provide one now. If they do, it should be
fairly easy to link against it.
Matthieu
2011/1/23 David Cournapeau
> 2011/1/23 Dmitrey :
> > Hi all,
> > I have AMD processor and I would like to g
2010/12/30 K.-Michael Aye :
> On 2010-12-30 16:43:12 +0200, josef.p...@gmail.com said:
>
>>
>> Since linspace exists, I don't see much point in adding the stop point
>> in arange. I use arange mainly for integers as numpy equivalent of
>> python's range. And I often need arange(n+1) which is less w
2010/11/24 Gael Varoquaux :
> On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
>> > Jumping in a little late, but it seems that simulated annealing might
>> > be a decent method here: take random steps (drawing from a
>> > distribution of integer
2010/11/23 Zachary Pincus :
>
> On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:
>
>> On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
>>> At first glance it looks as if a relaxation is simply not possible:
>>> either there are additional rows or not.
>>> But with some technical
> The problem is that I can't tell the Nelder-Mead that the smallest jump
> it should attempt is .5. I can set xtol to .5, but it still attemps jumps
> of .001 in its initial jumps.
This is strange. It should not if the initial points are set
adequately. You may want to check if the initial conditi
2010/11/22 Gael Varoquaux :
> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>> It seems that a simplex is what you need. It uses the barycenter (more
>> or less) to find a new point in the simplex. And it works well only in
>> convex functions (but in fact
2010/11/22 Gael Varoquaux :
> On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
>> It seems that a simplex is what you need.
>
> Ha! I am learning new fancy words. Now I can start looking clever.
>
>> > I realize that maybe I should rephrase my qu
2010/11/22 Gael Varoquaux :
> On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
>> Hi ;)
>
> Hi bro
>
>> > does anybody have, or knows where I can find some N dimensional
>> > dichotomy optimization code in Python (BSD licensed, or equivalent)
2010/11/22 Gael Varoquaux :
> Hi list,
Hi ;)
> does anybody have, or knows where I can find some N dimensional dichotomy
> optimization code in Python (BSD licensed, or equivalent)?
I don't know of any code, but it shouldn't be too difficult going
through a KdTree.
> Worst case, it does not look
>> It would be great if someone could let me know why this happens.
>
> They don't use the same implementation, so such tiny differences are
> expected - having exactly the same solution would have been surprising,
> actually. You may be surprised about the difference for such a trivial
> operation
Denormal numbers are a tricky beast. You may have to change the clip
or the shift depending on the processor you have.
It's no wonder that processors and thus compilers have options to
round denormals to zero.
Matthieu
2010/9/11 Hagen Fürstenau :
>> Anyway, seems it is indeed a denormal issue, as
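A sketch of the flush-to-zero workaround in pure NumPy (values invented for the example; the hardware FTZ/DAZ flags themselves are set via compiler options, not from Python):

```python
import numpy as np

tiny = np.finfo(np.float64).tiny      # smallest NORMAL double, ~2.2e-308
x = np.array([1.0, tiny / 8, 0.0])    # the middle value is denormal

# Arithmetic on denormals can be much slower on many CPUs; a portable
# software fix is to flush them to zero explicitly:
flushed = np.where(np.abs(x) < tiny, 0.0, x)
```

Anything below the normal range is clipped to exactly zero, which trades a tiny amount of precision for predictable speed.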
Thanks Joseph, I'll wrap this inside my code ;)
Matthieu
2010/9/2 :
> On Thu, Sep 2, 2010 at 3:56 AM, Matthieu Brucher
> wrote:
>> Hi,
>>
>> I'm looking for a Numpy equivalent of convmtx
>> (http://www.mathworks.in/access/helpdesk/help/toolbox/signal/con
Hi,
I'm looking for a Numpy equivalent of convmtx
(http://www.mathworks.in/access/helpdesk/help/toolbox/signal/convmtx.html).
Is there something inside Numpy directly? or perhaps Scipy?
Matthieu
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com
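For reference, a minimal NumPy-only sketch of MATLAB's convmtx (the helper name and values are mine, not from the thread): build the banded Toeplitz matrix whose product with a vector equals the convolution.

```python
import numpy as np

def convmtx(h, n):
    """Toeplitz matrix A such that A @ x == np.convolve(h, x) for len(x) == n."""
    h = np.asarray(h, dtype=float)
    A = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        # each column is a copy of h shifted down by one row
        A[j:j + len(h), j] = h
    return A

h = [1.0, -1.0]                       # first-difference filter
x = np.array([1.0, 2.0, 4.0, 8.0])
same = np.allclose(convmtx(h, len(x)) @ x, np.convolve(h, x))
```

This is the 'full' convolution mode; other modes are just row slices of the same matrix.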
I don't think there is longdouble on Windows, is there?
Matthieu
2010/8/18 :
> On Wed, Aug 18, 2010 at 10:36 AM, Charles R Harris
> wrote:
>>
>>
>> On Wed, Aug 18, 2010 at 8:25 AM, Colin Macdonald
>> wrote:
>>>
>>> On 08/18/10 15:14, Charles R Harris wrote:
>>> > However, the various constants
> I've been having a similar problem compiling NumPy with MKL on a cluster with
> a site-wide license. Dag's site.cfg fails to config if I use 'iomp5' in it,
> since (at least with this version, 11.1) libiomp5 is located in
>
> /scinet/gpc/intel/Compiler/11.1/072/lib/intel64/
>
> whereas t
2010/8/4 Søren Gammelmark :
>
>>
>> I wouldn't know for sure, but could this be related to changes to the
>> gcc compiler in Fedora 13 (with respect to implicit DSO linking) or
>> would that only be an issue at build-time?
>>
>> http://fedoraproject.org/w/index.php?title=UnderstandingDSOLinkChange
de whereas
>> all (major) Linux distributions default to 4ByteUnicode.
>>
>> ( check >>> sys.maxunicode to see what you have; I get 1114111, i.e
>> >65535 , so I have 4 byte (on Debian) )
>>
>> So, most likely you have some "hand co
It's a problem of compiling Python and numpy with different
parameters. But I tried the same yesterday, and the Ubuntu
repositories are OK in that respect, so there is something not quite
right with your configuration.
Matthieu
2010/7/27 Robert Faryabi :
> I can see the numpy now, but I hav
Python 2.6.5 from Ubuntu?
I tried the same yesterday evening, and it worked like a charm.
Matthieu
2010/7/27 Robert Faryabi :
> I am using 2.5.6
>
> Python 2.6.5 (r265:79063, Jun 28 2010, 20:31:28)
> [GCC 4.4.3] on linux2
>
>
>
> On Tue, Jul 27, 2010 at 9:51 AM, M
Which version of Python are you actually using in this example?
Matthieu
2010/7/27 Robert Faryabi :
> I am new to numpy. Hopefully this is a correct forum to post my question.
>
> I have an Ubuntu Lucid system. I installed Python 2.6.5 and Python 3.0 as well
> as python-numpy using Ubuntu repository.
> Dave, I got:
> c:\SVNRepository\numpy>C:\Python31>python setup.py bdist_wininst
> 'C:\Python31' is not recognized as an internal or external command,
> operable program or batch file.
>
> Or didn't I do exactly what you suggested?
python setup.py bdist_wininst
>> Assuming you have a C compiler
BTW, there still is an error with ifort, so scipy is still
incompatible with the Intel compilers (which is at least very sad...)
Matthieu
2010/4/19 Matthieu Brucher :
> Hi,
>
> I'm trying to compile scipy with ICC (numpy got through correctly),
> but I have issue with in
Hi,
I'm trying to compile scipy with ICC (numpy got through correctly),
but I have issue with infinites in cephes:
icc: scipy/special/cephes/const.c
scipy/special/cephes/const.c(94): error: floating-point operation
result is out of range
double INFINITY = 1.0/0.0; /* 99e999; */
Hi,
A.shape[1]
2010/3/17 gerardo.berbeglia :
>
> I would like to know a simple way to know the size of a given dimension of a
> numpy array.
>
> Example
> A = numpy.zeros((10,20,30),float)
> The size of the second dimension of the array A is 20.
>
> Thanks.
>
>
>
>
> --
> View this message in con
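Spelled out, using the array from the question:

```python
import numpy as np

A = np.zeros((10, 20, 30), float)
dim1 = A.shape[1]     # length of the second dimension: 20
```

`A.shape` is a plain tuple, so any dimension's size is just an index into it.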
2010/2/18 Christopher Barker :
> Dag Sverre Seljebotn wrote:
>> If it is not compiled with -fPIC, you can't statically link it into any
>> shared library, it has to be statically linked into the final executable
>> (so the standard /usr/bin/python will never work).
>
> Shows you what I (don't) know
ation...
>
> --George.
>
> On 18 February 2010 10:56, Matthieu Brucher
> wrote:
>> If header files are provided, the work done by f2py is almost done.
>> But you don't know the real Fortran interface, so you still have to
>> use ctypes over f2py.
>>
&g
If header files are provided, the work done by f2py is almost done.
But you don't know the real Fortran interface, so you still have to
use ctypes over f2py.
Matthieu
2010/2/18 George Nurser :
> Hi Nils,
> I've not tried it, but you might be able to interface with f2py your
> own fortran subrouti
> Ok I have extracted the *.o files from the static library.
>
> Applying the file command to the object files yields
>
> ELF 64-bit LSB relocatable, AMD x86-64, version 1 (SYSV),
> not stripped
>
> What's that supposed to mean ?
It means that each object file is a 64-bit relocatable ELF object.
> You may have to convert the .a library to a .so library.
And this is where I hope that the library is compiled with -fPIC (which
is generally not the case for static libraries). If it is not the
case, you will not be able to compile it as a shared library and thus
not be able to use it from Pytho
>> [1] BTW, I could not figure out how to link statically if I wanted -- is
>> "search_static_first = 1" supposed to work? Perhaps MKL will insist on
>> loading some parts dynamically even then *shrug*.
>
> search_static_first is inherently fragile - using the linker to do this
> is much better (wi
2010/1/21 Dag Sverre Seljebotn :
> Matthieu Brucher wrote:
>>> try:
>>> import sys
>>> import ctypes
>>> _old_rtld = sys.getdlopenflags()
>>> sys.setdlopenflags(_old_rtld|ctypes.RTLD_GLOBAL)
>>> from numpy.linalg import lap
> try:
> import sys
> import ctypes
> _old_rtld = sys.getdlopenflags()
> sys.setdlopenflags(_old_rtld|ctypes.RTLD_GLOBAL)
> from numpy.linalg import lapack_lite
> finally:
> sys.setdlopenflags(_old_rtld)
> del sys; del ctypes; del _old_rtld
This also applies to scipy code that
Hi,
SCons can also do configuration and installation steps. David made it
possible to use SCons capabilities from distutils, but you can still
make a C/Fortran/Cython/Python project with SCons.
Matthieu
2010/1/16 Kurt Smith :
> My questions here concern those familiar with configure/build/instal
>> OK, I should have said "Object-oriented SIMD API that is implemented
>> using hardware SIMD instructions".
>
> No, I think you're right. Using "SIMD" to refer to numpy-like
> operations is an abuse of the term not supported by any outside
> community that I am aware of. Everyone else uses "SIMD"
> Is it general, or just for simple operations in numpy and ufunc ? I
> remember that for music softwares, SIMD used to matter a lot, even for
> simple bus mixing (which is basically a ax+by with a, b scalars and x
> y the input arrays).
Indeed, it shouldn't :| I think the main reason might not be
Hi,
You need to use the static libraries, are you sure you currently do?
Matthieu
2009/10/15 Kashyap Ashwin :
> I followed the advice given by the Intel MKL link adviser
> (http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/)
>
> This is my new site.cfg:
> mkl_libs = mkl_intel_
> Sure. Specially because NumPy is all about embarrasingly parallel problems
> (after all, this is how an ufunc works, doing operations
> element-by-element).
>
> The point is: are GPUs prepared to compete with a general-purpose CPUs in
> all-road operations, like evaluating transcendental function
Use NumPy instead of Numeric (which is no longer supported, I think)?
Matthieu
2009/9/1 Stefano Covino :
> Hello everybody,
>
> I have just upgraded my Mac laptop to snow leopard.
> However, I can no more compile Numeric 24.2.
>
> Here is my output:
>
> [MacBook-Pro-di-Stefano:~/Pacchetti/Numeric-24.2] cov
>> I personally think that, in general, exposing GPU capabilities directly
>> to NumPy would provide little service for most NumPy users. I rather
>> see letting this task to specialized libraries (like PyCUDA, or special
>> versions of ATLAS, for example) that can be used from NumPy.
>
> speciali
2009/8/6 Erik Tollerud :
> Note that this is from a "user" perspective, as I have no particular plan of
> developing the details of this implementation, but I've thought for a long
> time that GPU support could be great for numpy (I would also vote for OpenCL
> support over cuda, although conceptua
2009/7/30 Raymond de Vries :
> Hi
>
>> Indeed, the solution is as simple as this ;) The trouble is to find
>> the information!
>>
> Yes, there is so much information everywhere. And it's hard to make the
> first steps.
> For the std::vector<>[], I suggest you convert it to a single vector,
called, but not how...
>>
>> I will take a look at the conversion to a single vector. I hope this can
>> be avoided because I cannot simply change the library.
>>
>> regards
>> Raymond
>>
>>
>> Matthieu Brucher wrote:
>>
>>> Hi,
>
Hi,
In fact, it's not that complicated. You just need to know how to copy a
vector, and that is all you need
(http://matt.eifelle.com/2008/01/04/transforming-a-c-vector-into-a-numpy-array/).
You will have to copy your data, it is the safest way to ensure that
the data is always valid.
For the std::
2009/7/9 David Cournapeau :
> Matthieu Brucher wrote:
>>
>> Unfortunately, this is not possible. We've been playing with blocking
>> loops for a long time in finite difference schemes, and it is always
>> compiler dependent
>
> You mean CPU dependent
2009/7/9 Citi, Luca :
> Hello
>
> The problem is not PyArray_Conjugate itself.
> The problem is that whenever you call a function from the C side
> and one of the inputs has ref_count 1, it can be overwritten.
> This is not a problem from the python side because if the
> ufunc sees a ref_count=1 it
2009/7/9 Pauli Virtanen :
> On 2009-07-08, Stéfan van der Walt wrote:
>> I know very little about cache optimality, so excuse the triviality of
>> this question: Is it possible to design this loop optimally (taking
>> into account certain build-time measurable parameters), or is it the
>> kind of
Hi,
Is it really?
You only show the imaginary part of the FFT, so you can't be sure of
what you are saying.
Don't forget that the only difference between FFT and iFFT is (besides
the scaling factor) a minus sign in the exponent.
Matthieu
2009/6/9 bela :
>
> I tried to calculate the second fo
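A quick numerical check of that sign-and-scaling relationship (input values are random, chosen only for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
X = np.fft.fft(x)

# The inverse transform is the forward one with the exponent sign
# flipped, plus a 1/N factor; via conjugation:
x_back = np.conj(np.fft.fft(np.conj(X))) / len(x)
```

`x_back` matches both the original `x` and `np.fft.ifft(X)` to machine precision, which is exactly the minus-sign-plus-scaling statement above.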
2009/6/9 Robin :
> On Mon, Jun 8, 2009 at 7:14 PM, David Warde-Farley wrote:
>>
>> On 8-Jun-09, at 8:33 AM, Jason Rennie wrote:
>>
>> Note that EM can be very slow to converge:
>>
>> That's absolutely true, but EM for PCA can be a life saver in cases where
>> diagonalizing (or even computing) the f
2009/6/8 David Cournapeau :
> Matthieu Brucher wrote:
>> David,
>>
>> I've checked out the trunk, and the segmentation fault isn't there
>> anymore (the trunk is labeled 0.8.0 though)
>>
>
> Yes, the upcoming 0.7.1 release has its code in the 0.7.x
/data/pau112/INNO/local/x86_64/lib/python2.5/site-packages/nose/case.py",
line 182, in runTest
self.test(*self.arg)
File
"/data/pau112/INNO/local/x86_64/lib/python2.5/site-packages/scipy/special/tests/test_basic.py",
line 2295, in test_some_values
assert isnan(struve(
OK, I'm stuck with #946 with the MKL as well (finally managed to
compile and use it with only the static library safe for libguide).
I'm trying to download the trunk at the moment to check if the
segmentation fault is still there.
Matthieu
2009/6/8 Matthieu Brucher :
> Good luc