On 23/12/15 06:08, Ralf Gommers wrote:
>
> On Tue, Dec 22, 2015 at 9:58 AM, Henry Gomersall <h...@cantab.net> wrote:
>
> On 23/10/15 02:14, Robert McGibbon wrote:
> > The original goal was to get MS to pay for this, on the theory that
> > the
On 23/10/15 02:14, Robert McGibbon wrote:
> The original goal was to get MS to pay for this, on the theory that
> they should be cleaning up their own messes, but after 6 months of
> back-and-forth we've pretty much given up on that at this point, and
> I'm in the process of emailing everyone I can
On 30/10/14 03:58, Sturla Molden wrote:
> MKL has an API compatible with FFTW, so FFTW and MKL can be supported
> with the same C code.
Compatible with big caveats:
https://software.intel.com/en-us/node/522278
Henry
On 29/10/14 18:23, Alexander Eberspächer wrote:
> Definitely. My attempt at streamlining the use of pyfftw even further
> can be found here:
>
> https://github.com/aeberspaecher/transparent_pyfftw
There could be an argument that this sort of capability should be added
to the pyfftw package, as a
On 28/10/14 04:28, Nathaniel Smith wrote:
>
> - not sure if it can handle non-power-of-two problems at all, or at
> all efficiently. (FFTPACK isn't great here either but major
> regressions would be bad.)
>
From my reading too, this seems to be the biggest issue with FFTS.
On 28/10/14 09:41, David Cournapeau wrote:
> The real issue with fftw (besides the license) is the need for plan
> computation, which is expensive (but not needed for each
> transform). Handling this in a way that is user friendly while
> tweakable for advanced users is not easy, and IMO mo
I'm running some test code on travis-ci, which is currently failing, but
passing locally.
I've identified the problem as being that my code tests internally for
the alignment of an array being its "natural alignment", which I
establish by checking "data_pointer%test_array.dtype.alignment" (I do
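For reference, a minimal sketch of that check (taking the pointer from
ctypes.data; this is illustrative rather than my actual test code):

import numpy as np
a = np.empty(16, dtype=np.float64)
# "natural alignment" test as described: data pointer modulo the dtype's
# compiler-reported alignment
is_aligned = (a.ctypes.data % a.dtype.alignment) == 0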
On 20/11/13 19:56, Chris Barker wrote:
> On Wed, Nov 20, 2013 at 3:06 AM, Henry Gomersall <h...@cantab.net> wrote:
>
> Yes, this didn't occur to me as an option, mostly because I'm keen for a
> commercial FFTW license myself and it would gall me so
On 19/11/13 17:52, Nathaniel Smith wrote:
> On Tue, Nov 19, 2013 at 9:17 AM, Henry Gomersall wrote:
>> On 19/11/13 16:08, Stéfan van der Walt wrote:
>>> On Tue, Nov 19, 2013 at 6:03 PM, Henry Gomersall wrote:
>>>> However, FFTW i
On 19/11/13 16:08, Stéfan van der Walt wrote:
> On Tue, Nov 19, 2013 at 6:03 PM, Henry Gomersall wrote:
>> However, FFTW is dual licensed GPL/commercial and so the wrappers are
>> also GPL by necessity.
> I'm not sure if that is true, strictly speaking--you may lice
On 19/11/13 16:00, Charles Waldman wrote:
> How about FFTW? I think there are wrappers out there for that ...
Yes there are! (complete with the numpy.fft API)
https://github.com/hgomersall/pyFFTW
However, FFTW is dual licensed GPL/commercial and so the wrappers are
also GPL by necessity.
Cheer
On 29/10/13 18:01, Sebastian Berg wrote:
> On Tue, 2013-10-29 at 16:47 +0000, Henry Gomersall wrote:
>> Is there a way to extract the size of array that would be created by
>> doing 1j*array?
>>
> There is np.result_type. It does the handling of scalars as norm
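For the record, a minimal sketch of the np.result_type approach (the names
are just for illustration):

import numpy as np
a = np.arange(5, dtype=np.float32)
# dtype that 1j*a would produce, without actually forming the product
dt = np.result_type(a, 1j)
b = np.empty(a.shape, dtype=dt)   # complex64 in this case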
On 29/10/13 17:02, Robert Kern wrote:
>
> Quick and dirty:
>
> # Get a tiny array from `a` to test the dtype of its output when
> multiplied
> # by a complex float. It must be an array rather than a scalar since the
> # casting rules are different for array*scalar and scalar*scalar.
> dt = (a.flat
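The quoted code is cut off above; a sketch of the same tiny-array idea (the
exact slice used in the original isn't shown, so treat this as an
approximation):

import numpy as np
a = np.arange(5, dtype=np.float32)
# Multiply a tiny *array* (not a scalar) taken from `a` by 1j and read off
# the dtype the full a*1j product would have.
dt = (a.flat[:2] * 1j).dtype
b = np.empty(a.shape, dtype=dt)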
On 29/10/13 16:47, Henry Gomersall wrote:
> Is there a way to extract the size of array that would be created by
> doing 1j*array?
Of course, I mean the dtype of the array.
Henry
Is there a way to extract the size of array that would be created by
doing 1j*array?
The problem I'm having is in creating an empty array to fill with
complex values without knowing a priori what the input data type is.
For example, I have a real or int array `a`.
I want to create an array `b`
On 08/10/13 09:49, Matthew Brett wrote:
> On Tue, Oct 8, 2013 at 1:06 AM, Ke Sun wrote:
>> Dear all,
>>
>> I have written the following function to compute the square distances of a
>> large matrix (each sample a row). It computes row by row and prints the
>> overall progress.
>> The pr
On 08/10/13 09:06, Ke Sun wrote:
> I give as input a 70,000x800 matrix. The output should be a 70,000x70,000
> matrix. The program runs really slow (16 hours for 1/3 progress). And it eats
> 36G memory (fortunately I have enough).
At this stage I'd be asking myself what I'm trying to achieve and w
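For what it's worth, a sketch of one vectorised route (not the original
poster's code): the squared distances fall out of a single matrix product,
though at 70,000 rows the full float64 result is ~39 GB, so it would still
need to be filled in row blocks.

import numpy as np

def square_dists(X):
    # ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2*xi.xj
    sq = (X * X).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)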
On 18/09/13 01:51, Antony Lee wrote:
> While I realize that this is certainly tweaking multiprocessing beyond
> its specifications, I would like to use it on Windows to start a
> 32-bit Python process from a 64-bit Python process (use case: I need
> to interface with a 64-bit DLL and use an exte
On 2013-09-19 23:12, Christoph Gohlke wrote:
> On 9/19/2013 1:06 AM, Henry Gomersall wrote:
>> On 19/09/13 09:05, Henry Gomersall wrote:
>>> I've had feedback that this is possible. Give me a few hours and I'll
>>> see what I can do...
>>
>> I mea
On 2013-09-19 17:58, Antony Lee wrote:
> Henry: thanks a lot, that would be very appreciated regardless of
> whether I end up using it in this specific project or not.
> Other replies below.
>
I've actually spent rather too long fiddling with Windows on this one! I
can't for the life of me get c
On 19/09/13 09:05, Henry Gomersall wrote:
> I've had feedback that this is possible. Give me a few hours and I'll
> see what I can do...
I mean that it builds under win 64-bit. I'll probably push a .exe.
Cheers,
Henry
On 18/09/13 01:51, Antony Lee wrote:
> I need to interface with a 64-bit DLL and use an extension (pyFFTW)
> for which I can only find a 32-bit compiled version (yes, I could try
> to install MSVC and compile it myself but I'm trying to avoid that...))
I'm now in a position that I might be able
On Sun, 2013-06-16 at 14:48 +0800, Sudheer Joseph wrote:
> Is it possible to sample a 4D numpy array at given dimensions without
> writing loops? i.e. a smart python way?
It's not clear how what you want to do is different from simply indexing
the array...?
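For example, regular subsampling needs nothing more than slicing (shapes and
strides here are made up):

import numpy as np
a = np.random.rand(3, 4, 5, 6)
# fix dimension 1, take every other element of dimension 2, first 3 of dimension 3
sub = a[:, 1, ::2, :3]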
Henry
On Mon, 2013-06-10 at 13:21 +0100, Robert Kern wrote:
> > With my work on https://github.com/hgomersall/pyFFTW, which supports
> > long double as one of the data types, numpy's long double is absolutely
> > the right way to do this. Certainly I've managed reasonable success
> > across the three m
On Sun, 2013-06-09 at 12:23 +0100, David Cournapeau wrote:
> On Sun, Jun 9, 2013 at 8:35 AM, Henry Gomersall
> wrote:
> > On Sat, 2013-06-08 at 14:35 +0200, Anne Archibald wrote:
> >> Looking at the rational module, I think you're right: it really
> >> shouldn
On Sat, 2013-06-08 at 14:35 +0200, Anne Archibald wrote:
> Looking at the rational module, I think you're right: it really
> shouldn't be too hard to get quads working as a user type using gcc's
> __float128 type, which will provide hardware arithmetic in the
> unlikely case that the user has hardw
On Fri, 2013-05-10 at 17:14 +0800, Sudheer Joseph wrote:
> Thank you,
> But I was looking for a format statement like
> write(*,"(A,5F8.3)")
> with best regards,
> Sudheer
How about the following:
print('IL = ' + (('%d,' * 5)[:-1] + '\n ') * 5 % tuple(IL))
If instead of a list
On Thu, 2013-05-09 at 16:06 +0800, Sudheer Joseph wrote:
> If I wanted to print the below text to a file (for reading by another
> program), it looks to be not an easy job. Hope new developments will
> come and a user-friendly formatted output method for Python will
> evolve.
I don't understand what
On Wed, 2013-05-08 at 10:13 +0800, Sudheer Joseph wrote:
> However I get below error. Please tell me if any thing I am missing.
>
>
> file "read_reg_grd.py", line 22, in
> np.savetxt("file.txt", IL.reshape(-1,5), fmt='%5d', delimiter=',')
> AttributeError: 'list' object has no attribute 'res
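The traceback is because IL is a plain Python list, which has no reshape; a
sketch of the fix (IL here is just a stand-in):

import numpy as np
IL = list(range(25))   # stand-in for the poster's list of integers
np.savetxt("file.txt", np.asarray(IL).reshape(-1, 5), fmt='%5d', delimiter=',')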
On Thu, 2013-03-07 at 13:36 -0600, Mayank Daga wrote:
> Can someone point me to the definition of dot() in the numpy source?
> The only instance of 'def dot()' I found was in numpy/ma/extras.py but
> that does not seem to be the correct one.
It seems to be in a dynamic library.
In [9]: numpy.dot.
On Fri, 2013-03-01 at 17:29 +0100, Sebastian Berg wrote:
> At this time it seems there is more sentiment against it and that is
> fine with me. I thought it might be useful for some who normally want
> the linspace behavior, but do not want to worry about the right num in
> some cases. Someone who
On Fri, 2013-03-01 at 10:01 -0500, Alan G Isaac wrote:
> On 3/1/2013 9:32 AM, Henry Gomersall wrote:
> > there should be an equivalent for floats that
> > unambiguously returns a range for the half open interval
>
>
> If I've understood you:
> start + stepsize*np.
On Fri, 2013-03-01 at 14:32 +, Henry Gomersall wrote:
> I'll assert then that there should be an equivalent for floats that
> unambiguously returns a range for the half open interval. IMO this is
> more useful than a hacky version of linspace.
And, no, I haven't thought car
On Fri, 2013-03-01 at 09:24 -0500, Warren Weckesser wrote:
> > In my jet-lag addled state, I can't see when this out[-1] > stop case
> > will occur, but I can take it as true. It does seem to be problematic
> > though.
>
>
> Here you go:
>
> In [32]: end = 2.2
>
> In [33]: x = arange(0.1, e
On Fri, 2013-03-01 at 13:34 +, Nathaniel Smith wrote:
> > My usual hack to deal with the numerical bounds issue is to
> > add/subtract half the step.
>
> Right. Which is exactly the sort of annoying, content-free code that a
> library is supposed to handle for you, so you can save mental ene
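For concreteness, a sketch of that half-step hack (the values are arbitrary):

import numpy as np
start, stop, step = 0.1, 2.2, 0.3
# Nudge the stop by half a step so floating-point rounding can't flip
# whether the final grid point gets included.
exclusive = np.arange(start, stop - 0.5 * step, step)   # endpoint robustly excluded
inclusive = np.arange(start, stop + 0.5 * step, step)   # endpoint robustly included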
On Fri, 2013-03-01 at 13:25 +0100, Sebastian Berg wrote:
> there has been a request on the issue tracker for a step parameter to
> linspace. This is of course tricky with the imprecision of floating
> point numbers.
How is that different to arange? Either you specify the number of points
with lins
On Sun, 2013-02-17 at 12:38 -0500, Neal Becker wrote:
> The 1st example says:
> >>> import pyfftw
> >>> import numpy
> >>> a = pyfftw.n_byte_align_empty(128, 16, 'complex128')
> >>> a[:] = numpy.random.randn(128) + 1j*numpy.random.randn(128)
> >>> b = pyfftw.interfaces.numpy_fft.fft(a)
>
> I don't
Some of you may be interested in the latest release of my FFTW bindings.
It can now serve as a drop-in replacement* for numpy.fft and
scipy.fftpack.
This means you can get most of the speed-up of FFTW with a one-line code
change, or by monkey patching existing libraries.
Lots of other goodness too of co
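A sketch of the sort of one-line change meant here, using the
pyfftw.interfaces module (illustrative, not taken from the release notes):

import numpy as np
import pyfftw.interfaces.cache
from pyfftw.interfaces import numpy_fft as fft

pyfftw.interfaces.cache.enable()   # keep FFTW plans alive between calls
a = np.random.randn(1024) + 1j * np.random.randn(1024)
b = fft.fft(a)                     # same call signature as numpy.fft.fft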
On Mon, 2013-01-21 at 08:41 -0500, Neal Becker wrote:
> I have an array to be used for indexing. It is 2d, where the rows are
> all the permutations of some numbers. So:
>
> array([[-2, -2, -2],
>        [-2, -2, -1],
>        [-2, -2,  0],
>        [-2, -2,  1],
>        [-2, -2,  2],
> ...
Further to my previous emails about getting SIMD aligned arrays, I've
noticed that numpy arrays aren't always naturally aligned either.
For example, numpy.float96 arrays are not always aligned on 12-byte
boundaries under 32-bit linux/gcc. Indeed, .alignment on the array
always seems to return 4 (
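A quick way to see this (a sketch; the numbers in the comments are the ones
quoted above for 32-bit linux/gcc):

import numpy as np
a = np.empty(8, dtype=np.longdouble)
print(a.dtype.itemsize, a.dtype.alignment)   # e.g. 12 and 4 on 32-bit linux/gcc
print(a.ctypes.data % a.dtype.itemsize)      # non-zero means not "naturally" aligned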
On Fri, 2012-12-21 at 11:34 +0100, Francesc Alted wrote:
> > Also this convolution code:
> > https://github.com/hgomersall/SSE-convolution/blob/master/convolve.c
> >
> > Shows a small but repeatable speed-up (a few %) when using some
> > aligned loads (as many as I can work out to use!).
>
> Oka
On Thu, 2012-12-20 at 21:45 +0100, Sturla Molden wrote:
> On 20.12.2012 21:24, Henry Gomersall wrote:
>
> > I didn't know that. It's a real pain having so many libc libs
> > knocking around. I have little experience of Windows, as you may have
> > guessed!
On Thu, 2012-12-20 at 21:13 +0100, Sturla Molden wrote:
> On 20.12.2012 21:03, Henry Gomersall wrote:
>
> > Why is it important? (for my own understanding)
>
> Because if CRT resources are shared between different CRT versions,
> bad things will happen (the ABIs are n
On Thu, 2012-12-20 at 21:05 +0100, Sturla Molden wrote:
> On 20.12.2012 20:57, Sturla Molden wrote:
> > On 20.12.2012 20:52, Henry Gomersall wrote:
> >
> >> Perhaps the DLL should go and read MS's edicts!
> >
> > Do you link with the same CRT as Python? (m
On Thu, 2012-12-20 at 20:57 +0100, Sturla Molden wrote:
> On 20.12.2012 20:52, Henry Gomersall wrote:
>
> > Perhaps the DLL should go and read MS's edicts!
>
> Do you link with the same CRT as Python? (msvcr90.dll)
>
> You should always use -lmsvcr90.
>
>
On Thu, 2012-12-20 at 20:50 +0100, Sturla Molden wrote:
> On 20.12.2012 18:38, Henry Gomersall wrote:
>
> > Except I build with MinGW. Please don't tell me I need to install
> > Visual Studio... I have about 1GB free on my windows partition!
>
> The same DLL
On Thu, 2012-12-20 at 15:23 +0100, Francesc Alted wrote:
> On 12/20/12 9:53 AM, Henry Gomersall wrote:
> > On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
> >> The only scenario that I see that this would create unaligned arrays
> >> is
> >> for
On Thu, 2012-12-20 at 17:48 +0100, Sturla Molden wrote:
> On 19.12.2012 19:25, Henry Gomersall wrote:
>
> > That is not true at least under Windows 32-bit. I think also it's not
> > true for Linux 32-bit from my vague recollections of testing in a
> > virtual
On Thu, 2012-12-20 at 17:26 +0100, Sturla Molden wrote:
> return tmp[offset:offset+N]\
> .view(dtype=d)\
> .reshape(shape, order=order)
Also, just for the email record, that should be
return tmp[offset:offset+N*d.itemsize]\
.
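Putting the corrected slicing together, a minimal sketch of the whole
over-allocate-and-offset trick (the function name and default alignment are
illustrative, not from the thread):

import numpy as np

def aligned_empty(shape, dtype, alignment=16, order='C'):
    d = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * d.itemsize
    # over-allocate a byte buffer, then offset into it so the returned view
    # starts on an alignment-byte boundary
    tmp = np.empty(nbytes + alignment, dtype=np.uint8)
    offset = (-tmp.ctypes.data) % alignment
    return tmp[offset:offset + nbytes].view(dtype=d).reshape(shape, order=order)

a = aligned_empty((4, 4), np.complex128, alignment=32)
assert a.ctypes.data % 32 == 0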
On Thu, 2012-12-20 at 17:26 +0100, Sturla Molden wrote:
> On 19.12.2012 09:40, Henry Gomersall wrote:
> > I've written a few simple cython routines for assisting in creating
> > byte-aligned numpy arrays. The point being for the arrays to work
> with
> > SSE/AVX cod
On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
> The only scenario that I see that this would create unaligned arrays is
> for machines having AVX. But provided that the Intel architecture is
> making great strides in fetching unaligned data, I'd be surprised that
> the difference
On Thu, 2012-12-20 at 08:12 +, Henry Gomersall wrote:
> On Wed, 2012-12-19 at 15:10 +, Nathaniel Smith wrote:
>
> > >> > Is this something that can be rolled into Numpy (the feature, not my
> > >> > particular implementation or in
On Wed, 2012-12-19 at 15:10 +, Nathaniel Smith wrote:
> >> > Is this something that can be rolled into Numpy (the feature, not my
> >> > particular implementation or interface - though I'd be happy for it to
> >> > be so)?
> >> >
> >> > Regarding (b), I've written a test case that works fo
On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
> > Finally, I think there is significant value in auto-aligning the array
> > based on an appropriate inspection of the cpu capabilities (or
> > alternatively, a function that reports back the appropriate SIMD
> > alignment). Again, this
On Wed, 2012-12-19 at 15:57 +, Nathaniel Smith wrote:
> Not sure which interface is more useful to users. On the one hand,
> using funny dtypes makes regular non-SIMD access more cumbersome, and
> it forces your array size to be a multiple of the SIMD word size,
> which might be inconvenient if
I've written a few simple cython routines for assisting in creating
byte-aligned numpy arrays. The point being for the arrays to work with
SSE/AVX code.
https://github.com/hgomersall/pyFFTW/blob/master/pyfftw/utils.pxi
The change recently has been to add a check on the CPU as to what flags
are su
On Wed, 2012-11-21 at 10:49 +, David Cournapeau wrote:
> That's already what we do (on windows anyway). The binary installer
> contains multiple arch binaries, and we pick the best one.
Interesting. Does it (or can it) extend to different algorithmic
implementations?
Henry
On Wed, 2012-11-21 at 00:44 +, David Cournapeau wrote:
> On Tue, Nov 20, 2012 at 8:52 PM, Henry Gomersall
> wrote:
> > On Tue, 2012-11-20 at 20:35 +0100, Dag Sverre Seljebotn wrote:
> >> Is there a specific reason it *has* to happen at compile-time? I'd
> >>
On Tue, 2012-11-20 at 20:35 +0100, Dag Sverre Seljebotn wrote:
> Is there a specific reason it *has* to happen at compile-time? I'd think
> one could do something like just shipping a lot of separate Python
> extensions which are really just the same module linked with different
> versions o
On Mon, 2012-10-29 at 11:49 -0400, Frédéric Bastien wrote:
> That is possible.
>
Great!
> The gpu nd array project I talked about above works on the CPU and the
> GPU in OpenCL and with CUDA. But there is much stuff that is in numpy
> that we didn't port.
This is: https://github.com/inducer/compyte/w
On Mon, 2012-10-29 at 11:11 -0400, Frédéric Bastien wrote:
> > Assuming of course all the relevant backends are up to scratch.
> >
> > Is there a fundamental reason why targetting a CPU through OpenCL is
> > worse than doing it exclusively through C or C++?
>
> First, opencl does not allow us to do
On Tue, 2012-10-23 at 11:41 -0400, Frédéric Bastien wrote:
> Did you see the gpu nd array project? We try to do something similar
> but only for the GPU.
>
Out of interest, is there a reason why the backend for Numpy could not
be written entirely in OpenCL?
Assuming of course all the relevant bac
On Fri, 2012-09-28 at 16:43 -0500, Travis Oliphant wrote:
> I agree that we should be much more cautious about semantic changes in
> the 1.X series of NumPy. How we handle situations where 1.6 changed
> things from 1.5 and wasn't reported until now is an open question and
> depends on the partic
On Thu, 2012-07-26 at 22:15 -0600, Charles R Harris wrote:
> I would support accumulating in 64 bits but, IIRC, the function will
> need to be rewritten so that it works by adding 32 bit floats to the
> accumulator to save space. There are also more stable methods that
> could also be investigated.
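For reference, the accumulator dtype can already be requested per call; a
small sketch (array values here are arbitrary):

import numpy as np
a = np.full(10_000_000, 0.1, dtype=np.float32)
print(a.sum())                    # accumulates at the array's own precision
print(a.sum(dtype=np.float64))    # 64-bit accumulator, as discussed above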
On Wed, 2012-07-18 at 15:14 +0200, Molinaro Céline wrote:
> In [2]: numpy.real(arange(3))
> Out[2]: array([0, 1, 2])
> In [3]: numpy.complex(arange(3))
> TypeError: only length-1 arrays can be converted to Python scalars
>
>
> Are there any reasons why numpy.complex doesn't work on arrays?
> Shou
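For the record, a sketch of the array-friendly spelling (numpy.complex was
just an alias for Python's builtin complex, hence the length-1 error above):

import numpy as np
a = np.arange(3)
# cast to a complex dtype rather than calling complex() on the array
b = a.astype(np.complex128)       # array([0.+0.j, 1.+0.j, 2.+0.j])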
On Mon, 2012-07-16 at 20:35 +0300, Dmitrey wrote:
> I have written a routine to solve dense / sparse problems
> min {alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||^2 + beta1 * ||x||_1
> + beta2 * ||x||^2}
> with specifiable accuracy fTol > 0: abs(f-f*) <= fTol (this parameter
> is handled by solvers
On Thu, 2012-07-12 at 16:21 +0100, Nathaniel Smith wrote:
> On Thu, Jul 12, 2012 at 4:13 PM, Henry Gomersall
> wrote:
> > On Thu, 2012-07-12 at 10:53 -0400, Neal Becker wrote:
> >> I've been bitten several times by this.
> >>
> >> logical_or (a, b, c)
On Thu, 2012-07-12 at 10:53 -0400, Neal Becker wrote:
> I've been bitten several times by this.
>
> logical_or (a, b, c)
>
> is silently accepted when I really meant
>
> logical_or (logical_or (a, b), c)
>
> because the logic functions are binary, where I expected them to be
> m-ary.
I don't
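For the record, the ufunc machinery already offers an m-ary spelling; a
small sketch:

import numpy as np
a = np.array([True, False, False])
b = np.array([False, True, False])
c = np.array([False, False, False])
out = np.logical_or.reduce([a, b, c])   # array([ True,  True, False])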
Does anyone have any experience building a 32-bit version of numpy on a
64-bit linux machine? I'm trying to build a python stack that I can use
to handle a (closed source) 32-bit library.
Much messing around with environment variables and linker flags has got
some of the way, perhaps, but not enou
Forgive me for what seems to me should be an obvious question.
How do people run development code without the need to build an entire
source distribution each time? My current strategy is to develop in a
virtualenv and then copy the changes to my numpy fork when done, but
there are lots of obvious
On Wed, 2012-05-30 at 17:11 +0100, Robert Kern wrote:
> On Wed, May 30, 2012 at 4:13 PM, Henry Gomersall
> wrote:
> > I'd like to include the _cook_nd_args() function from fftpack in my
> GPL
> > code. Is this possible?
>
> Yes. The numpy license is compatib
I'd like to include the _cook_nd_args() function from fftpack in my GPL
code. Is this possible?
How should I modify my license file to satisfy the Numpy license
requirements, but so it's clear which function it applies to?
Thanks,
Henry
On Fri, 2012-05-18 at 14:45 +0200, Dag Sverre Seljebotn wrote:
> I would focus on the 'polymorphic C API' spin. PyObject_GetItem is
> polymorphic, but there is no standard way for 3rd party libraries to
> make such functions.
>
> So let's find a C API that's NOT about arrays at all and show how so
On Fri, 2012-05-18 at 12:48 +0100, mark florisson wrote:
> If we can find even more examples, preferably outside of the
> scientific community, where related projects face a similar situation,
> it may help people understand that this is not "a Numpy problem".
Buffer Objects in OpenGL?
On Wed, 2012-05-02 at 12:58 -0700, Stéfan van der Walt wrote:
> On Wed, May 2, 2012 at 9:03 AM, Henry Gomersall
> wrote:
> > Is this some nuance of the way numpy does things? Or am I missing some
> > stupid bug in my code?
>
> Try playing with the parameters of the f
I need to do some shifting of data within an array and am using the
following code:
for p in numpy.arange(array.shape[0], dtype='int64'):
    for q in numpy.arange(array.shape[1]):
        # A positive shift is towards zero
        shift = shift_values[p, q]
        if shift >= 0:
On 10/04/2012 17:57, Francesc Alted wrote:
>> I'm using numexpr in the end, but this is slower than numpy.abs under linux.
> Oh, you mean the windows version of abs(complex64) in numexpr is slower
> than a pure numpy.abs(complex64) under linux? That's weird, because
> numexpr has an independent im
On 10/04/2012 16:36, Francesc Alted wrote:
> In [10]: timeit c = numpy.complex64(numpy.abs(numpy.complex128(b)))
> 100 loops, best of 3: 12.3 ms per loop
>
> In [11]: timeit c = numpy.abs(b)
> 100 loops, best of 3: 8.45 ms per loop
>
> in your windows box and see if they raise similar results?
>
No
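A self-contained version of the comparison being quoted, written with
.astype so it is unambiguous (the array size here is arbitrary):

import numpy as np
import timeit

b = (np.random.randn(500_000) + 1j * np.random.randn(500_000)).astype(np.complex64)
# abs via an up-cast to complex128 and back, versus abs directly on complex64
t_via_c128 = timeit.timeit(lambda: np.abs(b.astype(np.complex128)).astype(np.complex64), number=100)
t_direct = timeit.timeit(lambda: np.abs(b), number=100)
print(t_via_c128, t_direct)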
Here is the body of a post I made on stackoverflow, but it seems to be a
non-obvious issue. I was hoping someone here might be able to shed light
on it...
On my 32-bit Windows Vista machine I notice a significant (5x) slowdown
when taking the absolute values of a fairly large |numpy.complex64|
On Tue, 2012-02-14 at 15:12 -0800, Chris Barker wrote:
>
> On Tue, Feb 14, 2012 at 9:16 AM, Dag Sverre Seljebotn
> wrote:
> > It was about the need for a dedicated matrix multiplication operator.
>
> has anyone proposed that? I do think we've had a proposal on the table
> for generally more op
On Tue, 2012-02-14 at 14:14 -0600, Travis Oliphant wrote:
> > Is that a prompt for feedback? :)
>
> Absolutely. That's the reason I'm getting more active on this list.
> But, at the same time, we all need to be aware of the tens of
> thousands of users of NumPy who don't use the mailing list an
On Mon, 2012-02-13 at 22:56 -0600, Travis Oliphant wrote:
> But, I am also aware of *a lot* of users who never voice their opinion
> on this list, and a lot of features that they want and need and are
> currently working around the limitations of NumPy to get. These are
> going to be my primary
On Tue, 2012-02-07 at 12:26 -0800, Warren Focke wrote:
> > Is this a bug I should register?
>
> Yes.
>
> It should work right if you replace
> s[axes[-1]] = (s[axes[-1]] - 1) * 2
> with
> s[-1] = (a.shape[axes[-1]] - 1) * 2
> but I'm not really in a position to test it right now.
I ca
On Tue, 2012-02-07 at 11:53 -0800, Warren Focke wrote:
> You're not doing anything wrong.
> irfftn takes complex input and returns real output.
> The exception is a bug which is triggered because max(axes) >=
> len(axes).
Is this a bug I should register?
Cheers,
Henry
On Tue, 2012-02-07 at 09:15 +, Henry Gomersall wrote:
> > >>>> numpy.fft.ifftn(a, axes=axes)
> >
> > Or do you mean if the error message is expected?
>
> Yeah, the question was regarding the error message. Specifically, the
> problem it seems to have
On Tue, 2012-02-07 at 09:15 +, Henry Gomersall wrote:
>
> On Tue, 2012-02-07 at 01:04 +0100, Torgil Svensson wrote:
> > irfftn is an optimization for real input and does not take complex
> > input. You have to use numpy.fft.ifftn instead:
> >
hmmm, that doesn't
On Tue, 2012-02-07 at 01:04 +0100, Torgil Svensson wrote:
> irfftn is an optimization for real input and does not take complex
> input. You have to use numpy.fft.ifftn instead:
>
hmmm, that doesn't sound right to me (though there could be some non
obvious DFT magic that I'm missing). Indeed,
np.i
Is the following behaviour expected:
>>> import numpy
>>> a_shape = (63, 4, 98)
>>> a = numpy.complex128(numpy.random.rand(*a_shape)+\
... 1j*numpy.random.rand(*a_shape))
>>>
>>> axes = [0, 2]
>>>
>>> numpy.fft.irfftn(a, axes=axes)
Traceback (most recent call last):
File "", line 1, in
Does anyone care about this? Is there an alternative channel for such
information, perhaps a bug report?
Cheers,
Henry
On Fri, 2010-10-29 at 13:32 +0100, Henry Gomersall wrote:
> There is an inconsistency in the documentation for NPY_INOUT_ARRAY.
>
> cf.
> http://docs.scipy.org/do
On Fri, 2010-10-29 at 15:33 +0200, Jon Wright wrote:
>
> You need to call import_array() in initspam. See:
> http://docs.scipy.org/doc/numpy-1.5.x/user/c-info.how-to-extend.html
Thanks, that solves it.
It would be really useful to have a complete example somewhere. As in, a
set of files
I'm trying to get a really simple toy example for a numpy extension
working (you may notice it's based on the example in the numpy docs and
the python extension docs). The code is given below.
The problem I am having is that running the module segfaults at any attempt
to access &PyArray_Type (so, as pre
There is an inconsistency in the documentation for NPY_INOUT_ARRAY.
cf.
http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html#NPY_INOUT_ARRAY
http://docs.scipy.org/doc/numpy/reference/c-api.array.html#NPY_INOUT_ARRAY
The first link includes the flag NPY_UPDATEIFCOPY. Checking the code
se