o list, but it doesn't seem to
> be appearing on it as far as I can see.)
Yeah, it looks like librelist is fundamentally broken, many messages
never appear. I need to move to something sane.
cheers,
David
ry to link against?
You can't easily relink at install time, so what you want is to pick up
the best library at runtime. It is more or less impossible to do this
in a portable way (e.g. there is no solution that I know of on windows
< windows 2008, short of requiring users to install some dlls with
On Tue, Nov 20, 2012 at 7:35 PM, Dag Sverre Seljebotn
wrote:
> On 11/20/2012 06:22 PM, David Cournapeau wrote:
>> On Tue, Nov 20, 2012 at 5:03 PM, Sturla Molden wrote:
>>> On 20.11.2012 15:38, David Cournapeau wrote:
>>>
>>>> I support this as well in prin
That's already what we do (on windows anyway). The binary installer
contains multiple arch binaries, and we pick the best one.
On 21 Nov 2012 10:16, "Henry Gomersall" wrote:
> On Wed, 2012-11-21 at 00:44 +, David Cournapeau wrote:
> > On Tue, Nov 20, 2012 at
On Wed, Nov 21, 2012 at 10:56 AM, Henry Gomersall wrote:
> On Wed, 2012-11-21 at 10:49 +0000, David Cournapeau wrote:
>> That's already what we do (on windows anyway). The binary installer
>> contains multiple arch binaries, and we pick the best one.
>
> Interesting. Doe
iprocessing.
David
On Wed, Nov 21, 2012 at 2:31 PM, Sturla Molden wrote:
> On 21.11.2012 15:01, David Cournapeau wrote:
>> On Wed, Nov 21, 2012 at 12:00 PM, Sturla Molden wrote:
>>> But do we need a binary OpenBLAS on Mac? Isn't there an accelerate
>>> framework with
numpy.core._dotblas module from being built.
_dotblas is only built if *C*BLAS is available (only ATLAS, Accelerate
and MKL are supported ATM).
David
On Thu, Dec 6, 2012 at 7:35 PM, Bradley M. Froehle
wrote:
> Right, but if I link to libcblas, cblas would be available, no?
No, because we don't explicitly check for CBLAS. We assume it is there
if Atlas, Accelerate or MKL is found.
cheers,
David
>
>
> On Thu, Dec 6, 2012 a
On Thu, Dec 13, 2012 at 5:34 PM, Charles R Harris
wrote:
> Time to raise this topic again. Opinions welcome.
I am ok if 1.7 is the LTS. I would even go as far as dropping 2.5 as
well then (RHEL 6 uses python 2.6).
cheers,
David
ment compared to
2.5:
- context manager
- python 3-compatible exception syntax (writing code that works with
2 and 3 without any change is significantly easier if your baseline is
2.6 instead of 2.4/2.5; see the sketch after this message)
- json, ast, multiprocessing are available and potentially quite
useful for NumPy itself.
c
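For illustration, a minimal sketch of the 2-and-3 compatible exception
syntax mentioned in the list above (2.6 accepts the "as" form that
Python 3 requires, whereas 2.4/2.5 only accept "except E, exc:"):

try:
    1 / 0
except ZeroDivisionError as exc:   # valid on 2.6+ and on Python 3
    print(exc)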
I think (windows
being the elephant in the room). I would be more than happy to be
proven wrong, though :)
cheers,
David
le, but building
things like pygtk when you don't have X11 headers installed is more or
less impossible.
David
ytes on
> systems (Linux and Mac OSX):
Only by accident, at least on linux. The pointers returned by the gnu
libc malloc are at least 8 byte aligned, but they may not be 16 byte
aligned when you're above the threshold where mmap is used for malloc.
The difference between aligned and unalign
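For illustration, a minimal way to probe the alignment the allocator
actually handed out (results depend on the platform, the C library and
the allocation size, so treat this as a probe rather than a guarantee):

import numpy as np

for n in (10, 10**6):                      # a small and a large allocation
    a = np.empty(n, dtype=np.float64)
    addr = a.ctypes.data                   # address of the data buffer
    print(n, addr % 8 == 0, addr % 16 == 0)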
of the distinction between scalar types and dtypes.)
>
> Q: So basically all the dtypes, including the weird ones like
> 'np.integer' and 'np.number'[1], would use the standard Python
> abstract base class machinery, and we could throw out all the
> issubdtype/issubsctype/issctype nonsense, and just use
> isinstance/issubclass everywhere instead?
> [1] http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html
>
> A: Yeah.
>
> Q: Huh. That does sound nice. I don't know. What other problems can
> you think of with this scheme?
Thanks for the entertaining explanation.
I don't think 0-dim arrays being slow is such a big drawback. I would
be really surprised if there was no way to make them faster, and
having unspecified, nearly duplicated type handling code in multiple
places is likely one reason why nobody has taken the time to really
make them faster.
Regarding ufunc combination caching, couldn't we do the caching on
demand? I am not sure how you arrived at 600 bytes per ufunc, but
in many real world use cases, I would suspect only a few combinations
would be used.
Scalar arrays are one of the most esoteric features of numpy, and a
fairly complex one in terms of implementation. Getting rid of them would
be a net plus on that side. Of course, there is the issue of backward
compatibility, whose extent is hard to assess.
cheers,
David
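For illustration, a small comparison of the current issubdtype machinery
with the plain isinstance/issubclass route discussed above (checked
against a recent numpy; the abstract scalar types already form a class
hierarchy):

import numpy as np

print(np.issubdtype(np.float64, np.number))     # True, via the dtype hierarchy
print(np.issubdtype(np.int32, np.floating))     # False

# The scalar types themselves are ordinary classes, so the standard
# Python machinery already works for them:
print(issubclass(np.float64, np.number))        # True
print(isinstance(np.float64(1.0), np.integer))  # False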
've run it on python 2.6 and 3.2
> and 3.3 on one of my packages as a first pass.
>
>>> If we want 3.x binaries, then we should fix that or (preferably) build
>>> binaries with Bento. Bento has grown support for mpkg's; I'm not sure how
>>> robust that is.
but the few things it does better than bento can be easily improved in
> bento. So if removing numscons support from master saves some developer
> hours, +1 from me.
I think numscons was already scheduled to be dropped in 1.7 (and the next
version of scipy as well)? I am certainly in favor
On Sun, Jan 13, 2013 at 9:50 AM, Charles R Harris
wrote:
>
>
> On Sun, Jan 13, 2013 at 6:44 AM, David Cournapeau
> wrote:
>>
>> On Sun, Jan 13, 2013 at 7:29 AM, Ralf Gommers
>> wrote:
>> >
>> >
>> >
>> > On Sun, Jan 13,
On Sun, Jan 13, 2013 at 11:11 AM, Ralf Gommers wrote:
>
>
>
> On Sun, Jan 13, 2013 at 5:25 PM, David Cournapeau
> wrote:
>>
>> On Sun, Jan 13, 2013 at 9:50 AM, Charles R Harris
>> wrote:
>> >
>> >
>> > On Sun, Jan 13, 2013 at 6:44
e next release!", because
> then you end up slipping and sliding all over the place. Instead you
> say "here are some things that I want to work on next, and we'll see
> which release they end up in". Since we're already following the rule
> that nothi
The problem has no solution until we can restrict support to windows 7
and above. Otherwise, any acceptable solution would require the user to
be an admin.
David
On Mon, Feb 4, 2013 at 8:27 PM, Ondřej Čertík wrote:
> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote:
>> On Sun, Feb 3, 2013 at 12:28 AM, wrote:
>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett
>>> wrote:
>>>> Hi,
>>>>
>>>
On Tue, Feb 5, 2013 at 12:27 AM, Ondřej Čertík wrote:
> On Mon, Feb 4, 2013 at 3:49 PM, Christoph Gohlke wrote:
>> On 2/4/2013 12:59 PM, David Cournapeau wrote:
>>> On Mon, Feb 4, 2013 at 8:27 PM, Ondřej Čertík
>>> wrote:
>>>> On Sun, Feb 3, 2013 at
I think it is, you (Ralf) think it isn't, we
> haven't discussed that. It may not come up.
> b) It may or may not be acceptable for someone other than Ondrej to be
> responsible for the Windows 64-bit builds. I think it should be, if
> necessary, we haven't really discu
as/lapack, not atlas), and the superpack installer on
sourceforge. Incidentally, that's why the super pack installer uses a
different filename, to avoid confusion.
David
On Wed, Feb 13, 2013 at 5:35 AM, Ondřej Čertík wrote:
> David,
>
> On Tue, Feb 12, 2013 at 6:46 AM, David Cournapeau wrote:
>> On Tue, Feb 12, 2013 at 5:49 AM, Ondřej Čertík
>> wrote:
>>> Hi,
>>>
>>> I have uploaded the NumPy 1.7.0 source dist
:20: error: Python.h: No such file or directory
> _configtest.c:1:20: error: Python.h: No such file or directory
> lipo: can't figure out the architecture type of:
> /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjhgn/T//ccIEwAT5.out
> failure.
> removing: _configtest.c _configtes
IK.)
>
> It looks like OpenBLAS is BSD-licensed, and thus compatible with
> numpy/scipy.
>
> Is there a reason (other than someone having to do the work) it could
> not be used as the "standard" BLAS for numpy?
no reason, and it actually works quite nicely. B
he references given in the polyfit code make no mention of it. I think it
would be much better to return the standard definition of the covariance
matrix (see, for example, the book Numerical Recipes).
David Pine
eduction and Error Analysis for the
Physical Sciences" by Bevington, both standard works.
Dave
On Wed, Feb 27, 2013 at 9:03 AM, Charles R Harris wrote:
>
>
> On Wed, Feb 27, 2013 at 6:46 AM, David Pine wrote:
>
>> As of NumPy v1.7, numpy.polyfit includes an option for
it is clear. I
would be happy to contribute both to improving the documentation and
software.
David
If
On Wed, Feb 27, 2013 at 12:47 PM, Pauli Virtanen wrote:
> On 27.02.2013 16:40, David Pine wrote:
> [clip]
> > 2. I am sorry but I don't understand your response. The
Recipes has a nice
discussion both about fixing parameters and about weighting the data in
different ways in polynomial least squares fitting.
David
On Mon, Mar 4, 2013 at 7:23 PM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> A couple of days back, answering a question in Stac
r
> ImportError: No module named scipy_distutils.fcompiler
>
Looks like you have an ancient f2py in there. You may want to use
the one included in numpy instead.
David
stributions.
cheers,
David
elease notes see below.
I see a segfault on 64-bit Ubuntu for the test
TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py.
Am I the only one seeing it?
David
['0', '1', 'o'],
['0', '2', 'o'],
['1', '0', 'o'],
['1', '1', 'x'],
['1', '2', 'x'],
['2
pycache__ directories with Python3.1 binary installer.
__pycache__ is a feature added in python 3.2 to my knowledge:
http://www.python.org/dev/peps/pep-3147
cheers,
David
Hi Nadav,
I'm sorry to disappoint you but PyNIO has read-only support for GRIB. It can be
used to convert GRIB to NetCDF but not the other way around.
-Dave Brown
PyNIO developer
On Apr 8, 2011, at 10:31 AM, dileep kunjaai wrote:
> Thank you sir, thank you very much..
>
rix_y] = Record()
SystemError: error return without exception set
I would ask the universal "what am I doing wrong?", but how about I ask:
in this particular situation, what am I doing wrong?
Can anybody guide me through this problem?
Regards,
David
On Mon, Apr 11, 2011 at 11:00 AM, Sturla Molden wrote:
> Den 11.04.2011 02:01, skrev David Crisp:
>> Can anybody guide me through this problem?
>>
>
> You must use dtype=object for the array vegetation_matrix.
Thank you. This seemed to remove the error I was havi
On Mon, Apr 11, 2011 at 11:00 AM, Sturla Molden wrote:
> Den 11.04.2011 02:01, skrev David Crisp:
>> Can anybody guide me through this problem?
>>
>
> You must use dtype=object for the array vegetation_matrix.
I changed the line which set up the vegetation_matri
On Mon, Apr 11, 2011 at 1:17 PM, David Crisp wrote:
> On Mon, Apr 11, 2011 at 11:00 AM, Sturla Molden wrote:
>> Den 11.04.2011 02:01, skrev David Crisp:
>>> Can anybody guide me through this problem?
I don't know how acceptable it is to answer your own question :P but here goes:
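For illustration, a minimal sketch of the dtype=object approach Sturla
suggested (the Record class and the matrix shape here are made up, not
the actual code from the thread):

import numpy as np

class Record:
    def __init__(self):
        self.points = []   # placeholder payload

# An object array stores references to arbitrary Python objects, so
# assigning a Record into a cell works where a numeric dtype would fail.
vegetation_matrix = np.empty((3, 4), dtype=object)
vegetation_matrix[1, 2] = Record()
print(type(vegetation_matrix[1, 2]))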
mport ctypes"
If that does not work, the problem is with how python was installed, not with numpy.
cheers,
David
On Mon, Apr 18, 2011 at 12:55 AM, Ralf Gommers
wrote:
> Hi,
>
> The list of open issues for 1.6.0 is down to a handful:
>
> - f2py segfault on Ubuntu reported by David (David, did you get any
> further with this?)
I could reproduce on another machine (MAC OS X, different hardwa
On Mon, Apr 18, 2011 at 1:47 PM, David Cournapeau wrote:
> On Mon, Apr 18, 2011 at 12:55 AM, Ralf Gommers
> wrote:
>> Hi,
>>
>> The list of open issues for 1.6.0 is down to a handful:
>>
>> - f2py segfault on Ubuntu reported by David (David, did you get an
2 Gb Ram available on your machine :) ).
Maybe a check is wrong due to some wrong configuration on windows. Are
you on 32-bit or 64-bit windows?
cheers,
David
ke a lot of memory
Doing the bincount with a dict is faster in those cases.
cheers,
David
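For illustration, a minimal sketch of why a dict beats bincount when the
values are sparse but large (the numbers are made up):

import numpy as np
from collections import Counter

values = np.array([10, 7000000, 10, 42])

# np.bincount allocates one slot per integer up to values.max(),
# i.e. about 7 million counts here, almost all of them zero:
dense = np.bincount(values)

# A dict/Counter only stores the labels that actually occur:
sparse = Counter(values.tolist())
print(len(dense), sparse)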
On Thu, Jun 2, 2011 at 5:42 PM, wrote:
> On Wed, Jun 1, 2011 at 9:35 PM, David Cournapeau wrote:
>> On Thu, Jun 2, 2011 at 1:49 AM, Mark Miller
>> wrote:
>>> Not quite. Bincount is fine if you have a set of approximately
>>> sequential numbers. But if you don
On Mon, Jun 6, 2011 at 3:15 PM, Alex Ter-Sarkissov wrote:
> I have a vector of positive integers of length n. Is there a simple way (i.e.
> without sorting/ranking) of 'pulling out' the k largest (or smallest) values?
Maybe not so simple, but it does not require sorting (and its associated
O(N log N) cost): htt
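For what it's worth, more recent numpy grew np.partition, which does
exactly this kind of selection without a full sort (so this is a sketch
against a newer numpy than the one discussed in the thread):

import numpy as np

v = np.array([7, 1, 42, 5, 13, 99, 2, 8])
k = 3

# Selection (introselect) rather than a full sort: the k largest values
# end up in the last k slots, in unspecified order, in roughly O(n) time.
largest = np.partition(v, len(v) - k)[-k:]
print(largest)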
r a
review is not realistic (if only because of time differences around
the globe).
I would advise reverting that merge, and making sure the original branch
gets a proper review. Giving a few days for a change involving > 1
lines of code seems quite reasonable to me,
cheers,
David
I would like to call to the attention of the NumPy community the following call
for papers:
Second Symposium on Advances in Modeling and Analysis Using Python, 22–26
January 2012, New Orleans, Louisiana
The Second Symposium on Advances in Modeling and Analysis Using Python,
sponsored by t
inary support for .mpkg (Mac OS X native packaging)
- More consistent API for extension/compiled library build registration
- Both numpy and scipy can now be built with bento + waf as a build backend
Bento is discussed on the bento mailing list
(http://librelist.com/browser/bento).
cheers,
maybe numscons scripts for one release
for both numpy/scipy, with a warning about their deprecation, and then
removing them one release later.
Does that sound ok with everyone?
cheers,
David
I posted on stackoverflow but then noticed this message board:
http://stackoverflow.com/questions/7311869/python-numpy-on-solaris-blas-slow-or-not-linked
I'm reposting the full post below:
Matrix-Matrix multiplies are very slow on my Solaris install (running
on a sparc server) compared to my OSX
On Tue, Sep 6, 2011 at 2:38 PM, David Cottrell wrote:
> I posted on stackoverflow but then noticed this message board:
>
> http://stackoverflow.com/questions/7311869/python-numpy-on-solaris-blas-slow-or-not-linked
>
> I'm reposting the full post below:
>
> Matrix-Matri
r:
print('No ATLAS:')
import time     # time and numpy imports are needed for this fragment to run
import numpy
import numpy.linalg
N = 1000
x = numpy.random.random((N,N))
t = time.time()
(U, s, V) = numpy.linalg.svd(x)
print(U.shape, s.shape, V.shape)
# S = numpy.matrix(numpy.diag(s))
# y = U * S * V
#print(y.shape)
print(time.time()-t)
On Tue, Sep 6, 2011 at 2:59 PM
Actually this link: http://www.scipy.org/PerformanceTips seems to
indicate that numpy.dot does use blas ...
Is there some way of running ldd on the install to see what libraries
are being pulled in?
On Tue, Sep 6, 2011 at 4:13 PM, David Cottrell wrote:
> Thanks, I didn't realize dot
On Tue, Sep 6, 2011 at 5:12 PM, David Cottrell wrote:
> Actually this link: http://www.scipy.org/PerformanceTips seems to
> indicate that numpy.dot does use blas ...
This is not true (you can check by looking into numpy/core/setup.py,
which explicitly checks for ATLAS for _dotblas). The i
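Besides running ldd on the extension modules, a quick check from Python
itself (assuming a numpy recent enough to have show_config) is:

import numpy as np

# Prints the BLAS/LAPACK libraries and paths detected when this numpy
# was built:
np.show_config()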
c = f(c)
d = f(d)
I would like to replace:
a = f(a)
b = f(b)
c = f(c)
d = f(d)
with something like this, but which really modifies a, b, c and d:
for x in [a,b,c,d]:
    x = f(x)
So, something like a pointer to the arrays.
Thanks for your help!
Thank you Olivier and Robert for your replies!
Some remarks about the dictionary solution:
from numpy import *
def f(arr):
    return arr + 100.
arrs = {}
arrs['a'] = array( [1,1,1] )
arrs['b'] = array( [2,2,2] )
arrs['c'] = array( [3,3,3] )
arrs['d'] = array( [4,4,4] )
for key,value in arr
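As an alternative to the dictionary, a minimal sketch of modifying the
arrays in place (the point being that x = f(x) only rebinds the loop
variable, while x[...] = f(x) writes into the original buffer):

import numpy as np

def f(arr):
    return arr + 100.

a = np.array([1., 1., 1.])
b = np.array([2., 2., 2.])

for x in (a, b):
    x[...] = f(x)     # in-place assignment, so a and b really change

print(a)   # [101. 101. 101.]
print(b)   # [102. 102. 102.]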
; If there is really something broken with vecLib/Accelerate, a ticket on
> Apple's bugtracker rdar should be opened.
>
>
>>> and the switch --fcompiler=gnu95 arg?
>>
>> This shouldn't be necessary if you only have gfortran installed.
>>
>
> Ah ok.
00)
On Wed, Sep 7, 2011 at 9:08 AM, Samuel John wrote:
>
> On 06.09.2011, at 22:13, David Cottrell wrote:
>
>> Thanks, I didn't realize dot was not just calling dgemm or some
>> variant which I assume would be reasonably fast. I see dgemm appears
>> in the numpy code
On Tue, Sep 20, 2011 at 9:56 AM, David Cottrell
wrote:
> Thanks, just getting back to this. I just checked again, and after
> setting my LD_LIBRARY_PATH properly, ldd shows _dotblas.so pointing
> at the sunmath and sunperf libraries. However the test_03.py still
> runs at about
/opt/SUNWspro11/SUNWspro/prod/lib/v8plus/../cpu/sparcv9+vis2/libfsu_isa.so.1
/platform/SUNW,Sun-Fire-V490/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-V490/lib/libmd_psr.so.1
On Tue, Sep 20, 2011 at 1:18 PM, David Cournapeau wrote:
> On Tue, Sep 20, 2011 at 9:56 AM, David Cottre
On Tue, Sep 20, 2011 at 3:18 PM, David Cottrell
wrote:
> The test_03.py was basically a linalg.svd test (which I think is a
> "linalg only" routine?). I guess for linalg checking, I should run ldd
> on lapack_lite.so? (included below).
>
> It's sounding like I need to get
On Tue, Sep 20, 2011 at 1:18 PM, David Cournapeau wrote:
> On Tue, Sep 20, 2011 at 9:56 AM, David Cottrell
> wrote:
>> Thanks, just getting back to this. I just checked again, and after
>> setting my LD_LIBRARY_PATH properly, ldd shows _dotblas.so pointing
>> and
ces to deal with pull
requests only, so I would concur.
cheers,
David
rest you as well.
Regards,
David
On 29 September 2011 20:46, Keith Hughitt wrote:
> Ah. Thanks for catching that!
>
> Otherwise though I think everything looks pretty good.
>
> Thanks all,
> Keith
>
> On Thu, Sep 29, 2011 at 12:18 PM, Zachary Pincus
> wrote:
>
>>
Thanks everybody for the different solutions proposed, I really appreciate it.
What about this solution? So simple that I didn't think of it...
import numpy as np
from numpy import *
def f(arr):
    return arr*2
a = array( [1,1,1] )
b = array( [2,2,2] )
c = array( [3,3,3] )
d = array( [4,4,4] )
/temp.linux-x86_64-2.4 -lptf77blas -lptcblas -latlas -o
> build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1
Did you build Atlas by yourself? If so, it is most likely not usable
for shared libraries (mandatory for any python extensi
or now, closing the returned NpzFile instance is the correct
solution. I added a note about this in the load doc, and a context
manager to NpzFile so you can also do (python >= 2.5 only):
with load('yo.npz') as data:
cheers,
David
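For illustration, a minimal sketch of the context manager usage
mentioned above, assuming a numpy where NpzFile supports it (the file
name is made up):

import numpy as np

np.savez('yo.npz', x=np.arange(3))

# NpzFile keeps the underlying zip file open; the with-block closes it
# deterministically once the arrays have been read.
with np.load('yo.npz') as data:
    x = data['x']
print(x)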
same number of bits will cause both to be cast to
> higher precision. IIRC, matlab was said to return +127 as abs(-128), which,
> if true, is quite curious.
In C, abs(INT_MIN) is undefined, so both 127 and -128 work :)
David
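For illustration, what that looks like from numpy (on typical builds the
result simply wraps around, since +128 is not representable in int8;
recent versions may also emit an overflow warning):

import numpy as np

x = np.int8(-128)
print(np.abs(x))   # -128: the absolute value overflows back to the minimum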
though this one
actually raises an exception).
Detecting it may be costly, but this would need benchmarking.
That being said, without context, I don't find 127 a better solution than -128.
cheers,
David
On 10/12/11, "V. Armando Solé" wrote:
> On 12/10/2011 10:46, David Cournapeau wrote:
>> On Wed, Oct 12, 2011 at 9:18 AM, "V. Armando Solé" wrote:
>>> From a pure user perspective, I would not expect the abs function to
>>> return a negative numbe
dn't rely on this (always use
> numpy.isnan to test for nan-ness).
They are the same, just not equal to each other (or even to
themselves). As for the different NaN names in the numpy namespace, I
think this is for historical reasons (maybe because python itself used
to print nan differently on different platforms, although this should
not be the case anymore since python 2.6).
David
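For illustration, the usual demonstration of why isnan is the right test:

import numpy as np

x = float('nan')
print(x == x)       # False: NaN compares unequal even to itself
print(np.isnan(x))  # True: the reliable test for nan-ness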
studio)?
binary128 should only be thought of as a (bad) synonym for np.longdouble.
David
On Sun, Oct 16, 2011 at 8:33 AM, Matthew Brett wrote:
> Hi,
>
> On Sun, Oct 16, 2011 at 12:28 AM, David Cournapeau wrote:
>> On Sun, Oct 16, 2011 at 8:04 AM, Matthew Brett
>> wrote:
>>> Hi,
>>>
>>> On Sat, Oct 15, 2011 at 11:04 PM, Nadav Horesh
be:
>
> float80
> floatLD
>
> for intel 32 and 64 bit, and then
>
> floatPPC
> floatLD
>
> for whatever PPC has, and so on.
If all you want is a common name, there is already one: np.longdouble.
This is an alias for the more platform-specific name.
cheers,
David
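For illustration, a quick way to see what np.longdouble resolves to on a
given platform (80-bit extended on typical x86 Linux builds, plain
double with MSVC; the output is platform dependent):

import numpy as np

print(np.longdouble)            # the platform-specific alias
print(np.finfo(np.longdouble))  # precision and range actually provided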
ay be an option, but it would make more sense to
do it at creation time than during reset, no? Something like a binary
AND with the current mode flag,
cheers,
David
tor that accepts also a buffer to copy in
> the neighbourhood.
> * A reset function that would refill the buffer after each parent iterator
> modification
The issue with giving the buffer is that one needs to be careful
about the size and all. What's your use case for passing the buffer
o
do to warrant changing the API here.
cheers,
David
305, in __getitem__
out = N.ndarray.__getitem__(self, index)
IndexError: 0-d arrays can only use a single () or a list of newaxes (and a
single ...) as an index
Can anyone help?
It's my first time on this mailing list so apologies if this is not the
right
unless you have to or you can rely on
platform-specificities.
I would rather spend some time on implementing/integrating portable
quad precision in software,
cheers,
David
, because it is very platform dependent, so it will likely
bitrot as it won't appear on usual platforms.
David
runtime to a visual
studio executable). The only solution I could think of would be to
recompile the gfortran runtime with Visual Studio, which for some
reason does not sound very appealing :)
Thoughts ?
cheers,
David
On Thu, Oct 27, 2011 at 2:16 PM, Peter
wrote:
> On Thu, Oct 27, 2011 at 2:02 PM, David Cournapeau wrote:
>>
>> Hi,
>>
>> I was wondering if we could finally move to a more recent version of
>> compilers for official win32 installers. This would of course conce
On Thu, Oct 27, 2011 at 3:15 PM, Jim Vickroy wrote:
>
> Hi David,
>
> What is the "msvcr90 voodoo" you are referring to?
gcc 3.* versions don't have stubs to link against recent versions of
MS C runtime, so we have to build them by ourselves. 4.x series don't
On Thu, Oct 27, 2011 at 5:18 PM, wrote:
> On Thu, Oct 27, 2011 at 9:02 AM, David Cournapeau wrote:
>> Hi,
>>
>> I was wondering if we could finally move to a more recent version of
>> compilers for official win32 installers. This would of course concern
>> the
On Thu, Oct 27, 2011 at 5:19 PM, Ralf Gommers
wrote:
> Hi David,
>
> On Thu, Oct 27, 2011 at 3:02 PM, David Cournapeau
> wrote:
>>
>> Hi,
>>
>> I was wondering if we could finally move to a more recent version of
>> compilers for official win32 insta
but it means the windows installers will contain GPL code.
My understanding is that this is OK because the code in question is
GPL + exception, meaning the usual GPL requirements only apply to
those runtimes, and that's ok?
cheers,
David
On Sun, Oct 30, 2011 at 11:38 AM, Matthieu Brucher
wrote:
> Hi David,
>
> Is every GPL part GCC related? If yes, GCC has a licence that allows to
> redistribute its runtime in any program (meaning the program's licence is
> not relevant).
Good point, I should have specified
t; much effort?
I don't know about aldebaran, but cross compiling numpy will be a challenge.
cheers,
David
#define EINSUM_USE_SSE1 0
#define EINSUM_USE_SSE2 0
cheers,
David
On Tue, Nov 8, 2011 at 9:01 AM, David Cournapeau wrote:
> Hi Mads,
>
> On Tue, Nov 8, 2011 at 8:40 AM, Mads Ipsen wrote:
>> Hi,
>>
>> I am trying to build numpy-1.6.1 with the following gcc compiler specs:
>>
>> Reading specs from /usr/lib/gcc/x86_64-redh
her or whether we want to drop support for
old compilers.
cheers,
David
al cases: being more
efficient in those cases only is indeed easy.
cheers,
David
>> mingw on top of a visual studio python?
>>> >>
>>> >> Some further forensics seem to suggest that, despite the fact the math
>>> >> suggests float96 is float64, the storage format is in fact 80-bit
>>&
On Tue, Nov 15, 2011 at 6:22 AM, Matthew Brett wrote:
> Hi,
>
> On Mon, Nov 14, 2011 at 10:08 PM, David Cournapeau wrote:
>> On Mon, Nov 14, 2011 at 9:01 PM, Matthew Brett
>> wrote:
>>> Hi,
>>>
>>> On Sun, Nov 13, 2011 at 5:03 PM, Charles R
ion issues, though (when I tried
implementing Bluestein transforms on top of fftpack, it gave very bad
results numerically). A comparison with fftw would be good here.
regards,
David
some ABI/API
changes are needed after 2.0, we will be dragged down with this for
years. I am willing to spend time on this. Geoffray, does this sound
acceptable to you ?
David