On Wed, Oct 26, 2016 at 3:20 PM, wrote:
> On Wed, Oct 26, 2016 at 3:11 PM, Mathew S. Madhavacheril <
> mathewsyr...@gmail.com> wrote:
>> On Wed, Oct 26, 2016 at 2:56 PM, Nathaniel Smith wrote:
On Wed, Oct 26, 2016 at 2:56 PM, Nathaniel Smith wrote:
> On Wed, Oct 26, 2016 at 11:13 AM, Stephan Hoyer wrote:
> > On Wed, Oct 26, 2016 at 11:03 AM, Mathew S. Madhavacheril wrote:
> >> On Wed, Oct 26, 2016 at 1:46 PM, Stephan Hoyer wrote:
On Wed, Oct 26, 2016 at 2:13 PM, Stephan Hoyer wrote:
> On Wed, Oct 26, 2016 at 11:03 AM, Mathew S. Madhavacheril <
> mathewsyr...@gmail.com> wrote:
>> On Wed, Oct 26, 2016 at 1:46 PM, Stephan Hoyer wrote:
>>> I wonder if the goals of this addition could
The user would have to re-implement the part that converts the covariance
matrix to a correlation coefficient. I made this PR to avoid that code
duplication.
Mathew
> On Wed, Oct 26, 2016 at 10:27 AM, Mathew S. Madhavacheril <
> mathewsyr...@gmail.com> wrote:
>> Hi all,
the user to write their own
code to convert one to the other, we want to allow both to
be obtained from `numpy` as efficiently as possible.
Best,
Mathew
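The conversion under discussion, covariance matrix to correlation coefficients, is short but easy to duplicate subtly wrong, which is what the PR avoids. A minimal sketch (illustration only, not the PR's actual code):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(3, 100)            # 3 variables, 100 observations

C = np.cov(X)                    # covariance matrix, shape (3, 3)
d = np.sqrt(np.diag(C))          # per-variable standard deviations
R = C / np.outer(d, d)           # normalize to correlation coefficients

print(np.allclose(R, np.corrcoef(X)))   # True
```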
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo
Thanks. My solution was much hackier.
-Mathew
On Thu, Jun 2, 2011 at 10:27 AM, Olivier Delalleau wrote:
> I think this does what you want:
>
> def seq_split(x):
>     r = [0] + list(numpy.where(x[1:] != x[:-1] + 1)[0] + 1) + [None]
>     return [x[r[i]:r[i + 1]] for i in xrange(len(r) - 1)]
Hi
I have indices into an array I'd like split so they are sequential
e.g.
[1,2,3,10,11] -> [1,2,3],[10,11]
How do I do this?
-Mathew
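For reference, the same split into consecutive runs can also be written with np.split and np.diff (a sketch in current NumPy, not part of the original reply):

```python
import numpy as np

x = np.array([1, 2, 3, 10, 11])
# break wherever consecutive elements don't differ by exactly 1
runs = np.split(x, np.where(np.diff(x) != 1)[0] + 1)
print(runs)   # [array([1, 2, 3]), array([10, 11])]
```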
Okay. To get it all to work I edited msvc9compiler.py and changed /MD
to /MT. This still led to a different error, having to do with
mt.exe, which does not come with MSVC 2008 Express. I fixed this by
commenting out the /MANIFEST stuff in msvc9compiler.py.
On Thu, May 19, 2011 at 6:25 PM, Mathew Yeates
Solved. Sort of. When I compiled by hand and switched /MD to /MT it
worked. It would still be nice if I could control the compiler options
f2py passes to cl.exe
-Mathew
On Thu, May 19, 2011 at 3:05 PM, Mathew Yeates wrote:
> Hi
> I am trying to run f2py and link to some libraries.
>
…want to try compiling the
code with /MT. How can I tell f2py to use this option when compiling
(not linking; this option is passed to cl)?
-Mathew
Cool. Just what I was looking for.
On Thu, May 19, 2011 at 2:15 PM, Alan G Isaac wrote:
> On 5/19/2011 2:24 PM, Mathew Yeates wrote:
>> The Registry keys point to the old Python27.
>
>
> Odd. The default installation settings
> should have reset this. Or so I believed.
Right. The Registry keys point to the old Python27.
On Thu, May 19, 2011 at 11:23 AM, Alan G Isaac wrote:
> On 5/19/2011 2:15 PM, Mathew Yeates wrote:
>> I *am* using the windows installer.
>
> And you find that it does not find your most recent
> Python 2.7 install, for which
I *am* using the windows installer.
On Thu, May 19, 2011 at 11:14 AM, Alan G Isaac wrote:
> On 5/19/2011 2:07 PM, Mathew Yeates wrote:
>> I have installed a new version of Python27 in a new directory. I want to get
>> this info into the registry so, when I install Numpy, it
Hi
I have installed a new version of Python27 in a new directory. I want to get
this info into the registry so, when I install Numpy, it will use my new
Python
TIA
-Mathew
e.g.
>>
>> def imread(fname, mode='RGBA'):
>>     return np.asarray(Image.open(fname).convert(mode))
>>
>> to ensure that you always get 4-channel images, even for images that
>> were initially RGB or grayscale.
>>
>> HTH,
>> Dan
>>
Hi
What is the current method of using ndimage on a TIFF file? I've seen
different methods using ndimage itself, scipy.misc, and PIL.
Mathew
bizarre
I get
=
>>> hello.foo(a)
Hello from Fortran!
a= 1
2
>>> a
1
>>> hello.foo(a)
Hello from Fortran!
a= 1
2
>>> print a
1
>>>
=
i.e. The value of 2 gets printed! This is numpy 1.3.0
-Mathew
I have
subroutine foo (a)
    integer a
    print*, "Hello from Fortran!"
    print*, "a=",a
    a=2
end
and from python I want to do
>>> a=1
>>> foo(a)
and I want a's value to now be 2.
How do I do this?
Mathew
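Since Python integers are immutable, a scalar argument can't be modified in place; the usual f2py idiom is to declare the argument intent(in,out) with a directive comment so the new value is returned instead (a sketch, assuming the module is built with f2py under the name `hello` as above):

```fortran
subroutine foo (a)
    integer a
cf2py intent(in,out) a
    print*, "Hello from Fortran!"
    print*, "a=",a
    a=2
end
```

After rebuilding, `a = hello.foo(a)` leaves a equal to 2: f2py returns intent(in,out) scalars rather than mutating the immutable Python int.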
Thank a lot. I was wading through the Python C API. This is much simpler.
-Mathew
On Fri, Sep 24, 2010 at 10:21 AM, Zachary Pincus
wrote:
>> I'm trying to do something ... unusual.
>>
>> gdb support scripting with Python. From within my python script, I can
>> ge
…the creation of a buffer, but is there an
easier, more direct, way?
-Mathew
Nope. This version didn't work either.
>
> If you're on Python 2.6 the binary on here might work for you:
>
> http://www.lfd.uci.edu/~gohlke/pythonlibs/
>
> It looks recent enough to have the rewritten ndimage
I'm on Windows, using a precompiled binary. I never built numpy/scipy on
Windows.
On Wed, Jun 2, 2010 at 10:45 AM, Wes McKinney wrote:
> On Wed, Jun 2, 2010 at 1:23 PM, Mathew Yeates
> wrote:
> > thanks. I am also getting an error in ndi.mean
> > Were you getting the
thanks. I am also getting an error in ndi.mean
Were you getting the error
"RuntimeError: data type not supported"?
-Mathew
On Wed, Jun 2, 2010 at 9:40 AM, Wes McKinney wrote:
> On Wed, Jun 2, 2010 at 3:41 AM, Vincent Schut wrote:
> > On 06/02/2010 04:52 AM, josef.p
I guess it's as fast as I'm going to get. I don't really see any other way.
BTW, the lat/lons are integers)
-Mathew
On Tue, Jun 1, 2010 at 1:49 PM, Zachary Pincus wrote:
> > Hi
> > Can anyone think of a clever (non-lopping) solution to the following?
> >
>
[2001] then
data[1001] = (data[1001] + data[2001])/2
Looping is going to take way too long.
Mathew
Anybody have any ideas what is going on here? Although I found a workaround,
I'm concerned about memory leaks.
From: numpy-discussion-boun...@scipy.org
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Yeates, Mathew C (388D)
Sent: Tuesday, Decemb
3 Gigs, all is good)
Mathew
From: numpy-discussion-boun...@scipy.org
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Santanu Chatterjee
Sent: Tuesday, December 01, 2009 12:15 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] a
Click on "Hello World" twice and get a memory error. Comment out the ax.plot
call and get no error.
import numpy
import sys
import gtk
from matplotlib.figure import Figure
from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg as
FigureCanvas
ax=None
fig=None
canvas=None
def dop
yes, a GTK app from the python shell. And not using the toolbar.
I'll see if I can extract out a sample of code that demonstrates the problem
I'm having.
Thx
Mathew
On Thu, Nov 19, 2009 at 10:56 AM, John Hunter wrote:
> On Nov 19, 2009, at 12:53 PM, Mathe
I am running my gtk app from python. I am deleting the canvas and running
gc.collect(). I still seem to have a reference to my memmapped data.
Any other hints?
-Mathew
On Thu, Nov 19, 2009 at 10:42 AM, John Hunter wrote:
> On Nov 19, 2009, at 12:35 PM, Mathew Yeates
Yeah, I tried that.
Here's what I'm doing. I have an application which displays different
dataset which a user selects from a drop down list. I want to overwrite the
existing plot with a new one. I've tried deleting just about everything to
get matplotlib to let go of my data!
Math
If I plot mydata before the line
> del mydata
I can't get rid of the file until I exit python!!
Does matplotlib keep a reference to the data? How can I remove this
reference?
Mathew
for a 64 bit machine does this mean I am limited to 4 GB?
-Mathew
On Wed, Nov 18, 2009 at 3:48 PM, Robert Kern wrote:
> On Wed, Nov 18, 2009 at 17:43, Mathew Yeates wrote:
> > What limits are there on file size when using memmap?
>
> With a modern filesystem, usually you are
What limits are there on file size when using memmap?
-Mathew
It turns out I *was* running out of memory. My dimensions would require 3.5
gig and my plot must have used up some memory.
On Wed, Nov 18, 2009 at 2:43 PM, Charles R Harris wrote:
>
>
> On Wed, Nov 18, 2009 at 3:13 PM, Mathew Yeates wrote:
>
>> The value of dims i
also, the exception is only thrown when I plot something first. I wonder if
matplotlib is messing something up.
On Wed, Nov 18, 2009 at 2:13 PM, Mathew Yeates wrote:
> The value of dims is constant and not particularly large. I also checked to
> make sure I wasn't running out of
The value of dims is constant and not particularly large. I also checked to
make sure I wasn't running out of memory. Are there other reasons for this
error?
Mathew
On Wed, Nov 18, 2009 at 1:51 PM, Robert Kern wrote:
> On Wed, Nov 18, 2009 at 15:48, Mathew Yeates wrote:
> > Hi
have a feeling this may be a nightmare to figure out what matplotlib
and/or numpy are doing wrong. Any ideas where I can start?
Mathew
deal with this?
Mathew
(x,y) pairs (unknown until
evaluation). I don't like the idea of setting the return to some small
value as this may create local maxima in the solution space.
Mathew
Ken Basye wrote:
> Hi Mathew,
> Here are some things to think about: First, is there a way to decompose
>
Sebastian Walter wrote:
> N optimization problems. This is very unusual! Typically the problem
> at hand can be formulated as *one* optimization problem.
>
>
yes, this is really not so much an optimization problem as it is a
vectorization problem.
I am trying to avoid
1) Evaluate f over and ove
David Huard wrote:
> Hi Mathew,
>
> You could use Newton's method to optimize for each vi sequentially. If
> you have an expression for the jacobian, it's even better.
Here's the problem. Every time f is evaluated, it returns a set of
values. (a row in the matrix) B
I have a function f(x,y) which produces N values [v1,v2,v3 vN]
where some of the values are None (only found after evaluation)
each evaluation of "f" is expensive and N is large.
I want N x,y pairs which produce the optimal value in each column.
A brute force approach would be to generate
Thanks
Mathew
I should add, I'm starting with N rotation angles. So I should rephrase
and say I'm starting with N angles and N xy pairs.
Mathew Yeates wrote:
> I know this must be trivial but I can't seem to get it right
>
> I have N 2x2 arrays which perform a rotation. I
I know this must be trivial but I can't seem to get it right
I have N 2x2 arrays which perform a rotation. I also have N xy pairs to
transform. What is the simplest way to perform the transformation
without looping?
Thanks from someone about to punch their screen.
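One loop-free answer is a stacked matrix-vector product via einsum (available since NumPy 1.6; a sketch with random data standing in for the real matrices and pairs):

```python
import numpy as np

rng = np.random.RandomState(1)
theta = rng.uniform(0, 2 * np.pi, size=5)   # N = 5 rotation angles

# stack the N 2x2 rotation matrices into shape (N, 2, 2)
R = np.empty((5, 2, 2))
R[:, 0, 0] = np.cos(theta)
R[:, 0, 1] = -np.sin(theta)
R[:, 1, 0] = np.sin(theta)
R[:, 1, 1] = np.cos(theta)

pts = rng.randn(5, 2)                       # the N xy pairs
out = np.einsum('nij,nj->ni', R, pts)       # rotate pair n by matrix n
```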
Well, this isn't a perfect solution. polyfit is better because it
determines rank based on condition values, finds the eigenvalues,
etc. But unless it can be vectorized without Python looping, it's too slow
for me to use.
Mathew
josef.p...@gmail.com wrote:
Sheer genius. Done in the blink of an eye, and my original was taking 20
minutes!
Keith Goodman wrote:
> On 4/21/09, Mathew Yeates wrote:
>
>> Hi
>> I posted something about this earlier
>>
>> Say I have 2 arrays X and Y with shapes (N,3) where N is lar
But now I'm starting to wonder if this is pointless. If the routine
polyfit takes a long time compared with the time for a Python
function call, then things can't be sped up.
Any comments?
Mathew
Import Error resulting in an unoptimized "dot"
Anyone have any ideas?
Mathew
] where
[a,b,c] fits the points (x1,y1) (x2,y2),(x3,y3) and
[d,e,f] fits the points (s1,r1) (s2,r2),(s3,r3)
I realize I could use "apply_along_axis" but I'm afraid of the
performance penalty. Is there a way to do this without resorting to a
function call for each fit?
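With exactly three points per set, each quadratic is an exact linear solve, and numpy.linalg.solve (NumPy 1.8+) broadcasts over stacked systems, so every fit happens in one call with no per-row Python function. A sketch with made-up points:

```python
import numpy as np

X = np.array([[0., 1., 2.],        # x1, x2, x3
              [1., 2., 4.]])       # s1, s2, s3
Y = np.array([[1., 2., 5.],        # y1, y2, y3
              [0., 1., 9.]])       # r1, r2, r3

V = X[..., None] ** np.array([2, 1, 0])            # stacked Vandermonde, (2, 3, 3)
coeffs = np.linalg.solve(V, Y[..., None])[..., 0]  # row i -> [a, b, c] for set i
```

np.polyval(coeffs[0], x) then reproduces the first set's points exactly.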
Hmm, I don't understand the result.
If
a=array([ 1, 2, 3, 7, 10]) and b=array([ 1, 2, 3, 8, 10])
I want to get the result [0,1,2,4], but searchsorted(a,b) produces
[0,1,2,4,4] and searchsorted(b,a) produces [0,1,2,3,4]??
Mathew
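searchsorted answers a different question: for each element of its second argument, the insertion index into the first that keeps it sorted, so its output always has the second argument's length. If the goal is the indices in a whose values also occur in b, a membership test is more direct (a sketch; np.isin is the modern spelling, older releases call it np.in1d):

```python
import numpy as np

a = np.array([1, 2, 3, 7, 10])
b = np.array([1, 2, 3, 8, 10])

idx = np.where(np.isin(a, b))[0]   # indices of a whose values appear in b
print(idx)                         # [0 1 2 4]
```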
On Fri, Oct 24, 2008 at 3:12 PM, Charles R H
long.
Thanks to any one of you vectorization gurus that has any ideas.
Mathew
Is there a routine in scipy for telling whether a point is inside a
convex 4 sided polygon?
Mathew
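One option that needs no special routine: for a convex polygon, a point is inside iff it lies on the same side of every edge, which the cross-product sign test captures in a few lines (a sketch; vertices are assumed listed in order around the polygon):

```python
import numpy as np

def point_in_convex_quad(p, quad):
    """True if p is inside (or on the boundary of) the convex polygon quad."""
    quad = np.asarray(quad, dtype=float)
    edges = np.roll(quad, -1, axis=0) - quad         # edge vectors
    to_p = np.asarray(p, dtype=float) - quad         # vertex -> point vectors
    cross = edges[:, 0] * to_p[:, 1] - edges[:, 1] * to_p[:, 0]
    return bool(np.all(cross >= 0) or np.all(cross <= 0))

# unit square, counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_convex_quad((0.5, 0.5), square))   # True
print(point_in_convex_quad((2.0, 0.5), square))   # False
```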
On an AMD x86_64 with ATLAS installed I am getting errors like
ValueError: On entry to DLASD0 parameter number 9 had an illegal value
ValueError: On entry to ILAENV parameter number 2 had an illegal value
Anybody seen this before?
Mathew
What got fixed?
Robert Kern wrote:
> On Tue, Jul 29, 2008 at 17:41, Mathew Yeates <[EMAIL PROTECTED]> wrote:
>
>> Charles R Harris wrote:
>>
>>> This smells like an ATLAS problem.
>>>
>> I don't think so. I crash in a call to dsy
oops. It is ATLAS. I was able to run with a nonoptimized lapack.
Mathew Yeates wrote:
> Charles R Harris wrote:
>
>> This smells like an ATLAS problem.
>>
> I don't think so. I crash in a call to dsyevd, which is part of lapack but
> not atlas. Also, whe
Charles R Harris wrote:
>
>
> This smells like an ATLAS problem.
I don't think so. I crash in a call to dsyevd, which is part of lapack but
not atlas. Also, when I commented out the call to test_eigh_build I got
zillions of errors like (look at the second one; warnings wasn't imported?)
=
more info
when /linalg.py(872)eigh() calls dsyevd I crash
James Turner wrote:
> Thanks everyone. I think I might try using the Netlib BLAS, since
> it's a server installation... but please let me know if you'd like
> me to troubleshoot this some more (the sooner the easier).
>
> James.
>
I am using an ATLAS 64 bit lapack 3.9.1.
My cpu (4 cpus)
-
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU X5460 @ 3.16GHz
stepping: 6
cpu MHz
My setup is similar. Same CPUs, except I am using atlas 3.9.1 and gcc
4.2.4.
James Turner wrote:
>> Are you using ATLAS? If so, where did you get it and what cpu do you have?
>>
>
> Yes. I have Atlas 3.8.2. I think I got it from
> http://math-atlas.sourceforge.net. I also included Lapack 3.
I'm getting this too:
Ticket #652 ... ok
Ticket #662 ... Segmentation fault
Robert Kern wrote:
> On Tue, Jul 29, 2008 at 14:16, James Turner <[EMAIL PROTECTED]> wrote:
>
>> I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
>> with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
>
tw3.a
anybody know why?
Mathew
thing. Any thoughts?
Mathew
> On Dec 26, 2007 12:22 PM, Mathew Yeates <[EMAIL PROTECTED]> wrote:
> > I have an arbitrary number of lists. I want to form all possible
> > combinations from all lists. So if
> >
Which reference manual?
René Bastian wrote:
> Le Mercredi 26 Décembre 2007 21:22, Mathew Yeates a écrit :
>
>> Hi
>> I've been looking at "fromfunction" and itertools but I'm flummoxed.
>>
>> I have an arbitrary number of lists. I want to form
yes, I came up with this and may use it. Seems like it would be insanely
slow but my problem is small enough that it might be okay.
Thanks
Keith Goodman wrote:
> On Dec 26, 2007 12:22 PM, Mathew Yeates <[EMAIL PROTECTED]> wrote:
>
>> I have an arbitrary number of lists.
,["cat",1],["cat",2]]
It's obvious when the number of lists is not arbitrary. But what if
that's not known until runtime?
Mathew
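On Python 2.6 and later, itertools.product takes an arbitrary, runtime-built sequence of lists directly via argument unpacking (a sketch; the example lists are guesses matching the ["cat", 1] flavour of the truncated output above):

```python
import itertools

lists = [["cat", "dog"], [0, 1, 2]]    # any number of lists, built at runtime
combos = [list(c) for c in itertools.product(*lists)]
print(combos[:3])   # [['cat', 0], ['cat', 1], ['cat', 2]]
```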
…performance improvements would be expected using this?
Mathew
Sebastian Haase wrote:
> On 10/26/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
>
>> P.S: IMHO, this is one of the main limitation of numpy (or any language
>> using arrays for speed; and this is really diffi
Anybody know of any tricks for handling something like
z[0]=1.0
for i in range(100):
    out[i]=func1(z[i])
    z[i+1]=func2(out[i])
??
Anybody know how to contact the pycdf author? His name is Gosselin I
think. There are hardcoded values that cause pycdf to segfault when
using large strings.
Mathew
never returns
Charles R Harris wrote:
>
>
> On 8/29/07, *Mathew Yeates* <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> I guess I can't blame lapack. My system has atlas so I recompiled
> numpy
> pointing
…seeing this?
Mathew
Thanks Robert
I have a deadline and don't have time to install ATLAS. Instead I'm
installing clapack. Is this the correct thing to do?
Mathew
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> I'm the one who created libblas.a so I must have done something wron
I'm the one who created libblas.a so I must have done something wrong.
This is lapack-3.1.1.
Robert Kern wrote:
> If your BLAS just the reference BLAS, don't bother with _dotblas. It won't be
> any faster than the default implementation in numpy. You only get a win if you
> are using an accelera
more info. My blas library has zaxpy defined but not cblas_zaxpy
Mathew Yeates wrote:
> my site.cfg just is
> [DEFAULT]
> library_dirs = /home/myeates/lib
> include_dirs = /home/myeates/include
>
> python setup.py config gives
> F2PY Version 2_3979
> blas_opt_info:
> b
lapack', 'blas']
library_dirs = ['/home/myeates/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77
running config
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> oops. sorry
>> from numpy.core import _dotblas
>> ImportError:
oops. sorry
from numpy.core import _dotblas
ImportError:
/home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so:
undefined symbol: cblas_zaxpy
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> yes, I get
>> from numpy.core import _dotblas
>> ImportError: No
yes
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> yes, I get
>> from numpy.core import _dotblas
>> ImportError: No module named multiarray
>>
>
> That's just weird. Can you import numpy.core.multiarray by itself?
>
>
yes, I get
from numpy.core import _dotblas
ImportError: No module named multiarray
Now what?
uname -a
Linux 2.6.9-55.0.2.EL #1 Tue Jun 12 17:47:10 EDT 2007 i686 athlon i386
GNU/Linux
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> Hi
>> When I try
>> import nump
liblapack.a is
being accessed.
Any ideas?
Mathew
Running from numpy source directory.
F2PY Version 2_3875
blas_opt_info:
blas_mkl_info:
( library_dirs = /u/vento0/myeates/lib )
( include_dirs = /u/vento0/myeates/include )
(paths: )
(paths: )
(paths: )
(paths: )
(paths: )
(paths: )
libraries mkl,vml,guide not found in /u/vento0/myeates/lib
N
nope. try again
% python setup.py -v config_fc --fcompiler=gfortran install
Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2_3882
blas_opt_info:
blas_mkl_info:
( library_dirs = /u/vento0/myeates/lib:/usr/lib )
( include_dirs = /usr/include:/u/
Just realized this might be important: I'm using Python 2.5.1.
John Cartwright wrote:
> I tried to send that last night, but the message was so large that
> it's waiting for approval. Here's the first part of the output:
>
Same for me. Here is the beginning of mine
Running from numpy source directory.
F2PY Version 2_3875
blas_opt_info:
blas_mkl_info:
( l
Even more info!
I am using numpy gotten from svn on Wed or Thurs.
Mathew Yeates wrote:
> More info:
> I tried Chris' suggestion , i.e. export F77=gfortran
>
> And now I get
>
> Found executable /u/vento0/myeates/bin/gfortran
> gnu: no Fortran 90 compiler found
>
More info:
I tried Chris' suggestion, i.e. export F77=gfortran
And now I get
Found executable /u/vento0/myeates/bin/gfortran
gnu: no Fortran 90 compiler found
Found executable /usr/bin/g77
Mathew Yeates wrote:
> No.
> My PC crashed. I swear I have a virus on this machine. Been
de']
customize GnuFCompiler
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':
Ro
rtran
--prefix=/u/vento0/myeates
Thread model: posix
gcc version 4.2.0
> -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 |tee out
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> result
>> Found executable /usr/bin/g77
>> gnu: no Fortran 90 co
result
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
Something is *broken*.
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> Does anyone know how to run
>> python setup.py build
>> and have gfortran used? It is in my path.
>>
>
> p
Does anyone know how to run
python setup.py build
and have gfortran used? It is in my path.
Mathew
I have gfortran installed in my path. But when I run python setup.py
build I get
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
The output of python setup.
Hi
I'm looking for a more elegant way of setting my array elements
Using "for" loops it would be:
for i in range(rows):
    for j in range(cols):
        N[i,j] = N[i-1][j] + N[i][j-1] - N[i-1][j-1]
It's sort of a combined 2d accumulation.
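If the loop is meant to build a 2-D running sum of some source array A, i.e. the integral-image recurrence N[i,j] = N[i-1,j] + N[i,j-1] - N[i-1,j-1] + A[i,j], then two cumsum calls replace both loops (a sketch; whether the real update has such a source term isn't shown in the message):

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)       # hypothetical source array
N = A.cumsum(axis=0).cumsum(axis=1)     # 2-D prefix sums, no Python loops

# spot-check the recurrence at an interior element
i, j = 2, 3
assert N[i, j] == N[i-1, j] + N[i, j-1] - N[i-1, j-1] + A[i, j]
```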
Hi
I have a list of objects that I want to be interpreted by numpy.where. What
class methods do I need to implement?
example:
class A:pass
a=A()
a.i=1
b=A()
b.i=0
numpy.where([a,b,a]) #desired result [0,2]
Thanks
Mathew
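numpy.where with a single argument is nonzero(), and on an object array that falls back to calling bool() on each element, so the class only needs a truth hook (a sketch; __nonzero__ is the Python 2 spelling of Python 3's __bool__):

```python
import numpy as np

class A(object):
    def __init__(self, i):
        self.i = i
    def __bool__(self):            # Python 3 truth hook
        return bool(self.i)
    __nonzero__ = __bool__         # Python 2 spelling

a, b = A(1), A(0)
print(np.where([a, b, a])[0])      # [0 2]
```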
Thanks for the replies. I think I have enough to work with.
Mathew
Christopher Barker wrote:
> Mathew Yeates wrote:
>
>> given an array of floats, 2N columns and M rows, where the elements
>> A[r,2*j] and A[r,2*j+1] form the real and imaginary parts of a complex
>>
(It's not a float array; its elements are byte sized, in case
that matters.)
Mathew
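For the interleaved layout described in the quoted question, a C-contiguous float array can be reinterpreted with view at zero cost, and slicing gives an equivalent that copies (a sketch; the byte-sized case mentioned above would need an astype to float first):

```python
import numpy as np

A = np.arange(12.0).reshape(2, 6)    # 2 rows, 3 (real, imag) column pairs
C = A.view(np.complex128)            # zero-copy reinterpretation, shape (2, 3)
C2 = A[:, 0::2] + 1j * A[:, 1::2]    # copying equivalent, contiguity-agnostic

print(C[0, 0])                       # 1j  (real 0.0, imag 1.0)
```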
--
done, do a transpose. What's the best way?
Mathew
Mathew Yeates wrote:
> Hmm
> I'm trying to duplicate the behavior with a simple program
> -
> import numpy
> datasize=5529000
> numrows=121
>
> fd=open("biggie","w")
> fd.close()
> b
(shape=(datasize,),dtype=numpy.float32)
for r in range(0,numrows):
    print r
    big[r,:] = c
    c[r] = 2.0
-
but it is fast. Hmmm. Any ideas about where to go from here?
Mathew
Robert Kern wrote:
> Mathew Yeates wrote:
>
>> Hi
>