On Mon, Jan 14, 2013 at 2:57 PM, Benjamin Root wrote:
> I am also +1 on the idea of having a filled() and filled_like() function (I
> learned a long time ago to just do a = np.empty() and a.fill() rather than
> the multiplication trick I learned from Matlab). However, the collision
> with the masked array module's filled() is a concern.
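The empty-plus-fill idiom mentioned above can be sketched as follows (filled()/filled_like() are the proposed, not-yet-existing names; the snippet uses only current NumPy):

```python
import numpy as np

# Fill a new array in place, with no temporary allocation:
a = np.empty((2, 3))
a.fill(7.0)

# The Matlab-style multiplication trick allocates a ones() temporary first:
b = 7.0 * np.ones((2, 3))

assert (a == b).all()
```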
On Sat, Jan 5, 2013 at 1:03 PM, Robin wrote:
>>> If not, is there a reasonable way to build numpy.linalg such that
>>> it interfaces with MKL correctly ?
I managed to get this to work in the end. Since Matlab uses MKL with
the ILP64 interface it is not possible to get Numpy to work with
Enthought EPD, which is built against MKL -
but I think the problem is that Intel provides two different interfaces
- ILP64 with 64 bit integer indices and LP64 with 32 bit integers.
Matlab links against the ILP64 version, whereas Enthought uses the LP64
version - so they are still incompatible.
Cheers
>
>>>> c[574519:].sum()
> 356.0
>>>> c[574520:].sum()
> 0.0
>
> is the case on Linux 64-bit; is it the case on Windows 64?
Yes - I get exactly the same numbers in 64 bit windows with 1.6.1.
Cheers
Robin
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], dtype=uint8)
In [41]: a[581350:,0].sum()
Out[41]: 0
Cheers
Robin
>
> Thanks,
> David
>
> On Mon, Jan 23, 2012 at 05:23:28AM -0500, David Warde-Farley wrote:
>> A colleague has run into this weird behaviour with N
,
[3, 3, 4, 4]],
[[1, 1, 2, 2],
[1, 1, 2, 2],
[3, 3, 4, 4],
[3, 3, 4, 4]]])
On Dec. 3, 2011, at 12:50PM, Derek Homeier wrote:
> On 03.12.2011, at 6:22PM, Robin Kraft wrote:
>
> > That does repeat the elements, but doesn't get them into th
ing to get there, but it seems
doable.
Anyone know what combination of manipulations would work with the result of
np.tile?
-Robin
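For the record, one combination that produces exactly the block-expanded array quoted above is repeat() along both axes rather than np.tile:

```python
import numpy as np

# Expand each element into a 2x2 block:
a = np.array([[1, 2], [3, 4]])
b = a.repeat(2, axis=0).repeat(2, axis=1)
# b is [[1, 1, 2, 2],
#       [1, 1, 2, 2],
#       [3, 3, 4, 4],
#       [3, 3, 4, 4]]
```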
On Dec 3, 2011, at 11:05 AM, Olivier Delalleau wrote:
> You can also use numpy.tile
>
> -=- Olivier
>
> 2011/12/3 Robin Kraft
>> Thanks
timeit np.kron(a, np.ones((2, 2), dtype='uint8'))
1 loops, best of 3: 27.8 s per loop
In this case repeat() peaked at about 1gb of ram usage while np.kron hit about
1.7gb.
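A quick check that the two approaches agree (np.kron with a ones block reproduces the repeat() upsampling, at the cost of the extra temporaries behind the higher peak memory mentioned above):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype='uint8')
via_kron = np.kron(a, np.ones((2, 2), dtype='uint8'))
via_repeat = a.repeat(2, axis=0).repeat(2, axis=1)
assert (via_kron == via_repeat).all()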
Thanks again Warren. I'd tried way too many variations on reshape and rollaxis,
and should have come to the Numpy
some combination of np.resize or np.repeat and reshape + rollaxis
would do the trick, but I'm at a loss.
Many thanks!
-Robin
ded CPython situations, see for
example pymex [1] which embeds Python + Numpy in a Matlab mex file and
works really well.
This seems to a be a problem specific to Jepp.
Just wanted to mention it in case it puts someone off trying something
unnecessarily in the future.
Cheers
Robin
[1] https://gi
nd it by following the symlink that is the
gfortran command and looking for an appropriate lib/ directory near
the target of that.
Cheers
Robin
On Wed, Jul 20, 2011 at 9:02 PM, Brandt Belson wrote:
> Hello,
> I'm struggling to create openmp subroutines. I've simplified the
: 0.282383
If I add a 1s sleep in the inner product function shows a much more
significant improvement (ie if it were a longer computation):
Not threaded: 50.6744170189
Using 8 processes: 8.152393
Still not quite linear but certainly an improvement.
Cheers
Robin
On Thu, Jun 16, 2011 at 9:19
.
Cheers
Robin
On Thu, Jun 16, 2011 at 9:05 PM, Brandt Belson wrote:
> Hi all,
> Thanks for the replies. As mentioned, I'm parallelizing so that I can take
> many inner products simultaneously (which I agree is embarrassingly
> parallel). The library I'm writing asks the user to
le before the fork
(so before pool = Pool() ) and then all the subprocesses can access it
without any pickling required. ie
myutil.data = listofdata
def mymapfunc(i):
    return mydatafunc(myutil.data[i])
# create the pool after the data and function exist, so workers inherit both
p = multiprocessing.Pool(8)
p.map(mymapfunc, range(len(myutil.data)))
Actually that w
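A self-contained sketch of the fork-before-Pool pattern described above (the names DATA and mymapfunc here are illustrative, not from the original code):

```python
import multiprocessing

# Module-level data assigned before the Pool is created; on platforms that
# fork (Linux/macOS), workers inherit it with no pickling required.
DATA = list(range(10))

def mymapfunc(i):
    # Each worker reads the inherited module-level list directly.
    return DATA[i] * 2

if __name__ == "__main__":
    with multiprocessing.Pool(4) as p:
        results = p.map(mymapfunc, range(len(DATA)))
```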
I think numpy doesn't use umfpack. scipy.sparse used to, but now the
umfpack stuff has been moved out to a scikit.
So you probably won't see anything about those libraries, but if you
install scikits.umfpack and it works then you must be linked
correctly.
Cheers
Robin
On Fri, Feb 18,
Git is having some kind of major outage:
http://status.github.com/
"The site and git access is unavailable due to a database failure. We're
researching the issue."
On Nov 14, 2010, at 3:29 PM, numpy-discussion-requ...@scipy.org wrote:
>
> Message: 5
> Date: Sun, 14 Nov 2010 13:29:03 -0700
> Fr
"x64 Compilers and
Tools". For example, in the SDK installer above:
On screen "Installation Options"
Select "Developer Tools"->"Visual C++ Compilers".
This item has the Feature Description "Install the Visual C++ 9.0
Compilers. These compilers allow yo
On Wed, Aug 18, 2010 at 8:20 AM, Sturla Molden wrote:
> On 18 Aug 2010 at 08:19, Martin Raspaud wrote:
>
>> Once upon a time, when my boss wanted me to use matlab, I found myself
>> implementing a python interpreter in matlab...
>>
>
> There are just two sane solutions for Matlab: Either emb
et any significant boost in the
> performance of the code.
>
Try adding
-DF2PY_REPORT_ON_ARRAY_COPY
to the f2py command line.
This will cause f2py to report any array copies. If any of the
types/ordering of the arrays don't match f2py will silently make a copy -
this can really affec
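On the Python side, the copies that flag reports can usually be avoided by passing arrays that already match the Fortran signature; a minimal sketch:

```python
import numpy as np

# f2py copies any argument whose dtype or memory order doesn't match the
# Fortran signature. Fortran-ordered arrays of the right dtype avoid that:
a = np.zeros((3, 3))                        # C-ordered float64 by default
f = np.asfortranarray(a, dtype=np.float64)  # column-major, ready for f2py
assert f.flags['F_CONTIGUOUS']
```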
, 1, 0, 1, 1],
> [0, 0, 1, 0, 0, 0]])
>
> y = x.reshape(3,2,3,2)
> y2 = y.sum(axis=3).sum(axis=1)
This is perfect - and so fast! Thanks! Now I just have to understand why it
works ...
Can anyone recommend a tutorial on working with (slicing, reshaping, etc.)
multi-di
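The reshape trick above works by splitting each axis into a (block index, within-block index) pair and then summing over the within-block axes; a small worked example:

```python
import numpy as np

x = np.arange(36).reshape(6, 6)
y = x.reshape(3, 2, 3, 2)        # axes: (row block, row in block, col block, col in block)
y2 = y.sum(axis=3).sum(axis=1)   # shape (3, 3): sum of each 2x2 block
assert y2[0, 0] == 0 + 1 + 6 + 7  # top-left 2x2 block of x
```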
ng akin to np.hstack to create this array, so it isn't
square and can't be reshaped appropriately (np.tile(np.arange(2**2).reshape(2,
2), 4)):
array([[0, 1, 0, 1, 0, 1, 0, 1],
[2, 3, 2, 3, 2, 3, 2, 3]])
Inefficient sample code below. Advice greatly appreciated!
-Robin
import nu
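For reference, this reproduces the problematic intermediate quoted above: np.tile on its own only concatenates copies along the last axis.

```python
import numpy as np

a = np.tile(np.arange(2**2).reshape(2, 2), 4)
assert a.tolist() == [[0, 1, 0, 1, 0, 1, 0, 1],
                      [2, 3, 2, 3, 2, 3, 2, 3]]
```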
worry about 2008.
http://www.microsoft.com/downloads/details.aspx?familyid=7B0B0339-613A-46E6-AB4D-080D4D4A8C4E&displaylang=en
Cheers
Robin
ng
scipy tests)
Cheers
Robin
pe[1]))?
Actually that and the problems with scipy.sparse (spsolve doesn't
work) cover all of the errors I'm seeing... (I detailed those in a
separate mail to the scipy list).
Cheers
Robin
Hi,
I am having some problems with win64 with all my tests failing.
I installed amd64 Python from Python.org and numpy and scipy from
http://www.lfd.uci.edu/~gohlke/pythonlibs/
I noticed that on windows sys.maxint is the 32bit value (2147483647
hat were causing the segfaults. Makes me think of the
old phrase - problem is between keyboard and chair.
Cheers
Robin
release2009a/win64.html
So I thought on the matlab/mex side 2008 should be fine, and I thought
since Python is built with 2008 that should also be OK. But obviously
something isn't!
Cheers
Robin
ython-into-matlab-mex-win64
Anyway I'm completely in the dark but wondered if some of the experts
on here would be able to spot something (perhaps to do with
incompatible C runtimes - I am not sure what runtime Python is built
with but I thought it was VS 2008).
Che
To build against the python.org 2.5 you need to use the older gcc:
export CC=/usr/bin/gcc-4.0
export CXX=/usr/bin/g++-4.0
should do it. By default snow leopard uses 4.2 now, which doesn't
support the -Wno-long-double option used when building python.
Cheers
Robin
On Mon, Apr 19, 2010 at
You can build numpy against Accelerate through macports by specifying
the +no_atlas variant.
Last time I tried I ran into this issue:
http://trac.macports.org/ticket/22201
but it looks like it should be fixed now.
Cheers
Robin
On Mon, Jan 18, 2010 at 8:15 PM, Mark Lescroart wrote:
>
My understanding was that 2.6/3.1 will never be buildable as an arch
selectable universal binary interpreter (like the apple system python)
due to this issue:
http://bugs.python.org/issue6834
I think this is only being fixed in 2.7/3.2 so perhaps from then
Python will distribute selectable universal bu
Hi,
Could we have a ma-aware numpy.ma.log2 please, similar to np.ma.log
and np.ma.log10?
I think it should be as simple as the patch below but perhaps I've
missed something:
Thanks,
Robin
--- core.py.orig	2009-12-13 15:14:14.000000000 +0000
+++ core.py	2009-12-13 15:14:53.
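np.ma.log2 did end up in numpy.ma; like np.ma.log and np.ma.log10 it masks out-of-domain values instead of producing warnings or NaNs. A quick check:

```python
import numpy as np

x = np.ma.array([1.0, 8.0, -2.0])
y = np.ma.log2(x)
assert y[0] == 0.0 and y[1] == 3.0
assert np.ma.is_masked(y[2])   # negative input comes back masked
```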
I believe the current default was chosen deliberately. I
think it is the view of the numpy developers that the n divisor has
more desirable properties in most cases than the traditional n-1 -
see this paper by Travis Oliphant for details:
http://hdl.handle.net/1877/438
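The n-1 divisor is still available through the ddof argument, so the choice of default is the only question:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
assert x.var() == 1.25                          # default divisor n (ddof=0)
assert abs(x.var(ddof=1) - 5.0 / 3.0) < 1e-12   # traditional n-1 divisor
```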
Cheers
Robin
On Fri, Nov 13, 2009 at 7:48 PM, Sturla Molden wrote:
> Robin skrev:
>> I had assumed when matlab unloads the mex function it would also
>> unload python - but it looks like other dynamic libs pulled in from
>> the mex function (in this case python and in turn nump
On Fri, Nov 13, 2009 at 6:50 PM, Pauli Virtanen wrote:
> Fri, 13 Nov 2009 17:23:19 +0000, Robin wrote:
>> I'm trying to embed Python in a MATLAB mex file. I've been coming under
>> some pressure to make my Python code available to my MATLAB colleagues
>> s
e possible to link python statically to my mex
function, so it really is unloaded when the mex function is... but I'm
getting a bit out of my depth with linker options, and I guess numpy
is always loaded dynamically anyway and will stick around.
Easy enough to work around it anyway - but just
>> I'm trying to embed Python in a MATLAB mex file. I've been coming
>> under some pressure to make my Python code available to my MATLAB
>> colleagues so I am trying to come up with a relatively general way of
>> calling numerical python code from Matlab.
I get a similar but different error tryin
I forgot to add it doesn't happen if Py_Finalize isn't called - if I
take that out and just let the OS/matlab unload the mex function then
I can run it many times without the error, but it does leak a bit of
memory each time.
Cheers
Robin
On Fri, Nov 13, 2009 at 5:23 PM, Robin w
such a big problem - I will just avoid unloading the
mex function (with clear function). But I thought it might be
indicative of a memory leak or some other problem since I think in
theory it should work (It does if numpy isn't imported).
vely, it may be possible to use some other suffix than .f,
> but I've not tried that.
>
> Then f2py file.f
That's great thanks very much! It's more or less exactly what I was hoping for.
I wonder if it's possible to get distutils
was
still supposed to be working. Certainly more traffic more recently
than the google group.
Cheers
Robin
I just tried to send the message below to f2py-users -
f2py-us...@cens.ioc.ee, but delivery failed.
Not sure where else to report this so hopefully here is ok.
Cheers
Robin
-- Forwarded message --
From: Mail Delivery Subsystem
Date: Tue, Nov 3, 2009 at 9:40 PM
Subject
Just noticed this is only supported on linux - sorry for the noise
(not having a very good day today!)
Robin
On Tue, Nov 3, 2009 at 6:47 PM, Robin wrote:
> Hi,
>
> When I try to build a fortran module with f2py from '1.4.0.dev7618'
> with gfortran 4.2.3 from att.com and
py tickets? Or is there an obvious fix.
Inspecting gfint.so shows the same symbols for both architectures, and
_on_exit is listed there with a U which I guess means undefined.
Cheers
Robin
On Tue, Nov 3, 2009 at 6:14 PM, Robin wrote:
> After some more pootling about I figured out a lot of the performance
> loss comes from using 32 bit integers by default when compiling 64 bit.
> I asked this question on stackoverflow:
> http://stackoverflow.com/questions/1668899/fortr
fortran with f2py from python in a way that
doesn't require the code to be changed depending on platform?
Or should I just pack it all in and use weave?
Robin
On Tue, Nov 3, 2009 at 4:29 PM, Robin wrote:
> Hi,
>
> I'm not sure if this is of much interest but it's been r
I couldn't find a way to get f2py to force 32 bit output. But the
performance was more or less the same (always several times slower than
the 32 bit att gfortran).
Any advice appreciated.
Cheers
Robin
subroutine bincount (x,c,n,m)
implicit none
integer, intent(in) :: n,m
integer, dimension(0:n-1), intent(in) :: x
integer, dimension(0:m-1), intent(out) :: c
lternative pythonw.c in the
ticket - but it won't be fixed in a release until 2.7.
Cheers
Robin
n
well alone as before.
Cheers
Robin
Thanks...
On Wed, Oct 21, 2009 at 11:41 AM, David Cournapeau
wrote:
> Robin wrote:
>>
>> Thanks - that looks ideal. I take it $HOME/.local is searched first so
>> numpy will be used fromt here in preference to the system numpy.
>>
>
> Yes, unless framework-ena
On Wed, Oct 21, 2009 at 10:28 AM, David Cournapeau
wrote:
> Robin wrote:
>> Hi,
>>
>> I was wondering what the recommended way to run numpy/scipy on mac os
>> x 10.6 is. I understood previously it was recommended to use
>> python.org python and keep everythin
(ie how could I tell the wx installer to install into the
virtualenv).
I was wondering what others do?
Cheers
Robin
Forgot to include the fortran code used:
jm-g26b101:fortran robince$ cat test.f95
subroutine bincount (x,c,n,m)
implicit none
integer, intent(in) :: n,m
integer, dimension(0:n-1), intent(in) :: x
integer, dimension(0:m-1), intent(out) :: c
integer :: i
c = 0
do i = 0, n-1
    c(x(i)) = c(x(i)) + 1
end do
end subroutine
).
Cheers
Robin
irst and then passed to the fortran subroutine - so I
wondered how f2py creates it (I think I traced it to
array_from_pyobj() but I couldn't really understand what it was doing
or whether it would always be zeros). I guess as you say though it is
always safer
2.5
64bit, 2.6 64bit). Is this right, or would different binaries be
required for XP, Vista, 7 etc. ?
Can anyone point me to a smallish Python package that includes fortran
code in this way that I could look to for inspiration?
Cheers
Robin
n c
which now performs a bit better than np.bincount, but still
significantly slower than the fortran. Is this to be expected or am I
missing something in the cython?
In [14]: timeit ctest.bincount(x,1024)
100 loops, best of 3: 3.31 ms per loop
Cheers
Robin
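The pure-numpy baseline being benchmarked against is np.bincount, which counts occurrences of each non-negative integer value:

```python
import numpy as np

x = np.array([0, 1, 1, 3, 3, 3])
c = np.bincount(x, minlength=1024)   # pad the result out to 1024 bins
assert c[:4].tolist() == [1, 2, 0, 3]
assert c.size == 1024 and c.sum() == x.size
```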
; in
VMware or Virtualbox or some other virtualisation software. With
recent CPU's there is very little performance penalty (running 32bit
on a 64bit host) and it can be very convenient (it is easy to map
network drives between guest and host which perform very well since
the '
etter for one job or the other... so some sort of
primer would be really handy.
Cheers
Robin
On Sun, Jun 7, 2009 at 12:53 AM, Robin wrote:
> I haven't seen this before - is it something wrong with my build or
> the current svn state? I am using macports python 2.5.4 on os x 10.5.7
Hmmm... after rebuilding from the same version the problem seems to
have gone away... sorry fo
rong with my build or
the current svn state? I am using macports python 2.5.4 on os x 10.5.7
Cheers
Robin
as far as I know there isn't really a good answer. There is xcorr in
pylab, but it isn't vectorised like xcorr from matlab...
Cheers
Robin
54650e-01 8.936401263709e-01 8.177351741233e-01 ;
7.092517323839e-01 9.458774967489e-01 8.595104463863e-01 ] ],[ 4 3 2
])
Hope someone else finds it useful.
Cheers
Robin
On Tue, May 12, 2009 at 2:12 PM, Robin wrote:
> [crossposted to numpy-discussion and mlabwrap-user]
>
> Hi,
>
> I
ke this in Matlab and I
must admit I found it pretty painful, so I hope it can be useful to
someone else!
I will try and do one for Python for copying and pasting to Matlab,
but I'm expecting that to be a lot easier!
Cheers
Robin
array.m
Description: Binary data
_
2009/4/29 Stéfan van der Walt :
> 2009/4/29 Robin :
>> I have been using seterr to try to catch where Nans are appearing in
>> my analysis.
>>
>> I used all: 'warn' which worked the first time I ran the function, but
>> as specified in the documentation i
, without losing my
session?
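The warn-only-once behaviour comes from Python's warnings machinery, not from seterr itself: seterr(all='warn') raises RuntimeWarning, and the default warning filter shows each one only once per location. Resetting the filter shows repeats without restarting the session; a sketch:

```python
import warnings
import numpy as np

old = np.seterr(all='warn')           # NaN/inf producers raise RuntimeWarning
with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter('always')   # defeat the once-per-location default
    np.array([1.0]) / np.array([0.0])
    np.array([1.0]) / np.array([0.0])
assert len(w) == 2                    # every occurrence is now reported
np.seterr(**old)
```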
Thanks
Robin
se).
There is some information about this on the wiki:
http://scipy.org/ParallelProgramming
Cheers
Robin
On Thu, Mar 5, 2009 at 9:15 PM, Stéfan van der Walt wrote:
> Hi Robin
>
> 2009/3/5 Robin :
>> On Thu, Mar 5, 2009 at 10:57 AM, Robin wrote:
>>> On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote:
>>>> Hi,
>>>>
>>>> I have an indexing proble
On Thu, Mar 5, 2009 at 10:57 AM, Robin wrote:
> On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote:
>> Hi,
>>
>> I have an indexing problem, and I know it's a bit lazy to ask the
>> list, sometime when people do interesting tricks come up so I hope no
>> one minds
On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote:
> Hi,
>
> I have an indexing problem, and I know it's a bit lazy to ask the
> list, sometime when people do interesting tricks come up so I hope no
> one minds!
>
> I have a 2D array X.shape = (a,b)
>
> and I want to c
On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote:
> Hi,
>
> I have an indexing problem, and I know it's a bit lazy to ask the
> list, sometime when people do interesting tricks come up so I hope no
> one minds!
>
> I have a 2D array X.shape = (a,b)
>
> and I want to c
peats I'm not
sure if I can do it without loops.
Thanks for any help
Cheers
Robin
any problem (except for the machine running out of memory :)
Cheers
Robin
>
> Cheers,
> Todd
>
t sure).
Similarly with GotoBLAS, but I'm not sure if numpy builds with
that, so maybe we should take that reference out.
Robin
Hi,
I made some changes to the ParallelProgramming wiki page to outline
use of the (multi)processing module as well as the threading module.
I'm very much not an expert on this - just researched it for myself,
so please feel free to correct/ extend/ delete as appropriate.
ic - the profiler is about the only thing I
miss from Matlab... Especially after an afternoon of refactoring
everything into tiny functions to get anything useful out of the
normal profiler and see where the bottleneck in my code was.
Robin
t... So it's kind of
matrix multiplication, but I want another vector dot'ed into the sum.
How can I implement this efficiently in numpy/scipy? I'm having
trouble breaking it down to vectorised operations. I suppose I could
try a loop in Cython but I was hoping there might be a tr
>
> which is kind of cryptic. (If I remove ndmin, numpy still complains.)
I think r_[x[0],x] will do you what you want...
Robin
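np.r_ concatenates along the first axis, so that expression prepends a copy of the first element:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.r_[x[0], x]
assert y.tolist() == [1, 1, 2, 3]
```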
ack and
atlas)... You could use lapack/atlas if you wanted but installation is
probably simpler following the instructions on the wiki to use the
apple one...
Robin
On Wed, Jun 4, 2008 at 10:59 AM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
> Robin wrote:
>> I think theres much less chance of problems using the system python
>> for system things and leaving it well alone - and installing the
>> python.org for everyday use. The onl
he system python
for system things and leaving it well alone - and installing the
python.org for everyday use. The only problem with this is that the
system python works with dtrace while the normal one doesn't...
Cheers
Robin
>
> I guess I always feel a sense of uncertainty with having
On Fri, May 30, 2008 at 12:57 AM, Robin <[EMAIL PROTECTED]> wrote:
> You are indexing here with a 1d list [0,1]. Since you don't provide a
> column index you get rows 0 and 1.
> If you do a[ [0,1] , [0,1] ] then you get element [0,0] and element [0,1].
Whoops - you get element [0,0] and element [1,1] in that case.
dexing here with two scalars, 0,1.
> >>> a[[0,1]] # shape([[0,1]])= (1, 2)
> array([[ 0.87059263, 0.76795743, 0.13844935, 0.69040701, 0.92015062],
> [ 0.97313123, 0.85822558, 0.8579044 , 0.57425782, 0.57355904]])
You are indexing here
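The three indexing forms under discussion, side by side:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
assert a[0, 1] == 1                          # two scalars: a single element
assert a[[0, 1]].shape == (2, 4)             # one list: rows 0 and 1
assert a[[0, 1], [0, 1]].tolist() == [0, 5]  # paired lists: elements (0,0) and (1,1)
```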
is specified, every occurrence of i at a position p contributes
weights[p] instead of 1.
See also: histogram, digitize, unique.
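A small example of the weights behaviour described in that docstring:

```python
import numpy as np

i = np.array([0, 1, 1, 2])
w = np.array([0.5, 1.0, 2.0, 4.0])
# Each occurrence of value i at position p contributes weights[p], not 1:
assert np.bincount(i, weights=w).tolist() == [0.5, 3.0, 4.0]
```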
Robin
If you haven't seen it, this page gives useful examples of methods to
speed up python code (including weave.blitz), which as Hoyt says would
be ideal in this case:
http://scipy.org/PerformancePython
Robin
Also you could use xrange instead of range...
Again, not sure of the size of the effect but it seems to be
recommended by the docstring.
Robin
e
(although I'm not sure if that's quicker)
u *= expfac_m
u += (1-expfac_m)*aff_input.. etc.
Of course you can also take the (1-)'s outside of the loop although
again I'm not sure how much difference it would make.
So sorry I can't give any concrete advice but I hope I
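The in-place update suggested above can be sketched like this (expfac_m and aff_input are the names from the thread; the values are illustrative):

```python
import numpy as np

expfac_m = 0.9
aff_input = np.full(4, 2.0)
u = np.ones(4)
u *= expfac_m                      # scale in place, no temporary for u
u += (1 - expfac_m) * aff_input    # exponential smoothing toward the input
assert abs(u[0] - 1.1) < 1e-12
```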
s
timestep so you can store the full previous state and reference it in
the update function).
Just a suggestion - it was much more efficient for me to do it this
way with integrate and fire type neural networks... Also I hope I've
clearly expressed what I mean - it's getting late
activestate.com/ASPN/Cookbook/Python/Recipe/511474
which does wonders for my code. I was wondering if this function
should be included in Numpy as it seems to provide an important
feature, or perhaps an entry on the wiki (in Cookbook section?)
Thanks,
Robin
On Fri, Mar 7, 2008 at 9:06 AM, Fernando Perez <[EMAIL PROTECTED]> wrote:
> Hi Robin,
>
> As Ondrej pointed out, the expectation is a full-time commitment to
> the project. Other than that it sounds like you might be able to
> participate, and it's worth noting that
se projects (and perhaps some others) so that
interested students like myself can apply.
Thanks,
Robin
PS
My nick on IRC is 'thrope' and I try to hang out in there most of the
time I am online. I am also on Google Talk at this email address.
with 1.0.5.dev4786. I think the bug is that
the b assignment should also fail. They both fail (as I think they
should) if you take a as an array with more than one element.
I think the array constructor expects lists of numbers, not of arrays
etc. To do what you
ata, so v/span is rescaled to have maximum value
1, so (v * 127. / span) is the (signed) input vector rescaled to have
values in the range [-127,127]. Adding 128 makes it unsigned again in
[1,255].
I'm not sure why they would be doing this - to me it looks like they
might be using Image as a convenient way
you are working interactively in Ipython, you can do
%who ndarray
or
%whos ndarray
to get a list of arrays.
It might be possible to get this functionality within a script/program
through the ipython api, but I'm afraid I don't know much about that.
Cheers,
Robin
I do get the problem with a recent(ish) svn, on OS X 10.5.1, python
2.5.1 (from python.org):
In [76]: A = array(['a','aa','b'])
In [77]: B = array(['d','e'])
In [78]: A.searchsorted(B)
Out[78]: array([3, 0])
In [79]: numpy.__version__
Out[79]: '1.0.5.dev4722'
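For comparison, the behaviour once this was fixed: both 'd' and 'e' sort after every element of A, so both insertion points should be 3 (the dev build above wrongly returned [3, 0]):

```python
import numpy as np

A = np.array(['a', 'aa', 'b'])
B = np.array(['d', 'e'])
assert A.searchsorted(B).tolist() == [3, 3]
```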
the test code.
Mac OS X 10.5.1
Python 2.5.1 (not apple one)
Numpy 1.0.5.dev4722
Robin
nction_base.py",
line 398, in asarray_chkfinite
raise ValueError, "array must not contain infs or NaNs"
ValueError: array must not contain infs or NaNs
--
Ran 713 tests in 0.697s
FAILED (errors=1)
Out[3]:
Versions/2.5/lib/python2.5/site-packages/numpy/lib/function_base.py",
line 398, in asarray_chkfinite
raise ValueError, "array must not contain infs or NaNs"
ValueError: array must not contain infs or NaNs
--
Ran 692
I can confirm the same behaviour with numpy '1.0.4.dev4271' on OS X
10.4 with python 2.5.1 (installer from python.org).
For me the memory used by the python process grows at about 1MB/sec. The
memory isn't released when the loop is canceled.