On 15 Feb 2016, at 6:55 pm, Jeff Reback wrote:
>
> https://github.com/pydata/pandas/releases/tag/v0.18.0rc1
Ah, think I forgot about the ‘releases’ pages.
Built on OS X 10.10 + 10.11 with python 2.7.11, 3.4.4 and 3.5.1.
17 errors in the test suite + 1 failure with python2.7 only; I can send
you
On 14 Feb 2016, at 1:53 am, Jeff Reback wrote:
>
> I'm pleased to announce the availability of the first release candidate of
> Pandas 0.18.0.
> Please try this RC and report any issues here: Pandas Issues
> We will be releasing officially in 1-2 weeks or so.
>
Thanks, looking forward to give t
> On 31 Jan 2016, at 9:48 am, Sebastian Berg wrote:
>
> On Sa, 2016-01-30 at 20:27 +0100, Derek Homeier wrote:
>> On 27 Jan 2016, at 1:10 pm, Sebastian Berg <
>> sebast...@sipsolutions.net> wrote:
>>>
>>> On Mi, 2016-01-27 at 11:19 +, Nadav Hore
On 27 Jan 2016, at 1:10 pm, Sebastian Berg wrote:
>
> On Mi, 2016-01-27 at 11:19 +, Nadav Horesh wrote:
>> Why is the dot function/method slower than @ on python 3.5.1? Tested
>> from the latest 1.11 maintenance branch.
>>
>
> The explanation I think is that you do not have a blas optimizat
On 27 Jan 2016, at 2:58 AM, Charles R Harris wrote:
>
> FWIW, the maintenance/1.11.x branch (there is no tag for the beta?) builds
> and passes all tests with Python 2.7.11
> and 3.5.1 on Mac OS X 10.10.
>
>
> You probably didn't fetch the tags, if they can't be reached from the branch
> hea
Hi Chuck,
> I'm pleased to announce that Numpy 1.11.0b1 is now available on sourceforge.
> This is a source release as the mingw32 toolchain is broken. Please test it
> out and report any errors that you discover. Hopefully we can do better with
> 1.11.0 than we did with 1.10.0 ;)
the tarball
On 16 Dec 2015, at 8:22 PM, Matthew Brett wrote:
>
>>> In [4]: %time testx = np.linalg.solve(testA, testb)
>>> CPU times: user 1min, sys: 468 ms, total: 1min 1s
>>> Wall time: 15.3 s
>>>
>>>
>>> so, it looks like you will need to buy a MKL license separately (which
>>> makes sense for a commerc
On 3 Nov 2015, at 6:03 pm, Chris Barker - NOAA Federal
wrote:
>
> I was more aiming to point out a situation where NumPy's text file reader
> was significantly better than the Pandas version, so we would want to make
> sure that we properly benchmark any significant changes to NumPy's text
On 9 Apr 2015, at 9:41 pm, Andrew Collette wrote:
>
>> Congrats! Also btw, you might want to switch to a new subject line format
>> for these emails -- the mention of Python 2.5 getting hdf5 support made me
>> do a serious double take before I figured out what was going on, and 2.6 and
>> 2.7 wil
On 26 Oct 2014, at 02:21 pm, Eelco Hoogendoorn
wrote:
> I'm not sure why the memory doubling is necessary. Isn't it possible to
> preallocate the arrays and write to them? I suppose this might be inefficient
> though, in case you end up reading only a small subset of rows out of a
> mostly corr
Hi Ariel,
> I think that the docstring in 1.9 is fine (has the 1.9 result). The docs
> online (for all of numpy) are still on version 1.8, though.
>
> I think that enabling the old behavior might be useful, if only so that I can
> write code that behaves consistently across these two versions
On 4 Oct 2014, at 08:37 pm, Ariel Rokem wrote:
> >>> import numpy as np
> >>> np.__version__
> '1.9.0'
> >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float))
> [array([[ 2. ,  2. , -1. ],
>         [ 2. ,  2. , -1. ]]),
>  array([[-0.5,  2.5,  5.5],
>         [ 1. ,  1. ,  1. ]])]
>
> On the o
On 26 Aug 2014, at 09:05 pm, Adrian Altenhoff
wrote:
>> But you are right that the problem with using the first_values, which should
>> of course be valid,
>> somehow stems from the use of usecols, it seems that in that loop
>>
>>for (i, conv) in user_converters.items():
>>
>> i in user_c
Hi Adrian,
>> not sure whether to call it a bug; the error seems to arise before reading
>> any actual data
>> (even on reading from an empty string); when genfromtxt is checking the
>> filling_values used
>> to substitute missing or invalid data it is apparently testing on default
>> testing v
Hi Adrian,
> I tried to load data from a csv file into numpy using genfromtxt. I need
> only a subset of the columns and want to apply some conversions to the
> data. attached is a minimal script showing the error.
> In brief, I want to load columns 1,2 and 4. But in the converter
> function for t
On 5 Aug 2014, at 11:27 pm, Matthew Brett wrote:
> OSX wheels built and tested and uploaded OK :
>
> http://wheels.scikit-image.org
>
> https://travis-ci.org/matthew-brett/numpy-atlas-binaries/builds/31747958
>
> Will test against the scipy stack later on today.
Built and tested against the F
On 29 Jul 2014, at 02:43 pm, Robert Kern wrote:
> On Tue, Jul 29, 2014 at 12:47 PM, José Luis Mietta
> wrote:
> Robert, thanks for your help!
>
> Now I have:
>
> * Q nodes (Q stick-stick intersections)
> * a list 'NODES'=[(x,y,i,j)_1, …, (x,y,i,j)_Q], where each element
> (x,y,i,j) re
On 18 Jul 2014, at 01:07 pm, josef.p...@gmail.com wrote:
> Are there problems with sending out the messages with the mailing lists?
>
> I'm getting some replies without original messages, and in some threads I
> don't get replies, missing part of the discussions.
>
There seem to be problems with
On 30.06.2014, at 23:10, Jeff Reback wrote:
> In pandas 0.14.0, generic whitespace IS parsed via the c-parser, e.g.
> specifying '\s+' as a separator. Not sure when you were playing last with
> pandas, but the c-parser has been in place since late 2012. (version 0.8.0)
>
> http://pandas-docs.g
On 30 Jun 2014, at 04:56 pm, Nathaniel Smith wrote:
>> A real need, which had also been discussed at length, is a truly performant
>> text IO
>> function (i.e. one using a compiled ASCII number parser, and optimally also
>> a more
>> memory-efficient one), but unfortunately all people intereste
On 30 Jun 2014, at 04:39 pm, Nathaniel Smith wrote:
> On Mon, Jun 30, 2014 at 12:33 PM, Julian Taylor
> wrote:
>> genfromtxt and loadtxt need an almost full rewrite to fix the botched
>> python3 conversion of these functions. There are a couple threads
>> about this on this list already.
>> The
Hi all,
I was just having a new look into the mess that is, IMO, the support for
automatic line-ending recognition in genfromtxt and, more generally, in the
Python file openers. I am glad at least reading gzip files is no longer
entirely broken in Python 3, but actually detecting in particular “old
On 13.11.2013, at 3:07AM, Charles R Harris wrote:
> Python 2.4 fixes at https://github.com/numpy/numpy/pull/4049.
Thanks for the fixes; builds under OS X 10.5 now as well. There are two test
errors (or maybe a nose problem?):
NumPy version 1.7.2rc1
NumPy is installed in
/sw/src/fink.build/r
Hi,
On 03.11.2013, at 5:42PM, Julian Taylor wrote:
> I'm happy to announce the release candidate of Numpy 1.7.2.
> This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
on OS X 10.5, build and tests succeed for Python 2.5-3.3, but Python 2.4.4
fails with
/sw/bin/python2.4 s
On 23.09.2013, at 7:03PM, Charles R Harris wrote:
> I have gotten no feedback on the removal of the numarray and oldnumeric
> packages. Consequently the removal will take place on 9/28. Scream now or
> never...
The only thing I'd care about is the nd_image subpackage, but as far as I can
see,
On 05.06.2013, at 9:52AM, Ted To wrote:
>> From the list archives (2011), I noticed that there is a bug in the
> python gzip module that causes genfromtxt to fail with python 2 but this
> bug is not a problem for python 3. When I tried to use genfromtxt and
> python 3 with a gzip'ed csv file, I
On 10.05.2013, at 2:51PM, Daniele Nicolodi wrote:
> If you wish to format numpy arrays preceding them with a variable name,
> the following is a possible solution that gives the same formatting as
> in your example:
>
> import numpy as np
> import sys
>
> def format(out, v, name):
>header =
On 10.05.2013, at 1:20PM, Sudheer Joseph wrote:
> If some one has a quick way I would like to learn from them or get a
> referecence
> where the formatting part is described which was
> my intention while posting here. As I have been using fortran I just tried
> to use it to explain my requir
Dear Sudheer,
On 07.05.2013, at 11:14AM, Sudheer Joseph wrote:
> I need to print few arrays in a tabular form for example below
> array IL has 25 elements, is there an easy way to print this as 5x5 comma
> separated table? in python
>
> IL=[]
> for i in np.arange(1,bno+1):
>
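One way to get the 5x5 comma-separated table asked about here is to reshape and let savetxt do the formatting (a minimal sketch; the array below is a stand-in for IL, and the integer format is an assumption):

```python
import sys
import numpy as np

# Stand-in for the 25-element array IL from the question
IL = np.arange(1, 26)

# Reshape to 5x5 and let savetxt handle the comma-separated layout
np.savetxt(sys.stdout, IL.reshape(5, 5), fmt='%d', delimiter=', ')
```

The first line written is `1, 2, 3, 4, 5`; for floats, swap `fmt='%d'` for something like `fmt='%.3f'`.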
On 12.04.2013, at 6:01PM, Chris Barker - NOAA Federal
wrote:
> Maybe we need a short-hand for "clean up the previous parts of the
> thread to show only what you need to make your post relevant and
> clear"
>
+1
> Paul Ivanof wrote:
>> ... But I just came across a wonderfully short signature fr
On 12.04.2013, at 2:14AM, Charles R Harris wrote:
> On Thu, Apr 11, 2013 at 5:49 PM, Colin J. Williams
> wrote:
> On 11/04/2013 7:20 PM, Paul Hobson wrote:
>> On Wed, Apr 3, 2013 at 4:28 PM, Doug Coleman wrote:
>>
>> Also, gmail "bottom-posts" by default. It's transparent to gmail users. I'd
On 14.02.2013, at 3:55PM, Steve Spicklemire wrote:
> I got Xcode 4.6 from the App Store. I don't think it's the SDK since the
> python 2.7 version builds fine. It's just the 3.2 version that doesn't have
> the -I/Library/Frameworks/Python.Framework/Versions/3.2/include/python3.2m in
> the comp
On 06.12.2012, at 12:40AM, Mark Bakker wrote:
> I guess I wasn't explicit enough.
> Say I have an array with 100 numbers and I want to write it to a file
> with 6 numbers on each line (and hence, only 4 on the last line).
> Can I use savetxt to do that?
> What other easy tool does numpy have to do
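savetxt itself wants a rectangular array, so the ragged last line (4 values instead of 6) is easiest to handle by formatting the rows by hand; a sketch, with stand-in data:

```python
import numpy as np

a = np.arange(100, dtype=float)  # stand-in for the 100-number array
ncol = 6

# savetxt needs rectangular input, so build the ragged rows manually:
lines = [' '.join('%g' % v for v in a[i:i + ncol])
         for i in range(0, a.size, ncol)]
print('\n'.join(lines))  # 17 lines; the last one holds only 4 numbers
```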
On 29.11.2012, at 1:21AM, Robert Love wrote:
> I have a file with thousands of lines like this:
>
> Signal was returned in 204 microseconds
> Signal was returned in 184 microseconds
> Signal was returned in 199 microseconds
> Signal was returned in 4274 microseconds
> Signal was returned in 202 m
On 27.07.2012, at 8:30PM, Fernando Perez wrote:
> On Fri, Jul 27, 2012 at 9:43 AM, Derek Homeier
> wrote:
>> thanks, that was exactly what I was looking for - together with
>>
>> c.TerminalIPythonApp.exec_lines = ['import sys',
>>
On 27 Jul 2012, at 17:58, Tony Yu wrote:
> On Fri, Jul 27, 2012 at 11:39 AM, Derek Homeier
> wrote:
> On 27.07.2012, at 3:27PM, Benjamin Root wrote:
>
> > > I would prefer not to use: from xxx import *,
> > >
> > > because of the name pollution.
> &g
On 27.07.2012, at 3:27PM, Benjamin Root wrote:
> > I would prefer not to use: from xxx import *,
> >
> > because of the name pollution.
> >
> > The name convention that I copied above facilitates avoiding the pollution.
> >
> > In the same spirit, I've used:
> > import pylab as plb
>
> But in t
On 29 May 2012, at 15:42, Nathaniel Smith wrote:
>> I note the fine distinction between np.isscalar( ('hello') ) and
>> np.isscalar( ('hello'), )...
>
> NB you mean np.isscalar( ('hello',) ), which creates a single-element
> tuple. A trailing comma attached to a value in Python normally creates
On 29 May 2012, at 15:00, Mark Bakker wrote:
> Why does isscalar('hello') return True?
>
> I thought it would check for a number?
No, it checks for something that is of 'scalar type', which probably can be
translated as 'not equivalent to an array'. Since strings can form numpy
arrays,
I gues
On 06.05.2012, at 8:16AM, Paul Anton Letnes wrote:
> All tests for 1.6.2rc1 pass on
> Mac OS X 10.7.3
> python 2.7.2
> gcc 4.2 (Apple)
Passing as well on 10.6 x86_64 and on 10.5.8 ppc with
python 2.5.6/2.6.6/2.7.2 Apple gcc 4.0.1,
but I am getting one failure on Lion (same with Python 2.5.6+2.
On 29 Mar 2012, at 14:49, Robert Kern wrote:
>> all work. For a more general check (e.g. if it is any type of integer), you
>> can do
>>
>> np.issubclass_(a.dtype.type, np.integer)
>
> I don't recommend using that. Use np.issubdtype(a.dtype, np.integer) instead.
Sorry, you're right, this works
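For the record, the recommended check in action (stand-in arrays):

```python
import numpy as np

a = np.arange(5, dtype=np.int16)
b = np.arange(5, dtype=np.float64)

# issubdtype matches the whole abstract type, regardless of bit width:
print(np.issubdtype(a.dtype, np.integer))   # True
print(np.issubdtype(b.dtype, np.integer))   # False
print(np.issubdtype(b.dtype, np.floating))  # True
```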
On 29 Mar 2012, at 13:54, Chao YUE wrote:
> how can I check type of array in if condition expression?
>
> In [75]: type(a)
> Out[75]:
>
> In [76]: a.dtype
> Out[76]: dtype('int32')
>
> a.dtype=='int32'?
this and
a.dtype=='i4'
a.dtype==np.int32
all work. For a more general check (e.g. if it
On 27.03.2012, at 2:07AM, Stephanie Cooke wrote:
> I am new to numpy. When I try to use the command array.shape, I get
> the following error:
>
> AttributeError: 'list' object has no attribute 'shape'
>
> Is anyone familiar with this type of error?
It means 'array' actually is not one, more pre
On 27.03.2012, at 1:26AM, Olivier Delalleau wrote:
> len(M) will give you the number of rows of M.
> For columns I just use M.shape[1] myself, I don't know if there exists a
> shortcut.
>
You can use tuple unpacking, if that helps keep your code more concise…
nrow, ncol = M.shape
Cheers,
On 20 Mar 2012, at 14:40, Chao YUE wrote:
> I would be in agree. thanks!
> I use gawk to separate the file into many files by year, then it would be
> easier to handle.
> anyway, it's not a good practice to produce such huge line txt files
Indeed it's not, but it's also not good practice to
Dear Chao,
> Do we have a function in numpy that can automatically "shrink" a ndarray with
> redundant dimension?
>
> like I have a ndarray with shape of (13,1,1,160,1), now I have written a
> small function to change the array to dimension of (13,160) [reduce the extra
> dimension with length
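The "shrink redundant dimensions" function asked about does exist: np.squeeze (or the ndarray method) drops all length-1 axes, and can drop a single chosen axis:

```python
import numpy as np

a = np.zeros((13, 1, 1, 160, 1))

print(a.squeeze().shape)        # (13, 160) -- all length-1 axes removed
print(a.squeeze(axis=1).shape)  # (13, 1, 160, 1) -- remove one chosen axis
```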
On 26 Jan 2012, at 13:30, Paul Anton Letnes wrote:
> If by "store" you mean "store on disk", I recommend h5py datasets and
> attributes. Reportedly pytables is also good but I don't have any
> first hand experience there. Both python modules use the hdf5 library,
> written in C/C++/Fortran.
>
> P
On 24 Jan 2012, at 01:45, Olivier Delalleau wrote:
> Note sure if there's a better way, but you can do it with some custom load
> and save functions:
>
> >>> with open('f.txt', 'w') as f:
> ... f.write(str(x.dtype) + '\n')
> ... numpy.savetxt(f, x)
>
> >>> with open('f.txt') as f:
> ...
On 23 Jan 2012, at 22:07, Derek Homeier wrote:
>> In [4]: r = np.ones(3, dtype=[('name', '|S5'), ('foo', '<f8')])
>> In [5]: r.tofile('toto.txt',sep='\n')
>>
>> bash-4.2$ cat toto.txt
>> ('1&
On 23 Jan 2012, at 21:15, Emmanuel Mayssat wrote:
> Is there a way to save a structured array in a text file?
> My problem is not so much in the saving procedure, but rather in the
> 'reloading' procedure.
> See below
>
>
> In [3]: import numpy as np
>
> In [4]: r = np.ones(3,dtype=[('name', '|
On 04.01.2012, at 5:10AM, questions anon wrote:
> Thanks for your responses but I am still having difficuties with this
> problem. Using argmax gives me one very large value and I am not sure what it
> is.
> There shouldn't be any issues with the shape. The latitude and longitude are
> the sam
On 26.12.2011, at 7:37PM, Fabian Dill wrote:
> I have a problem with a structured numpy array.
> I create is like this:
> tiles = numpy.zeros((header["width"], header["height"],3), dtype =
> numpy.uint8)
> and later on, assignments such as this:
> tiles[x, y,0] = 3
>
> Now uint8 is not sufficie
On 20.12.2011, at 9:01PM, Jack Bryan wrote:
> customize Gnu95FCompiler using config
> C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3
> -Wall -Wstrict-prototypes -fPIC
>
> compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core
> -Inumpy/core/src/np
Hi Jack,
> In order to install scipy, I am trying to install numpy 1.6.1. on GNU/linux
> redhat 2.6.18.
>
> But, I got error about fortran compiler.
>
> I have gfortran. I do not have f77/f90/g77/g90.
>
that's good!
> I run :
> python setup.py build --fcompiler=gfortran
>
> It works well
On 07.12.2011, at 9:38PM, Oleg Mikulya wrote:
> Agree with your statement. Yes, it is MKL, indeed. For linear equations it is
> no difference, but there is difference for other functions. And yes, my
> suspicions is just threading options. How to pass them to MKL from python?
> Should I change
On 07.12.2011, at 5:54AM, questions anon wrote:
> sorry the 'all_TSFC' is for my other check of maximum using concatenate and
> N.max, I know that works so I am comparing it to this method. The only reason
> I need another method is for memory error issues.
> I like the code I have written so f
On 07.12.2011, at 5:07AM, Olivier Delalleau wrote:
> I *think* it may work better if you replace the last 3 lines in your loop by:
>
> a=all_TSFC[0]
> if len(all_TSFC) > 1:
> N.maximum(a, TSFC, out=a)
>
> Not 100% sure that would work though, as I'm not en
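The in-place running maximum suggested above does work; a sketch with small stand-in arrays in place of the successive TSFC grids:

```python
import numpy as np

# Stand-in chunks for the successive TSFC arrays read from file
chunks = [np.array([1., 5., 2.]),
          np.array([3., 4., 6.]),
          np.array([0., 7., 1.])]

running = chunks[0].copy()
for TSFC in chunks[1:]:
    # Element-wise maximum written back into `running`: no growing list,
    # so memory use stays at one array regardless of the number of chunks.
    np.maximum(running, TSFC, out=running)

print(running.tolist())  # [3.0, 7.0, 6.0]
```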
On 06.12.2011, at 11:13PM, Wes McKinney wrote:
> This isn't the place for this discussion but we should start talking
> about building a *high performance* flat file loading solution with
> good column type inference and sensible defaults, etc. It's clear that
> loadtable is aiming for highest com
On 03.12.2011, at 6:47PM, Olivier Delalleau wrote:
> Ah sorry, I hadn't read carefully enough what you were trying to achieve. I
> think the double repeat solution looks like your best option then.
Considering that it is a lot shorter than fixing the tile() result, you
are probably right (I've
On 03.12.2011, at 6:22PM, Robin Kraft wrote:
> That does repeat the elements, but doesn't get them into the desired order.
>
> In [4]: print a
> [[1 2]
> [3 4]]
>
> In [7]: np.tile(a, 4)
> Out[7]:
> array([[1, 2, 1, 2, 1, 2, 1, 2],
>[3, 4, 3, 4, 3, 4, 3, 4]])
>
> In [8]: np.tile(a, 4)
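The "double repeat" solution mentioned later in this thread gets the elements into the desired order, where tile only repeats the whole block:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

# np.repeat duplicates individual elements; applying it along both axes
# gives the element-wise blow-up that np.tile cannot produce:
print(np.repeat(np.repeat(a, 2, axis=0), 2, axis=1))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```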
On 1 Dec 2011, at 21:35, Chris Barker wrote:
> On 12/1/2011 9:15 AM, Derek Homeier wrote:
>>>>> np.array((2, 12,0.001+2j), dtype='|S8')
>> array(['2', '12', '(0.001+2'], dtype='|S8')
>>
>> - notice the la
On 1 Dec 2011, at 17:39, Charles R Harris wrote:
> On Thu, Dec 1, 2011 at 6:52 AM, Thouis (Ray) Jones wrote:
> Is this expected behavior?
>
> >>> np.array([-345,4,2,'ABC'])
> array(['-34', '4', '2', 'ABC'], dtype='|S3')
>
>
>
> Given that strings should be the result, this looks like a bug. I
Hi,
On 25 Oct 2011, at 21:14, Pauli Virtanen wrote:
> 25.10.2011 20:29, Matthew Brett kirjoitti:
> [clip]
>> In [7]: (res-1) / 2**32
>> Out[7]: 8589934591.98
>>
>> In [8]: np.float((res-1) / 2**32)
>> Out[8]: 4294967296.0
>
> Looks like a bug in the C library installed on the machine, t
On 25 Oct 2011, at 20:05, Matthew Brett wrote:
>>> Both the same as numpy:
>>>
>>> [mb312@jerry ~]$ gcc test.c
>>> test.c: In function 'main':
>>> test.c:5: warning: incompatible implicit declaration of built-in function
>>> 'powl'
>>
>> I think implicit here means that that the arguments and th
On 15.10.2011, at 9:42PM, Aronne Merrelli wrote:
>
> On Sat, Oct 15, 2011 at 1:12 PM, Matthew Brett
> wrote:
> Hi,
>
> Continuing the exploration of float128 - can anyone explain this behavior?
>
> >>> np.float64(9223372036854775808.0) == 9223372036854775808L
> True
> >>> np.float128(92233720
On 15.10.2011, at 9:21PM, Hugo Gagnon wrote:
> I need to print individual elements of a float64 array to a text file.
> However in the file I only get 12 significant digits, the same as with:
>
a = np.zeros(3)
a.fill(1./3)
print a[0]
> 0.
len(str(a[0])) - 2
> 12
>
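A few ways to get more digits out of a float64 scalar (how many digits str() shows varies with the Python/NumPy version; the explicit format is version-independent):

```python
import numpy as np

a = np.zeros(3)
a.fill(1.0 / 3.0)

print(str(a[0]))       # may be shortened, depending on the version
print(repr(a[0]))      # enough digits to round-trip the value
print('%.17g' % a[0])  # explicit: 17 significant digits of the double
```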
On 11.10.2011, at 9:18PM, josef.p...@gmail.com wrote:
>>
>> In [42]: c = np.zeros(4, np.int16)
>> In [43]: d = np.zeros(4, np.int32)
>> In [44]: np.around([1.6,np.nan,np.inf,-np.inf], out=c)
>> Out[44]: array([2, 0, 0, 0], dtype=int16)
>>
>> In [45]: np.around([1.6,np.nan,np.inf,-np.inf], out=d)
On 11 Oct 2011, at 20:06, Matthew Brett wrote:
> Have I missed a fast way of doing nice float to integer conversion?
>
> By nice I mean, rounding to the nearest integer, converting NaN to 0,
> inf, -inf to the max and min of the integer range? The astype method
> and cast functions don't do what
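One sketch of such a conversion helper (the name and default dtype are mine, not from the thread): nan_to_num maps NaN to 0 and the infinities to huge finite floats, which clip then pins to the integer range:

```python
import numpy as np

def nice_astype(x, dtype=np.int16):
    """Hypothetical helper: round to nearest, NaN -> 0, +/-inf -> range ends."""
    info = np.iinfo(dtype)
    x = np.nan_to_num(np.asarray(x, dtype=np.float64))  # NaN->0, inf->finite
    # Round to nearest, clamp into the target range, then cast:
    return np.clip(np.rint(x), info.min, info.max).astype(dtype)

print(nice_astype([1.6, np.nan, np.inf, -np.inf]).tolist())
# [2, 0, 32767, -32768]
```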
Hi Nils,
On 11 Oct 2011, at 16:34, Nils Wagner wrote:
> How do I use genfromtxt to read a file with the following
> lines
>
> 11 2.2592365264892578D+01
> 22 2.2592365264892578D+01
> 13 2.669845581055D+00
>
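For Fortran-style 'D' exponents like these, one simple workaround (a sketch, not necessarily the fix proposed in the thread) is to rewrite the exponent letter before handing the text to genfromtxt; note the blanket replace would also touch any literal 'D' elsewhere in the file:

```python
import numpy as np
from io import StringIO

raw = """\
11 2.2592365264892578D+01
22 2.2592365264892578D+01
13 2.669845581055D+00
"""

# Turn the Fortran 'D' exponents into 'E' so the standard parser accepts them
data = np.genfromtxt(StringIO(raw.replace('D', 'E')))
print(data.shape)  # (3, 2) -- second column now parsed as float64
```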
the
minimum no. of dimensions required, thus it returns a 1D-array in the latter
case. If you are using numpy 1.6 or later, you can ensure to get a consistent
shape by passing the "ndmin=2" option.
Cheers,
Derek
--
--------
e would not do). But this is just within a simple shell script.
Cheers,
Derek
--
--------
Derek Homeier Centre de Recherche Astrophysique de Lyon
ENS Lyon
lated to pull
request
https://github.com/numpy/numpy/pull/143
(which implements similar functionality, plus a lot more, for a genfromtxt-like
function). So don't be surprised if the loadtxt patch comes back later, in a
completely revised form…
Cheers,
aping the array - that's probably even the most common use case for
loadtxt, but that method lacks way too much generality for my taste.
Back to accumulator, I suppose.
Cheers,
Derek
--
---
nces, but as you indicate, even for those the
conversion seems to dominate the cost.
Cheers,
Derek
--
Derek Homeier Centre de Recherche As
ass solution as well -- so you
> don't need to de-compress twice.
Absolutely; on compressed data the time for the extra pass jumps up to +30-50%.
Cheers,
Derek
--
make spaces the default delimiter
enable automatic decompression (given the modularity, could you simply
use np.lib._datasource.open() like genfromtxt?)
Cheers,
Derek
--
Derek Homeie
Hi,
as the subject says, the array_* comparison functions currently do not operate
on structured/record arrays. Pull request
https://github.com/numpy/numpy/pull/146
implements these comparisons.
There are two commits, differing in their interpretation whether two
arrays with different field na
On 25.08.2011, at 8:42PM, Chris.Barker wrote:
> On 8/24/11 9:22 AM, Anthony Scopatz wrote:
>>You can use Python pickling, if you do *not* have a requirement for:
>
> I can't recall why, but it seem pickling of numpy arrays has been
> fragile and not very performant.
>
Hmm, the pure Python v
On 11.08.2011, at 8:50PM, Russell E. Owen wrote:
> It seems a shame that loadtxt has no argument for predicted length,
> which would allow preallocation and less appending/copying data.
>
> And yes...reading the whole file first to figure out how many elements
> it has seems sensible to me -- a
On 16 Aug 2011, at 23:51, Hongchun Jin wrote:
> Thanks Derek for the quick reply. But I am sorry, I did not make it clear in
> my last email. Assume I have an array like
> ['CAL_LID_L2_05kmCLay-Prov-V3-01.2008-01-01T00-37-48ZD.hdf'
>
> 'CAL_LID_L2_05kmCLay-Prov-V3-01.2008-01-01T00-37-48ZD.hd
Hi Hongchun,
On 16 Aug 2011, at 23:19, Hongchun Jin wrote:
> I have a question regarding how to trim a string array in numpy.
>
> >>> import numpy as np
> >>> x = np.array(['aaa.hdf', 'bbb.hdf', 'ccc.hdf', 'ddd.hdf'])
>
> I expect to trim a certain part of each element in the array, for exampl
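The vectorized string routines in np.char cover this kind of trimming; two sketches, assuming the goal is to drop the '.hdf' suffix:

```python
import numpy as np

x = np.array(['aaa.hdf', 'bbb.hdf', 'ccc.hdf', 'ddd.hdf'])

# Drop a known suffix element-wise:
print(np.char.replace(x, '.hdf', ''))

# Or keep everything before the first '.' (partition adds a last axis
# holding [head, separator, tail]):
print(np.char.partition(x, '.')[:, 0])
```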
On 10 Aug 2011, at 22:03, Gael Varoquaux wrote:
> On Wed, Aug 10, 2011 at 04:01:37PM -0400, Anne Archibald wrote:
>> A 1 Gb text file is a miserable object anyway, so it might be desirable
>> to convert to (say) HDF5 and then throw away the text file.
>
> +1
There might be concerns about ensurin
On 10 Aug 2011, at 19:22, Russell E. Owen wrote:
> A coworker is trying to load a 1Gb text data file into a numpy array
> using numpy.loadtxt, but he says it is using up all of his machine's 6Gb
> of RAM. Is there a more efficient way to read such text data files?
The npyio routines (loadtxt as
On 7 Aug 2011, at 23:27, Dag Sverre Seljebotn wrote:
>> Enumpy_test.c: In function ‘PyInit_numpy_test’:
>> numpy_test.c:11611: warning: ‘return’ with no value, in function returning
>> non-void
>> .numpy_test.cpp: In function ‘PyObject* PyInit_numpy_test()’:
>> numpy_test.cpp:11611: error: return
On 7 Aug 2011, at 22:31, Paul Anton Letnes wrote:
> Looks like you have done some great work! I've been using f2py in the past,
> but I always liked the idea of cython - gradually wrapping more and more code
> as the need arises. I read somewhere that fortran wrapping with cython was
> coming -
On 7 Aug 2011, at 04:09, Sturla Molden wrote:
> Den 06.08.2011 11:18, skrev Dag Sverre Seljebotn:
>> We are excited to announce the release of Cython 0.15, which is a huge
>> step forward in achieving full Python language coverage as well as
>> many new features, optimizations, and bugfixes.
>>
>
Hi,
commits c15a807e and c135371e (thus most immediately addressed to Mark, but I
am sending this to the list hoping for more insight on the issue) introduce a
test failure with Python 2.5+2.6 on Mac:
FAIL: test_timedelta_scalar_construction (test_datetime.TestDateTime)
On 2 Aug 2011, at 19:15, Christopher Barker wrote:
> In [32]: s = numpy.array(a, dtype=tfc_dtype)
> ---
> TypeError Traceback (most recent
> call last)
>
> /Users/cbarker/ in ()
>
> TypeError:
On 2 Aug 2011, at 18:57, Thomas Markovich wrote:
> It appears that uninstalling python 2.7 and installing the scipy
> superpack with the apple standard python removes the
Did the superpack installer automatically install numpy to the
python2.7 directory when present? Even if so, I reckon you
On 29.07.2011, at 1:38AM, Anne Archibald wrote:
> The can is open and the worms are everywhere, so:
>
> The big problem with one-based indexing for numpy is interpretation.
> In python indexing, -1 is the last element of the array, and ranges
> have a specific meaning. In a hypothetical one-based
On 29.07.2011, at 1:19AM, Stéfan van der Walt wrote:
> On Thu, Jul 28, 2011 at 4:10 PM, Anne Archibald
> wrote:
>> Don't forget the everything-looks-like-a-nail approach: make all your
>> arrays one bigger than you need and ignore element zero.
>
> Hehe, why didn't I think of that :)
>
> I gues
On 07.07.2011, at 7:16PM, Robert Pyle wrote:
> .../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/numeric.py:1922:
> RuntimeWarning: invalid value encountered in absolute
> return all(less_equal(absolute(x-y), atol + rtol * absolute(y)))
>
>
On 30.06.2011, at 11:57PM, Thomas K Gamble wrote:
>> np.add(b.reshape(2048,3136) * c, d, out=a[:,:3136])
>>
>> But to say whether this is really the equivalent result to what IDL does,
>> one would have to study the IDL manual in detail or directly compare the
>> output (e.g. check what happens t
On 30.06.2011, at 7:32PM, Thomas K Gamble wrote:
> I'm trying to convert some IDL code to python/numpy and I'm having some
> trouble understanding the rules for broadcasting during some operations.
> example:
>
> given the following arrays:
> a = array((2048,3577), dtype=float)
> b = array((256,
o with int/float numbers
set as missing_values, and reading to regular arrays. I've tested this on 1.6.1
and the current development branch as well, and the missing_values are only
considered for masked arrays. This is not likely to change soon, and may
actually be intentional, so to proces
On 27.06.2011, at 7:11PM, Nils Becker wrote:
>>> Finally, the former Scientific.IO NetCDF interface is now part of
>>> scipy.io, but I assume it only supports netCDF 3 (the documentation
>>> is not specific about that). This might be the easiest option for a
>>> portable data format (if Matlab sup
On 27.06.2011, at 6:36PM, Robert Kern wrote:
>> Some late comments on the note (I was a bit surprised that HDF5 installation
>> seems to be a serious hurdle to many - maybe I've just been profiting from
>> the fink build system for OS X here - but I also was not aware that the
>> current netCDF
On 21.06.2011, at 8:35PM, Christopher Barker wrote:
> Robert Kern wrote:
>> https://raw.github.com/numpy/numpy/master/doc/neps/npy-format.txt
>
> Just a note. From that doc:
>
> """
> HDF5 is a complicated format that more or less implements
> a hierarchical filesystem-in-a-file. This f
On 26.06.2011, at 8:48PM, Chao YUE wrote:
> I want to read a csv file with many (49) columns, the first column is string
> and remaning can be float.
> how can I avoid type in like
>
> data=numpy.genfromtxt('data.csv',delimiter=';',names=True, dtype=(S10, float,
> float, ..))
>
> Can I ju
On 21.06.2011, at 7:58PM, Neal Becker wrote:
> I think, in addition, that hdf5 is the only one that easily interoperates
> with
> matlab?
>
> speaking of hdf5, I see:
>
> pyhdf5io 0.7 - Python module containing high-level hdf5 load and save
> functions.
> h5py 2.0.0 - Read and write HDF5 fi