uch that I want to dive in and get
started working on 3.0?"
just wondering,
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception
def f3():
    extra = 100
    l = extra
    a = numpy.zeros((l,))
    for i in xrange(100):
        b = numpy.empty((extra,))
        a = numpy.append(a, b)
    return a

a1 = f1()
a2 = f2()
a3 = f3()
if a1.shape == a2.shape == a3.shape:
    print "they are all returning the same size array"
el
On 9/14/11 1:01 PM, Christopher Barker wrote:
> numpy.ndarray.resize is a different method, and I'm pretty sure it
> should be as fast or faster than np.empty + np.append.
My profile:
In [25]: %timeit f1 # numpy.resize()
1000 loops, best of 3: 163 ns per loop
In [26]: %timeit f2 # np.empty + np.append
It is often confusing that there is a numpy function and ndarray method
with the same name and slightly different usage.
-Chris
Cython code for a variety of types from a single definition.
-Chris
On 8/4/11 10:02 AM, Christopher Barker wrote:
> On 8/4/11 8:53 AM, Jeff Whitaker wrote:
>> Kiko: I think the difference may be that when you read the data with
>> netcdf4-python, it tries to unpack the short integers to a float32
>> array.
>
> Jeff, why is that? is
On 8/3/11 3:56 PM, Gökhan Sever wrote:
> Back to the reality. After clearing the cache using Warren's suggestion:
>
> In [1]: timeit -n1 -r1 a = np.fromfile('temp.npa', dtype=np.uint16)
> 1 loops, best of 1: 7.23 s per loop
yup -- that cache sure can be handy!
-Chris
> You'll have to do the conversion manually then, at which point you
> may run out of memory anyway.
why would you have to do the conversion at all? (OK, you may, depending
on your use case, but for the most part, data stored in a file as an
integer type would be suitable fo
fromfile('temp.npa', dtype=np.uint16)
1 loops, best of 3: 2.45 s per loop
so it seems I'm not seeing cache effects, but maybe you are.
Anyway, we haven't heard from the OP -- I'm not sure what s/he thought
was slow.
-Chris
>
> On Wed, Aug 3, 2011 at 10:50 AM,
you sure that [:] forces the
full data read? It probably does, but I'm not totally sure.
is "z" a numpy array object at that point?
-Chris
On 8/3/11 11:09 AM, Ian Stokes-Rees wrote:
> On 8/3/11 12:50 PM, Christopher Barker wrote:
>> As a reference, reading that much data in from a raw file into a numpy
>> array takes 2.57 on my machine (a rather old Mac, but disks haven't
>> gotten much faster).
>
>
fromfile('temp.npa', dtype=np.uint16)
(using ipython's timeit)
-Chris
eted differently (fancy indexing?). It
kind of breaks python "duck typing" (a sequence is a sequence), but it's
useful, too.
So when a list fails to do what you want, try a tuple.
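A minimal sketch of the distinction (array contents made up here):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# a tuple is treated as a multi-dimensional index: a[(1, 2)] is a[1, 2]
print(a[(1, 2)])          # -> 6

# a list triggers fancy indexing: whole rows 1 and 2 are selected
print(a[[1, 2]].shape)    # -> (2, 4)
```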
-Chris
bug in the scipy superpack installer -- if it was
built for the system python2.6, it should not get installed into 2.7.
Unless you did something to force that.
-Chris
"smart" about it. So as a rule, you need to
be quite specific when working with structured dtypes.
However, the default is for numpy to map tuples to dtypes, so if you
pass in a tuple instead, it works:
In [34]: t = tuple(a)
In [35]: s = numpy.array(t, dtype=tfc_dtype)
In [36]: s
Out[36]
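Since the Out[36] output is cut off above, here is a self-contained version of the same idea; the field names in this stand-in for tfc_dtype are hypothetical:

```python
import numpy as np

# hypothetical structured dtype standing in for tfc_dtype above
tfc_dtype = np.dtype([('time', np.float64),
                      ('flow', np.float64),
                      ('conc', np.float64)])

t = (1.0, 2.5, 0.3)              # a tuple maps to one record
s = np.array(t, dtype=tfc_dtype)
print(s['flow'])                 # -> 2.5
```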
standards for this? What do you use for saving scientific
> data?
>
>
> thank you,
>
> Brian Blais
>
>
>
6, not 10.7 is because 10.7 didn't exist yet) --
with the exception of the above, which is, unfortunately, a common
numpy/scipy use case.
-Chris
in the installer -- what it is really looking for is the
python.org build.
HTH,
-Chris
> Cheers,
>
> Ian
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/nump
's
pretty cool.
There are a few other options, including weave, of course.
-Chris
ython when performance isn't critical, but
you need a deeper understanding if you want to work with the data in C
or Fortran or to tune performance in python.
So as long as there is an API to query and control how things work, I
like that it's hidden from simple python code.
-Chris
ability to get the original value back if you want to stop
ignoring it? Maybe that's not inherent to what an IGNORE means, but it
seems pretty key to me.
-Chris
o, will make that irrelevant, but I want
to be clear about that kind of use case.
-Chris
ach row, but putting items in the middle, and shifting things to the
left strikes me as a plain old bad idea (and a pain to implement)
-Chris
*N))
If that doesn't do it right, you may be able to mess with the strides,
etc. Do some googling, and check out:
numpy.lib.stride_tricks
-Chris
> for N=2, that already takes 0.5 seconds but i intend to use it
> for N=3 and N=4 ...
>
> thanks for your input,
> q
>
's pretty useful to have it be false.
However, I also do:
    if x is not None:
rather than:
    if x:
so as to be unambiguous about what I'm testing for (and because if
x == 0, I don't want the test to fail), so I guess:
    if arr[i] is np.NA:
would be perfectly analogous.
-Chris
ng special" is "bad" (or "missing" , or
"ignore"), but the cool thing is that if you use an int:
0 = "unmasked"
1 = "masked because of one thing"
2 = "masked because of another"
etc., etc.
This could be pretty powerful.
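A quick sketch of the idea, with made-up data and reason codes:

```python
import numpy as np

data = np.array([1.5, 2.0, -999.0, 3.2, 1e6])
# 0 = unmasked, 1 = masked because of one thing, 2 = masked because of another
codes = np.array([0, 0, 1, 0, 2])

m = np.ma.masked_array(data, mask=(codes != 0))
print(m.mean())            # mean of the three unmasked values only
print(codes[m.mask])       # the reasons are still available: [1 2]
```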
-Chris
here already? I think it should.
-Chris
the way, what might be the performance hit of a "new" dtype --
wouldn't we lose all sorts of opportunities for the compiler and hardware
to optimize? I can only imagine an "if" statement with every single
computation. But maybe that isn't any more of a hit than a
nt that even for floats, masks have significant advantages over
"just using NaN". One might be that you can mask and unmask a value for
different operations, without losing the value.
-Chris
> easy sharing with libraries that have not been written with numpy in
> mind.
Isn't that what the PEP 3118 extended buffer protocol is supposed to be for?
Anyway, good stuff!
-Chris
d be nice.
That being said, I like the simplicity of the .npy format, and I don't
know that anyone wants to take any of this on anyway.
-Chris
on
There is also pytables, which uses HDF5 under the hood.
-Chris
st, and self-describing (but not a standard outside of numpy).
I doubt pickle will ever be your best bet.
-Chris
Robert Kern wrote:
> On Fri, Jun 17, 2011 at 13:27, Christopher Barker
> wrote:
>
>> Actually, I'm a bit confused about dtypes from an OO design perspective
>> anyway. I note that the dtypes seem to have all (most?) of the methods
>> of ndarrays (or placeholders, anyway), which I don't quite get.
oh well, I've found what I need, thanks.
-Chris
a_dtype.max_value
a_dtype.min_value
etc.
In this case, I have a uint16, so I can hard-code it, but it would be
nice to be able to write the code in a more generic fashion.
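For what it's worth, numpy's iinfo/finfo already allow writing this generically, though not with the a_dtype.max_value spelling wished for above:

```python
import numpy as np

a = np.zeros(10, dtype=np.uint16)

info = np.iinfo(a.dtype)   # use np.finfo for float dtypes
print(info.max)            # -> 65535
print(info.min)            # -> 0
```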
-Chris
o do with it.
got it, thanks.
> Which by the way is what the shared memory arrays I and Gaël made will do,
> but we still have the annoying pickle overhead.
Do you have a pointer to that code? It sounds handy.
-Chris
Sturla Molden wrote:
> On Windows this sucks, because there is no fork system call.
Darn -- I do need to support Windows.
> Here we are
> stuck with multiprocessing and pickle, even if we use shared memory.
What do you need to pickle if you're using shared memory?
-Chris
s a lot of
overhead involved such that there is more time in communication than
computation.
yup -- clearly the case here. I wonder if it's just array size though --
won't cPickle time scale with array size? So it may not be size per se,
but rather how much computation you need fo
at his code and explain why the slowdown occurs (hint,
hint!)
-CHB
s, and I'd
> like to
> > split up the tasks between the different cores. I'm not using
> numpy.dot, if
> > I'm not mistaken I don't think that would do what I need.
> > Thanks again,
> > Brandt
> >
> >
>
,
88242318.6001, 97967895.]))
> I guess it is better to always specify the correct range, but wouldn't
> it be preferable if the function provided a
> warning when this case occurs ?
>
> '2011-06-12', '2011-06-15', '2011-06-18'],
> dtype='datetime64[D]')
so dtype 'M8' defaults to increments of days?
of course, I've lost track of the difference between 'M' and 'M8'
(I've never liked the dtype
roducts).
are you using numpy.dot() for that? If so, then the above applies to
that as well.
I know I could look at your code to answer these questions, but I
thought this might help.
-Chris
any use of these dtypes for work that required
greater precision, but does anyone really need both year, month, day
specification AND nanoseconds? Given all the leap-second issues, that
seems a bit ridiculous.
But it would make things easier.
I note that in this entire conversation, all the talk
ormat) datetime.
Anyway, my key point is that converting to/from calendar-based units and
"linear time" units is fraught with peril -- it needs to be really clear
when that is happening, and the user needs to have a clear and ideally
easy way to define how it should happen.
-Chris
ssible, but I still like it.
maybe two types:
datetime_calendar: for Calendar-type units (months, business days, ...)
datetime_continuous: for "linear units" (seconds, hours, ...)
or something like that?
-Chris
stinction between linear, convertible, time units, and time units that
vary depending on where you are on which calendar, and what calendar you
are using, should be kept clearly distinct.
-Chris
r", etc)
So folks would specify a time axis as "months since 2010-01" and
expect that they were getting calendar months, like "1" would mean Feb,
2010, instead of January 31, 2010 (or whatever).
Anyway, lots of room for confusion, so whatever we come up with needs
tween calendars is in what
timedeltas mean, not what a unit in time is.
> ISO8601 seems quite OK.
All that does is specify a string representation, no?
-Chris
c/climate modelers need, and how at least one
well-respected group has addressed these issues.
http://www.earthsystemmodeling.org/esmf_releases/non_public/ESMF_5_1_0/ESMC_crefdoc/node6.html
(If that long URL breaks, I found that by googling: "ESMF calendar date
time")
-Chris
1 year = 365.25 days (for instance)
1 month = 1 year / 12
But I think it's better to simply disallow them, and keep that use for
what I'm calling the "Calendar" functions. And "business day" is
particularly ugly, and, I'm sure defined diff
ial ones used for climate modeling and the like, that have nice
properties like all months being 30 days long, etc. Plus, as discussed,
various "business" calendars.
So: I think that the calendar-related functions need a fairly
self-contained library, with various classes for the various cal
1, 1, 1, 1]), array([ 0.5, 1.5, 2.5, 3.5, 4.5]))
or, if you want to be more explicit:
In [14]: np.histogram(x, bins=np.linspace(0.5, 4.5, 5))
Out[14]: (array([1, 1, 1, 1]), array([ 0.5, 1.5, 2.5, 3.5, 4.5]))
HTH,
-Chris
2.)
I don't see how that is particularly useful, at least not any more
useful than nanprod, nandiv, etc, etc...
What am I missing?
-Chris
e
> In []: [0, 4, 8] in A
> Out[]: True
> In []: [8, 4, 0] in A
> Out[]: True
> In []: [2, 4, 6] in A
> Out[]: True
> In []: [6, 4, 2] in A
> Out[]: True
> In []: [3, 1, 5] in A
> Out[]: True
> In [1061]: [3, 1, 4] in A
> Out[1061]: True
> But
> In []: [1,
thwhile, but I think np.in1d is exactly what you
are looking for:
indexes = np.in1d( records.integer_field, values )
Funny I'd never noticed that before.
-Chris
numpy-related question anyway.
without more context, we can't even begin to figure out what's wrong.
Good luck,
-Chris
>> ndarray has 2 dimensions but record array has 1 dimensions
>>
>> This makes seemingly reasonable things, like using apply_along_axis()
>> over a table of data with named columns, impossible:
> each row (record) is treated as one array element, so the structured
> array is only 1d.
>
> If you ha
.mgrid[-60:90:((60.j+90.j)*4. + 1j)]
which is klunkier than linspace.
I'd have used two different function names for the different mgrid
functionality, rather than the complex kludge.
-Chris
75])
I think this is too bad, actually, because we're back to range()-type
tricks to get the end point:
In [20]: np.mgrid[-1:3.25:.25]
Out[20]:
array([-1. , -0.75, -0.5 , -0.25, 0. , 0.25, 0.5 , 0.75, 1. ,
1.25, 1.5 , 1.75, 2. , 2.25, 2.5 , 2.75, 3. ])
-Chris
: "%.3f"%np.round(1.23456789, 3)
Out[110]: '1.235'
HTH, Chris
there are a lot
of editors that can make a mess of line endings.
If you can read the entire file into memory at once, this is almost
trivial; if you can't, there is a bit more bookkeeping code to be written.
DARN -- I think I said my last note was the last on this topic!
-Chris
Sorry to keep harping on this, but for history's sake, I was one of the
folks that got 'U' introduced in the first place. I was dealing with a
nightmare of unix, mac, and dos text files; 'U' was a godsend.
On 4/5/11 4:51 PM, Matthew Brett wrote:
> The difference between 'rt' and 'U' is (this is
e in a unicode world (at least a little) there is simply
no way around the fact that you can't reliably read a file without
knowing how it is encoded.
My thought at this point is to say that the numpy text file reading
stuff only works on 1-byte ansi encodings (and maybe only ascii), and be
done with it.
draw.line(points.flatten().tolist(), fill="rgb(0,255,0)", width=4)
Thanks all,
-Chris
s like it would be slower than it should be (to be honest,
not profiled).
Is there a faster way?
Maybe numpy should have an ndarray.totuple() method.
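Lacking that, a small helper (the name is hypothetical) built on tolist() does the job:

```python
import numpy as np

def totuple(nested):
    """Recursively turn nested lists (from ndarray.tolist()) into tuples."""
    if isinstance(nested, list):
        return tuple(totuple(item) for item in nested)
    return nested

a = np.array([[1, 2], [3, 4]])
print(totuple(a.tolist()))    # -> ((1, 2), (3, 4))
```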
-Chris
sal newlines mode is enabled. Lines
in the input can end in '\n', '\r', or '\r\n', and these are translated
into '\n' before being returned to the caller.
"""
> It may indeed be desirable
> to read the files as text, but that would requi
r unclear, ask here, and then you'll have something to contribute.
-Chris
> Do check out the developer zone, maybe search the the open tickets and
> see if there are any bugs you want to tackle.
> Maybe write some tests for untested code you find.
Good ideas as well, of cour
> So, in this case, the ** operator and np.power are identical, but
> diverges from the behavior of python's operator.
I think consistency within numpy is more important than consistency with
python -- I expect differences like this from python (long integers,
different data types, etc)
-C
maybe the little presentation and sample code I gave to the
Seattle Python Interest group will help:
http://www.seapig.org/November2010Notes
-Chris
one element in a
row of the a array -- they are not all next to each other. A regular C
library generally won't be able to work with data laid out like this.
HTH,
-Chris
for job ID: 179178
NOTE: This is a position being hired by GDIT to work with NOAA, so any
questions about salary, benefits, etc, etc should go to GDIT. However,
feel free to send me questions about our organization, working
conditions, more detail about the nature of the projects etc.
-
r if it does that -- give it a try. It does use an
ascii-to-float function written for numpy to handle things like that.
> Of course, this is easy to wrap
> in a small function but I expect it to be slow when the input size is in
> the Mb range.
Never expect -- do a simple solution, the
]: arr[:,4]==2
Out[56]:
array([ True, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
False], dtype=bool)
that's a boolean array that can be used for indexing, operating with
"where", etc.
HTH,
-Chris
On 3/17/11 2:57 PM, Mark Wiebe wrote:
> Dtypes being mutable looks like a serious bug to me, it's violating the
> definition of 'hashable' given here:
I can imagine other problems it would cause, as well -- is there any
reason that dtypes should be mutable?
-Chris
-
2 0.2
> 0.50. 0. 0.4 0.8
> 0.55.51.50.5 1.5
It's not as robust, but if performance matters, fromfile() should be faster:
f.readline() # to skip the first line
arr = np.fromfile(f, sep=' ', dtype=np.float64).reshape((-1, 5))
(untested)
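A self-contained variant of the same idea; the file name, header, and values here are made up:

```python
import numpy as np

# write a small whitespace-separated file to read back
with open('temp_table.txt', 'w') as f:
    f.write('c1 c2 c3 c4 c5\n')                  # header line to skip
    f.write('1. 2. 3. 4. 5.\n6. 7. 8. 9. 10.\n')

with open('temp_table.txt', 'rb') as f:
    f.readline()                                  # skip the first line
    arr = np.fromfile(f, sep=' ', dtype=np.float64).reshape((-1, 5))

print(arr.shape)    # -> (2, 5)
```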
they are defined as
in-place operators, I don't expect any casting to occur.
I think this is the kind of thing that would be great to have a warning
the first time you do it, but once you understand, the warnings would be
really, really annoying!
-Chris
high level
complex data structures, use Python.
And the fact that using all these in the same program is pretty easy
makes it all possible.
-Chris
haven't gotten around to it.
I hope it was a bit helpful, and I'm looking forward to seeing what's
next for carray!
-Chris
On 3/10/11 11:29 AM, Christopher Barker wrote:
> By the way, it would be great to have a growable array that could be
> efficiently used in Cython code -- so you could accumulate a bunch of
> native C datatype data easily.
I just noticed that carray is written in Cython, so that part
On 3/10/11 9:51 AM, Francesc Alted wrote:
> A Thursday 10 March 2011 18:05:11 Christopher Barker escrigué:
>> NOTE: this looks like it could use a "growable" numpy array, much
>> like one I've written before -- maybe it's time to revive that
>> project
On 3/7/11 5:51 PM, Sturla Molden wrote:
> Den 07.03.2011 18:28, skrev Christopher Barker:
>> 1, 2, 3, 4
>> 5, 6
>> 7, 8, 9, 10, 11, 12
>> 13, 14, 15
>> ...
>>
> A ragged array, as implemented in C++, Java or C# is just an array of
> arrays (or 'a
or your Cython code enough to
remove all python dependencies -- or at least have something you could
compile and run without a python interpreter running.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959 voice
7600 Sand Po
http://www.pytables.org/docs/manual/ch04.html#VLArrayClassDescr
>
>> is a "vlen array" stored contiguously in netcdf?
great, thanks! that gives me an example of one API I might want to use.
> I don't really know, but one limitation of variable length arrays in
> HDF5
On 3/7/11 9:33 AM, Francesc Alted wrote:
> A Monday 07 March 2011 18:28:11 Christopher Barker escrigué:
>> I'm setting out to write some code to access and work with ragged
>> arrays stored in netcdf files. It dawned on me that ragged arrays
>> are not all that uncommon,
ny "standard" way to work with such data?
-Chris
much, then it shouldn't be changed, but I think it better reflects what
dtypes are all about.
-Chris
ma50
> #print prices_ma20_greater_than_ma50
>
>
> plot.plot(prices)
> plot.plot(ma20)
> plot.plot(ma50)
> plot.show()
> [/quote]
>
> Someone can give me some clues?
>
> Best Regards,
>
5.2, 5.2, 6.4, 1.3]])
-Chris
adlines() should take a number of rows as an optional parameter.
not a big deal, now that we have list comprehensions, but still it would
be nice, and it makes sense to put it into loadtxt() for sure.
-Chris
ure if it
handles 16 bit greyscale, either.
I'd look at a lib that can read tiff properly -- some have been
suggested here, and you can also use GDAL, which is meant for
geo-referenced data, but you can ignore the geo information and just get
an image if you want.
-Chris
If you want to downsample by an integer amount (i.e. a factor of 2) in
each dimension, I have some Cython code that optimizes that. I'm happy
to send it along.
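That Cython code isn't shown here; as a rough pure-numpy sketch of the same idea, integer-factor downsampling by block averaging can be done with a reshape:

```python
import numpy as np

def downsample(a, n):
    """Downsample 2-D array a by integer factor n, averaging n x n blocks.
    A sketch: assumes both dimensions are divisible by n."""
    h, w = a.shape
    return a.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

a = np.arange(16, dtype=float).reshape(4, 4)
print(downsample(a, 2))    # 2x2 array of block means
```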
-Chris
HDF5 on disk, but
I think more "natural" access.
-Chris
use:
np.fromfile(open_file, sep=' ', count=num_items_to_read)
It'll only work for multi-line text if the separator is whitespace,
which it was in your example. But if it does, it should be pretty fast.
-Chris
e of numpy.
> i then would like to use visvis for visualizing this in 3D.
you'll have to see what visvis is expecting in terms of data types, etc.
HTH,
-Chris
yourself.
And I do get that. And yet, experimentally, appending numpy arrays (on
that one simple example) appeared to be O(N). Granted, a much larger
constant than for lists, but it sure looks linear to me.
Should it be O(N^2)? Maybe I need to run it for larger N, but I got
impatient as it is
development discussion, so I'll subscribe to
both lists anyway, and filter them to the same place in my email. So it
makes little difference to me.
-Chris