On 4 February 2014 15:01, RayS wrote:
> I was struggling with methods of reading large disk files into numpy
> efficiently (not FITS or .npy, just raw files of IEEE floats from
> numpy.tostring()). When loading arbitrarily large files it would be nice to
> not bother reading more than the plot
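A minimal sketch of one approach, assuming the file holds raw
little-endian float64 values written by tostring() (the file name and
sizes below are illustrative):

import numpy as np

n_skip, n_plot = 1000000, 4096          # illustrative sizes
with open('big_file.dat', 'rb') as f:   # hypothetical file
    f.seek(n_skip * 8)                  # skip n_skip float64s (8 bytes each)
    chunk = np.fromfile(f, dtype='<f8', count=n_plot)

Only `count` elements are read, so memory use stays bounded by the size
of the plot rather than the size of the file.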
On 8 October 2013 19:56, Valentin Haenel wrote:
> I ended up using: PyArray_TypeObjectFromType
> from cython so:
>
> np.dtype(cnp.PyArray_TypeObjectFromType(self.ndtype)).str
>
> Maybe i can avoid the np.dtype call, when using PyArray_Descr?
>
In short: yes.
`PyArray_TypeObjectFromType` first u
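A minimal Cython sketch of the PyArray_Descr route (assuming a valid
type number; `cnp` is the usual `cimport numpy as cnp`):

cimport numpy as cnp

cnp.import_array()

def typenum_to_str(int typenum):
    # PyArray_DescrFromType returns a PyArray_Descr*, which at the
    # Python level already *is* an np.dtype, so no np.dtype() call:
    cdef object d = cnp.PyArray_DescrFromType(typenum)
    return d.str    # e.g. '<f8' for cnp.NPY_DOUBLE (type number 12)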
rrect and the code is wrong.
Do different domains use different conventions here? Are there some
references to back up one stance or another?
But all else being equal, I'm guessing there'll be far more appetite for
updating the documentation than the code.
Regards,
Richard Hattersley
On
Hi Valentin,
On 8 October 2013 13:23, Valentin Haenel wrote:
> Certain functions, like `PyArray_SimpleNewFromData` and
> `PyArray_SimpleNew`, take a typeenum.
> Is there any way to go from typeenum to something that can be
> passed to the dtype constructor, like mapping 12 -> '
If you just want the c
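If the C route isn't convenient, a rough pure-Python fallback is to
build the reverse map once from the standard type codes (a sketch; type
number 12 is NPY_DOUBLE):

import numpy as np

num_to_dtype = {np.dtype(c).num: np.dtype(c) for c in np.typecodes['All']}
print(num_to_dtype[12])        # float64
print(num_to_dtype[12].char)   # 'd'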
On 27 September 2013 13:27, Sebastian Berg wrote:
> And most importantly, is there any behaviour thing in the index
> machinery that is bugging you, which I may have forgotten until now?
>
Well, since you asked... I'd *love* to see the fancy indexing behaviour
moved to a separate method(s).
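For anyone following along, a small example of the behaviour I mean
(np.ix_ gives the outer-product selection many people expect):

import numpy as np

a = np.arange(12).reshape(3, 4)
print(a[[0, 2], [1, 3]])            # element-wise pairing: [ 1 11]
print(a[np.ix_([0, 2], [1, 3])])    # outer selection: [[ 1  3] [ 9 11]]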
Yes,
On 23 September 2013 18:03, Charles R Harris wrote:
> I have gotten no feedback on the removal of the numarray and oldnumeric
> packages. Consequently the removal will take place on 9/28. Scream now or
> never...
>
I know I always like to get feedback either way ... so +1 for removal.
Thanks.
> Something we have done in matplotlib is that we have made PEP8 a part
> of the tests.
In Iris and Cartopy we've also done this and it works well. While we
transition we have an exclusion list (which is gradually getting shorter).
We've had mixed experiences with automatic reformatting, so prefer t
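A minimal sketch of the test-based approach (assuming the pep8 package;
the target package and exclusion list are illustrative, not our actual
configuration):

import unittest

import pep8


class TestCodeFormat(unittest.TestCase):
    def test_pep8_conformance(self):
        style = pep8.StyleGuide(exclude=['legacy_module.py'])
        result = style.check_files(['mypackage'])
        self.assertEqual(result.total_errors, 0, 'Found PEP8 violations.')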
On 28 June 2013 17:33, Charles R Harris wrote:
> On Fri, Jun 28, 2013 at 5:27 AM, Richard Hattersley wrote:
>> So:
>> np.array([scalar]) => np.array([scalar], dtype=my_dtype)
>> But:
>> np.array(scalar) => np.array(scalar, dtype=object)
>
>
On 21 June 2013 19:57, Charles R Harris wrote:
> You could check the numpy/core/src/umath/test_rational.c.src code to see if
> you are missing something.
In v1.7+ the difference in behaviour between my code and the rational
test case is because my scalar type doesn't subclass np.generic (aka.
PyG
On 21 June 2013 19:57, Charles R Harris wrote:
> You could check the numpy/core/src/umath/test_rational.c.src code to see if
> you are missing something.
My code is based in large part on exactly those examples (I don't
think I could have got this far using the documentation alone!), but
I've rec
On 21 June 2013 14:49, Charles R Harris wrote:
> Bit short on detail here ;) How did you create/register the dtype?
The dtype is created/registered during module initialisation with:
dtype = PyObject_New(PyArray_Descr, &PyArrayDescr_Type);
dtype->typeobj = &Time360Type;
...
PyArra
Hi all,
In my continuing adventures in the Land of Custom Dtypes I've come
across some rather disappointing behaviour in 1.7 & 1.8.
I've defined my own class `Time360`, and a corresponding dtype
`time360` which references Time360 as its scalar type.
Now with 1.6.2 I can do:
>>> t = Time360(2013,
Hi Andrew,
> Maybe a stupid question, but do you know a reference I could look at
> for the metadata and c_metadata fields you described?
Sorry ... no. I've not found anything. :-(
If I remember correctly, I got wind of the metadata aspect from the
mailing list discussions of datetime64. So for
Hi Nathaniel,
Thanks for the useful feedback - it'll definitely save me some time
chasing around the code base.
> dtype callbacks and ufuncs don't in general get access to the
> dtype object, so they can't access whatever parameters exist
Indeed - it is a little awkward. But I'm hoping I can use
Hi David,
On 25 May 2013 15:23, David Cournapeau wrote:
> As some of you may know, Stéfan and I will present a tutorial on
> NumPy C code, so if we do our job correctly, we should have a few new
> people ready to help out during the sprints.
>
Is there any chance you'll be repeating this at Eu
On 24 May 2013 15:12, Richard Hattersley wrote:
> Or is the intended use of parametrisation more like:
> >>> weird = my_stuff.make_dtype([34,33,31,30,30,29,29,30,31,32,34,35])
> >>> a = np.zeros(n, dtype=weird)
>
Or to put it another way, I have a working `mak
Hi all,
I'm in the process of defining some new dtypes to handle non-physical
calendars (such as the 360-day calendar used in the climate modelling
world). This is all going fine[*] so far, but I'd like to know a little bit
more about how much is ultimately possible.
The PyArray_Descr members `me
+1 for getting rid of this inconsistency
We've hit this with Iris (a met/ocean analysis package - see github), and
have had to add several workarounds.
On 19 April 2013 16:55, Chris Barker - NOAA Federal wrote:
> Hi folks,
>
> In [264]: np.__version__
> Out[264]: '1.7.0'
>
> I just noticed that
> Since the files are huge, and would make me run out of memory, I need
> to read data skipping some records.
Is it possible to describe what you're doing with the data once you have
subsampled it? And if there were a way to work with the full resolution
data, would that be desirable?
I ask because
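In the meantime, a sketch of skipping records without reading the whole
file, via a memory map (the file name and record length are
illustrative):

import numpy as np

data = np.memmap('huge_file.dat', dtype=np.float64, mode='r')
records = data.reshape(-1, 1000)    # assuming 1000-value records
sample = records[::100]             # every 100th record, still lazy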
Hi,
[First of all - thanks to everyone involved in the 1.7 release. Especially
Ondřej - it takes a lot of time & energy to coordinate something like this.]
Is there an up to date release schedule anywhere? The trac milestone still
references June.
Regards,
Richard Hattersley
On 20 Septe
mask = [ True  True  True  True  True False False False False False],
fill_value = 99)
Regards,
Richard Hattersley
On 10 September 2012 17:43, Chao YUE wrote:
> Dear all numpy users,
>
> what's the easy way if I just want to change part of the unmasked array
> elements into
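A minimal sketch consistent with the truncated output above (assuming a
10-element array whose first five elements are masked):

import numpy as np

a = np.ma.masked_array(np.arange(10),
                       mask=[True] * 5 + [False] * 5,
                       fill_value=99)
sel = ~a.mask & (a.data > 7)   # select among the unmasked elements only
a[sel] = -1
print(a)                       # [-- -- -- -- -- 5 6 7 -1 -1]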
Hi,
re: valgrind - to get better results you might try the suggestions from:
http://svn.python.org/projects/python/trunk/Misc/README.valgrind
Richard
On 31 August 2012 09:03, Ondřej Čertík wrote:
> Hi,
>
> There is segfault reported here:
>
> http://projects.scipy.org/numpy/ticket/1588
>
> I'v
The project/environment we work with already targets Python 2.7, so it'd be
fine for us and our collaborators. But it's hard to comment in a more
altruistic way without knowing the impact of the change. Is it possible to
summarise the benefits? (e.g. Simplifies NumPy codebase; allows better
support
For what it's worth, I'd prefer ndmasked.
As has been mentioned elsewhere, some algorithms can't really cope with
missing data. I'd very much rather they fail than silently give incorrect
results. Working in the climate prediction business (as with many other
domains I'm sure), even the *potential
[truncated array output: three rows of a 7-column float array]
ys will need a *new* interface.
For example:
>>> import mumpy  # Yes - I know it's a terrible name! But I had to write *something* ... sorry! ;-)
>>> import numpy
>>> a = mumpy.array(...) # makes a masked array
>>> b = numpy.array(...) # makes a plain array
>
just talking about the end-user
experience within Python. In other words, I'm starting from issubclass(POA,
MA) == True, and trying to figure out what the Python API implications
would be.
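A toy illustration of that starting point (all names hypothetical):

class MaskedArray(object):
    """An array whose elements may be missing."""
    def count_masked(self):
        raise NotImplementedError


class PlainArray(MaskedArray):
    """The degenerate case: nothing is ever masked."""
    def count_masked(self):
        return 0

Code written against the masked interface would then accept plain
arrays unchanged.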
On 27 April 2012 14:55, Nathaniel Smith wrote:
> On Fri, Apr 27, 2012 at 11:32 AM, Richard Hatters
I know I used a somewhat jokey tone in my original posting, but fundamentally
it was a serious question concerning a live topic. So I'm curious about the
lack of response. Has this all been covered before?
Sorry if I'm being too impatient!
On 25 April 2012 16:58, Richard Hattersley wro
class.
Putting aside the ABI issue, would it help downstream API compatibility if
the POA was a subclass of the MA? Code that's expecting/casting-to a POA
might continue to work and, where appropriate, could be upgraded in their
own time to accept MAs.
Richard Hattersley
>> 1) The use of string constants to identify NumPy processes. It would
>> seem better to use library defined constants (ufuncs?) for better
>> future-proofing, maintenance, etc.
>
> I don't see how this would help with future-proofing or maintenance --
> can you elaborate?
>
> If this were C, I'd
" use this style of interface? If it's a good
idea for "pad", perhaps it should be applied more generally?
numpy.aggregate(MEAN, ...), numpy.group(MEAN, ...), etc. anyone?
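A toy of the call style I have in mind (nothing like this exists in
NumPy; MEAN stands in for a library-defined constant):

import numpy as np

MEAN = np.mean                          # stand-in library constant

def aggregate(op, a, axis=None):
    return op(a, axis=axis)

print(aggregate(MEAN, np.arange(10)))   # 4.5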
Richard Hattersley
On 30 March 2012 02:55, Travis Oliphant wrote:
>
> On Mar 29, 2012, at 12:53 PM,
> Both work on my computer, while your example indeed leads to a MemoryError
> (because shape 459375*459375 would be a decently big matrix...)
Nicely understated :)
For 32-bit values "decently big" => 786GB
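The arithmetic, for the record:

n = 459375
print(n * n * 4 / 2.0 ** 30)   # 4-byte elements -> ~786.1 GiB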
OK - that's useful feedback.
Thanks!
On 26 March 2012 21:03, Ralf Gommers wrote:
> On Mon, Mar 26, 2012 at 5:42 PM, Charles R Harris wrote:
>> On Mon, Mar 26, 2012 at 2:29 AM, Richard Hattersley wrote:
>>>
sted effort, so are there some aspects of datetime64 which are
more experimental than others? Is there a summary of unresolved issues
and/or plans for change?
Thanks,
Richard Hattersley
On 25 March 2012 13:57, Ralf Gommers wrote:
> Hi,
>
> We decided to label both NA and datetime APIs
Watch out for divide-by-zero from "aNirChannel/aBlueChannel".
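For example, a guarded version of that ratio (array values are
illustrative):

import numpy as np

aNirChannel = np.array([0.8, 0.4, 0.1])
aBlueChannel = np.array([0.2, 0.0, 0.1])
with np.errstate(divide='ignore', invalid='ignore'):
    ratio = aNirChannel / aBlueChannel
ratio = np.where(aBlueChannel == 0, 0.0, ratio)   # choose a fallback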
Regards,
Richard Hattersley
On 19 March 2012 11:04, Matthieu Rigal wrote:
> Dear Numpy fellows,
>
> I have actually a double question, which only aims to answer a single one :
> how to get the followi
+1 on the NEP guideline
As part of a team building a scientific analysis library, I'm
attempting to understand the current state of NumPy development and
its likely future (with a view to contributing if appropriate). The
proposed NEP process would make that a whole lot easier. And if
nothing else