At 06:47 PM 12/23/2014, you wrote:
The performance of fftpack depends very strongly on the array size:
sizes that are powers of two are good, but so are powers of three,
five, and seven, and in general any size whose only prime factors
come from (2, 3, 5, 7). For problems that can tolerate padding,
rounding up the size to the next such number is usually worthwhile.
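A sketch of that padding trick (mine, not from the post; newer SciPy versions also ship a next_fast_len helper that does much the same lookup):

import numpy as np

def next_fast_len(n):
    # Smallest size >= n whose prime factors are all in (2, 3, 5, 7).
    while True:
        m = n
        for p in (2, 3, 5, 7):
            while m % p == 0:
                m //= p
        if m == 1:
            return n
        n += 1

x = np.random.rand(1009)         # 1009 is prime: a slow FFT size
nfft = next_fast_len(len(x))     # -> 1024
X = np.fft.fft(x, n=nfft)        # fft() zero-pads up to nfft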
At 06:32 AM 10/26/2014, you wrote:
On Sun, Oct 26, 2014 at 1:21 PM, Eelco Hoogendoorn
wrote:
> I'm not sure why the memory doubling is necessary. Isn't it possible to
> preallocate the arrays and write to them?
Not without reading the whole file first to know how many rows to preallocate.
Seems
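If two passes over the file are acceptable, the preallocation being discussed could look like this (a minimal sketch; the filename and one-value-per-line layout are my assumptions):

import numpy as np

# Pass 1: count the rows so the array can be sized exactly.
with open('data.txt') as f:
    nrows = sum(1 for _ in f)

# Pass 2: fill the preallocated array in place -- no intermediate
# list and no doubling of peak memory.
out = np.empty(nrows, dtype=np.float64)
with open('data.txt') as f:
    for i, line in enumerate(f):
        out[i] = float(line)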
Most of my work has used the Fourier-based method for "linear" rebinning
of evenly sampled time data of length m (say 1500) to a new number of
samples n (say 2048); the delta-time change per sample is constant
over the array.
I'd like to test the effect of a non-constant delta t, i.e., stretching
the time axis.
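For the constant-delta-t case, scipy.signal.resample does exactly this FFT-based rebinning; the warped-axis test could start from interpolation instead (the warp exponent below is an arbitrary stand-in):

import numpy as np
from scipy.signal import resample

m, n = 1500, 2048
t = np.arange(m) / 1000.0                # assume 1 kHz sampling
x = np.sin(2 * np.pi * 5.0 * t)

# Constant delta-t change: FFT-based resample from m to n samples.
y = resample(x, n)

# Non-constant delta t ("stretching"): evaluate on a warped time axis.
# The FFT method assumes uniform sampling, so interpolate instead.
t_warp = t[-1] * np.linspace(0.0, 1.0, n) ** 1.1
y_warp = np.interp(t_warp, t, x)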
While looking up module info for a company code policy, I noticed that the page
http://www.networksolutions.com/whois/results.jsp?domain=numpy.org
gives "WHOIS LIMIT EXCEEDED - SEE WWW.PIR.ORG/WHOIS FOR DETAILS",
so the domain has been getting a lot of attention today:
http://pir.org/resources/faq/ "Public
Thanks for the clarification, but how is the numpy rounding mode determined?
Round to nearest, ties to even?
http://en.wikipedia.org/wiki/IEEE_floating_point#Rounding_rules
Just curious, as I couldn't find a reference.
- Ray
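For what it's worth, numpy.around documents round-half-to-even, and a quick check agrees:

import numpy as np

# Halves that are exactly representable in binary round to the even neighbor.
print(np.round([0.5, 1.5, 2.5, 3.5]))    # [0. 2. 2. 4.]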
At 07:44 AM 7/27/2014, you wrote:
>On Sun, Jul 27, 2014 at 3:16
At 02:04 AM 7/27/2014, you wrote:
>You won't be able to do it by accident or omission or a lack of
>discipline. It's not a tempting public target like, say, np.seterr().
BTW, why not throw an overflow error in the large float32 sum() case?
Is it too expensive to check while accumulating?
- Ray
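The loss shows up right at 2**24, where float32 can no longer represent the next integer; a quick demonstration (mine, not from the thread):

import numpy as np

x = np.ones(2**24 + 1, dtype=np.float32)
print(x.sum())                    # 16777216.0 -- the last 1.0 is lost, since
                                  # 2**24 + 1 is not representable in float32
print(x.sum(dtype=np.float64))    # 16777217.0 -- a 64-bit accumulator is exact
print(x.mean())                   # ~0.99999994, slightly under 1 for the same reason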
At 02:36 PM 7/25/2014, you wrote:
>But that doesn't substitute for users being aware of the problems. I
>think the docstring and the description of the dtype argument are pretty clear.
Most of the docs for the affected functions do not carry a Note with
the same warning as mean().
- Ray
At 11:29 AM 7/25/2014, you wrote:
>On Fri, Jul 25, 2014 at 5:56 PM, RayS wrote:
> > The important point was that it would be best if all of the
> > methods affected by summing 32 bit floats with 32 bit accumulators
> > had the same Notes as numpy.mean(). We went through
At 07:22 AM 7/25/2014, you wrote:
> We were talking about this in the office, as we
> realized it does affect a couple of lines dealing
> with large arrays, including complex64.
> As I expect Python modules to work uniformly
> cross-platform unless documented otherwise, to me
> that includes 32 vs 64 bit.
At 01:22 AM 7/25/2014, you wrote:
> Actually, the maximum precision I am not so
> sure about, as I personally prefer to make an
> informed decision about the precision used, and to get
> an error on a platform that does not support
> the specified precision, rather than obtain
> subtly or horribly broken results.
Probably a number of scipy places are affected as well:
import numpy
import scipy.stats
print numpy.__version__
print scipy.__version__
# Scan sizes around 2**24 = 16777216, where a float32 running sum
# of ones stops incrementing.
for s in range(16777214, 16777944):
    if scipy.stats.nanmean(numpy.ones((s, 1), numpy.float32))[0] != 1:
        print '\nbroke', s, scipy.stats.nanmean(numpy.ones((s, 1), numpy.float32))[0]
        break
import numpy
print numpy.__version__
# 1864136 * 9 crosses 2**24, the limit of exact float32 counting.
for s in range(1864100, 1864200):
    if numpy.ones((s, 9), numpy.float32).sum() != s * 9:
        print '\nbroke', s
        break
    else:
        print '\r', s,
C:\temp>python np_sum.py
1.8.0b2
1864135
broke 1864136
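For what it's worth, the breakpoint checks out arithmetically (my arithmetic, not from the thread): 1864135 * 9 = 16777215, the last total below 2**24 = 16777216, while 1864136 * 9 = 16777224 requires the float32 running sum to count past 2**24, where adding 1.0 no longer changes it.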
At 04:56 AM 7/11/2014, you wrote:
>Matthew, we posted the release of 0.14.1 last night. Are these
>picked up and built here automatically?
>https://nipy.bic.berkeley.edu/scipy_installers/
I see it's at http://www.lfd.uci.edu/~gohlke/pythonlibs/#pandas
- Ray
I recently tried diff and gradient on some
medical time-domain data, and the result looked like nearly pure noise.
I just found this after seeing John Agosta's post
https://gist.github.com/mblondel/487187
"""
Find the solution for the second order differential equation
u'' = -u
with u(0) = 1
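The gist's problem is easy to reproduce with scipy's ODE tools; a minimal sketch, assuming the truncated second condition is u'(0) = 0 (so the analytic solution is cos(t)):

import numpy as np
from scipy.integrate import odeint

# Rewrite u'' = -u as a first-order system: y = [u, u'], y' = [u', -u].
def rhs(y, t):
    return [y[1], -y[0]]

t = np.linspace(0.0, 2.0 * np.pi, 100)
y0 = [1.0, 0.0]                  # u(0) = 1; u'(0) = 0 is my assumption
u = odeint(rhs, y0, t)[:, 0]
print(np.allclose(u, np.cos(t), atol=1e-6))   # True: solution is cos(t)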
27, 2014 at 7:42 AM, RayS <r...@blue-cove.com> wrote:
I find this interesting, since I work with medical data sets of 100s
of MB, and regularly run into memory allocation problems when doing a
lot of Fourier analysis, waterfalls, etc. The per-process limit seems
to be about 1.3GB on this 6GB quad-i7 with Win7. For live data
collection routines I s
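That ~1.3GB ceiling is characteristic of a 32-bit process; one way around it for file-backed arrays is np.memmap, mapping a window of the file rather than loading all of it (filename and offsets here are hypothetical):

import numpy as np

itemsize = np.dtype(np.float32).itemsize

# Map only a window of the raw file; offset/shape keep the mapping
# small enough for a 32-bit address space, and only touched pages load.
win = np.memmap('bigdata.f32', dtype=np.float32, mode='r',
                offset=100 * 2**20 * itemsize, shape=(2**20,))
print(win.mean())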
I've often wondered about the particulars of the MKL; I have licensed it via
Enthought and distributed compiled works to client(s), and often use
C. Gohlke's distros myself.
- Ray
At 05:29 PM 3/26/2014, you wrote:
Hi,
On Wed, Mar 26, 2014 at 4:48 PM, Matthew Brett
wrote:
> Hi,
>
> Can I check what
At 04:42 AM 3/1/2014, you wrote:
>Currently I am trying to come up with some ideas about enhancing NumPy.
Hello Leo,
How about you implement fft.zoom_fft() as a single function? (Not to
be confused with chirp-Z)
We might be able to lend some ideas, but I've never been satisfied with mine:
http:/
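For the zoom FFT itself, one classic construction (as opposed to chirp-Z) is mix down, low-pass, decimate, then FFT; a sketch of that idea, with names and parameters mine:

import numpy as np
from scipy.signal import decimate

def zoom_fft(x, fs, f_center, zoom):
    """Zoom in on a band around f_center; zoom is an integer factor."""
    t = np.arange(len(x)) / float(fs)
    # 1. Complex mix: shift f_center down to 0 Hz.
    xb = x * np.exp(-2j * np.pi * f_center * t)
    # 2. Low-pass and decimate by the zoom factor (FIR keeps phase simple).
    xd = decimate(xb, zoom, ftype='fir')
    # 3. FFT of the shorter record: the same bin count now spans only
    #    fs/zoom of bandwidth, centered on f_center.
    X = np.fft.fftshift(np.fft.fft(xd))
    f = f_center + np.fft.fftshift(np.fft.fftfreq(len(xd), d=zoom / float(fs)))
    return f, X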
When will we see a
http://sourceforge.net/projects/numpy/files/NumPy/1.8.1/Changelog/download
changelog?
I'd like to get this into our organization's SRS, and a list of fixes
(related or not) would be great.
- Ray
Has anyone alerted C Gohlke?
http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy
- Ray
At 06:07 AM 2/11/2014, you wrote:
>On 11/02/2014 14:56, Sturla Molden wrote:
> > Daniele Nicolodi wrote:
> >
> >> Correct me if I'm wrong, but this assumes that missing data points are
> >> represented with NaN. In my case missing data points are just missing.
> >
> > Then your data cannot be stored
> On 11.02.2014 14:08, Daniele Nicolodi wrote:
>> Hello,
>>
>> I have two time series (2xN dimensional arrays) recorded on the same
>> time basis, but each with its own dead times (and start and end
>> recording times). I would like to obtain two time series containing
>> only the time overlap
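If the two recordings share exact timestamps, the overlap can be taken with boolean membership tests; a minimal sketch, assuming row 0 holds the sample times:

import numpy as np

# 2xN arrays: row 0 = sample times, row 1 = values (layout assumed).
a = np.array([[0., 1., 2., 3., 5.], [10., 11., 12., 13., 15.]])
b = np.array([[2., 3., 4., 5., 6.], [22., 23., 24., 25., 26.]])

# Keep only the samples whose timestamps occur in both recordings.
a_ov = a[:, np.in1d(a[0], b[0])]    # times 2, 3, 5 with a's values
b_ov = b[:, np.in1d(b[0], a[0])]    # the same times with b's values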
At 05:46 AM 2/6/2014, Alan G Isaac wrote:
Compare np.mat('1 2; 3 4')
to np.array([[1, 2], [3, 4]])
for readability and intimidation factor.
Little things matter when getting started
with students who lack programming background.
my $.02:
'1 2; 3 4'
is a non-obvious and non-intuitive way to describe a matrix
At 12:11 PM 2/5/2014, Richard Hattersley wrote:
On 4 February 2014 15:01, RayS <r...@blue-cove.com> wrote:
I was struggling with methods of reading large disk files into
numpy efficiently (not FITS or .npy, just raw files of IEEE floats
from numpy.tostring()
At 07:35 AM 2/4/2014, Julian Taylor wrote:
On Tue, Feb 4, 2014 at 4:27 PM, RayS <r...@blue-cove.com> wrote:
At 07:09 AM 2/4/2014, you wrote:
>On 04/02/2014 16:01, RayS wrote:
> > I was struggling with methods of reading large disk files into
I was struggling with methods of reading large disk files into numpy
efficiently (not FITS or .npy, just raw files of IEEE floats from
numpy.tostring()). When loading arbitrarily large files it would be
nice to not bother reading more than the plot can display before
zooming in. There apparent
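Two ways to read only what the plot needs, both under an assumed file name; the first seeks to a display window, the second maps the file and decimates by slicing:

import numpy as np

itemsize = np.dtype(np.float32).itemsize

# Window read: seek to the displayed region and read just that.
with open('capture.f32', 'rb') as f:
    f.seek(1000000 * itemsize)                 # skip to sample 1,000,000
    window = np.fromfile(f, dtype=np.float32, count=4096)

# Overview read: memory-map, then stride so at most ~4096 samples
# (one per screen column, say) are actually pulled off disk.
mm = np.memmap('capture.f32', dtype=np.float32, mode='r')
overview = np.asarray(mm[::max(1, len(mm) // 4096)])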