Hi tutor,
I have a large text file that has chunks of data like this:
headerA n1
line 1
line 2
...
line n1
headerB n2
line 1
line 2
...
line n2
Each chunk is a header followed by the lines that belong to it (up to the
next header), and the header's second field gives the number of lines in
the chunk.
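A minimal sketch of how such a file might be parsed into (header, lines) chunks,
assuming the layout above; read_chunks, the whitespace-split header, and
'data.txt' are illustrative assumptions, not code from the original message:

def read_chunks(path):
    # Yield (name, lines) pairs, trusting the count in the header's second field.
    with open(path) as f:
        for header in f:
            fields = header.split()
            count = int(fields[1])                   # number of lines in this chunk
            lines = [next(f) for _ in range(count)]  # read exactly that many lines
            yield fields[0], lines

for name, lines in read_chunks('data.txt'):
    print(name, len(lines))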
I
On Sat, Feb 20, 2010 at 11:55 AM, Luke Paireepinart wrote:
>
> On Sat, Feb 20, 2010 at 1:50 PM, Kent Johnson wrote:
>
>> On Sat, Feb 20, 2010 at 11:22 AM, Andrew Fithian wrote:
>> > can you help me speed it up even more?
Hi tutor,
I have a statistical bootstrapping script that is bottlenecking on a
Python function, sample_with_replacement(). I wrote this function myself
because I couldn't find a similar one in Python's random library. This
is the fastest version of the function I could come up with (I used
c
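For reference, a minimal sketch of one common way to sample with replacement;
the signature and the assumption that the resample has the same length as the
input are mine, not necessarily what the original function does:

import random

def sample_with_replacement(data):
    # Draw len(data) items from data uniformly at random, with replacement.
    n = len(data)
    return [data[random.randrange(n)] for _ in range(n)]

# A vectorized NumPy alternative, usually much faster for large samples:
# import numpy as np
# resample = np.asarray(data)[np.random.randint(0, len(data), len(data))]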
Hi tutor,
I have this code for generating a confidence interval from an array of
values:
import numpy as np
import scipy.stats

def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    # Sample mean and standard error of the mean
    m, se = np.mean(a), scipy.stats.sem(a)
    # Half-width of the two-sided interval from the t distribution
    h = se * scipy.stats.t.ppf((1 + confidence) / 2.0, n - 1)
    return m, m - h, m + h
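Called like this (the sample values are made up purely for illustration):

data = [2.1, 2.5, 1.9, 2.3, 2.7, 2.0]
m, lower, upper = mean_confidence_interval(data, confidence=0.95)
print(m, lower, upper)   # mean plus the lower and upper 95% bounds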
My guess is that Python sees the space, decides the literal doesn't look
like an identifier, and therefore doesn't intern it. I got the expected
results when I used 'green_ideas' instead of 'green ideas'.
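A quick way to see the difference at the interactive prompt; the True/False
results below are typical CPython behavior, an implementation detail rather
than a language guarantee:

>>> a = 'green_ideas'
>>> b = 'green_ideas'
>>> a is b        # identifier-like literal, interned
True
>>> c = 'green ideas'
>>> d = 'green ideas'
>>> c is d        # the space blocks automatic interning
False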
-Drew
On Wed, Jul 1, 2009 at 6:43 PM, Marc Tompkins wrote:
> On Wed, Jul 1, 2009 at 5:29 PM, Robert Berman wrote: