On Thu, May 08, 2008 at 10:04:28AM -0600, Charles R Harris wrote:
>What realistic probability is in the range exp(-1000) ?
I recently tried to do Fisher information estimation of a very noisy
experiment. For this I needed to calculate the norm of the derivative of
the probability over the entire [...]
On Thu, May 8, 2008 at 2:37 PM, Warren Focke <[EMAIL PROTECTED]>
wrote:
>
>
> On Thu, 8 May 2008, Charles R Harris wrote:
>
> > On Thu, May 8, 2008 at 11:46 AM, Anne Archibald <
> [EMAIL PROTECTED]>
> > wrote:
> >
> >> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >>>
> >>> On Thu, May 8, 2008
On Thu, 8 May 2008, Charles R Harris wrote:
> On Thu, May 8, 2008 at 11:46 AM, Anne Archibald <[EMAIL PROTECTED]>
> wrote:
>
>> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>>>
>>> On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]>
>> wrote:
When you're running an optimizer over a PDF, you will be stuck in the
region of exp(-1000) for a substantial amount of time before you get
to the peak. [...]
2008/5/8 T J <[EMAIL PROTECTED]>:
> On 5/8/08, Anne Archibald <[EMAIL PROTECTED]> wrote:
> > Is "logarray" really the way to handle it, though? it seems like you
> > could probably get away with providing a logsum ufunc that did the
> > right thing. I mean, what operations does one want to do on
On Thu, May 8, 2008 at 11:46 AM, Anne Archibald <[EMAIL PROTECTED]>
wrote:
> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >
> > On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]>
> wrote:
> > >
> > > When you're running an optimizer over a PDF, you will be stuck in the
> > > region of exp(-1000) for a substantial amount of time before you get
> > > to the peak. [...]
On 5/8/08, Anne Archibald <[EMAIL PROTECTED]> wrote:
> Is "logarray" really the way to handle it, though? it seems like you
> could probably get away with providing a logsum ufunc that did the
> right thing. I mean, what operations does one want to do on logarrays?
>
> add -> logsum
> subtract -> ?
On Thu, May 8, 2008 at 2:12 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Thu, May 8, 2008 at 11:46 AM, Anne Archibald <[EMAIL PROTECTED]>
> wrote:
>>
>> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>> >
>> > On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]>
>> > wrote:
>>
On Thu, May 8, 2008 at 11:46 AM, Anne Archibald <[EMAIL PROTECTED]>
wrote:
> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >
> > On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]>
> wrote:
> > >
> > > When you're running an optimizer over a PDF, you will be stuck in the
> > > region of exp(-1000) for a substantial amount of time before you get
> > > to the peak. [...]
On Thu, May 8, 2008 at 12:39 PM, Anne Archibald <[EMAIL PROTECTED]>
wrote:
> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >
> > David, what you are using is a log(log(x)) representation internally.
> IEEE
> > is *not* linear, it is logarithmic.
>
> As Robert Kern says, yes, this is exactly what the OP and all the rest
> of us want. [...]
2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>
> David, what you are using is a log(log(x)) representation internally. IEEE
> is *not* linear, it is logarithmic.
As Robert Kern says, yes, this is exactly what the OP and all the rest
of us want.
But it's a strange thing to say that IEEE is logarithmic. [...]
On Thu, May 8, 2008 at 11:31 AM, Warren Focke <[EMAIL PROTECTED]>
wrote:
>
>
> On Thu, 8 May 2008, Charles R Harris wrote:
>
> > On Thu, May 8, 2008 at 10:11 AM, Anne Archibald <
> [EMAIL PROTECTED]>
> > wrote:
> >
> >> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >>>
> >>> What realistic probability is in the range exp(-1000) ?
On Thu, May 8, 2008 at 12:53 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Thu, May 8, 2008 at 11:18 AM, David Cournapeau <[EMAIL PROTECTED]>
> wrote:
>>
>> On Fri, May 9, 2008 at 2:06 AM, Nadav Horesh <[EMAIL PROTECTED]>
>> wrote:
>> > Isn't the 80-bit float (float96 on IA32, float128 on AMD64) enough? [...]
On Thu, May 8, 2008 at 11:18 AM, David Cournapeau <[EMAIL PROTECTED]>
wrote:
> On Fri, May 9, 2008 at 2:06 AM, Nadav Horesh <[EMAIL PROTECTED]>
> wrote:
> > Isn't the 80-bit float (float96 on IA32, float128 on AMD64) enough? It
> > has a 64-bit mantissa and can represent numbers up to nearly 1E(+-)5000.
2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>
> On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]> wrote:
> >
> > When you're running an optimizer over a PDF, you will be stuck in the
> > region of exp(-1000) for a substantial amount of time before you get
> > to the peak. [...]
On Thu, 8 May 2008, Charles R Harris wrote:
> On Thu, May 8, 2008 at 10:11 AM, Anne Archibald <[EMAIL PROTECTED]>
> wrote:
>
>> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>>>
>>> What realistic probability is in the range exp(-1000) ?
>>
>> Well, I ran into it while doing a maximum-likelihood fit - my early
>> guesses had exceedingly low probabilities, but I needed to know which
>> way the probabilities were increasing.
On Fri, May 9, 2008 at 2:06 AM, Nadav Horesh <[EMAIL PROTECTED]> wrote:
> Is the 80 bits float (float96 on IA32, float128 on AMD64) isn't enough? It
> has a 64 bits mantissa and can represent numbers up to nearly 1E(+-)5000.
It only makes the problem happen later, I think. If you have a GMM with
[...]
On Fri, May 9, 2008 at 1:54 AM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> Yes, and Gaussians are a delusion beyond a few sigma. One of my pet peeves.
> If you have more than 8 standard deviations, then something is fundamentally
> wrong in the concept and formulation.
If you have a mixture of [...]
Re: [Numpy-discussion] Log Arrays
On Thu, May 8, 2008 at 10:11 AM, Anne Archibald <[EMAIL PROTECTED]>
wrote:
> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >
> > What realistic probability is in the range exp(-1000) ?
>
> Well, I ran into it while doing a maximum-likelihood fit - my early
> guesses had exceedingly low probabilities, but I needed to know which
> way the probabilities were increasing.
On Thu, May 8, 2008 at 12:02 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>> On Thu, May 8, 2008 at 11:25 AM, Charles R Harris
>> <[EMAIL PROTECTED]> wrote:
>> >
>> > On Thu, May 8, 2008 at 10:11 AM, Anne Archibald
>>
On Thu, May 8, 2008 at 10:56 AM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Thu, May 8, 2008 at 11:25 AM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
> > On Thu, May 8, 2008 at 10:11 AM, Anne Archibald <
> [EMAIL PROTECTED]>
> > wrote:
> >>
> >> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>
On Thu, May 8, 2008 at 10:52 AM, David Cournapeau <[EMAIL PROTECTED]>
wrote:
> On Fri, May 9, 2008 at 1:25 AM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
>
> >
> > But to expand on David's computation... If the numbers are stored without
> > using logs, i.e., as the exponentials, then the sum is of the form:
> >
> > x_1*2**y_1 + ... + x_i*2**y_i [...]
On Thu, May 8, 2008 at 11:25 AM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Thu, May 8, 2008 at 10:11 AM, Anne Archibald <[EMAIL PROTECTED]>
> wrote:
>>
>> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>> >
>> > What realistic probability is in the range exp(-1000) ?
>>
>> Well, I ran into it while doing a maximum-likelihood fit - my early
>> guesses had exceedingly low probabilities, but I needed to know which
>> way the probabilities were increasing.
On Thu, May 8, 2008 at 10:42 AM, David Cournapeau <[EMAIL PROTECTED]>
wrote:
> On Fri, May 9, 2008 at 1:04 AM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
>
> > < 1e-308 ?
>
> Yes, all the time. I mean, if it were not, why would people bother with
> long double and co? Why would denormals exist? [...]
On Fri, May 9, 2008 at 1:25 AM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
>
> But to expand on David's computation... If the numbers are stored without
> using logs, i.e., as the exponentials, then the sum is of the form:
>
> x_1*2**y_1 + ... + x_i*2**y_i
You missed the part on parametric models. [...]
On Fri, May 9, 2008 at 1:04 AM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> < 1e-308 ?
Yes, all the time. I mean, if it were not, why would people bother with
long double and co? Why would denormals exist? I don't consider the
comparison with the number of particles to be really relevant here. [...]
On Thu, May 8, 2008 at 10:11 AM, Anne Archibald <[EMAIL PROTECTED]>
wrote:
> 2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
> >
> > What realistic probability is in the range exp(-1000) ?
>
> Well, I ran into it while doing a maximum-likelihood fit - my early
> guesses had exceedingly low probabilities, but I needed to know which
> way the probabilities were increasing.
2008/5/8 Charles R Harris <[EMAIL PROTECTED]>:
>
> What realistic probability is in the range exp(-1000) ?
Well, I ran into it while doing a maximum-likelihood fit - my early
guesses had exceedingly low probabilities, but I needed to know which
way the probabilities were increasing.
Anne
2008/5/8 David Cournapeau <[EMAIL PROTECTED]>:
> On Thu, May 8, 2008 at 10:20 PM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
> >
> > Floating point numbers are essentially logs to base 2, i.e., integer
> > exponent and mantissa between 1 and 2. What does using the log buy you?
>
> Precision, of course. I am not sure I understand the notation base =
> 2, [...]
On Thu, May 8, 2008 at 9:20 AM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> On Thu, May 8, 2008 at 10:20 PM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
> >
> > Floating point numbers are essentially logs to base 2, i.e., integer
> > exponent and mantissa between 1 and 2. What does using the log buy you?
On Thu, May 8, 2008 at 10:20 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
>
> Floating point numbers are essentially logs to base 2, i.e., integer
> exponent and mantissa between 1 and 2. What does using the log buy you?
Precision, of course. I am not sure I understand the notation base =
2, [...]
On Thu, May 8, 2008 at 1:26 AM, T J <[EMAIL PROTECTED]> wrote:
> Hi,
>
> For precision reasons, I almost always need to work with arrays whose
> elements are log values. My thought was that it would be really neat
> to have a 'logarray' class implemented in C or as a subclass of the
> standard array class. [...]
On Thu, May 8, 2008 at 12:26 AM, T J <[EMAIL PROTECTED]> wrote:
>
> >>> x = array([-2,-2,-3], base=2)
> >>> y = array([-1,-2,-inf], base=2)
> >>> z = x + y
> >>> z
> array([-0.415037499279, -1.0, -3])
> >>> z = x * y
> >>> z
> array([-3, -4, -inf])
> >>> z[:2].sum()
> -2.41503749928
>
[...]
Hi,
For precision reasons, I almost always need to work with arrays whose
elements are log values. My thought was that it would be really neat
to have a 'logarray' class implemented in C or as a subclass of the
standard array class. Here is a sample of how I'd like to work with
these objects:
[...]