On 11/01/2010 09:39 PM, Joon wrote:
> Thanks for the replies.
> I tried several things, such as changing dot into sum in the gradient
> calculations, just to see how they changed the results, but it seems
> that part of the code is the only place where the results are affected
> by the choice of dot/sum.
> I am using a 64-bit machine and EPD Python (I
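One way to judge which of the two results to trust is to redo the reduction at higher precision and see which answer it lands closer to. A small sketch (the array contents here are arbitrary stand-ins, not Joon's data):

import numpy as np

x = np.random.default_rng(1).standard_normal(100_000)

# Accumulate in extended precision (np.longdouble is the 80-bit x87 type
# on most x86 builds, but just float64 on some platforms) and compare.
print(np.sum(x))
print(np.sum(x.astype(np.longdouble)))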
"... Also, IIRC, 1.0 cannot be represented exactly as a float,"
Not true.
Nadav
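Nadav is right: 1.0 is a power of two, so it has an exact binary floating-point representation; it is values like 0.1 that cannot be stored exactly. A quick standard-library check:

from decimal import Decimal

# 1.0 is a power of two, so binary floating point stores it exactly.
print(Decimal(1.0))   # 1
print((1.0).hex())    # 0x1.0000000000000p+0

# 0.1 has no finite binary expansion; the stored value is only close to 0.1.
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625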
-----Original Message-----
From: numpy-discussion-boun...@scipy.org on behalf of Matthieu Brucher
Sent: Tue 02-Nov-10 11:05
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Precision
>> It would be great if someone could let me know why this happens.
>
> They don't use the same implementation, so such tiny differences are
> expected - having exactly the same solution would have been surprising,
> actually. You may be surprised by the difference for such a trivial
> operation
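The tiny differences come down to floating-point addition not being associative: two routines that merely accumulate in a different order can legitimately round differently. A minimal illustration:

# Floating-point addition is not associative: grouping changes the rounding.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6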
On Tue, Nov 2, 2010 at 11:43 AM, Charles R Harris wrote:
>
> It seems to be more of a problem on 32 bits, what with the variety of SSE*.
> OTOH, all 64-bit systems have at least SSE2 available, together with more SSE
> registers, and I believe the x87 instruction set is not available when
> running in 64-bit mode.
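One way to check which code path dot() is dispatched to is to look at numpy's build configuration; sum(), by contrast, is a plain C loop inside numpy itself. The output varies by platform and numpy version:

import numpy as np

# dot() goes through whatever BLAS numpy was built against (possibly an
# SSE2-vectorized one); show_config() prints that build information.
np.show_config()

# float64 has the same nominal precision everywhere; observed differences
# come from accumulation order and intermediate (e.g. x87 80-bit) precision.
print(np.finfo(np.float64).eps)   # 2.220446049250313e-16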
On Mon, Nov 1, 2010 at 7:49 PM, Robert Kern wrote:
> On Mon, Nov 1, 2010 at 20:21, Charles R Harris wrote:
> >
> > On Mon, Nov 1, 2010 at 5:30 PM, Joon wrote:
> >>
> >> Hi,
> >>
> >> I just found that using dot instead of sum in numpy gives me better
> >> results in terms of precision loss. For example, I optimized a function
> >> with scipy.optimize.fmin_bfgs.
On Mon, Nov 1, 2010 at 5:30 PM, Joon wrote:
> Hi,
>
> I just found that using dot instead of sum in numpy gives me better results
> in terms of precision loss. For example, I optimized a function with
> scipy.optimize.fmin_bfgs. For the return value for the function, I tried the
> following two things:
>
> sum(Xb) - sum(denominator)
>
> and
>
> dot(ones(Xb.
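A minimal sketch of the experiment. The second expression is cut off in the archive, so dot(ones(n), Xb - denominator) below is only a plausible reconstruction, and the data are hypothetical stand-ins for Joon's Xb and denominator; math.fsum serves as a correctly rounded reference, and the size of each discrepancy will vary with platform and BLAS:

import math
import numpy as np

# Hypothetical stand-ins for Joon's Xb and denominator.
rng = np.random.default_rng(0)
n = 1_000_000
Xb = rng.standard_normal(n)
denominator = rng.standard_normal(n)

via_sum = np.sum(Xb) - np.sum(denominator)
via_dot = np.dot(np.ones(n), Xb - denominator)  # sums the rounded elementwise differences

# math.fsum tracks exact partial sums, so it gives a correctly rounded
# reference for the difference of the two sums.
ref = math.fsum(Xb) - math.fsum(denominator)

print("sum discrepancy:", via_sum - ref)
print("dot discrepancy:", via_dot - ref)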