Since some questioned the scaling idea, here are runs first with
scaling and then without. Note how much better the solution is in the
first run (see arrows). It is also evident from the data:
> head(data, 3)
      y x1  x2   x3
1 0.660 20 7.0 1680
2 0.165  5 1.7  350
3 0.660 20 7.0    1
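The comparison above can be sketched roughly as follows. This uses a simulated stand-in data set (only head(data, 3) is shown in the thread) and assumed coefficient magnitudes; the point is only the mechanics of running the same fit with and without parscale.

```r
## Simulated stand-in for the data set; as in the real data, x3 lives on a
## much larger scale than x1 and x2, so its (assumed) coefficient is small.
set.seed(42)
n  <- 40
x1 <- runif(n, 1, 25)
x2 <- runif(n, 1, 10)
x3 <- runif(n, 1, 2000)
y  <- 0.02 * x1 + 0.03 * x2 + 1e-4 * x3 + rnorm(n, sd = 0.005)

## Sum of squared relative errors.
sse_rel <- function(par, y, X) sum(((y - as.vector(X %*% par)) / y)^2)
X <- cbind(x1, x2, x3)

## Run 1: with scaling -- parscale gives the expected magnitude of each
## coefficient, so optim() takes comparably sized steps in every direction.
f1 <- optim(c(0, 0, 0), sse_rel, y = y, X = X,
            control = list(parscale = c(1e-2, 1e-2, 1e-4)))

## Run 2: without scaling -- every coefficient is treated as order 1.
f0 <- optim(c(0, 0, 0), sse_rel, y = y, X = X)

c(scaled = f1$value, unscaled = f0$value)
```

Comparing the two final objective values shows how much the search in the unscaled parameterization struggles with the tiny x3 coefficient.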
> On 14 Nov 2015, at 17:02, Berend Hasselman wrote:
>
>>
>> On 14 Nov 2015, at 16:15, Lorenzo Isella wrote:
>>
>> Dear All,
>> I am using optim() for a relatively simple task: a linear model where
>> instead of minimizing the sum of the squared errors, I minimize the sum
>> of the squared relative errors. [...]
> On 14 Nov 2015, at 16:15, Lorenzo Isella wrote:
>
> Dear All,
> I am using optim() for a relatively simple task: a linear model where
> instead of minimizing the sum of the squared errors, I minimize the sum
> of the squared relative errors.
> However, I notice that the default algorithm is very sensitive to the
> choice of the initial fit parameters [...]
I meant the parscale parameter.
On Sat, Nov 14, 2015 at 10:30 AM, Gabor Grothendieck wrote:
> Typically the parameters being optimized should be of the same order of
> magnitude, or else you can expect numerical problems. That is what the
> fnscale control parameter is for.
>
> On Sat, Nov 14, 2015 at 10:15 AM, Lorenzo Isella wrote: [...]
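For reference, parscale (like fnscale) is passed through optim()'s control list. A toy sketch, with a made-up objective (not the poster's model) whose two parameters differ by three orders of magnitude:

```r
## Toy objective with its minimum at p = c(2, 1000).
f <- function(p) (p[1] - 2)^2 + ((p[2] - 1000) / 1000)^2

## parscale tells optim() the typical magnitude of each parameter;
## internally it optimizes over p / parscale, so the search takes
## comparably sized steps in both directions.
fit <- optim(c(1, 1), f, control = list(parscale = c(1, 1000)))
fit$par  # should be close to c(2, 1000)
```

By contrast, fnscale rescales the objective value itself (fnscale = -1 is the usual trick for maximization), which is why parscale is the relevant control here.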
Typically the parameters being optimized should be of the same order of
magnitude, or else you can expect numerical problems. That is what the
fnscale control parameter is for.
On Sat, Nov 14, 2015 at 10:15 AM, Lorenzo Isella wrote:
> Dear All,
> I am using optim() for a relatively simple task: a linear model where
> instead of minimizing the sum of the squared errors, I minimize the sum
> of the squared relative errors. [...]
Dear All,
I am using optim() for a relatively simple task: a linear model where
instead of minimizing the sum of the squared errors, I minimize the sum
of the squared relative errors.
However, I notice that the default algorithm is very sensitive to the
choice of the initial fit parameters, whereas [...]
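A minimal sketch of this setup, using simulated data and made-up coefficients (the real data set is not part of the message): a relative-error objective handed to optim()'s default Nelder-Mead method.

```r
## Simulated data for illustration; coefficients 0.02 and 0.03 are assumptions.
set.seed(1)
x1 <- runif(50, 1, 20)
x2 <- runif(50, 1, 10)
y  <- 0.02 * x1 + 0.03 * x2 + rnorm(50, sd = 0.005)

## Minimize sum(((y - yhat) / y)^2) rather than sum((y - yhat)^2).
sse_rel <- function(par, y, X) {
  yhat <- as.vector(X %*% par)
  sum(((y - yhat) / y)^2)
}

X   <- cbind(x1, x2)
fit <- optim(c(1, 1), sse_rel, y = y, X = X)
fit$par  # should recover roughly c(0.02, 0.03)
```

Note that the relative-error objective blows up if any y is near zero, and, as discussed above, the default Nelder-Mead search is sensitive to starting values when the coefficients differ greatly in magnitude.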