You are starting to sound like Dr Nash [1]... "use optimr".
[1] https://stat.ethz.ch/pipermail/r-help/2018-July/458498.html
On March 14, 2020 2:27:48 PM PDT, Abby Spurdle wrote:
> ##
> I ran before posting, and waited a while...
> (Re: The posting guide, which I'm going to start putting a lot more weight on).
##
I ran before posting, and waited a while...
(Re: The posting guide, which I'm going to start putting a lot more weight on).
Note: I was wondering if the posting guide has a mistake, because
<4*runif(1)> doesn't do anything special...
(Hopef
It seems CG is having problems with the cube root. This converges while
still using CG:
S1 <- optim(1001, function(x) production1(x)^3, method = "CG",
            control = list(fnscale = -1))
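For anyone trying this without the rest of the thread: a minimal,
self-contained sketch of the idea. This production1 is a hypothetical
Cobb-Douglas stand-in with invented constraint weights (its maximum is at
L = 1000), not the OP's actual function.

production1 <- function(L) {
    ## hypothetical constraint: 3*L + 7*K = 10000, so K is implied by L
    K <- (10000 - 3 * L) / 7
    L^0.3 * K^0.7
}

## direct CG maximization, the kind of call under discussion
S0 <- optim(1001, production1, method = "CG",
            control = list(fnscale = -1))

## cubing the (positive) objective keeps the same argmax but steepens
## the gradient, which can help CG's line search
S1 <- optim(1001, function(x) production1(x)^3, method = "CG",
            control = list(fnscale = -1))

S0$par; S1$par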
On Thu, Mar 12, 2020 at 9:34 AM Skyler Saleebyan wrote:
>
> I am trying to familiarize myself with optim() with a relatively simple
> maximization.
I got that last point wrong as well.
(Each iteration is using five evaluations).
Ignore all my comments on this subject.
On 3/14/20, Abby Spurdle wrote:
> It is correctly signalling that it hasn't converged (look at
> optim.sol$convergence, which "indicates that the iteration limit maxit
> had been reached".) But CG should be taking bigger steps. On a 1D
> quadratic objective function with no errors in the derivatives, it
> should take one step to the optimum.
Once again, CG and its successors aren't envisaged for 1D problems. Do you
really want to perform brain surgery with a chain saw?
Note that
production4 <- function(L) { -production3(L) }
sjn2 <- optimize(production3, c(900, 1100))
sjn2

gives

$minimum
[1] 900.0001

$objective
[1] 84.44156
Whe
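An aside, hedged because production3 itself isn't shown in this excerpt:
optimize() minimizes by default, which is presumably why the negated
production4 was defined above. If production3 is the function to be
maximized, optimize() can also do that directly:

sjn3 <- optimize(production3, c(900, 1100), maximum = TRUE)
sjn3$maximum    # location of the maximum
sjn3$objective  # value of production3 there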
On 12/03/2020 8:52 p.m., Abby Spurdle wrote:
L= 1006.536
L= 1006.537
L= 1006.535
It appears to have chosen step size 0.001, not 0.1. It should be
getting adequate accuracy in both 1st and 2nd derivatives.
Those little ripples you see in the plot are not relevant.
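For context on the step sizes quoted above: the finite-difference step that
optim() uses for numerical gradients is the ndeps control, which defaults
to 1e-3 on the parameter scale. A sketch of overriding it, reusing the
hypothetical production1 from the top of this thread:

S2 <- optim(1001, production1, method = "CG",
            control = list(fnscale = -1, ndeps = 0.1))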
> L= 1006.536
> L= 1006.537
> L= 1006.535
> It appears to have chosen step size 0.001, not 0.1. It should be
> getting adequate accuracy in both 1st and 2nd derivatives.
> Those little ripples you see in the plot are not relevant.
I'm impressed.
But you're still wrong.
Try this:
-
#n
On 12/03/2020 7:25 p.m., Abby Spurdle wrote:
There is nothing in that plot to indicate that the result given by
optim() should be accepted as optimal. The numerical approximation to
the derivative is 0.055851 everywhere in your graph
> There is nothing in that plot to indicate that the result given by
> optim() should be accepted as optimal. The numerical approximation to
> the derivative is 0.055851 everywhere in your graph
That wasn't how I intended the plot to be interpreted.
By default, the step size (in x) is 1e-5, which
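A sketch of the kind of check being discussed, a central-difference
derivative at several step sizes (reusing the hypothetical production1;
Abby's actual plotting code, with its 1e-5 default step, isn't shown here):

## central-difference derivative of f at x with step h
num_deriv <- function(f, x, h) (f(x + h) - f(x - h)) / (2 * h)

## how the estimate moves with the step size near the stand-in's optimum
for (h in c(1e-1, 1e-3, 1e-5, 1e-7)) {
    cat("h =", h, " deriv =", num_deriv(production1, 1000, h), "\n")
}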
Hi Abby: Either way, thanks for your efforts with the derivative plot.
Note that John Nash is a SERIOUS EXPERT in optimization so I would just
go by what he said earlier. Also, I don't want to speak for Duncan but
I have a feeling that he meant "inadequacy" in the CG method rather
than a bug in
> (1) An exact solution can be derived quickly
Please disregard note (1) above.
I'm not sure if it was right.
And one more comment:
The conjugate gradient method is an established method.
So the question is, is the optim function applying this method or not...
And assuming that it is, then R is
I'm sorry, Duncan.
But I disagree.
This is not a "bug" in the optim function, as such.
(Or at least, there's nothing in this discussion to suggest that there's a bug).
But rather a floating point arithmetic related problem.
The OP's function looks simple enough, at first glance.
But it's not.
Plotti
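A sketch of the sort of zoomed-in plot being described, again with the
hypothetical production1 (whether and where ripples appear depends on the
function and the window, so this is illustrative only):

## zoom very tightly around the stand-in's maximum at L = 1000; at a
## narrow enough window the floating-point granularity of the objective
## shows up as ripples rather than a smooth arc
curve(production1, from = 1000 - 5e-5, to = 1000 + 5e-5, n = 2001)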
Thanks for the replies. Since I was seeing this glitch with CG in my 1d and
2d formulation of the problem I was trying to figure out what was going on
that led to the failure. I'll switch to a more suitable method and keep
these considerations in mind.
On Thu, Mar 12, 2020, 9:23 AM J C Nash wrote:
It looks like a bug in the CG method. The other methods in optim() all
work fine. CG is documented to be a good choice in high dimensions; why
did you choose it for a 1 dim problem?
Duncan Murdoch
On 12/03/2020 2:30 a.m., Skyler Saleebyan wrote:
> I am trying to familiarize myself with optim() with a relatively simple
> maximization.
As author of CG (at least the code that was used to build it), I can say
I was never happy with that code. Rcgmin is the replacement I wrote, and
I believe that could still be improved.
BUT:
- you have a 1D optimization. Use the Brent method and supply bounds (see
the sketch below).
- I never intended CG (or BFGS or Ne
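A sketch of that first suggestion, reusing the hypothetical production1
from the top of this thread (with method = "Brent", optim() requires
finite lower and upper bounds):

SB <- optim(1001, production1, method = "Brent",
            lower = 900, upper = 1100,
            control = list(fnscale = -1))
SB$par  # location of the maximum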
It is possible to work out this problem explicitly. Playing with a few
different calls to optim shows that the method="L-BFGS-B" gives the correct
answer.
I don't have particular insight into why method="CG" is problematic.
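For completeness, a sketch of the L-BFGS-B call being described, again
with the hypothetical production1 (bounds are optional for this method):

SL <- optim(1001, production1, method = "L-BFGS-B",
            control = list(fnscale = -1))
SL$par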
On Thu, Mar 12, 2020 at 4:12 PM Jeff Newmiller wrote:
> The help file points out that CG is "fragile" ...
The help file points out that CG is "fragile" ... and I would expect that
failing to define a gradient function will exacerbate that.
I think you should use a different algorithm or specify a gradient function.
You might also consider working with the more recent optimr package
contributed by Dr. Nash.
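A sketch of the second suggestion, supplying an analytic gradient to
optim(). The gradient below is worked out for the hypothetical production1
used in the sketches above, so it is illustrative only:

## f(L) = L^0.3 * K^0.7 with K = (10000 - 3*L)/7, so by the log-derivative
## f'(L) = f(L) * (0.3/L - 2.1/(10000 - 3*L))
production1_grad <- function(L) {
    production1(L) * (0.3 / L - 2.1 / (10000 - 3 * L))
}

SG <- optim(1001, production1, gr = production1_grad, method = "CG",
            control = list(fnscale = -1))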
I am trying to familiarize myself with optim() with a relatively simple
maximization.
Description:
L and K are two terms which are constrained to add up to a total of 10
(with respective weights to each). To map this constraint I plugged K into
the function (to make this as simple as possible).