Thanks to all who responded,
I've found some very useful code here:
http://courses.washington.edu/fish507/notes.html
In particular, Lecture 3...
Héctor
2015-10-17 7:05 GMT+00:00 Berend Hasselman :
Your model is producing -Inf entries in the vector Be (in function modl and LL)
at some stage during the optimization process.
You should first do something about that before anything else.
Berend
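A minimal sketch of one way to guard against this, assuming the objective passed to optim() is the poster's function LL (the penalty constant is arbitrary):

safeLL <- function(par, ...) {
  val <- LL(par, ...)                # LL as defined in the original post
  if (!is.finite(val)) return(1e10)  # large penalty instead of -Inf / NaN
  val                                # assumes optim() is minimizing this value
}
# optim(start, safeLL, ...)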
> On 17 Oct 2015, at 03:01, Bert Gunter wrote:
>
I made no attempt to examine your details for problems, but in general:
> My problem is that the results change a lot depending on the initial
> values... I can't see what I am doing wrong...
This is a symptom of an overparameterized model: the parameter estimates
are unstable even though the fitted values may change very little.
Hello,
You cannot change the numerical accuracy; it's a built-in constant. To
see it, use
?.Machine
.Machine$double.eps # machine epsilon
More precisely, .Machine$double.eps is "the smallest positive
floating-point number x such that 1 + x != 1".
You can try the following:
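For example, a minimal illustration:

.Machine$double.eps                # about 2.220446e-16 on typical hardware
1 + .Machine$double.eps == 1       # FALSE: 1 + eps is still distinct from 1
1 + .Machine$double.eps / 2 == 1   # TRUE: half of eps rounds away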
Rui, thanks for your reply. Do you mean that it is an issue of accuracy? If I
change the numerical accuracy, will my results come out as expected? Thanks a lot!
Hello,
Your thought is mathematically right but numerically wrong. The result
given by optimize is so close to the real minimum that numerical
accuracy comes into play and it becomes indistinguishable from the value
you're expecting.
You get the minimum up to a certain accuracy, not more.
Hope this helps.
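A small check that makes this concrete (the default tolerance is documented in ?optimize; the exact value returned may vary):

f <- function(x) x^2
res <- optimize(f, lower = -1, upper = 1)
res$minimum                                  # tiny, but generally not exactly 0
abs(res$minimum) < .Machine$double.eps^0.25  # TRUE: within the default tol
optimize(f, lower = -1, upper = 1, tol = 1e-12)$minimum  # tighter tol, closer to 0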
Thank you, professor. I think the minimum of x^2 between -1 and 1 should be
at x = 0, y = 0, but the result is not that. Is there anything wrong with my
reasoning?
Thanks for helping me out!
On Apr 10, 2013, at 03:24, nntx wrote:
> As a simple example, I want to find minimum value for x^2, but it can't be
> obtained by:
> f<-function(x)x^2
> optimize(f,lower=-1,upper=1)
Works fine for me. What did you expect it to do?
> f<-function(x)x^2
> optimize(f,lower=-1,upper=1)
$minimum
[1] ...
On 09-02-2013, at 21:08, Axel Urbiz wrote:
> Dear List,
>
> I'm new in R. I'm trying to solve a simple constrained optimization
> problem.
>
> Essentially, let's say I have a matrix as in the object 'mm' inside the
> function below. My objective function should have a matrix of parameters,
> ...
Hi Greg,
The problem is that I also have restrictions on each variable (they must be
greater than -.07 and smaller than .2), and I'm dealing with a lot of them.
I've already tried the second approach but, as far as I can tell, the
function doesn't satisfy my objective.
That's what I'm doing:
...
There are a couple of options.
First, if you want the mean to equal 7, then the sum must equal 21, so you
can let optim play with only 2 of the variables and set the 3rd to
21 - s1 - s2.
If you want the mean to be greater than 7, then just put in a test: if the
mean is less than 7, return a very large value so the optimizer backs away.
From: r-help-boun...@r-project.org On Behalf Of Hans W Borchers
Sent: Wednesday, July 28, 2010 11:11 AM
To: r-h...@stat.math.ethz.ch
Subject: Re: [R] Optimization problem with nonlinear constraint
Uli Kleinwechter uni-hohenheim.de> writes:
>
> Dear Ravi,
>
> As I've already written to you, the problem indeed is to find a solution
> to the transcendental equation y = x * T^(x-1), given y and T and the
> optimization problem below only a workaround.
I don't think optimization is the right approach here ...
Dear Ravi,
As I've already written to you, the problem indeed is to find a solution
to the transcendental equation y = x * T^(x-1), given y and T and the
optimization problem below only a workaround.
John C. Nash has been so kind as to help me with this. In case anyone faces
a similar problem in the future ...
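For reference, solving y = x * T^(x-1) for x is a root-finding rather than an optimization task; a minimal sketch with base R's uniroot (the values of y, T and the bracketing interval are made up for illustration):

y <- 2; T <- 10                      # placeholder values
g <- function(x) x * T^(x - 1) - y   # a root of g is a solution
uniroot(g, interval = c(0.1, 5))$root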
Hi Uli,
I am not sure if this is the problem that you really want to solve. The
answer is the solution to the equation y = x * T^(x-1), provided a solution
exists. There is no optimization involved here. What is the real problem
that you are trying to solve?
If you want to solve a more meaningful problem, ...
> I don't see why one would want to pretend that the function is continuous.
It isn't.
> The x variable, devices, is discrete.
> Moreover, the whole solution space is small: the possible solutions are
integers in the range of maybe 20-30.
Yes, you are right; what I'd like to think is that the outcome ...
I don't see why one would want to pretend that the function is
continuous. It isn't.
The x variable, devices, is discrete.
Moreover, the whole solution space is small: the possible solutions
are integers in the range of maybe 20-30.
Bill
On Fri, Jun 18, 2010 at 9:00 AM, José E. Lozano wrote:
>
>>>
>> How about smoothing the percentages, and then taking the second
>> derivative to find the inflection point?
>>
>> which.max(diff(diff(lowess(percentages)$y)))
>
> This solution is what I've been using so far. The only difference is that
> I am smoothing the 1st derivative, since it's the one I want to be smooth.
Hello:
> Here is a general approach using smoothing with the Gasser-Mueller kernel,
> which is implemented in the "lokern" package. The optimal bandwidth for
> derivative estimation is automatically chosen using a plug-in approximation.
> The code and the results are attached here.
Maybe I am ...
> How about smoothing the percentages, and then taking the second derivative
> to find the inflection point?
>
> which.max(diff(diff(lowess(percentages)$y)))
This solution is what I've been using so far. The only difference is that I
am smoothing the 1st derivative, since it's the one I want to be smooth.
Here is a general approach using smoothing with the Gasser-Mueller kernel,
which is implemented in the "lokern" package. The optimal bandwidth for
derivative estimation is automatically chosen using a plug-in approximation.
The code and the results are attached here.
Let me know if you have any questions.
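A sketch of that approach on made-up data (it assumes glkerns() from the lokern package and its deriv argument; the S-shaped series stands in for the poster's percentages):

library(lokern)
x <- seq(0, 10, length.out = 200)
y <- 100 / (1 + exp(-(x - 5))) + rnorm(200, sd = 0.5)  # synthetic S-curve
fit1 <- glkerns(x, y, deriv = 1)      # first derivative, plug-in bandwidth
fit1$x.out[which.max(fit1$est)]       # inflection point: where the slope peaks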
min(devices[percentages==max(percentages)])
Bill
How about smoothing the percentages, and then taking the second derivative
to find the inflection point?
which.max(diff(diff(lowess(percentages)$y)))
Bart
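A self-contained illustration of the one-liner on synthetic data (the sigmoid stands in for the poster's percentages; note each diff() shifts indices by one):

set.seed(1)
devices <- 1:50
percentages <- 100 / (1 + exp(-(devices - 25) / 3)) + rnorm(50, sd = 0.5)
sm <- lowess(percentages)$y              # smoothed series (x taken as 1:50)
devices[which.max(diff(diff(sm)))]       # largest second difference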
Ravi Varadhan jhmi.edu> writes:
>
> Dear Hans,
>
> I agree with your comments. My intuition was that the quadratic
> form would be better behaved than the radical form (less
> nonlinear!?). So, I was "hoping" to see a change in behavior when
> the cost function was altered from a radical (i.e. square-root) form ...
From: klau...@gmx.de
Date: Sunday, January 17, 2010 8:06 am
Subject: Re: [R] optimization problem
To: Ravi Varadhan, erwin.kalvela...@gmail.com, hwborch...@googlemail.com
Cc: r-h...@stat.math.ethz.ch
> Dear Erwin, Ravi and Hans Werner,
>
> thanks a lot for your replies. I don't think ...
... they can be used?
Thanks a lot again!
Klaus

-------- Original Message --------
> Date: Sat, 16 Jan 2010 23:42:08 -0500
> From: Ravi Varadhan
> To: Erwin Kalvelagen
> CC: r-h...@stat.math.ethz.ch
> Subject: Re: [R] optimization problem
>
> Interesting!
>
> Now, if I ...
Ravi Varadhan jhmi.edu> writes:
>
> Interesting!
>
> Now, if I change the "cost matrix", D, in the LSAP formulation slightly
> such that it is quadratic, it finds the best solution to your example:
Dear Ravi,
I thought your solution was ingenious, but after the discussion with
Erwin Kalvelagen ...
----- Original Message -----
From: Erwin Kalvelagen
Date: Saturday, January 16, 2010 5:26 pm
Subject: Re: [R] optimization problem
To: Ravi Varadhan
Cc: r-h...@stat.math.ethz.ch
> I believe this is a very good approximation but not a 100% correct
> formulation of the original problem ...
----- Original Message -----
From: Ravi Varadhan
Date: Saturday, January 16, 2010 10:00 am
Subject: Re: [R] optimization problem
To: Erwin Kalvelagen
Cc: r-h...@stat.math.ethz.ch
> Thanks, Erwin, for pointing out the ...
----- Original Message -----
From: Erwin Kalvelagen
Date: Saturday, January 16, 2010 2:35 am
Subject: Re: [R] optimization problem
To: r-h...@stat.math.ethz.ch
Ravi Varadhan jhmi.edu> writes:
> dist <- function(A, B) {
> # Frobenius norm of A - B
> n <- nrow(A)
> sum(abs(B - A))
> }
>
See http://mathworld.wolfram.com/FrobeniusNorm.html for a definition of the
Frobenius norm.
Erwin
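To make the distinction explicit: the quoted dist() returns sum(abs(B - A)), the entrywise L1 distance, not the Frobenius norm (presumably Erwin's point in linking the definition). A corrected sketch:

frobenius_dist <- function(A, B) {
  sqrt(sum((B - A)^2))        # square root of the sum of squared entries
}
# or, in base R: norm(B - A, type = "F")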
--
Hi Klaus,
This problem can be cast as a linear sum assignment problem (LSAP) and solved
using the `solve_LSAP' function in the "clue" package. I show how to solve a
slightly more general problem: finding a permutation of a matrix A (n x n) that
minimizes the Frobenius distance to a given matrix B. ...
klau...@gmx.de> writes:
>
> Dear R-experts,
>
> this is not a direct R-problem but I hope you can help me anyway.
>
> I would like to minimize || PG - I || over P, where P is a p x p permutation
> matrix (obtained by permuting the rows and/or columns of the identity
> matrix), G is a given p x p matrix ...
Note your problem is equivalent to the unconstrained problem:
f(a1^2 / (a1^2 + a2^2), a2^2 / (a1^2 + a2^2), x3, x4, a3^2, a4^2)
optimizing over a1, a2, a3, a4, x3, x4: the first two arguments are then
nonnegative and sum to 1, and the last two are nonnegative, by construction.
See the optimization task view
for specific functions:
http://cran.r-project.org/web/views/Optimization.html
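A minimal sketch of the trick with optim (f is a placeholder for the original objective):

f <- function(x) sum(x^2)            # placeholder objective of six arguments

g <- function(a) {                   # a is unconstrained in R^6
  s <- a[1]^2 + a[2]^2
  f(c(a[1]^2 / s, a[2]^2 / s, a[3], a[4], a[5]^2, a[6]^2))
}
res <- optim(rep(1, 6), g)           # plain Nelder-Mead, no constraints needed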
In case anyone is still reading this thread, I want to add this:
In a current problem (a data-shy five-parameter nonlinear
optimization), I found "nlminb" markedly more reliable than
"optim" with method "L-BFGS-B". In reviewing the fit I made, I
found that "optim" only came close to its own minimum
"Hans W. Borchers" <[EMAIL PROTECTED]> wrote:
> Why not use one of the global optimizers in R, for instance 'DEoptim', and
> then apply optim() to find the last six decimals? I am relatively sure that
> the Differential Evolution operator has a better chance to come near a
> global optimum than a
Why not use one of the global optimizers in R, for instance 'DEoptim', and
then apply optim() to find the last six decimals? I am relatively sure that
the Differential Evolution operator has a better chance to come near a
global optimum than a loop over optim(), though 'DEoptim' may be a bit slow.
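A sketch of that two-stage strategy (the objective is a toy multimodal function, and DEoptim's defaults are used):

library(DEoptim)

f <- function(x) sum(x^2) + 10 * sin(5 * x[1]) * cos(3 * x[2])  # many local minima

de <- DEoptim(f, lower = c(-5, -5), upper = c(5, 5),
              control = DEoptim.control(trace = FALSE))
res <- optim(de$optim$bestmem, f, method = "BFGS")  # local polish of the DE result
res$par
res$value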
tedzzx <[EMAIL PROTECTED]> wrote:
>
> If I want to find the global minimum, how should I change my code?
I sometimes use optim() within a loop, with random starting
values for each iteration of the loop. You can save the
objective function value each time and pick the best solution.
Last time I ...
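A sketch of that multistart loop (the objective f is a toy function with several local minima; the bounds for the random starts are arbitrary):

f <- function(p) (p[1]^2 - p[2])^2 + 10 * sin(5 * p[1])

set.seed(42)
best <- list(value = Inf)
for (i in 1:50) {
  start <- runif(2, -3, 3)                    # random starting values
  fit <- try(optim(start, f), silent = TRUE)  # some starts may fail
  if (!inherits(fit, "try-error") && fit$value < best$value) best <- fit
}
best$par
best$value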
tedzzx gmail.com> writes:
>
>
> Hi, all
>
> I am facing an optimization problem. I am using the function optim(par, fun),
> but I find that every time I give a different initial guess for the
> parameters, I get a different result. For example,
> I have a data frame named data:
> head(data)
> ...
If I want to find the global minimum, how should I change my code?
Thanks a lot!
Armin Meier wrote:
>
> Hi,
> I guess your function has several local minima and depending on where
> you start, i.e. your initial values, you get into a different minimum.
>
> HTH
> Armin
>
tedzzx asked about apparent multiple optima. See below.
Users should be aware that optim() does local optimization. The default
Nelder-Mead approach is fairly robust at finding such a local minimum, though
it may halt if it is on a flat area of the loss-function surface. I would
recommend trying ...
Hi,
I guess your function has several local minima and depending on where
you start, i.e. your initial values, you get into a different minimum.
HTH
Armin
Is this the actual problem that you are trying to optimize, i.e. optimize a
function with respect to a scalar unknown parameter?
If so, just use "optimize" and specify the search interval for the algorithm
as [0,1].
Ravi.
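For instance (f is a placeholder for the actual objective):

f <- function(p) (p - 0.3)^2            # hypothetical scalar objective
optimize(f, interval = c(0, 1))         # returns $minimum and $objective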
On the help page for nlm (type ?nlm) check out the 'See Also' section.
It mentions other functions such as 'optim' and 'nlminb' which can do
constrained optimizations.
-----Original Message-----
From: [EMAIL PROTECTED] On Behalf Of Jiang Peng
Sent: Monday, November 24, 2...