Pascal is still used, though sometimes under slightly different names. An
actively maintained
piece of software I use extensively is Double Commander. This is built using
Free Pascal, which
will (as of my last try) run my 1989 "Compact Numerical Methods" codes. If you use
optim(), you are
using my "BFGS", "CG", and "Nelder-Mead" codes.
It may be overkill, but package nlsr has function nlxb() that can handle
various models and bound the parameters. Note that bounds can sometimes give
weird results if the bounds and initial parameter guesses are such that the
minimization of the sum of squares gets "stuck".
JN
On 2025-04-21 09:
Every time I give a seminar on optimization (most recently in Feb
at Univ Cote d'Azur -- thank you Yassine for the welcome!) I point out
Algorithms CONVERGE
Programs TERMINATE
If you race a Maserati (fmincon?) on a dirt bike course, you'll likely
get stuck on the first mud mound, which co
This may be way off the mark, but is it possible that the ARM machine is using
the "new" IEEE-754 arithmetic that does not have 80 bit extended? The standard
was changed (in ways that allow non-compliant systems to be compliant) because
ARM does not have the hardware registers. There are reasons why
cross platform. I had been using YAD in Linux, which is nice,
but leaves out Windows and Mac users.
Thanks again.
JN
On 2025-03-07 23:02, Steve Martin wrote:
Hi John,
Does it work if you run R CMD r -i FailBill.R?
Steve
---- Original Message
On 3/7/25 10:45, J C Nash wrote:
I want to use littler (i.e. "r -i ") to run an R script so I can
set up a clickable icon for a program which uses package staplr.
Actually to use staplr to consolidate two files and remove some unwanted
pages before printout.
A minimal example program is FailBill.R, which has the single line
library(staplr)
Thanks Duncan for being a bulldog on this issue. Finding bugs like this takes
energy and tenacity.
JN
On 2025-01-21 11:40, Duncan Murdoch wrote:
And now I've found the elusive bug in show.error.locations = TRUE as well, and
posted a patch for that too.
Back to the original topic of this thread
underlying function, so that may depend on which front end you are using. I'm
talking about R.app on a Mac.
Duncan Murdoch
On 2024-12-18 10:32 a.m., J C Nash wrote:
I've been working on a small personal project that needs to select files for
manipulation from
various directories and move them around in planned ways. file.choose() is a
nice way to select
files. However, I've noticed that if file.choose() is called within a function,
it is the
directory from
On 2024-12-13 13:55, Daniel Lobo wrote:
1. Why nloptr() is failing where other programs can continue with the
same set of data, numbers, and constraints?
2. Is this enough ground to say that nloptr is inferior and user
should not use this in complex problems?
As I indicated in a recent respo
Interesting that alabama and nloptr both use auglag but alabama gets a lower
objective fn.
I think there could be lots of exploration of controls and settings to play
with to find out
what is going on.
alabama::auglag
f, ci, ce,ob,val: 0 -4.71486e-08 1029.77 1029.77 at [1] -0.610594
Dec 2024 14:30:03 -0500
From: J C Nash
To: r-help@r-project.org
The following may or may not be relevant, but I am definitely getting somewhat
different results.
As this was a quick and dirty try while having a snack, it may have bugs.
# Lobo2412.R -- from R Help 20241213
#Original artificial data
eval_g_eq = Hxlobo,
opts =
# list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8, print_level=1))
# -0.2186159 -0.5032066 6.4458823 -0.4125948
sol <- auglag(par=t0, fn=flobo, hin=hinlobo, heq=Hxlobo,
control.outer=list(trace=TRUE))
sol
#===
COBYLA stands for Constrained Optimization BY Linear Approximations.
You seem to have some squares in your functions. Maybe BOBYQA would
be a better choice, though it only does bounds, so you'd have to introduce
a penalty, but then more of the optimx solvers would be available. With
only 4 parameters, a sketch of the penalty idea follows.
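A hedged sketch of that penalty idea. flobo, Hxlobo, hinlobo and t0 are the
objective, constraints and start from earlier in this thread; the penalty
weight mu and the bounds are invented stand-ins.
library(minqa)
fpen <- function(x, mu = 1e4) {
  h <- Hxlobo(x)             # equality constraint residual(s)
  g <- pmin(hinlobo(x), 0)   # inequality violations (alabama convention: hin >= 0)
  flobo(x) + mu * (sum(h^2) + sum(g^2))  # quadratic penalty -> bounds-only problem
}
solb <- bobyqa(par = t0, fn = fpen,
               lower = rep(-10, length(t0)), upper = rep(10, length(t0)))
In practice the bounds should bracket the region of interest, and mu may need
to be increased in stages.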
My late friend Morven Gentleman, not long after he stepped down from being chair
of Computer Science at Waterloo, said that it seemed computer scientists had to
create
a new computer language for every new problem they encountered.
If we could use least squares to measure this approximation, we'
On 2024-09-28 13:57, avi.e.gr...@gmail.com wrote:
Python users often ask if a solution is “pythonic”. But I am not aware
of R users having any special name like “R-thritic” and that may be a
good thing.
This won't answer the questions, but will point out that I wrote the
Nelder-Mead,
BFGS (I call it Variable Metric) and CG methods in BASIC in 1974. They were
re-coded
many times and then incorporated in R around 1995 as I recall (Brian Ripley did
the
incorporation). There are some great 50 year
This is likely tangential, but as a Linux user I have learned to avoid
any directory name with - or ( or ) or other things, even spaces. Whether or not
those are "valid", they seem to cause trouble.
For example "-" can be picked up in bash scripts as a trigger for options.
And in this case, it l
I won't send to list, but just to the two of you, as I don't have
anything to add at this time. However, I'm wondering if this approach
is worth writing up, at least as a vignette or blog post. It does need
a shorter example and some explanation of the "why" and some testing
perhaps.
If there's i
If it is possible
to get a single "formula" as one expression even if spread over multiple lines,
then nlxb() might be able to handle it.
J Nash (maintainer of nlsr and optimx)
On 2023-11-06 11:53, Troels Ring wrote:
HEPESFUNC <-
function(H,SID,HEPTOT,pK1,pK2,pK3) {
XX <- (H^3/(
Dear Juel,
The R lists are automated, and while there is probably someone with access
to remove particular subscribers, it is generally easier to UNSUBSCRIBE to
them.
I believe Jim was on several lists. The full collection is at
https://www.r-project.org/mail.html
The main help list can be unsubscribed from there.
Homework?
On 2023-08-25 12:47, ASHLIN VARKEY wrote:
Sir,
I want to solve the equation Q(u)=mean, where Q(u) represents the quantile
function. Here my Q(u) = (c*u^lambda1)/((1-u)^lambda2), which is the quantile
function of the Davies (Power-Pareto) distribution. Hence I want to solve
(c*u^lambda1)/((1-u)^lambda2) = mean for u.
truly appreciate your kind and valuable
contribution.
Cheers,
Paul
On Sat, Aug 19, 2023 at 3:35 p.m., J C Nash <profjcn...@gmail.com> wrote:
Why bother? nlsr can find a solution from a very crude start.
Mixture <- c(17, 14, 5, 1, 11, 2, 16, 7, 19, 23, 20,
Beta1  0.00629212  5.997e-06  1049   2.425e-42   4.049e-08  721.8
Beta2  0.00867741  1.608e-05  539.7  1.963e-37  -2.715e-08   56.05
Beta3  0.00801948  8.809e-05  91.03  2.664e-24   1.497e-08   10.81
J Nash
On 2023-08-19 16:19, Paul Bernal wrote:
Dear
More to provide another perspective, I'll give the citation of some work
with Harry Joe and myself from over 2 decades ago.
@Article{,
author = {Joe, Harry and Nash, John C.},
title = {Numerical optimization and surface estimation with imprecise
function evaluations},
journal = {Statistics and Computing},
Very true,
Carolyn J. Miller
M.S. Student, Ecology
SUNY-ESF, Environmental Biology
From: Bert Gunter
Sent: Tuesday, January 31, 2023 10:46 AM
To: Carolyn J Miller
Cc: Boris Steipe ; r-help@r-project.org
Subject: Re: [R] question
"The combination of
samples and they are not
representing the same information.
The joys of data analysis!
Thanks for your feedback,
Carolyn J. Miller
M.S. Student, Ecology
SUNY-ESF, Environmental Biology
From: Boris Steipe
Sent: Tuesday, January 31, 2023 10:16 AM
To: Carolyn J Miller
Thank you!
Carolyn J. Miller
M.S. Student, Ecology
SUNY-ESF, Environmental Biology
From: Ebert,Timothy Aaron
Sent: Tuesday, January 31, 2023 9:50 AM
To: Carolyn J Miller ; PIKAL Petr ;
r-help@r-project.org
Subject: RE: question
As indicated here:
https
thoughts I'd appreciate any suggestions.
Thanks for your help and clarifying that for me.
Carolyn J. Miller
M.S. Student, Ecology
SUNY-ESF, Environmental Biology
From: PIKAL Petr
Sent: Tuesday, January 31, 2023 2:36 AM
To: Carolyn J Miller; r-help@r-project.
Hi guys,
I am using the cor() function to see if there are correlations between March
cortisol levels and December cortisol levels and I'm trying to figure out if
the function is doing what I want it to do.
Each sample has its own separate row in the CSV file that I'm working out of.
March Co
A crude but often informative approach is to treat the nonlinear equations as a
nonlinear least squares problem. This is NOT a generally recommended solution
technique,
but can help to show some of the multiple solutions. Moreover, it forces some
attention
to the problem. Unfortunately, it often
, but prospective
students and mentors need to be
starting now.
Cheers,
John Nash
On 2023-01-05 14:52, Rodrigo Ribeiro Remédio wrote:
Rob J Hyndman gives a great explanation here (https://robjhyndman.com/hyndsight/estimation/) of the reasons why results from
R's arima may differ from other software.
A rather crude approach to solving nonlinear equations is to rewrite the
equations as residuals and minimize the sum of squares. A zero sumsquares
gives a solution. It is NOT guaranteed to work, of course.
I recommend a Marquardt approach: minpack.lm::nlsLM or my nlsr::nlfb. You
will need to specify the residual function and starting values.
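An illustration of the idea with an invented two-equation system (roots at
(1,1) and (-1,-1)); the residual and Jacobian functions are mine, not from
the original post.
library(nlsr)
resfn <- function(x) c(x[1]^2 + x[2]^2 - 2, x[1]*x[2] - 1)  # equation residuals
jacfn <- function(x) matrix(c(2*x[1], x[2], 2*x[2], x[1]), nrow = 2)  # Jacobian
sol <- nlfb(start = c(x1 = 2, x2 = 0.5), resfn = resfn, jacfn = jacfn)
sol  # a (near-)zero final sumsquares signals a root of the system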
It is not automatic, but I've used Xournal for different tasks of editing a pdf.
It would certainly allow page numbers to be added, essentially by overlaying a
text box on each page. Clumsy, but possibly useful.
I tend to use Xournal to blank parts of documents that recipients should not
see,
e.
Homework!
On 2022-06-11 10:24, Shantanu Shimpi wrote:
Dear R community,
Please help me in knowing how to do following non-parametric tests:
1. kruskal-Wallis test
2. Wilcoxson rank sum test
3. Lee Cronbach's Alpha test
4. Spearman's Rank correlation test
5. Henry Garrett's ranking test
In 2017 I ran (with Julia Silge and Spencer Graves) a session at UseR! on
navigating the R package universe.
See https://github.com/nashjc/Rnavpkg. It was well attended, and we took a few
bites out of the whale, but
probably the whale didn't notice.
A possibility has been hinted at by Duncan --
I get two similar graphs.
https://web.ncf.ca/nashjc/jfiles/Rplot-Labone-4095.pdf
https://web.ncf.ca/nashjc/jfiles/RplotLabone10K.pdf
Context:
R version 4.1.2 (2021-11-01)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Linux Mint 20.2
Matrix products: default
BLAS: /usr/lib/x86_64-linu
https://www.ibm.com/cloud/blog/python-vs-r
On 2021-10-28 2:57 a.m., Catherine Walt wrote:
Hello members,
I am familiar with python's Numpy.
Now I am looking into R language.
What is the main difference between these two languages? including advantages
or disadvantages.
Thanks.
I can understand Rolf's concern. Make is a tool that is very helpful,
but also not trivial to learn how to make work. If a good Makefile
has been set up, then things are easy, but I've generally found my
skills limited to fairly simple Makefiles.
I would suggest that what is needed is a bit of mod
You have the answer in the error message: the objective function has
been calculated as +/-Inf somehow. You are going to have to figure
out where the function is computed and why it is not finite.
JN
On 2021-08-15 12:41 a.m., 최병권 wrote:
> Hello Dear,
>
> I am Choy from Seoul.
> I have a question
As someone who works on trying to improve the optimization codes in R,
though mainly in the unconstrained and bounds-constrained area, I think my
experience is more akin to that of HWB. That is, for some problems -- and
the example in question does have a reparametrization that removes the
constrai
I might (and that could be a stretch) be expert in unconstrained problems,
but I've nowhere near HWB's experience in constrained ones.
My main reason for wanting gradients is to know when I'm at a solution.
In practice for getting to the solution, I've often found secant methods
work faster, thoug
Use nlsr::nlxb() to get analytic derivatives. Though your problem is pretty
rubbishy --
look at the singular values. (You'll need to learn some details of nlxb()
results to
interpret.)
Note the change from x to t in the formula.
JN
> f1 <- y ~ a+b*sin(2*pi*t)+c*cos(2*pi*t)
> res1 <- nls(f1, dat
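Since the thread's data frame is cut off above, here is a hedged,
self-contained version with invented data to show the nlxb() call:
library(nlsr)
set.seed(7)
t <- seq(0, 3, by = 0.05)
y <- 2 + 0.5*sin(2*pi*t) - 0.3*cos(2*pi*t) + rnorm(length(t), 0, 0.05)
dat <- data.frame(t = t, y = y)
f1 <- y ~ a + b*sin(2*pi*t) + c*cos(2*pi*t)
res1x <- nlxb(f1, data = dat, start = c(a = 1, b = 1, c = 1))
res1x  # the printed singular values show how well-determined each direction is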
This is likely because Hessian is being approximated.
Numerical approximation to Hessian will overstep the bounds because
the routines that are called don't respect the bounds (they likely
don't have the bounds available).
Writing numerical approximations that respect bounds and other constraints is
not trivial; one possible shape of such a routine is sketched below.
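A sketch of one bound-respecting forward/backward difference gradient. This
is an invented helper, not code from any package, and it assumes the box is
wider than the step h.
grad_fd_bounded <- function(fn, x, lower, upper, h = 1e-7) {
  g <- numeric(length(x))
  f0 <- fn(x)
  for (i in seq_along(x)) {
    step <- if (x[i] + h <= upper[i]) h else -h  # step backward near the upper bound
    xx <- x
    xx[i] <- xx[i] + step
    g[i] <- (fn(xx) - f0)/step
  }
  g
}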
Can you put together your example as a single runnable script?
If so, I'll try some other tools to see what is going on. There
have been rumours of some glitches in the L-BFGS-B R implementation,
but so far I've not been able to acquire any that I can reproduce.
John Nash (maintainer of optimx package)
As per my post on this, it is important to distinguish between
"CG" as a general approach and optim::CG. The latter -- my algorithm 22
from Compact Numerical Methods for Computers in 1979 -- never worked
terribly well. But Rcgmin and Rtnmin from optimx often (but not always)
perform quite well.
Th
optim() has no method really suitable for very large numbers of parameters.
- CG as set up has never worked very well in any of its implementations
(I wrote it, so am allowed to say so!). Rcgmin in optimx package works
better, as does Rtnmin. Neither are really intended for 60K parameters
however.
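A minimal sketch of calling Rcgmin through optimx::optimr; the separable
quadratic test function (and the modest n, smaller than the 60K above) are
invented, and an analytic gradient is supplied because finite-difference
gradients are costly at this scale.
library(optimx)
n <- 5000
fq <- function(x) sum((seq_len(n)/n) * (x - 1)^2)  # simple test objective
gq <- function(x) 2 * (seq_len(n)/n) * (x - 1)     # its analytic gradient
ans <- optimr(par = rep(0, n), fn = fq, gr = gq, method = "Rcgmin")
ans$convergence  # 0 indicates a normal termination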
at
is present in a "doc"ument.
Best, J Nash
On 2021-02-18 2:16 p.m., Kevin Thorpe wrote:
>
>> On Feb 18, 2021, at 2:03 PM, Robert Dodier wrote:
>>
>>
>> On Thu, Feb 18, 2021 at 10:12 AM Kevin Thorpe
>> wrote:
>>
>>
6:05 p.m., Duncan Murdoch wrote:
> On 21/01/2021 5:20 p.m., J C Nash wrote:
>> In a separate thread Jeff Newmiller wrote:
>>> rm(list=ls()) is a bad practice... especially when posting examples. It
>>> doesn't clean out everything and it removes
>>> objec
In a separate thread Jeff Newmiller wrote:
> rm(list=ls()) is a bad practice... especially when posting examples. It
> doesn't clean out everything and it removes objects created by the user.
This query is to ask
1) Why is it bad practice to clear the workspace when presenting an example?
I'm as
https://github.com/rstats-gsoc/gsoc2021/wiki
has been set up, but is NOT up to date as Google has announced changes
to the project structure, essentially making them half the size. That actually
fits with some work I'd like to see done to try to consolidate packages nlsr
and minpack.lm into an imp
The issue is almost certainly in the objective function i.e., diagH,
since Nelder Mead doesn't use any matrix operations such as Choleski.
I think you probably need to adjust the objective function to catch
singularities (non-positive definite cases). I do notice that you have
two identical parameters.
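A hedged sketch of such a guard; diagH stands in for the thread's objective,
and 1e10 is an arbitrary large finite penalty value.
safe_obj <- function(par) {
  val <- try(diagH(par), silent = TRUE)  # the objective may fail, e.g. in chol()
  if (inherits(val, "try-error") || !is.finite(val)) return(1e10)
  val
}
Nelder-Mead can then back away from the bad region instead of stopping on an
error.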
Possibly way off target, but I know some of our U of O teaching
systems boot by reverting to a standard image i.e., you get back
to a vanilla system. That would certainly kill any install.
JN
On 2020-08-28 10:22 a.m., Rene J Suarez-Soto wrote:
Hi,
I have a very strange issue. I am currently running R 4.0.2. The files in
my library/base/ are being deleted for some unknown reason. I have had to
install R over 20 times in the last 2 months. I have installed using user
privileges and admin. I have installed it to different directories but the
Thanks to Peter for noting that the numerical derivative part of code doesn't
check bounds in optim().
I tried to put some checks into Rvmmin and Rcgmin in optimx package (they were
separate packages before, and
still on CRAN), but I'm far from capturing all the places where numerical
derivative
My earlier posting on this thread was misleading. I thought the OP was trying to
fit a sigmoid to data. The problem is about fitting 0,1 responses.
The reproducible example cleared this up. Another strong demonstration that
a "simple reproducible example" can bring clarity so much more quickly tha
There is a large literature on nonlinear logistic models and similar
curves. Some of it is referenced in my 2014 book Nonlinear Parameter
Optimization Using R Tools, which mentions nlxb(), now part of the
nlsr package. If useful, I could put the Bibtex refs for that somewhere.
nls() is now getting
For this and the nlminb posting, a reproducible example would be useful.
The optimx package (I am maintainer) would make your life easier in that it
wraps nlminb and optim() and other solvers, so you can use a consistent call.
Also you can compare several methods with opm(), but do NOT use this for
production work.
Apologies in advance if this is a red herring, but I've had a number of issues
with installing latest (4.0.2) version of R and related packages. Maybe my
experience will be helpful to you.
Note that you don't give your OS etc., which also may mean my suggestions are
moot.
I run Linux, mostly Mint.
SANN is almost NEVER the tool to use.
I've given up trying to get it removed from optim(), and will soon give up
on telling folk not to use it.
JN
On 2020-07-22 3:06 a.m., Zixuan Qi wrote:
> Hi,
>
> I encounter a problem. I use optim() function in R to estimate likelihood
> function and the met
The error msg says it all if you know how to read it.
> When I run the optimization (given that I can't find parameters that
> fit the data by eyeball), I get the error:
> ```
> Error in chol.default(object$hessian) :
> the leading minor of order 1 is not positive definite
Your Jacobian (deriv
On 2020-06-15 9:26 a.m., Martin Maechler wrote:
> It allows you to smell the true original fresh air if you
> want instead of having to breathe continuously being wrapped
> inside sugar candy.
Your best chance to get some interest is to adapt an existing package
such as linprog or lpSolve to use your algorithm. Then there will be
sufficient structure to allow R users and developers to see your
ideas working, even if they are not efficiently programmed. It's
always easier to start with so
description of the differences, but ...
JN
On 2020-05-13 11:28 a.m., Rasmus Liland wrote:
> On 2020-05-09 11:40 -0400, J C Nash wrote:
>>
>>> solve(D)
>> [,1] [,2]
>> [1,] -2.0 1.0
>> [2,] 1.5 -0.5
>>> D %*% solve(D)
>> [,1] [,2]
d at interval optimization as an alternative since
> it can lead to provably global minima?
>
> Bernard
> Sent from my iPhone so please excuse the spelling!"
>
>> On May 13, 2020, at 8:42 AM, J C Nash wrote:
>>
>> The Richards' curve is analytic, so nl
The Richards' curve is analytic, so nlsr::nlxb() should work better than nls()
for getting derivatives --
the dreaded "singular gradient" error will likely stop nls(). Also likely,
since even a 3-parameter
logistic can suffer from it (my long-standing Hobbs weed infestation problem
below), is
th
I get the output at the bottom, which seems OK.
Can you include sessionInfo() output?
Possibly this is a quirk of the particular distro or machine, BLAS or LAPACK,
or something in your workspace. However, if we have full information, someone
may be
able to run the same setup in a VM (if I have th
1 4.8097e+01   0.026139 2.6139e-02
> [93,] 3359.2 -1.1565e+01  1.8397e+01   0.026139 2.6139e-02
> [94,] 3359.2  2.3698e+01 -1.6866e+01   0.026139 2.6139e-02
> [95,] 3359.2  4.4700e+03  6.8321e+00 -12.836180 2.6139e-02
> [96,] 3359.2  4.6052e+04  6.8321e+00  -7.158584 2.6139
The double exponential is well-known as a disaster to fit. Lanczos in his
1956 book Applied Analysis, p. 276 gives a good example which is worked through.
I've included it with scripts using nlxb in my 2014 book on Nonlinear Parameter
Optimization Using R Tools (Wiley). The scripts were on Wiley's site for the
book.
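To see the ill-conditioning directly, here is a small invented two-exponential
fit (not the Lanczos data themselves); the model, data and starts are mine.
library(nlsr)
set.seed(3)
t <- seq(0, 1, by = 0.05)
y <- 0.5*exp(-t) + 0.5*exp(-3*t) + rnorm(length(t), 0, 1e-4)
dd <- data.frame(t = t, y = y)
fit2 <- nlxb(y ~ a*exp(-b*t) + c*exp(-d*t), data = dd,
             start = c(a = 1, b = 0.5, c = 1, d = 2))
fit2  # widely separated singular values flag the near-singular Jacobian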
Given there's confirmation of some issue with the repositories,
I'm wondering where it should be reported for fixing. It looks like
the repo has been set up but not copied/moved to the appropriate
server or location, i.e., cloud rather than cran. My guess is that
there are some users struggling and
Did you update your software sources (/etc/apt/sources.list or entry in
/etc/apt/sources.list.d)?
JN
On 2020-04-29 1:01 p.m., Carlos H. Mireles wrote:
> Hello everyone, I'm trying to upgrade R from 3.6.3 to 4.0.0 using the linux
> terminal commands (sudo apt upgrade r-base r-base-dev) but I get
After looking at MASS::fitdistr and fitdistrplus::fitdist, the latter seems to
have code to detect a (near-)singular Hessian that is almost certainly the
"crash site" for this thread. Was that package tried in this work?
I agree with Mark that writing one's own code for this is a lot of work, and
Peter is correct. I was about to reply when I saw his post.
It should be possible to suppress the Hessian call. I try to do this
generally in my optimx package as computing the Hessian by finite differences
uses a lot more compute-time than solving the optimization problem that
precedes the usual
Generally nlsr package has better reliability in getting parameter estimates
because it tries to use automatic derivatives rather than a rather poor
numerical
estimate, and also uses a Levenberg-Marquardt stabilization of the linearized
model. However, nls() can sometimes be a bit more flexible.
This thread points out the important and often overlooked
difference between "convergence" of an algorithm and "termination"
of a program. I've been pushing this button for over 30 years,
and I suspect that it will continue to come up from time to time.
Sometimes it is helpful to put termination c
Yes, I was waiting to see how long before it would be noticed that this is
not the sort of problem for which nls() is appropriate.
And I'll beat the drum again that nls() uses a simple (and generally
deprecated) forward difference derivative approximation that gets into
trouble a VERY high proportion of the time.
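A small numeric illustration of why the forward difference is weak, using
f(x) = exp(x) at x = 1 (true derivative exp(1)); this example is mine, not
from the thread.
f <- function(x) exp(x)
h <- 1e-7
fd <- (f(1 + h) - f(1))/h            # forward difference: O(h) truncation error
cd <- (f(1 + h) - f(1 - h))/(2*h)    # central difference: O(h^2) truncation error
c(forward_err = fd - exp(1), central_err = cd - exp(1))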
you just set Rmpfr precision to double your actual desired precision
> and move on? Though I suppose you might consider more than doubling the
> desired precision to deal with exponentiation [1].
>
> [1] https://en.m.wikipedia.org/wiki/Extended_precision#Working_range
>
> On M
Are you using a PC, please? You may want to consider installing OpenBLAS.
> It’s a bit tricky but worth the time/effort.
>
> Thanks,
> Erin
>
>
> On Sat, Mar 14, 2020 at 2:10 PM J C Nash <profjcn...@gmail.com> wrote:
>
> Rmpfr does "support&quo
Rmpfr does "support" matrix algebra, but I have been trying for some
time to determine if it computes "double" precision (i.e., double the
set level of precision) inner products. I suspect that it does NOT,
which is unfortunate. However, I would be happy to be wrong about
this.
JN
On 2020-03-14 3
Once again, CG and its successors aren't envisaged for 1D problems. Do you
really want to perform brain surgery with a chain saw?
Note that
production4 <- function(L) { - production3(L) }
sjn2 <- optimize(production3, c(900, 1100))
sjn2
gives
$minimum
[1] 900.0001
$objective
[1] 84.44156
Whe
As author of CG (at least the code that was used to build it), I can say I was
never happy with that code. Rcgmin is the replacement I wrote, and I believe
that
could still be improved.
BUT:
- you have a 1D optimization. Use Brent method and supply bounds.
- I never intended CG (or BFGS or Nelder-Mead) for one-dimensional problems.
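A hedged sketch of the Brent suggestion, reusing production3 and the bounds
already used with optimize() above.
sjnb <- optim(par = 1000, fn = production3, method = "Brent",
              lower = 900, upper = 1100)
sjnb$par  # method = "Brent" requires finite bounds and a single parameter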
Flawn Academic
Center (FAC) and Robert A. Welch Hall (WEL).
For more information and to register:
https://stat.utexas.edu/training/ssi
-
MICHAEL J. MAHOMETA
Director of Statistical Consulting and Professional Education
Department of Statistics and Data Sciences | The University of Texas at Austin
I would second Rui's suggestion. However, as a package developer and
maintainer, I think
it is important to note that users need to be encouraged to use good tools. I
work with optimization
codes. My software was incorporated into the optim() function a LONG time ago.
I have updated
and expanded
I'm not going to comment at all on the original question, but on a very common
--
and often troublesome -- mixing of viewpoints about data modelling.
R and other software is used to "fit equations to data" and to "estimate
models".
Unfortunately, the two tasks overlap a good deal. Usually
Dear R community,
I just stumbled upon the following behavior in R version 3.6.0:
set.seed(42)
y <- rep(0, 30)
x <- rbinom(30, 1, prob = 0.91)
# The following will not show any t-statistic or p-value
summary(lm(y~x))
# The following will show t-statistic and p-value
summary(lm(1+y~x))
My expec
Unsubscribe
On Sat, 10 Aug 2019 at 20:30, Spencer Brackett <
spbracket...@saintjosephhs.com> wrote:
> Hello,
>
> I am trying to read the following Xena dataset into R for data analysis:
>
> https://tcga.xenahubs.net/download/TCGA.GBMLGG.sampleMap/HumanMethylation450.gz
>
> I tried to run the foll
Nobody has mentioned Julia. Last year Changcheng Li did a Google Summer of Code
project to
add automatic differentiation capability to R. The autodiffR package was the
result,
but it is still
"beta". The main awkwardness, as I would guess for Wolfram and other wrappings,
is the
non-R side having "update
In reading the original post, I could not help but get a feeling that the
writers were
going through an exercise in learning how to put a package on CRAN. Having
organized "Navigating
the R Package Universe" at UseR!2017, where Spencer Graves, Julia Silge and I
pointed out the
difficulties for users.
I was about to reply to the item with a similar msg as Bert, but then
realized that the students were pointing out that the function (possibly
less than perfectly documented -- I didn't check) only works for complete
years. I've encountered that issue myself when teaching forecasting. So
I was prep
*pi/180
> p1 = 2*ext/cos(inc_radians)
> p2 = p1 + 1i*mykz
> # minimize distance to a provided coherence value
> coh = abs(mycoh - ((p1 / p2) * ((exp(p2*height)-1) / (exp(p1*height)-1))))
> return (coh)
> }
Of course, you might just try a more powerful approach. Duncan responded to the
obvious issue earlier,
but the second problem seems to need the analytic derivatives of the nlsr
package. Note that
nlsLM uses the SAME very simple forward difference derivative approximation for
the Jacobian.
Optimi
I'm posting mainly to try to put a box around the issue to
help others avoid it.
Best,
JN
---
# candlestick function
# J C Nash 2011-2-3
cstick.f <- function(x, alpha = 100){
  x <- as.vector(x)
  r2 <- crossprod(x)
  f <- as.double(r2 + alpha/r2)
  return(f)
}
x <- (-100:100)/5
y <-
nls() is a Model T Ford trying to drive on the Interstate. The code
is quite old and uses approximations that work well when the user
provides a reasonable problem, but in cases where there are mixed large
and small numbers like yours could get into trouble.
Duncan Murdoch and I prepared the nlsr
My friend Morven Gentleman who died recently was for some time chair of the
computer
faculty at Waterloo and (Fortune nomination!) once said "The response of many
computer
scientists to any problem is to invent a new programming language."
Looking at Ross Ihaka's video, I got the impression he w
000","", z, fixed = TRUE)
> [1] "In Alvarez Cabral street by no. 105."
>
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along and
> sticking things into it."
> -- Opus (aka Berkeley Breathed)
I am trying to fix up some image files (jpg) that have comments in them.
Unfortunately, many have had extra special characters encoded.
rdjpgcom, called from an R script, returns a comment e.g.,
"In Alvarez Cabral street by no. 105.\\000"
I want to get rid of "\\000", but sub seems
to be giving
Date: Fri, 14 Dec 2018 19:33:25 -0500
From: J C Nash
To: r-help
When in a console (I was in Rstudio) I can run
dir("../provenance-of-rootfinding/", pattern="\\.Rmd")
to list all Rmd files in the specified directory.
However, when I try to run this in a script under
Rscript --vanilla
I don't get the files to list.
Am I missing something obvious that I s
The postings about polyalgorithms don't mention that optimx has a
tool called polyopt() for this. Though I included it in the package,
it has not been widely tested or applied, and more experience with such
approaches would certainly be of interest to a number of workers, though
I suspect the resul
A bit pedestrian, but you might try
pf <- function(x){5/((1+x)^1) + 5/((1+x)^2) + 5/((1+x)^3) + 105/((1+x)^4) -105}
uniroot(pf,c(-10,10))
curve(pf, -10, 10)
require(pracma)
tryn <- newton(pf, 0)
tryn
pf(0)
pf(0.03634399)
yc <- c(-105, 5,5,5,105)
rooty <- polyroot(yc)
rooty
rootx <- 1/rooty - 1
r