Re: [Rd] nlminb with constraints failing on some platforms

2019-01-31 Thread ProfJCNash
This does not address the failure on some platforms, which remains an
important issue. However, what follows may provide a temporary workaround
until the source of the problem is uncovered.

FWIW, the problem seems fairly straightforward for most of the optimizers at
my disposal in the R-forge (development) version of the optimx package at
https://r-forge.r-project.org/projects/optimizer/

I used the following code:

## KKristensen19nlminb.R -- reproduce the reported case, then try the
## optimx/opm() optimizers with several numerical gradient choices
f <- function(x) sum( log(diff(x)^2 + 0.01) + (x[1] - 1)^2 )
opt <- nlminb(rep(0, 10), f, lower = -1, upper = 3)
xhat <- rep(1, 10)
abs( opt$objective - f(xhat) ) < 1e-4  ## Must be TRUE
opt
library(optimx)  # R-forge development version
## all available methods, with opm()'s default gradient handling
optx <- opm(rep(0, 10), f, lower = -1, upper = 3, method = "ALL")
summary(optx, order = value)
## central-difference numerical gradient approximation
optxc <- opm(rep(0, 10), f, gr = "grcentral", lower = -1, upper = 3, method = "ALL")
summary(optxc, order = value)
## numDeriv-based gradient approximation
optxn <- opm(rep(0, 10), f, gr = "grnd", lower = -1, upper = 3, method = "ALL")
summary(optxn, order = value)

It should not be too difficult to supply the gradient analytically, which
would give faster and more reliable outcomes.
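
For anyone who wants to try that, here is a sketch (which I have not run)
of an analytic gradient for this objective. Note that sum() recycles the
scalar (x[1]-1)^2 across the n-1 elements of the log term, so that piece
contributes with a factor of length(x) - 1.

fg <- function(x) {
  n <- length(x)
  d <- diff(x)                   # d[i] = x[i+1] - x[i]
  w <- 2 * d / (d^2 + 0.01)      # derivative of log(d^2 + 0.01) w.r.t. d
  g <- numeric(n)
  g[-n] <- g[-n] - w             # each d[i] has coefficient -1 on x[i]
  g[-1] <- g[-1] + w             # and coefficient +1 on x[i+1]
  g[1] <- g[1] + 2 * (n - 1) * (x[1] - 1)  # recycled (x[1]-1)^2 term
  g
}
optg <- nlminb(rep(0, 10), f, gradient = fg, lower = -1, upper = 3)
abs(optg$objective - f(rep(1, 10))) < 1e-4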


JN



On 2019-01-28 3:56 a.m., Kasper Kristensen via R-devel wrote:
> I've noticed unstable behavior of nlminb on some Linux systems. The problem 
> can be reproduced by compiling R-3.5.2 using gcc-8.2 and running the 
> following snippet:
> 
> f <- function(x) sum( log(diff(x)^2+.01) + (x[1]-1)^2 )
> opt <- nlminb(rep(0, 10), f, lower=-1, upper=3)
> xhat <- rep(1, 10)
> abs( opt$objective - f(xhat) ) < 1e-4  ## Must be TRUE
> 
> The example works perfectly when removing the bounds. However, when bounds 
> are added the snippet returns 'FALSE'.
> 
> An older R version (3.4.4), compiled using the same gcc-8.2, did not have the 
> problem. Between the two versions R has changed the flags to compile Fortran 
> sources:
> 
> < SAFE_FFLAGS = -O2 -fomit-frame-pointer -ffloat-store
> ---
>> SAFE_FFLAGS = -O2 -fomit-frame-pointer -msse2 -mfpmath=sse
> 
> Reverting to the old SAFE_FFLAGS 'solves' the problem.
> 
>> sessionInfo()
> R version 3.5.2 (2018-12-20)
> Platform: x86_64-pc-linux-gnu (64-bit)
> Running under: Scientific Linux release 6.4 (Carbon)
> 
> Matrix products: default
> BLAS/LAPACK: 
> /zdata/groups/nfsopt/intel/2018update3/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64_lin/libmkl_gf_lp64.so
> 
> locale:
> [1] C
> 
> attached base packages:
> [1] stats graphics  grDevices utils datasets  methods   base
> 
> loaded via a namespace (and not attached):
> [1] compiler_3.5.2



Re: [Rd] nlminb with constraints failing on some platforms

2019-01-31 Thread ProfJCNash
I'm not entirely sure what you are asking. However, optimx is really NOT
meant as a production tool. I intend it as a way to
1) try out a lot of optimizers quickly on a user's problem or problem
class, to select a method or methods that suit it well; and
2) provide, in the source code of optimr(), an example of how to call
each of the particular optimizers. They all have rather different syntax
elements, which are in fact the biggest headache in building and
extending optimx.
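
As a rough illustration of point 2), consider the test function from
earlier in this thread. This is only a sketch, assuming the R-forge
development version of optimx and that the named solvers are available:

f <- function(x) sum( log(diff(x)^2 + 0.01) + (x[1] - 1)^2 )
## native calls: each solver has its own argument names and result layout
a1 <- nlminb(rep(0, 10), f, lower = -1, upper = 3)   # minimum in a1$objective
a2 <- optim(rep(0, 10), f, method = "L-BFGS-B",
            lower = -1, upper = 3)                   # minimum in a2$value
## optimr() hides those differences behind one optim()-like interface
library(optimx)
b1 <- optimr(rep(0, 10), f, method = "nlminb",   lower = -1, upper = 3)
b2 <- optimr(rep(0, 10), f, method = "L-BFGS-B", lower = -1, upper = 3)
c(b1$value, b2$value)   # same element names whichever method is used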

Best, JN

On 2019-01-31 9:26 a.m., Amit Mittal wrote:
> Prof Nash, Prof Galanos
> 
> Is it possible to use a generic code stub in front of packages that use
> optimx to improve optimx use or curtail it according to the requirements?
> 
> 
> Best Regards
> 
> Amit
> 
> +91 7899381263
> 
> Please request Skype as available 
> 
> 5^th  Year FPM (Ph.D.) in Finance and Accounting Area
> 
> Indian Institute of Management, Lucknow, (U.P.) 226013 India
> 
> http://bit.ly/2A2PhD
> 
> AEA Job profile : http://bit.ly/AEAamit
> 
> FMA 2 page profile : http://bit.ly/FMApdf2p
> 
> SSRN top10% downloaded since July 2017: http://ssrn.com/author=2665511
> 
> On Thu, Jan 31, 2019 at 7:22 PM ProfJCNash <profjcn...@gmail.com> wrote:
> [...]



Re: [Rd] nlminb with constraints failing on some platforms

2019-02-08 Thread ProfJCNash
It may be worth noting that both Avraham and I are members of the
histoRicalg project (https://gitlab.com/nashjc/histoRicalg), which has
some modest funding from the R Consortium. The type of concern this
nlminb thread raises is why the project was proposed: older codes,
which may predate IEEE arithmetic and modern language processors, were
often built with a different understanding of how algorithmic
expressions would be executed.

Documenting the resolution of this issue, and of others like it, will
be welcome, and we hope to collect such results in a form that may help
resolve similar matters in the future.

Best, JN


On 2019-02-06 7:15 a.m., Avraham Adler wrote:

> If it helps, the BLAS I used is compiled to use 6 threads.
>
> On Wed, Feb 6, 2019 at 3:47 AM Berend Hasselman  wrote:
>
>>> On 6 Feb 2019, at 10:58, Martin Maechler wrote:
>>> [...]
>>> I summarize what has been reported so far:
>>>
>>> Failure in these cases
>>> ======================
>>> 1. Kasper K ("Scientific Linux", self compiled R, using Intel's MKL
>>>    for BLAS/LAPACK)
>>> 2. (by Bill Dunlap): Microsoft R Open (MRO) 3.4.2, also using
>>>    MKL with 12 cores
>>> 3. (by Brad Bell)  : R 3.5.2 Fedora 28 (x86_64) pkg, OpenBLAS(?)
>>> 4. (by MM)         : R 3.5.2 Fedora 28 (x86_64) pkg, BLAS+Lapack = OpenBLAS
>>>
>>> Success
>>> =======
>>> - (by MM)          : R-devel, R 3.5.2 patched on FC28, *self compiled* gcc 8.2,
>>>                      using R's BLAS/Lapack
>>> - (by Ralf Stubner): R 3.5.2 from Debian Stable (gcc 6.2) + OpenBLAS
>>> - (by Berend H.)   : R 3.5.2 [from CRAN] on macOS 10.14.3 (BLAS/Lapack ??)
>>
>> R 3.5.2 from CRAN using R's BLAS/Lapack.
>>
>> Berend
>>
>>> It would be great if this could be solved...
>>>
>>> Martin
>>>
>>>> I have tried passing in the gradient and turning on the trace and it
>>>> gives nearly the exact same trace with and without the gradient.
>>> [...]



[Rd] equality testing, was all.equal....

2015-07-31 Thread ProfJCNash
These issues have been around for many years. I've upset some programmers
by using equality tests for reals (doubles, in R).

However, there's a "but": I test using

  if ( (a + offset) == (b + offset) ) { ... }

where offset is a number like 100.0. This is really "equality to a
scale" defined by the offset. It also seems to discourage those who
don't know what is going on from fiddling with a tolerance. It will, of
course, be defeated by some optimizing compilers (for example, ones that
simplify the expression or keep intermediates in extended precision).
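
A tiny illustration of the effect (the values here are purely for
illustration):

a <- 1e-17
b <- 0
a == b                         # FALSE: raw equality sees the tiny difference
offset <- 100.0
(a + offset) == (b + offset)   # TRUE: differences far below the scale of
                               # offset (about 100 * .Machine$double.eps)
                               # are absorbed when the sums are rounded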

I started doing this on very small computers (<4K bytes for program and
data) where I wanted to avoid lots of checks on whether a and b were
small. Then I realized it simplified code and is suitable for most tests
of equality.

It may be that an all.equal.offset() function would be useful.


Best, JN



[Rd] Best way to implement optional functions?

2015-10-23 Thread ProfJCNash
I'm relieved to read that this issue is becoming more visible. In my own
work on optimizers, I've found it awkward to provide a clean way to let
users run, e.g., optimx when some of the underlying optimizer packages
are not installed. Unfortunately, I've not found what I consider an
elegant solution.

JN

On 15-10-23 06:00 AM, r-devel-requ...@r-project.org wrote:
> Message: 8
> Date: Thu, 22 Oct 2015 15:55:01 -0400
> From: Duncan Murdoch 
> To: "R-devel@r-project.org" 
> Subject: [Rd] Best way to implement optional functions?
> Message-ID: <56293f15.80...@gmail.com>
> Content-Type: text/plain; charset=utf-8; format=flowed
> 
> I'm planning on adding some new WebGL functionality to the rgl package, 
> but it will pull in a very large number of dependencies. Since many 
> people won't need it, I'd like to make the new parts optional.
> 
> The general idea I'm thinking of is to put the new stuff into a separate 
> package, and have rgl "Suggest" it.  But I'm not sure whether these 
> functions  should only be available in the new package (so users would 
> have to attach it to use them), or whether they should be in rgl, but 
> fail if the new package is not available for loading.
> 
> Can people suggest other packages that solve this kind of problem in a 
> good way?
> 
> Duncan Murdoch
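
For reference, the usual pattern for the second option Duncan mentions
(functions stay in the main package but fail gracefully when the
suggested package is missing) looks roughly like the sketch below; the
package and function names here are made up for illustration:

## in the main package, with heavyDeps listed under Suggests: in DESCRIPTION
fancyWebGL <- function(...) {
  if (!requireNamespace("heavyDeps", quietly = TRUE)) {
    stop("Package 'heavyDeps' is needed for fancyWebGL(); ",
         "please install it first.", call. = FALSE)
  }
  heavyDeps::doFancyThing(...)
}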



[Rd] Possibly useful idea

2016-01-11 Thread ProfJCNash
I've not worked on changing underlying computational infrastructure, but
developers who do might want to use ideas from FlexiBLAS. Apologies in
advance if this is already well known.

Best JN

> From: Martin Koehler koehl...@mpi-magdeburg.mpg.de
> Date: January 07, 2016
> Subject: FlexiBLAS Version 1.3.0 Release
>
> It is our pleasure to announce the new release of FlexiBLAS. We do not
> provide yet another BLAS implementation in itself, but rather present
> an easy to use method for switching between different BLAS
> implementations while debugging or benchmarking other codes.
>
> Highlights since the release of the initial version (v1.0.0) are:
> - support for BLAS-like extensions known from OpenBLAS and Intel MKL,
> - switch the BLAS backend at program runtime,
> - integration in GNU Octave,
> - and a redesigned command line tool to manage the configuration.
>
> Feel free to check the current version at
> http://www.mpi-magdeburg.mpg.de/projects/flexiblas/
> and drop us a line if you have any comments or requests about it.
>
> Further information on the realization can be found in LAWN 284
> http://www.netlib.org/lapack/lawnspdf/lawn284.pdf
> or updated version of the Manuscript (coming soon) available from the
> project web page.
>
> The library is available under GPLv3. We have tested it in production
> environments on several platforms and hope you will find it useful for
> your research, as well.
>
> Martin Köhler and Jens Saak
>


[Rd] Fortran issues. Was CRAN packages maintained by you

2016-09-03 Thread ProfJCNash

If there is going to be a review of Fortran sources, then there is quite
a bit of checking and testing work to do as well. As someone who actually
worked with some of the NPL and Argonne people, among others, and
occasionally contributed some code, I'm willing to try to help out with
this. However, I will wait to be asked about specific routines.

Note that Yihui Xie and I added a Fortran engine to knitr at UseR! 2014.
One of my motivations was to allow the Fortran code to be documented
before those of us with greying/missing hair evaporate. It has not seen
much use so far, I believe, but this may be a good application of that
capability.

Best, JN



Re: [Rd] optim(..., method="L-BFGS-B") stops with an error

2016-10-09 Thread ProfJCNash
I'll not copy all the previous material on this thread, to avoid overload.

The summary is that all the methods Spencer has tried have some issues.

The bad news: this is not uncommon with optimization methods, partly
because the problems are "hard", and partly because implementing the
methods and linking them to an interfacing system like R is very tedious
and prone to omissions and errors.

The good news: I've been working on a revision of optimx, having noted
the implementation issues just mentioned. There is now a package optimr
on CRAN, but that is just to reserve the name. The real package is
optimrx on R-forge (dependencies can fail, and then the poor maintainer
gets "your package doesn't work", with no hope of fixing it). Moreover,
Harry Joe recently pointed out a bug to me, and in the last few weeks I
think I've resolved issues where Rvmmin and other packages got NA results
when numerical gradient approximations were used in certain ways.

optimrx came about because I realized that optimx() has just enough
difference in syntax from optim() to be a nuisance, and the package was
heavy to maintain. I also wanted parameter scaling to work for all
methods, as it does in optim(). However, Ravi's efforts easily convinced
me that trying multiple methods is a good idea, so there is an opm()
function. We also had an option for polyalgorithms and, at one point, for
multiple starts; I've put those in polyopt() and multistart(), since
their combination in optimx was driving me nuts whenever I worked on the
code. Ravi, I hope this doesn't offend. The optimx ideas are still there,
but the restructuring will, I hope, lead to easier maintenance and
development. As the package is very new, I fully expect there are some
deficiencies, and I ask that users send executable examples so I can
address them.
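
To give a flavour of the parameter scaling mentioned above, here is a
sketch using the R-forge optimrx (assuming the Rvmmin solver package is
also installed); the control element is named parscale, as in optim():

library(optimrx)
## Rosenbrock test function with the first parameter deliberately scaled by 100
frs <- function(x) 100 * (x[2] - (x[1] / 100)^2)^2 + (1 - x[1] / 100)^2
s1 <- optimr(c(-120, 1), frs, method = "Rvmmin",
             control = list(parscale = c(100, 1)))
s2 <- optimr(c(-120, 1), frs, method = "L-BFGS-B",
             control = list(parscale = c(100, 1)))
c(s1$value, s2$value)   # both should come close to 0 at (100, 1)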

optimrx doesn't (yet) include nloptr. It's on the to-do list, but despite
many tries I've not been able to get any response from its maintainer
(Jelmer Ypma), who seems to have largely dropped out of R work, though
there was a fairly recent minor adjustment on Github. The lack of
communication is a worry, as nloptr and also ipoptr are important tools
that could use support. I've offered to help, but I don't have the C++
expertise. If anyone is willing to work with me, this could be moved
forward soon.

Spencer: can you prepare your problem in a way that makes the
optimization bit replaceable, and send it to me? I'll see whether I can
figure out the actual source of the error, as well as which methods
"work" and how well.

Note that the Rtnmin package (a translation I made of my brother's Matlab
code) will also handle bounds, though optimrx probably makes the call
easier. If you send the example, I'll make sure it gets tried there too.

Best, JN
