Is there a cross-validation method available for LPA objects?
Linda
> I am planning to implement Nadaraya-Watson regression model, with
I'm not sure what you mean by "implement".
Write a package, fit a model, or something else...
Reading your whole post, I get the impression you want mid-level
"building blocks", so you can customize the model fitting process, in some
Hi,
This question is general: I have a data set of n observations, consisting
of a single response variable y and p regressor variables (n ~ 50, p ~ 3 or 4).
I am planning to implement Nadaraya-Watson regression model, with
bandwidths optimized via cross-validation.
For cross-validation, I will need
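For the bandwidth part, a minimal sketch of leave-one-out cross-validation with base R's ksmooth() is below; the data and bandwidth grid are purely illustrative, and for p = 3 or 4 regressors a multivariate kernel routine (e.g. the np package) would be needed instead.

set.seed(1)
x <- runif(50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.3)

loocv.nw <- function(h, x, y) {
  sq.err <- sapply(seq_along(x), function(i) {
    fit <- ksmooth(x[-i], y[-i], kernel = "normal", bandwidth = h,
                   x.points = x[i])
    (y[i] - fit$y)^2
  })
  mean(sq.err, na.rm = TRUE)   # NA when no neighbours fall inside the window
}

h.grid <- seq(0.05, 0.5, by = 0.05)
cv.err <- sapply(h.grid, loocv.nw, x = x, y = y)
h.grid[which.min(cv.err)]      # bandwidth with the smallest LOO CV error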
Dear R-experts,
Doing cross-validation for 2 robust regressions (HBR and fast Tau). I can't get
the 2 error rates (RMSE and MAPE). The problem is to predict the response on
the testing data. I get 2 error messages.
Here below the reproducible (fictional example) R code.
#install.packages("MLm
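Since the posted code is cut off, here is a minimal, self-contained sketch of the test-set step only: predicting on the hold-out data and computing RMSE and MAPE. MASS::rlm stands in for the HBR and fast-tau fits, which would slot into the same place; the data are simulated.

library(MASS)
set.seed(123)
dat   <- data.frame(x = rnorm(100))
dat$y <- 2 + 3 * dat$x + rt(100, df = 3)          # heavy-tailed noise

idx   <- sample(nrow(dat), 0.7 * nrow(dat))
train <- dat[idx, ]
test  <- dat[-idx, ]

fit  <- rlm(y ~ x, data = train)
pred <- predict(fit, newdata = test)              # predict on the testing data

rmse <- sqrt(mean((test$y - pred)^2))
mape <- 100 * mean(abs((test$y - pred) / test$y))
c(RMSE = rmse, MAPE = mape)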
> On Aug 23, 2017, at 10:59 AM, Elahe chalabi via R-help
> wrote:
>
> Any responses?!
When I look at the original post I see a question about a function named
`rfcv` but do not see a `library` call to load such a function. I also see a
reference to a help page or vignette, perhaps?, from th
Any responses?!
On Wednesday, August 23, 2017 5:50 AM, Elahe chalabi via R-help
wrote:
Hi all,
I would like to do cross validation in random forest using rfcv function. As
the documentation for this package says:
rfcv(trainx, trainy, cv.fold=5, scale="log", step=0.5, mtry=function(p) ma
Hi all,
I would like to do cross validation in random forest using rfcv function. As
the documentation for this package says:
rfcv(trainx, trainy, cv.fold=5, scale="log", step=0.5, mtry=function(p) max(1,
floor(sqrt(p))), recursive=FALSE, ...)
however I don't know how to build trainx and trainy.
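A minimal sketch of what trainx and trainy look like for rfcv, using the built-in iris data in place of the poster's data set:

library(randomForest)
set.seed(42)

trainx <- iris[, -5]   # a data frame (or matrix) of predictors only
trainy <- iris[, 5]    # the response: a factor for classification

cv <- rfcv(trainx, trainy, cv.fold = 5)
cv$error.cv            # CV error for each number of variables tried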
1) Helpdesk implies people whose job it is to provide support. R-help is a
mailing list in which users help each other when they have spare time.
2) You sent an email to the R-help mailing list, not to Lara, whoever that is.
I suggest you figure out what her email address is and send your quest
Lara:
I see you sent this email to the R helpdesk a really long time ago, but I was
just wondering if you ever got an answer to this question. I was just thinking
that I would build my own cross validation function, but if you figured out a
way to do this automatically, could you let me know?
Dear R-team
I did a model selection by AIC which explains the habitat use of my
animals in six different study sites (See attached files:
cross_val_CORINE04032014.csv and cross_val_CORINE04032014.r). Sites were
used as random factor because they are distributed over
> How do I make a loop so that the process can be repeated several times,
> producing random ROC curves and area-under-ROC values?
Using the caret package
http://caret.r-forge.r-project.org/
--
Max
This code is untested, since you did not provide any example data. But it
may help you get started.
Jean
mydata <- read.csv(file.choose(), header=TRUE)
library(ROCR) # ROC curve and assessment of my prediction
plot(0:1, 0:1, type="n", xlab="False positive rate", ylab="True positive rate")
abline(0, 1)   # reference diagonal (the original line was truncated here)
Guys,
I select 70% of my data and keep 30% of it for model validation.
mydata <- read.csv(file.choose(), header=TRUE)
select <- sample(nrow(mydata), nrow(mydata) * .7)
data70 <- mydata[select,] # select
data30 <- mydata[-select,] # testing
temp.glm <- glm(Death~Temperature, data=data70,
family=binomial)
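A minimal sketch of wrapping that split in a loop so the ROC curve and AUC are recomputed on each random 70/30 split; it assumes mydata, Death (coded 0/1 or a two-level factor) and Temperature as in the post, with ROCR drawing the curves.

library(ROCR)
set.seed(1)
n.rep <- 100
auc   <- numeric(n.rep)

plot(0:1, 0:1, type = "n",
     xlab = "False positive rate", ylab = "True positive rate")
abline(0, 1, lty = 2)

for (i in seq_len(n.rep)) {
  select <- sample(nrow(mydata), nrow(mydata) * 0.7)
  fit    <- glm(Death ~ Temperature, data = mydata[select, ], family = binomial)
  p      <- predict(fit, newdata = mydata[-select, ], type = "response")
  pr     <- prediction(p, mydata$Death[-select])
  plot(performance(pr, "tpr", "fpr"), add = TRUE, col = "grey")
  auc[i] <- performance(pr, "auc")@y.values[[1]]
}
summary(auc)   # spread of the area under the ROC curve across repeats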
Hi Guilherme,
On Sun, Apr 14, 2013 at 11:48 PM, Guilherme Ferraz de Arruda
wrote:
> Hi,
> I need to classify, using Naive Bayes and Bayes Networks, and estimate
> their performance using cross validation.
> How can I do this?
> I tried the bnlearn package for Bayes Networks, althought I need to
Hi,
I need to classify, using Naive Bayes and Bayes Networks, and estimate
their performance using cross validation.
How can I do this?
I tried the bnlearn package for Bayes Networks, although I need to get
more indices, not only the error rate (precision, sensitivity, ...).
I also tried the *e1071*
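A minimal, self-contained sketch of 10-fold CV around e1071::naiveBayes that keeps the full confusion matrix, so precision, sensitivity and the other per-class indices come out of caret::confusionMatrix; iris is only a stand-in for the real data.

library(e1071)
library(caret)
set.seed(1)

folds <- createFolds(iris$Species, k = 10)                 # stratified folds
pred  <- factor(rep(NA, nrow(iris)), levels = levels(iris$Species))

for (f in folds) {
  fit     <- naiveBayes(Species ~ ., data = iris[-f, ])
  pred[f] <- predict(fit, newdata = iris[f, ])
}
confusionMatrix(pred, iris$Species)   # accuracy plus per-class sensitivity etc.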
Good morning.
I am using the e1071 package to develop an SVM model. My code is:
x <- subset(dataset, select = -Score)
y <- dataset$Score
model <- svm(x, y,cross=10)
print(model)
summary(model)
As 10-CV produces 10 models, I need two things:
1) To have access to each model from 10-CV.
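The cross= argument only returns per-fold accuracies, not the fitted models, so keeping all 10 models needs a manual loop. A minimal sketch; dataset and Score follow the post, and the first line only builds an illustrative stand-in from mtcars so the code runs on its own.

library(e1071)
set.seed(1)

dataset <- data.frame(Score = mtcars$mpg, mtcars[, -1])   # illustrative stand-in

k      <- 10
folds  <- sample(rep(1:k, length.out = nrow(dataset)))
models <- vector("list", k)
mse    <- numeric(k)

for (i in 1:k) {
  train       <- dataset[folds != i, ]
  test        <- dataset[folds == i, ]
  models[[i]] <- svm(Score ~ ., data = train)
  p           <- predict(models[[i]], newdata = test)
  mse[i]      <- mean((test$Score - p)^2)
}
mse           # per-fold error
models[[1]]   # each of the 10 fitted models is kept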
Hi,
I've written a logistic function using nls and I'd like to do cross-validation
for this. Is there a package for that? Below is an example of my data and the
function. The N terms are presence/absence data and the response is
successful/failed data.
y1<-sample(0:1,100,replace=T)
N1<-sample(0:1,100,replace=T)
I am using cv.glmnet from the glmnet package for logistic regression.
My dataset is very imbalanced: 5% of the sample from one group, the rest from the
other. I'm wondering, when doing cv.glmnet to choose lambda, does every fold
have the same ratio for the two groups (i.e., every fold has 5% of the sample from one
group, the
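cv.glmnet does not stratify its folds by default, but you can pass your own stratified fold assignments through its foldid argument. A minimal sketch, with caret::createFolds doing the stratification on simulated imbalanced data:

library(glmnet)
library(caret)
set.seed(1)

n <- 400
x <- matrix(rnorm(n * 10), n, 10)
y <- factor(rbinom(n, 1, 0.05))                 # roughly 5% in the rare class

folds  <- createFolds(y, k = 10)                # stratified on y
foldid <- integer(n)
for (i in seq_along(folds)) foldid[folds[[i]]] <- i

cvfit <- cv.glmnet(x, y, family = "binomial", foldid = foldid)
cvfit$lambda.min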
Please report bugs in packages to the corresponding package maintainer
(perhaps suggesting a fix if you have an idea how to do that).
Uwe Ligges
On 14.02.2012 12:42, Martin Batholdy wrote:
Hi,
according to ?rvm the relevance vector machine function as implemented in the
kernlab-package
has a
Hi,
according to ?rvm the relevance vector machine function as implemented in the
kernlab-package
has an argument 'cross' with which you can perform k-fold cross validation.
However, when I try to add a 10-fold cross validation I get the following error
message:
Error in match.arg(type, c("C-
On 31/12/2011 12:34, Israel Saeta Pérez wrote:
Hello list,
I'm trying to generate classifiers for a certain task using several
methods, one of them being decision trees. The doubts come when I want to
estimate the cross-validation error of the generated tree:
tree<- rpart(y~., data=data.frame(x
Hello list,
I'm trying to generate classifiers for a certain task using several
methods, one of them being decision trees. The doubts come when I want to
estimate the cross-validation error of the generated tree:
tree <- rpart(y~., data=data.frame(xsel, y), cp=0.1)
ptree <- prune(tree,
cp=tre
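rpart has already cross-validated the tree while growing it (xval = 10 by default); a minimal sketch of reading that error off the CP table and pruning with it, on illustrative data:

library(rpart)
set.seed(1)

fit <- rpart(Species ~ ., data = iris, cp = 0.01)
printcp(fit)   # "xerror" is the 10-fold cross-validated error, relative to the root

cp.best <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = cp.best)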
Hi there,
I really tried hard to understand and find my own solution, but now I
think I have to ask for your help.
I already developed some script code for my problem but I doubt that it
is correct.
I have the following problem:
Imagine you develop a logistic regression model with a binary outcome
On Sat, 19 Mar 2011, Penny B wrote:
I am trying to find out what type of sampling scheme is used to select the 10
subsets in 10-fold cross-validation process used in rpart to choose the best
tree. Is it simple random sampling? Is there any documentation available on
this?
Not SRS (at least in
I assume you mean rpart::xpred.rpart ? The beauty of R means that you
can look at the source. For the simple case (where xval is a single
number) the code does indeed do simple random sampling
xgroups<- sample(rep(1:xval, length = nobs), nobs, replace = FALSE)
If you want another sampling,
I am trying to find out what type of sampling scheme is used to select the 10
subsets in 10-fold cross-validation process used in rpart to choose the best
tree. Is it simple random sampling? Is there any documentation available on
this?
Thanks, Penny.
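Following up on the xpred.rpart suggestion above: its xval argument also accepts an explicit vector of group assignments, so any sampling scheme can be substituted. A minimal sketch on the kyphosis data that ships with rpart:

library(rpart)
set.seed(1)

fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)

## your own 10 groups; here simple random, but any assignment vector works
xgroups <- sample(rep(1:10, length.out = nrow(kyphosis)))

xp <- xpred.rpart(fit, xval = xgroups)   # cross-validated predictions
dim(xp)                                  # one column per cp value in the fit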
Dear community,
I have fitted a model using the commands above (rlm, lmrob or lmRob). I don't
have new data to validate the models obtained. I was wondering if there exists
something similar to CVlm for robust regression. If there isn't, any
suggestion for validation would be appreciated.
Thanks, u...
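There does not seem to be a drop-in CVlm equivalent for rlm/lmrob, but a manual k-fold loop is short. A minimal sketch with robustbase::lmrob on simulated data; rlm or lmRob would slot into the same place.

library(robustbase)
set.seed(1)

dat   <- data.frame(x = rnorm(80))
dat$y <- 1 + 2 * dat$x + rt(80, df = 3)

k     <- 10
folds <- sample(rep(1:k, length.out = nrow(dat)))
err   <- numeric(k)

for (i in 1:k) {
  fit    <- lmrob(y ~ x, data = dat[folds != i, ])
  p      <- predict(fit, newdata = dat[folds == i, ])
  err[i] <- mean((dat$y[folds == i] - p)^2)
}
sqrt(mean(err))   # cross-validated RMSE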
On 1/7/2011 12:40 PM, Jon Olav Skoien wrote:
Pearl,
The error suggests that there is something wrong with x2, and that
there is a difference between the row names of the coordinates and the
data. If you call
str(x2)
see if the first element of @coords is different from NULL, as this
can caus
Pearl,
The error suggests that there is something wrong with x2, and that there
is a difference between the row names of the coordinates and the data.
If you call
str(x2)
see if the first element of @coords is different from NULL, as this can
cause some problems when cross-validating. If it i
Dear All,
The last part of my thesis analysis is the cross validation. Right now I am
having difficulty using the cross validation of gstat. Below are my commands
with the tsport_ace as the variable:
nfold <- 3
part <- sample(1:nfold, 69, replace = TRUE)
sel <- (part != 1)
m.model <- x2[sel, ]
m
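gstat also has a built-in n-fold cross-validation, krige.cv(), which avoids the manual splitting above. A minimal sketch on the meuse data that ships with sp/gstat, as a stand-in for the tsport_ace variable:

library(sp)
library(gstat)
data(meuse)
coordinates(meuse) <- ~ x + y

v    <- variogram(log(zinc) ~ 1, meuse)
vfit <- fit.variogram(v, vgm(1, "Sph", 800, 1))

cv <- krige.cv(log(zinc) ~ 1, meuse, model = vfit, nfold = 3)
mean(cv$residual^2)   # cross-validated mean squared error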
Thank you so much for your help. If I am not wrong, createDataPartition
can be used to create stratified random splits of a data set.
Is there another way to do that?
Thank you
Neeti,
I'm pretty sure that the error is related to the confusionMatrix call,
which is in the caret package, not e1071.
The error message is pretty clear: you need to pass in two factor
objects that have the same levels. You can check by running the
commands:
str(pred_true1)
str(species_test)
Could anyone help me with my last problem? If the question is not clear
please let me know.
Thank you
Hi everyone
I am trying to do cross-validation (10-fold CV) using the e1071::svm method. I
know that there is an option (cross) for cross-validation but still I
wanted to make a funct
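On the confusionMatrix error mentioned earlier in this thread: it usually means the predictions and the reference are not factors with identical levels. A minimal, self-contained sketch of lining the levels up before the call (the vectors are purely illustrative):

library(caret)

truth <- c("setosa", "versicolor", "virginica", "setosa")
pred  <- c("setosa", "virginica",  "virginica", "setosa")

lev <- sort(unique(c(truth, pred)))
confusionMatrix(factor(pred, levels = lev),
                factor(truth, levels = lev))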
@Francial Giscard LIBENGUE please post your query again as a new message with a
different subject.
Hi everyone,
Can you help me to plot Gamma(x/h+1) and Beta(x/h+1, (1-x)/h+1)? I want to write
x <- seq(0, 3, 0.1)
Thanks
2010/11/23 Neeti
>
> Hi everyone
>
> I am trying to do cross validation (10 fold CV) by using e1071:svm method.
> I
> know that there is an option (cross) for cross validation but sti
Hi everyone
I am trying to do cross-validation (10-fold CV) using the e1071::svm method. I
know that there is an option ("cross") for cross-validation but still I
wanted to make a function to generate cross-validation indices using the
pls::cvsegments method.
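A minimal sketch of generating the fold indices with pls::cvsegments and feeding them to an e1071::svm loop; iris stands in for the real data.

library(pls)
library(e1071)
set.seed(1)

segs <- cvsegments(nrow(iris), k = 10)       # a list of 10 index vectors
acc  <- numeric(length(segs))

for (i in seq_along(segs)) {
  test   <- unlist(segs[[i]])
  fit    <- svm(Species ~ ., data = iris[-test, ])
  p      <- predict(fit, newdata = iris[test, ])
  acc[i] <- mean(p == iris$Species[test])
}
mean(acc)   # average 10-fold accuracy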
Dear All,
We came across a problem when using the "tree" package to analyze our data
set.
First, in the "tree" function, if we use the default value "mindev=0.01",
the resulting regression tree has a single node. So, we set "mindev=0", and
obtain a tree with 931 terminal nodes.
However, when we
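A minimal sketch of cost-complexity cross-validation for a "tree" object with cv.tree(), followed by pruning to the size with the lowest CV deviance; the data are illustrative, not the poster's.

library(tree)
set.seed(1)

fit <- tree(Sepal.Length ~ ., data = iris, mindev = 0)   # deliberately overgrown
cv  <- cv.tree(fit)                                      # 10-fold CV over subtree sizes
best <- cv$size[which.min(cv$dev)]
pruned <- prune.tree(fit, best = best)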
From ?svm:
cross: if an integer value k>0 is specified, a k-fold cross-validation on
the training data is performed to assess the quality of the model: the
accuracy rate for classification and the Mean Squared Error for regression.
Uwe Ligges
On 15.06.2010 23:14, Amy Hessen wrote:
hi,
cou
hi,
could you please tell me what kind of cross-validation the svm function of e1071 uses?
Cheers,
Amy
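For reference, a small illustration of what cross= actually returns in e1071: the per-fold accuracies (or MSEs for regression) are stored on the fitted object.

library(e1071)
set.seed(1)

m <- svm(Species ~ ., data = iris, cross = 10)
m$accuracies     # accuracy in each of the 10 folds
m$tot.accuracy   # overall cross-validated accuracy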
Install the caret package and see ?train. There is also:
http://cran.r-project.org/web/packages/caret/vignettes/caretTrain.pdf
http://www.jstatsoft.org/v28/i05/paper
Max
On Tue, Jun 8, 2010 at 5:34 AM, azam jaafari wrote:
> Hi
>
> I want to do leave-one-out cross-validation for multinom
As far as my knowledge goes, nnet doesn't have a built-in function for
cross-validation. Coding it yourself is not hard though. nnet is used
in this book: http://www.stats.ox.ac.uk/pub/MASS4/ , which contains
enough examples on how to do so.
See also the crossval function in the bootstrap package.
Hi
I want to do leave-one-out cross-validation for multinomial logistic regression
in R. I fitted the multinomial logistic regression with the nnet package.
How do I do the validation? With which function?
The response variable has 7 levels.
Please help me.
Thanks a lot,
Azam
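A minimal sketch of leave-one-out CV wrapped around nnet::multinom, with iris standing in for the 7-level response described in the post:

library(nnet)

pred <- factor(rep(NA, nrow(iris)), levels = levels(iris$Species))

for (i in seq_len(nrow(iris))) {
  fit     <- multinom(Species ~ ., data = iris[-i, ], trace = FALSE)
  pred[i] <- predict(fit, newdata = iris[i, ])
}
mean(pred == iris$Species)   # leave-one-out accuracy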
Inline below:
Bert Gunter
Genentech Nonclinical Statistics
Hi,
On Fri, Apr 2, 2010 at 9:14 AM, Jay wrote:
> If my aim is to select a good subset of parameters for my final logit
> model built using glm(). What is the best way to cross-validate the
> results so that they are reliable?
>
> Let's say that I have a large dataset of 1000's of observations. I
when maximum likelihood estimation is used within each model.
Jay
If my aim is to select a good subset of parameters for my final logit
model built using glm(). What is the best way to cross-validate the
results so that they are reliable?
Let's say that I have a large dataset of 1000's of observations. I
split this data into two groups, one that I use for training
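For a fixed candidate subset, boot::cv.glm gives a quick 10-fold estimate. A minimal sketch on simulated data with a misclassification cost (keeping in mind that the subset selection itself should ideally be repeated inside each fold, and that threshold-based costs are an improper scoring rule):

library(boot)
set.seed(1)

dat   <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
dat$y <- rbinom(500, 1, plogis(0.5 * dat$x1 - 0.25 * dat$x2))

fit  <- glm(y ~ x1 + x2, data = dat, family = binomial)
cost <- function(obs, pred) mean(abs(obs - pred) > 0.5)   # misclassification rate
cv.glm(dat, fit, cost = cost, K = 10)$delta[1]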
> The cross-validation in the pls package does not propose a number of
> factors as optimum, you have to select this yourself. (The reason for
> this is that there is AFAIK no theoretically founded and widely accepted
> way of doing this automatically. I'd be happy to learn otherwise.)
The caret
Dear Bjørn-Helge,
>
> > can anyone give an example how to use cross-validation in the plsr
> package.
>
> There are examples in the references cited on
> http://mevik.net/work/software/pls.html
>
> > I miss to find the number of factors proposed by cross-validation as
> > optimum.
>
> T
Peter Tillmann writes:
> can anyone give an example how to use cross-validation in the plsr package.
There are examples in the references cited on
http://mevik.net/work/software/pls.html
> I miss to find the number of factors proposed by cross-validation as
> optimum.
The cross-validation in t
Dear readers,
Can anyone give an example of how to use cross-validation in the plsr package?
I am unable to find the number of factors proposed by cross-validation as
the optimum.
Thank you
Peter
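A minimal sketch of built-in CV in plsr and of reading off a number of components; the yarn data ship with pls, and selectNcomp is only available in newer versions of the package.

library(pls)
data(yarn)

fit <- plsr(density ~ NIR, ncomp = 10, data = yarn, validation = "CV")
plot(RMSEP(fit))                        # CV error versus number of components
selectNcomp(fit, method = "onesigma")   # heuristic choice, newer pls versions only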
Thanks Frank and Steve.
I rewrote the R code as follows.
# m is the number of folds to split the sample into, n is the number of
# cross-validation repeats
library(caret)
calcvnb <- function(formula, dat, m, n)
{
  cvnb <- matrix(0, nrow = n, ncol = m)  # one error estimate per repeat and fold
  for (i in 1:n)
  {
    group <- rep(0, length = nrow(dat))
    sg <- createFolds(d
Take a look at the validate.lrm function in the rms package.
Note that the use of threshold probabilities results in an improper
scoring rule which will mislead you. Also note that you need to repeat
10-fold CV 50-100 times for precision, and that at each repeat you have
to start from zero in
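A minimal sketch of validate() on an lrm fit, along the lines suggested above; the data are simulated, and method = "crossvalidation" with B = 10 gives 10-fold CV of the usual indexes.

library(rms)
set.seed(1)

dat   <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
dat$y <- rbinom(300, 1, plogis(dat$x1 - 0.5 * dat$x2))

fit <- lrm(y ~ x1 + x2, data = dat, x = TRUE, y = TRUE)
validate(fit, method = "crossvalidation", B = 10)   # Dxy, R2, slope, ...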
Hi,
On Thu, Jan 21, 2010 at 8:55 AM, zhu yao wrote:
> Hi, everyone:
>
> I ask for help about translating a stata program into R.
>
> The program perform cross validation as it stated.
>
> #1. Randomly divide the data set into 10 sets of equal size, ensuring equal
> numbers of events in each set
>
Hi, everyone:
I am asking for help with translating a Stata program into R.
The program performs cross-validation, as described below.
#1. Randomly divide the data set into 10 sets of equal size, ensuring equal
numbers of events in each set
#2. Fit the model leaving out the 1st set
#3. Apply the fitted model
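A minimal sketch of the first steps in R: a stratified 10-fold assignment with (roughly) equal numbers of events per fold, then fitting with one fold left out and predicting on it. The data and model are illustrative.

set.seed(1)
dat       <- data.frame(x = rnorm(200))
dat$event <- rbinom(200, 1, plogis(dat$x))

k <- 10
dat$fold <- NA
dat$fold[dat$event == 1] <- sample(rep(1:k, length.out = sum(dat$event == 1)))
dat$fold[dat$event == 0] <- sample(rep(1:k, length.out = sum(dat$event == 0)))

cv.pred <- numeric(nrow(dat))
for (i in 1:k) {
  fit <- glm(event ~ x, data = dat[dat$fold != i, ], family = binomial)
  cv.pred[dat$fold == i] <- predict(fit, dat[dat$fold == i, ], type = "response")
}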
Elaine,
That's a fair answer, but completely not what I meant. I was hoping
that you would elaborate on "the species data of species distribution
models". What types of inputs and output for this particular modeling
application etc.
> Is it the same with the function inside caret, ipred, and e1071?
Dear,
Thanks for the warm help on New Year's Eve.
Cross-validation is used to validate the predictive quality of the training
data against testing data.
As for the amount, the cross-validation (CV) is supposed to be k-fold
cross-validation: k-1 folds for training and 1 fold for testing.
You might want to be more specific about what you (exactly) intend to
do. Reading the posting guide might help you get better answers.
There are a few packages and functions to do what (I think) you
desire. There is the train function in the caret package, the errorest
function in ipred and
Dear,
I want to do cross-validation for the species data of species distribution
models.
Please kindly suggest any package containing cross validation suiting the
purpose.
Thank you.
Elaine
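A minimal sketch of one of the generic wrappers mentioned in the reply above, ipred::errorest, here wrapped around lda on iris as a stand-in for the species-distribution model:

library(ipred)
library(MASS)
set.seed(1)

mypredict <- function(object, newdata) predict(object, newdata = newdata)$class
errorest(Species ~ ., data = iris, model = lda, predict = mypredict,
         estimator = "cv", est.para = control.errorest(k = 10))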
Dear r-helpers,
I estimated a generalized additive model (GAM) using Hastie's package GAM.
Example:
gam1 <- gam(vegetation ~ s(slope), family = binomial, data=aufnahmen_0708,
trace=TRUE)
pred <- predict(gam1, type = "response")
vegetation is a categorial, slope a numerical variable.
Now I want
Dear R users,
I know cross-validation does not work in rpart with user defined split
functions. As Terry Therneau suggested, one can use the xpred.rpart function
and then summarize the matrix of the predicted values into a single
"goodness" value.
I need only a confirmation: set for example xva
Hi all
I have developed a zero-inflated negative binomial model using the
zeroinfl function from the pscl package, for which I have carried out model
selection based on AIC and have used likelihood ratio tests (lrtest from
the lmtest package) to compare the nested models [My end model contains
2 fac
I have reviewed all the scripts that appear at http://cran.es.r-project.org/ and
I can't find any suitable for cross-validation with a model of the form
y = aX^b * exp(cZ). Please can someone help me?
Thanks a lot!
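A minimal sketch of k-fold cross-validation for the stated form y = a*X^b*exp(c*Z), fit with nls(); the data and starting values are illustrative.

set.seed(1)
dat   <- data.frame(X = runif(100, 1, 10), Z = runif(100))
dat$y <- 2 * dat$X^0.5 * exp(0.3 * dat$Z) * exp(rnorm(100, sd = 0.05))

k     <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))
rmse  <- numeric(k)

for (i in 1:k) {
  fit <- nls(y ~ a * X^b * exp(c * Z), data = dat[folds != i, ],
             start = list(a = 1, b = 1, c = 0))
  p   <- predict(fit, newdata = dat[folds == i, ])
  rmse[i] <- sqrt(mean((dat$y[folds == i] - p)^2))
}
mean(rmse)   # cross-validated RMSE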
This may be somewhat useful, but I might have more later.
http://florence.acadiau.ca/collab/hugh_public/index.php?title=R:CheckBinFit
(the code below is copied from the URL above)
CheckBinFit <- function(y, phat, nq = 20, new = TRUE, ...) {
  if (is.factor(y)) y <- as.double(y)
  y <- y - mean(y)
  y[y > 0
Hi all,
I'd like to do cross-validation on lm and get the resulting lift curve/table
(or, alternatively, the estimates on 100% of my data with which I can get
lift).
If such a thing doesn't exist, could it be derived using cv.lm, or would we
need to start from scratch?
Thanks!
--
Eric Siegel,
Hello everyone,
I have a data set that looks like the following:

Year   Days to the beginning of Year   Value
1      30                              100
1      60                              200
1      ..
Hi,
I was trying to do cross-validation using the crossval function (bootstrap
package), with the following code:
-
theta.fit <- function(x,y){
model <- svm(x,y,kernel = "linear")
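crossval() needs both a fitting and a predicting function; a minimal, self-contained way of completing the call (the linear kernel follows the post, the data are simulated):

library(bootstrap)
library(e1071)
set.seed(1)

x <- matrix(rnorm(100 * 3), 100, 3)
y <- drop(x %*% c(1, -1, 0.5)) + rnorm(100)

theta.fit     <- function(x, y) svm(x, y, kernel = "linear")
theta.predict <- function(fit, x) predict(fit, x)

cv <- crossval(x, y, theta.fit, theta.predict, ngroup = 10)
mean((y - cv$cv.fit)^2)   # 10-fold cross-validated MSE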
Good Day All,
I have a negative binomial model that I created using the function
glm.nb() with the MASS library and I am performing a cross-validation
using the function cv.glm() from the boot library. I am really
interested in determining the performance of this model so I can have
confidence
Hello,
We would like to perform a cross validation on linear mixed models (lme) and
wonder if anyone knows of something analogous to cv.glm for such models?
Thanks, Mark
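There seems to be no cv.glm analogue for lme, but a manual k-fold loop works if predictions are made at the population level (level = 0) so unseen grouping levels are not a problem. A minimal sketch on nlme's Orthodont data:

library(nlme)
set.seed(1)

dat   <- as.data.frame(Orthodont)
k     <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))
err   <- numeric(k)

for (i in 1:k) {
  fit    <- lme(distance ~ age + Sex, random = ~ 1 | Subject,
                data = dat[folds != i, ])
  p      <- predict(fit, newdata = dat[folds == i, ], level = 0)
  err[i] <- mean((dat$distance[folds == i] - p)^2)
}
mean(err)   # cross-validated MSE of the fixed-effects predictions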
Hello list,
I'm having a problem with custom functions in rpart, and before I tear my
hair out trying to fix it, I want to make sure it's actually a problem. It
seems that, when you write custom functions for rpart (init, split and eval)
then rpart no longer cross-validates the resulting tree to
1) cv.glm is not 'in R', it is part of contributed package 'boot'. Please
give credit where it is due.
2) There is nothing 'cross' about your 'home-made cross validation'.
cv.glm is support software for a book, so please consult it for the
definition used of cross-validation, or MASS (the boo
Folks; I am having a problem with the cv.glm and would appreciate someone
shedding some light here. It seems obvious but I cannot get it. I did read
the manual, but I could not get more insight. This is a database containing
3363 records and I am trying a cross-validation to understand the process.
JStainer wrote:
> an example from my R table for calculating the average LOOCV for two
> treatments ALL and AML
>
> table
>     ALL    AML
> 1   1.2    .3
> 2   .87    .3
> 3   1.1    .5
> 4   1.2    .7
> 5   3.2    1.2
> 6   1.1    1.1
> 7   .90    .99
> 8   1.1    .32
> 9   2.1    1.2
Hi,
I must have accidentally deleted my previous post. I am having a really
difficult time calculating the LOOCV (leave-one-out cross-validation).

table in excel
genes   ALL    AML    p.value
1       1.2    .3     .01
2       .87    .3     .03
3       1.1    .5     .05
4       1.2
JStainer wrote:
>
> Hi,
>
> I am trying to find out the best way to calculate the average LOOCV in R
> for several classifier for, KNN, centroid classification, DLDA and SVM.
>
> I have four types of diseases and 62 samples.
>
> Is there a R code available to do this?
an example from my R table for calculating the average LOOCV for two
treatments ALL and AML

table
    ALL    AML
1   1.2    .3
2   .87    .3
3   1.1    .5
4   1.2    .7
5   3.2    1.2
6   1.1    1.1
7   .90    .99
8   1.1    .32
9   2.1    1.2
Hi,
I am trying to find out the best way to calculate the average LOOCV in R for
several classifiers: KNN, centroid classification, DLDA and SVM.
I have four types of diseases and 62 samples.
Is there R code available to do this?
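For the KNN part there is a ready-made leave-one-out routine, class::knn.cv; a small illustration on iris (the other classifiers would need a manual LOO loop):

library(class)
set.seed(1)

pred <- knn.cv(train = iris[, 1:4], cl = iris$Species, k = 3)
mean(pred != iris$Species)   # leave-one-out error rate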
http://www.burns-stat.com/pages/Tutor/bootstrap_resampling.html
may be of some use to you.
Patrick Burns
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and "A Guide for the Unwilling S User")
Carla Rebelo wrote:
>Hello,
>
>How can I do a cross validation in R
Hello,
How can I do a cross validation in R?
Thank You!
Hello All,
I'm writing a custom rpart function, and I'm wondering about
cross-validation. Specifically, why isn't my splitting function being
called more often when xval is increased? One would expect that,
with xval=10 compared to xval=1, the prior would call the
splitting function mo
Is the cross validation procedure implemented in lda() from the MASS
package internal or external?
Thanks,
Bijun Tan
Cooper Union
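For what it is worth, lda's own CV = TRUE option performs leave-one-out cross-validation internally and returns the held-out class predictions; a small illustration:

library(MASS)

fit <- lda(Species ~ ., data = iris, CV = TRUE)
mean(fit$class != iris$Species)   # leave-one-out error rate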