,1:3110], collapse="+")))
> > Error in colnames(mydata)[, 3111] : incorrect number of dimensions
>
> Best,
> Soheila
>
> On Wed, Mar 25, 2015 at 11:12 AM, Soheila Khodakarim <
> lkhodaka...@gmail.com> wrote:
>
>> Hi Charles,
>> Many thanks for your help
hidden = 4, lifesign =
> "minimal", linear.output = FALSE, threshold = 0.1)
>
> I saw this error
>
> Error in neurons[[i]] %*% weights[[i]] : non-conformable arguments
>
> :(:(:(
>
> What should I do now??
>
> Regards,
> Soheila
>
>
> On Tue, Mar
Hi Soheila,
You are using the formula argument incorrectly. The neuralnet function has
a separate argument for data, aptly named 'data'. You can review the
arguments by looking at the documentation with ?neuralnet.
As I cannot reproduce your data, the following is not tested, but I think it
should work.
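As a sketch of what the corrected call might look like (not the original code
and untested; the response name 'Class' is a placeholder, the column range is
taken from your earlier paste() call):

library(neuralnet)
# build the formula from the predictor column names; response assumed to be 'Class'
f <- as.formula(paste("Class ~", paste(colnames(mydata)[1:3110], collapse = " + ")))
# pass the data through the 'data' argument rather than inside the formula
fit <- neuralnet(f, data = mydata, hidden = 4, lifesign = "minimal",
                 linear.output = FALSE, threshold = 0.1)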
of Chlorophyll a can make the
> Eutrophication in the lake along with other algae.
> So I think they are dependent variables.
> Regards.
>
>
>
>
> On Thu, 1/22/15, Charles Determan Jr wrote:
>
> Subject: Re: [R] Neural Network
I don't know about any courses but I recommend the Cookbook for R website:
http://www.cookbook-r.com/Graphs/
There are many examples implementing ggplot2 for different types of plots.
Hope this helps,
On Thu, Jan 22, 2015 at 12:14 PM, Erin Hodgess
wrote:
> Hello!
>
> Are there any ggplot courses?
> 2) Is it possible to predict the Eutro. by these variables?
>
>
> Many thanks for your help.
> Regards,
>
>
>
>
>
>
>
>
> On Wed, 1/21/15, Charles Determan Jr wrote:
>
> Subject: Re: [R] Neural Network
> To
Javad,
Your question is a little too broad to be answered definitively. Also, this
is not a code writing service. You should make a meaningful attempt and we
are here to help when you get stuck.
1. If you want to know if you can do neural nets, the answer is yes. The
three packages most commonl
Hi Davide,
You really shouldn't post on multiple forums. Please see my response on
SO,
http://stackoverflow.com/questions/27990932/r-deepnet-package-how-to-add-more-hidden-layers-to-my-neural-network,
where I tell you that you can add additional layers by simply adding to the
hidden vector.
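As a rough illustration (toy data, not from the thread) of extending the
hidden vector to add layers:

library(deepnet)
x <- matrix(rnorm(100 * 10), 100, 10)          # toy predictors
y <- sample(0:1, 100, replace = TRUE)          # toy binary response
fit1 <- nn.train(x, y, hidden = c(10))         # one hidden layer of 10 units
fit3 <- nn.train(x, y, hidden = c(10, 10, 10)) # three hidden layers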
Regards,
NA
> [2,] NA NA
> The ervalue itself loses the values, I think, and hence A does not have
> them.
>
> --
> *From:* Charles Determan Jr [deter...@umn.edu]
> *Sent:* Wednesday, November 19, 2014 10:04 PM
>
> *To:* Amit Thombre
> *Cc:*
al,maxval,maxval,maxval,maxval,2)) # depending
> on the max value, set the array size; stores the error value
> ervalue<-array(, c(maxval,maxval,maxval,maxval,maxval,maxval, 2)) #
> depending on the max value, set the array size; stores the error value
> erval1<-array(, c(maxval,maxval,maxval,maxval,maxval,maxval, 2)) #
>
Amit,
Your question isn't necessarily complete. You haven't provided a
reproducible example of your data or an error message. At first glance you
aren't passing anything to your 'far' function except for 'p' and yet it
uses i, j, k, l, m, n, testsize1, and act1. You should generally try to avoid
global variables.
You can use grep with some basic regex to index your data frame, then colSums.
Note that grep matches substrings, so the '*' wildcards are not needed:
colSums(df[, grep("6574|7584|85", colnames(df))])
or, restricting to the 'f' prefixed columns:
colSums(df[, grep("f6574|f7584|f85", colnames(df))])
Regards,
Dr. Charles Determan
On Mon, Oct 13, 2014 at 7:57 AM, Kuma Raj wrote:
> I want to sum columns based
Do you have an example of what you would like your output to look like? It
is a little difficult to fully understand what you are looking for. You
only have 18 values but are looking to fill a 10x8 matrix (i.e., 80
values). If you can clarify better we may be better able to help you.
Charles
hat you want, this
> may or may not be enough.
>
> Regards,
> Yihui
> --
> Yihui Xie
> Web: http://yihui.name
>
>
> On Wed, Sep 3, 2014 at 7:12 AM, Charles Determan Jr
> wrote:
> > Thank you for checking Yihui, on the off chance are you familiar with any
> >
guess it is unlikely to
> make it work in (the current development version of) shiny. It is not
> in the official list of plugins, either:
> http://www.datatables.net/extensions/index
>
> Regards,
> Yihui
> --
> Yihui Xie
> Web: http://yihui.name
>
>
> On Tue, Se
Greetings,
I am currently exploring some capabilities of the 'Shiny' package. I am
currently working with the most recent version of 'shiny' from the rstudio
github repository (version - 0.10.1.9006) in order to use the most up to
date datatables plugin. Using the ggplot2 diamonds dataset, I can
high correlation coefficient between
> variables in the first population and also put a high correlation
> coefficient between variables in the second population and no correlation
> between two populations because i want to use multiple group structural
> equation models.
>
>
Thanoon,
You should still send the question to the R help list even when I helped
you with the code you are currently using. I will not always know the best
way or even how to proceed with some questions. As for your question with
the code below:
Firstly, there is no 'phi' method for cor in
cannot open the connection
>
> Could you please give me some advice on how to fix it?
>
> Many thanks~
>
>
> On Sat, Jun 7, 2014 at 6:41 AM, Charles Determan Jr
> wrote:
>
>> REPLY TO ALL FOR THE R-HELP LIST!!!
>>
>> I apologize for the bluntness but yo
uot;Error in mvrnorm(1, c(0, 0, 0), phi1, tol = 1e-06, empirical =
> FALSE, : incompatible arguments"
>
> Regards
>
>
>
>
> On 5 June 2014 16:45, Charles Determan Jr wrote:
>
>> Hello again Thanoon,
>>
>> Once again, you should send these requests not to me but to the r-help list.
-- Forwarded message --
From: thanoon younis
Date: Thursday, June 5, 2014
Subject: error in R program
To: Charles Determan Jr
many thanks to you Dr. Charles
Really I have a problem with simulating data in xi and now I have this
error: "Error in mvrnorm(1, c(0, 0, 0),
Hello again Thanoon,
Once again, you should send these requests not to me but to the r-help
list. You are far more likely to get help from the greater R community
than just me. Furthermore, it is not entirely clear where your error is.
It is courteous to provide only the code that is run up to the point where
the error occurs.
Greetings,
I would like to randomly remove elements from a numeric vector but with
different probabilities for higher numbers.
For example:
dat <- sample(seq(10), 100, replace=T)
# now I would like to say randomly remove elements but with a higher chance
of removing elements >= 5 and an even greater chance for the largest values.
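A minimal sketch (not from the original post) of one way to do this; the
probabilities themselves are arbitrary placeholders:

dat <- sample(seq(10), 100, replace = TRUE)
drop_prob <- ifelse(dat >= 5, 0.6, 0.2)   # higher chance of dropping values >= 5
keep <- runif(length(dat)) > drop_prob    # TRUE = element survives
dat_reduced <- dat[keep]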
Kafi,
I'm not sure why you contacted me directly so I have also forwarded this to
the r-help list. I am unsure as to what your problem is. At first glance,
I noticed you are missing a parenthesis in the WL3 line near the end, but
that is just after a quick scan of your code. Please be more specific
Thanoon,
My reply to your previous post should be more than enough for you to
accomplish your goal. Please look over that script again:
ords <- seq(4)
p <- 10
N <- 1000
percent_change <- 0.9
R <- as.data.frame(replicate(p, sample(ords, N, replace = T)))
or alternatively as Mr. Barradas suggest
> an interrelationships between variables)
>
> regards
> thanoon
>
>
> On 4 April 2014 18:42, Charles Determan Jr wrote:
>
>> Hi Thanoon,
>>
>> How about this?
>> # replicate p=10 times random sampling n=1000 from a vector containing
>> your ordinal
Hi Thanoon,
How about this?
# replicate p=10 times random sampling n=1000 from a vector containing your
ordinal categories (1,2,3,4)
R <- replicate(10, sample(as.vector(seq(4)), 1000, replace = T))
Cheers,
Charles
On Fri, Apr 4, 2014 at 7:10 AM, thanoon younis
wrote:
> dear sir
> i want to si
I would suggest using summaryBy()
library(doBy)
# sample data with you specifications
subject <- as.factor(rep(seq(13), each = 5))
state <- as.factor(sample(c(1:8), 65, replace = TRUE))
condition <- as.factor(sample(c(1:10), 65, replace = TRUE))
latency <- runif(65, min=750, max = 1100)
dat <- data.frame(subject, state, condition, latency)
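A sketch of how summaryBy might then be applied; the grouping formula is an
assumption, since the original message is cut off here:

# mean and variance of latency for each subject/condition combination
summaryBy(latency ~ subject + condition, data = dat,
          FUN = function(x) c(mean = mean(x), var = var(x)))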
If you just need a count of how many of each number you can just use
table().
> tmp <- c(111,106,117,108,120,108,108,116,113)
> table(tmp)
tmp
106 108 111 113 116 117 120
1 3 1 1 1 1 1
On Thu, Nov 21, 2013 at 9:10 AM, b. alzahrani wrote:
>
> hi guys
>
> Assume I have this data
Here is another solution that is a bit more flexible
tmp <- seq(8)
# split into your desired groups
max.groups <- 2
tmp.g <- split(tmp, ceiling(seq_along(tmp)/max.groups))
# do repeats, unlist, numeric index
as.numeric(unlist(rep(tmp.g, each = 2)))
Hope this works for you,
Charles
On Mon, Nov
Katherine,
There are multiple ways to do this and I highly recommend you look into a
basic R manual or search the forums. One quick example would be:
mysub <- subset(mydat, basel_asset_class > 2)
Cheers,
Charles
On Thu, Oct 17, 2013 at 1:55 AM, Katherine Gobin
wrote:
> Dear Forum,
>
> I have
it is directly doing:
> as.numeric() without the as.character()
> For ex:
> as.numeric(dat[,2])
> #[1] 3 4 1 2 5
>
>
>
>
>
> On Thursday, October 10, 2013 9:33 AM, Charles Determan Jr <
> deter...@umn.edu> wrote:
>
> I'm not honestly sure
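For reference (not part of the original reply), the behaviour being discussed
is the usual factor-to-numeric distinction:

f <- factor(c("0.0057", "0.012", "0.0031"))
as.numeric(f)                # 2 3 1  -- the underlying factor codes
as.numeric(as.character(f))  # 0.0057 0.0120 0.0031 -- the actual values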
ke this or is it something else?
> data.matrix(dat) #
>   a coef coef.l coef.h
> 1 1    3      4      2
> 2 2    4      5      4
> 3 3    1      1      1
> 4 4    2      2      3
> 5 5    5      3      5
>
>
> A.K.
>
>
>
>
>
>
> On Thursday, Oc
data.matrix() should do the job for you
Charles
On Thu, Oct 10, 2013 at 8:02 AM, arun wrote:
> Hi,
> It is not clear whether all the variables are factor or only a few are..
>
> dat <- read.table(text="a coef coef.l coef.h
> 1 1 0.005657825001254 0.00300612956
Filipe,
When you choose a different 'alternative' argument you are testing a different
alternative hypothesis. You are looking at two-tailed, less-than, and
greater-than hypotheses. Which one you choose depends upon your initial
question. Are you asking generically whether your two populations (a and b
Greetings,
I am not sure if this question should be posted on the development mailing
list but perhaps it is general enough for this mailing list. I am
currently developing an R package and there are other packages that use
some internal functions that I would also like to utilize (e.g. reformat
If there isn't multiple sheets you can use the 'gdata' package and
read.xls().
Otherwise you could re-save the file as a csv file and load that file with
read.csv(), again assuming there are not multiple sheets, since a csv cannot
contain them.
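A minimal sketch (the file names are placeholders):

library(gdata)
dat <- read.xls("mydata.xls", sheet = 1)   # requires Perl; reads the first sheet
# or, after saving the sheet as CSV:
dat <- read.csv("mydata.csv", header = TRUE)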
Regards,
Charles
On Wed, Sep 11, 2013 at 8:01 AM, Charles Thuo wro
Greetings,
I have recently been exploring the 'glmnet' package and subsequently
cv.glmnet. The basic code as follows:
model <- cv.glmnet(variables, group, family="multinomial", alpha=.5,
standardize=F)
I understand that cv.glmnet does k-fold cross-validation to return a value
of lambda. Howeve
papers, I often
> have to remind authors of this...
>
> Best
> Stephan
>
>
> On 26.08.2013 21:56, Charles Determan Jr wrote:
>
>> Greetings,
>>
>> I am familiar with the function citation('packageName'), which provides the
>> output generated from the
Greetings,
I am familiar with the function citation('packageName'), which provides the
output generated from the DESCRIPTION file. In most cases this is
sufficient, but I was wondering about contributing authors (in
addition to the primary) who are also listed on the CRAN page. Is there a proper
way to
-
>
> - Original Message -
> *From:* Charles Determan Jr
> *To:* Silvano
> *Cc:* r-help@r-project.org
> *Sent:* Friday, August 23, 2013 11:25 AM
> *Subject:* Re: [R] Randomization
>
> Hi Silvano,
>
> How about this?
>
> id <- seq(80)
>
Greetings,
This is more of an explanation question but I was using the colAUC function
on the iris dataset and everything works smoothly. This provides the AUC
for each pairwise comparison. I decided to do the actual subset for one of
the comparisons and the numbers are different (.9326 v. .9152
Hi Silvano,
How about this?
id <- seq(80)
weight <- runif(80)
# randomize 4 groups with 'sample' function
group <- sample(rep(seq(4),20))
dat <- cbind(id, weight, group)
# ordered dataset by group
res <- data.frame(dat[order(group),])
# get mean and variance for each group
aggregate(res$weight, by = list(group = res$group),
          FUN = function(x) c(mean = mean(x), var = var(x)))
Hi Jenny,
Firstly, to my knowledge you cannot assign the output of cat to an object
(i.e. it only prints it).
Second, you can just add the 'collapse' option of the paste function.
individual.proj.quote <- paste(individual.proj, collapse = ",")
if you really want the quotes
individual.proj.quote
what assumptions you are making to get your power.
>
>
> On Tue, Jul 9, 2013 at 2:18 PM, Charles Determan Jr wrote:
>
>> Greetings,
>>
>> To calculate power for an ANOVA test I know I can use the pwr.anova.test()
>> from the pwr package. Is there a similar
Greetings,
To calculate power for an ANOVA test I know I can use the pwr.anova.test()
from the pwr package. Is there a similar function for the nonparamentric
equivalent, Kruskal-Wallis? I have been searching but haven't come up with
anything.
Thanks,
--
Charles Determan
Integrated Bioscience
Hi Thomas,
If you put the list.files statement inside the write function you won't
have the indices.
Try:
write(list.files(pattern="*"), file="my_files.txt")
Cheers,
Charles
On Mon, Jul 1, 2013 at 2:03 PM, Thomas Grzybowski <
thomasgrzybow...@gmail.com> wrote:
> Hi.
>
> list.files(pattern =
Greetings R users,
I have a rather specific question I hope someone could assist me with.
I have been using the topGO package for some Gene Ontology analysis of some
RNA-seq data. As such I use a organism database from the biomaRt library.
I can create a topGOdata object with the following comma
If you are using the list as simply a collection of data frames a simple
example to accomplish what you are describing is this:
data(iris)
data(mtcars)
y=list(iris, mtcars)
#return Sepal.Length column from first data frame in list
#list[[number of list component]][number of column]
y[[1]][1]
Cheers,
Are you sure the file is in your current working directory? Often people
simply put the full path such as "/Users/Name/RBS.csv"
Cheers,
On Fri, Apr 19, 2013 at 9:30 AM, Gafar Matanmi Oyeyemi
wrote:
> I am trying to read a csv file using the code;
> contol <- read.csv("RBS.csv")
> This is the e
One statistical point beyond A.K.'s well done response. As you should well
know, Kruskal-Wallis is a non-parametric equivalent of ANOVA. However, you
only have two groups and do not require an ANOVA approach. You could
simply use a Mann-Whitney U (a.k.a. independent Wilcoxon) test using
wilcox.test().
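A minimal example with made-up data:

group_a <- c(5.1, 6.3, 5.8, 7.0, 6.1)
group_b <- c(4.2, 4.9, 5.0, 4.4, 5.3)
wilcox.test(group_a, group_b)   # two-sided Mann-Whitney U / Wilcoxon rank-sum test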
10:50 PM, Charles Determan Jr wrote:
>
> > Generic question... I am familiar with generic power calculations in R,
> > however a lot of the data I primarily work with is multivariate. Is
> there
> > any package/function that you would recommend to conduct such power
> > a
Generic question... I am familiar with generic power calculations in R,
however a lot of the data I primarily work with is multivariate. Is there
any package/function that you would recommend to conduct such power
analysis? Any recommendations would be appreciated.
Thank you for your time,
Char
(4), 279-285.
>
> Traditionally, the symbol 'R' is used for the Pearson correlation
> coefficient and one way to calculate R^2 is... R^2.
>
> Max
>
>
> On Sun, Mar 3, 2013 at 3:16 PM, Charles Determan Jr wrote:
>
>> I was under the impression that in PLS analysis, R2 was
l.
>
> However, I don't think that communicating R^2 is effective. Other metrics
> (e.g. accuracy, Kappa, area under the ROC curve, etc) are designed to
> measure the ability of a model to classify and work well. With 3+
> categories, I tend to use Kappa.
>
> Max
>
>
>
Sat, Mar 2, 2013 at 5:21 PM, Charles Determan Jr wrote:
>
>> I have discovered one of my errors. The timematrix was unnecessary and an
>> unfortunate habit I brought from another package. The following provides
>> the same R2 values as it should, however, I still don't know how
training1=iris[inTrain1,]
datvars=training1[,1:4]
dat.sc=scale(datvars)
pls.dat=plsr(as.numeric(training1$Species)~dat.sc,
ncomp=3, method="oscorespls", data=training1)
x=crossval(pls.dat, segments=10)
summary(x)
summary(plsFit2)
Regards,
Charles
On Sat, Mar 2, 2013 at 3:55 PM, Charles De
Greetings,
I have been exploring the use of the caret package to conduct some plsda
modeling. Previously, I have come across methods that result in an R2 and
Q2 for the model. Using the 'iris' data set, I wanted to see if I could
accomplish this with the caret package. I use the following code:
Greetings,
I am exploring some random forest analysis methods and have come upon one
aspect I don't fully understand from any manual. The code of interest is
as follows from the randomForest package:
myiris=cbind(iris[1:4], matrix(runif(508*nrow(iris)),nrow(iris),508))
This would be following b
17 17
> 58 25 18 18
> 59 26 19 19
> 60 27 20 20
> 61 28 21 21
> 62 29 22 22
> 63 30 23 23
>
>
> On Mon, Jan
Greetings R users,
I am trying to renumber my groups within the file shown below. The groups
are currently set as 8, 9, 10, etc. I would like to renumber them as
1, 2, 3, etc. I have searched the help files and have only come across using
rownames to renumber the values, but I need to match values. Any suggestions
would be appreciated.
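A sketch (not from the thread) of one way to map the existing codes onto
1, 2, 3, ...; the data frame and column name are placeholders:

dat <- data.frame(group = c(8, 8, 9, 10, 10, 11))
dat$group <- match(dat$group, sort(unique(dat$group)))
dat$group
# [1] 1 1 2 3 3 4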
Hello,
I am trying to reformat some data so that it is organized by group in the
columns. The data currently looks like this:
      group X3.Hydroxybutyrate X3.Hydroxyisovalerate   ADP
347       4              4e-04                 3e-04 5e-04
353       3              5e-04
)
> m2 <- matrix(rpois(9, 5), nrow = 3, dimnames = list(NULL, paste0("var",
> 4:6)))
> L <- list(m1, m2)
> names(L) <- paste0("matrix", 1:2)
> L
>
> Dennis
>
> On Thu, Oct 25, 2012 at 8:51 PM, Charles Determan Jr wrote:
>
>> A genera
A general question that I have been pursuing for some time but have set
aside. When finishing some analysis, I can have multiple matrices that
have specific column names. Ideally, I would like to combine these
separate matrices for a final output as a csv file.
A generic example:
Matrix 1
var1A
This is more of a general question without data. After doing 'survdiff',
from the 'survival' package, on strata including four groups (so 4 curves
on a Kaplan-Meier curve) you get a chi-squared p-value indicating whether to
reject the null hypothesis or not. Is there a method to follow up with
pairwise testing
Thank you for all your responses, I assure you this is not homework. I am
a graduate student and my classes are complete. I am trying multiple
different ways to analyze data and my lab requests different types of
scripts to accomplish various tasks. I am the most computer savvy in the
lab so it c
Hello,
I am trying to set up a loop that can run the survdiff function with the
ultimate goal to generate a csv file with the p-values reported. However,
whenever I try a loop I get an error such as "invalid type (list) for
variable 'survival_data_variables[i]".
This is a subset of my data:
str
still got
the error. Is this something to just ignore, and add a component in the
loop saying there aren't four groups for that variable?
Regards
On Wed, Oct 17, 2012 at 3:58 PM, David Winsemius wrote:
>
> On Oct 17, 2012, at 1:52 PM, Charles Determan Jr wrote:
>
> Hi A.K.
>&g
99] (97,98] (97,98] (98,99] (98,99] (96,97] (97,98]
> #[71] (97,98] (97,98] (98,99] (97,98] (97,98] (97,98] (98,99]
> #[78] (97,98] (97,98]
> #Levels: (-Inf,96] (96,97] (97,98] (98,99]
> A.K.
>
>
>
>
> - Original Message -
> From: Charles De
To R users,
I am trying to use cut2 function from the 'Hmisc' library. However, when I
try and run the function on the following variable, I get an error message
(displayed below). I suspect it is because of the NA but I have no idea
how to address the error. Many thanks to any insights.
struc
Greetings R users,
My goal is to generate quartile groups of each variable in my data set. I
would like each experiment to have its designated group added as a
subsequent column. I can accomplish this individually with the following
code:
brks <- with(data_variables, cut2(var2, g = 4))
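To do this for every variable at once, one possibility (a sketch assuming all
columns of data_variables are numeric) is:

library(Hmisc)
quart <- as.data.frame(lapply(data_variables, function(x) cut2(x, g = 4)))
names(quart) <- paste0(names(quart), "_qgrp")
data_full <- cbind(data_variables, quart)   # original columns plus quartile groups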
periment
> idx <- dat$Time_of_end != ''
> dat$End_of_Experiment <- dat$Start_of_Experiment + 48*60*60
> dat$End_of_Experiment[idx] <-
> as.POSIXct(strptime(paste(dat$Start_date, dat$Time_of_end)[idx],
> "%m/%d/%Y %H:%M:%S"))
> dat
>
>
> Hope this
Greetings,
My data set has dates and times that I am working with. Some of the times
in Time_of_end are blank. This is supposed to dictate that the particular
experiment lasted 48 hours. I would like to add 48 hours to the start time,
Start_of_Experiment, to create another column, End_of_Experiment, includ
> -------
> Sent from my phone. Please excuse my brevity.
>
> Charles Determan Jr wrote:
>
> >Hello R users,
> >
> >This is more of a convenience question that I hope others might find
> >u
Hello R users,
This is more of a convenience question that I hope others might find useful
if there is a better answer. I work with large datasets that require
multiple parsing stages for different analyses. For example, compare group
3 vs. group 4. A more complicated comparison would be time
sing something obvious]
>
> yourFunc <- function(x){
> dsx <- deparse(substitute(x))
> x <- length(unique(x))
> names(x) <- dsx
> x
> }
>
> yourFunc(ID)
>
> yourFunc(ID^2)
>
> yourFunc(ID[ID==2])
>
>
> etc.
>
> Hope this help
, 2012 at 11:50 AM, Steve Friedman wrote:
> ?table
> On May 25, 2012 11:46 AM, "Charles Determan Jr" wrote:
>
>> Hello,
>>
>> Simple question that I am stuck on and can't seem to find an answer in the
>> help files currently. I have a list which
ue(ID))
>
> Michael
>
> On Fri, May 25, 2012 at 11:38 AM, Charles Determan Jr
> wrote:
> > Hello,
> >
> > Simple question that I am stuck on and can't seem to find an answer in
> the
> > help files currently. I have a list which contains repeated
Hello,
Simple question that I am stuck on and can't seem to find an answer in the
help files currently. I have a list which contains repeated ID's. I would
like to have R count the number of ID's. For example:
ID=c(1,1,1,1,2,2,2,2,3,3,3,3)
as.data.frame(ID)
Clearly, there are 3 groups. How w
erc_table")
> > identical(getNativeSymbolInfo(entry_point),
> getNativeSymbolInfo("inner_perc_table"))
> [1] TRUE
> > identical(getNativeSymbolInfo(entry_points[2]),
> getNativeSymbolInfo("inner_perc_table"))
> [1] TRUE
>
> Bill Dunlap
>
ction work?
Regards,
Charles
On Thu, May 24, 2012 at 11:51 AM, Duncan Murdoch
wrote:
> On 24/05/2012 11:35 AM, Charles Determan Jr wrote:
>
>> Hello,
>>
>> Does anyone on this list know what inner_perc_table is or where it is
>> typically found? I am trying to modify
ssed while doing this?
Thanks,
Charles
On Thu, May 24, 2012 at 11:51 AM, Duncan Murdoch
wrote:
> On 24/05/2012 11:35 AM, Charles Determan Jr wrote:
>
>> Hello,
>>
>> Does anyone on this list know what inner_perc_table is or where it is
>> typically found? I am try
Hello,
Does anyone on this list know what inner_perc_table is or where it is
typically found? I am trying to modify some source code and it is used
with the .C() function. When I try and run it, it states that
'inner_perc_table is not found'. It is only called in such a way and isn't
defined at
Greetings again R users,
Some of you will likely recognize me but I hope you can help me once
more. I have tried the mixed model mailing list for this question but have
yet to find a solution. As such I hope someone will have another idea.
I have previously attempted to replicate the UN, CS, an
Greetings R users,
My interest in the Q2cum score comes from my endeavor to replicate SIMCAP
PLS-DA analysis in R. I use the exact same dataset. After doing the
analysis in R, I can get the exact same R2Ycum. However, the Q2cum is
significantly off. Adding the Q2cum of the 1st and 2nd component
com
Greetings R users,
I have a curious problem. I read in a csv file (subset shown below) as
normal
data=read.table("C:/Users/Chaz/Desktop/test.csv",sep=",",header=TRUE,
na.strings=".")
However, the numbers from the dataset are not registered as numeric:
is.numeric(data$Mesh)
[1] FALSE
When I try
Greetings R users,
I have been hoping someone would be familiar with this topic. I understand
fully everything on this list is from the good graces of those who wish to
help. Thanks to those who have helped in multiple circumstances. However,
I wanted to post this question once more. I hope so
Greetings R users,
I have been working on running plsda and I would like to have the R2 and Q2
values. I know the function R2 from the 'pls' package will generate both
R2 and Q2 but they are for each separate class. Is there a way to get the
cumulative R2 and Q2 for the whole model?
R2(pls.new,
Greetings R users,
I have been working on running plsda and I would like to have the R2 and Q2
values. I know the function R2 from the 'pls' package will generate both
R2 and Q2 but they are for each separate class. Is there a way to get the
cumulative R2 and Q2?
R2(pls.new, estimate="all")
Re