On Jul 5, 2013, at 08:49 , Ben Bolker wrote:
> Fernando Marmolejo Ramos adelaide.edu.au>
> writes:
>
>>
>
> [snip]
>
>> is it appropriate to use a log-likelihood ratio (G-test) test of
>> independence when dealing with repeated
>> categorical responses (e.g. 2 by 2 table) instead of the McNe
Hi
I'm having trouble building a custom package in R. Building the package on my
colleague's computer (who wrote the R code) has been (and still is)
working fine for a long time, but I simply can't duplicate the setup correctly
somehow (even with his help).
I have installed the same versions of so
Hi, I have to run almost 120 station files of temperature (max and min),
separated, and rain. I am following the instructions in the RHtests tutorial at the
http://www.cmc.org.ve/mediawiki/index.php?title=Preparando_los_datos link. But
I have no success running it on multiple files. I have the .ls fi
Hi everyone,
I have observations of the habitats of birds that I consider independent. I
also have observations of habitats from several forest fragments with
several observations within each fragment that are non-independent.
I'm curious to know the best strategy to code these observations for
a
Hi Anika,
?merge() is a better solution.
To get the row.names intact, you could do:
carbon.fit <- within(carbon.fit, {x <- round(x, 10); y <- round(y, 10)})  # using Bill's
solution
dat1 <- data.frame(x = round(xt, 10), y = round(yt, 10))
carbon.fit1 <-
  data.frame(carbon.fit, rNames = row.names(carbon.fit), stringsAs
Hi All
I have a huge matrix m (10276 x 10276) with the same column and row names
(it's a gene correlation matrix). I have another text file which
has 2700 names, basically locus IDs of genes, which are also
rownames/colnames in m. Now I want to select all those columns from m whose
names
Hello,
I'm not sure whether it is due to the HTML version or not, but there is a
problem with quotation marks in your first line.
Regards,
Pascal
2013/7/5 RODRIGUEZ MORENO VICTOR MANUEL
> Hi, I have to run almost 120 station files of temperature (max and min),
> separated, and rain. I am foll
On 05-07-2013, at 09:53, Jannetta Steyn wrote:
>
>
> >
> > I don't quite know how to explain the "doesn't work" in more detail without
> > any visual aid.
>
> You said that R got into an indefinite loop, whatever that may be.
>
>
> > When I change the solver to ode45 the script never stops r
Hi,
I am trying to remove duplicate Patient numbers in a clinical record. I used
unique().
menPatients[1:40,1]
[1] abr1160(C)/001 ABR1363(A)/001 ABR1363(A)/001 ABR1363(A)/001 abr1772(B)/001
 [6] AFR0003/001    AFR0003/001    afr0290(C)/001 afr1861(B)/001 Aga0007/001
[11] AGA1548(A)/001 AGA1548(A)
Use 'match' to convert the names to column indices and then use that for
indexing:
indx <- match(subCols, colnames(yourMatrix))
mySubset <- yourMatrix[, indx]
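A minimal runnable sketch of this approach (the matrix, the name vector and the object names below are illustrative stand-ins, not the poster's data):
set.seed(1)
m <- matrix(rnorm(25), 5, 5,
            dimnames = list(paste0("gene", 1:5), paste0("gene", 1:5)))
wanted <- c("gene2", "gene4", "not_in_m")        # e.g. locus IDs read from a text file
idx <- match(wanted, colnames(m))                # NA for names not present in m
idx <- idx[!is.na(idx)]                          # drop unmatched names
subset_m <- m[, idx, drop = FALSE]               # keep matrix structure
# Equivalent shortcut: m[, intersect(wanted, colnames(m)), drop = FALSE]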
Sent from my iPad
On Jul 5, 2013, at 2:22, Chirag Gupta wrote:
> Hi All
>
> I have a huge matrix m (10276 X 10276) dimension with same col
Revolution Analytics staff write about R every weekday at the Revolutions blog:
http://blog.revolutionanalytics.com
and every month I post a summary of articles from the previous month
of particular interest to readers of r-help.
In case you missed them, here are some articles related to R from t
Dear Wolfgang and other readers of the r-help list,
Thank you very much for your suggestion. Unfortunately, the data that I
have cannot be described with a table such as the one you have made,
because there is no identical trial under both treatment 1 and treatment
2. To explain, let me explai
Dear R community,
I have to approximate a time series as a linear combination of many time
series in a dataset. Of course, I would like this linear combination
to be optimal (I would like to use the minimal number of time series),
even though I may lose some information. Moreover, each variable of th
Hi,
testUnique <- unique(testData[!is.na(testData)])
or
testUnique <- unique(na.omit(testData))
And probably some other solutions.
Regards,
Pascal
2013/7/5 Pancho Mulongeni
> Hi,
> I am trying to remove duplicate Patient numbers in a clinical record, I
> used unique
> menPatients[1:40,1]
>
Hi all,
After running kruskal.test I got results (p < 0.005) pointing to rejecting
the hypothesis that the samples were drawn from the same population.
However, when I run kruskalmc there are no significant differences in
any of the multiple comparisons. Is that possible? Any clarification?
T
Hello,
Your data example is difficult to read into an R session. Next time,
post the output of ?dput. Like this:
dput(menPatients[1:40, 1]) # post the output of this
The help page for unique says that "Missing values are regarded as
equal" so you should expect one NA to still be present i
Yes, thanks, this is what I ended up doing, but I thought there would be an
'internal' way to disregard NAs in unique.
Thanks for the tip on dput.
-Original Message-
From: Rui Barradas [mailto:ruipbarra...@sapo.pt]
Sent: 05 July 2013 11:39
To: Pancho Mulongeni
Cc: r-help@r-project.org
Subjec
Humber,
Have a look at this:
http://r.789695.n4.nabble.com/Multiple-Comparisons-Kruskal-Wallis-Test-kruskal-agricolae-and-kruskalmc-pgirmess-don-t-yield-the-sa-td4639004.html
Hope it helps.
Kind regards,
José
Prof. José Iparraguirre
Chief Economist
Age UK
-Original Message-
From: r-he
On 5 July 2013 09:44, Berend Hasselman wrote:
>
> On 05-07-2013, at 09:53, Jannetta Steyn wrote:
>
> >
> >
> > >
> > > I don't quite know how to explain the "doesn't work" in more detail
> without
> > > any visual aid.
> >
> > You said that R got into an indefinite loop, whatever that may be.
> >
> Dear R users,
> Please help me with some documentation for newbies about R
> programming, algorithms, and creating iterative C++ functions (like
> for, while, if, etc.).
Start at
http://www.R-project.org/posting-guide.html
and especially the resources cited towards the end of that page, just after
> I only get an error message when trying to build a binary
> file (or running R CMD check), not the
> standard tar.gz. Here is what the output looks like in the
> command prompt:
> ...
> I have gotten the impression that generally the error with
> the "Error in file..." is a problem wi
Thank you, Prof. José Iparraguirre. Maybe I am wrong, but I think the issues
are not the same. His data didn't show significant differences after
kruskal.test(), which was not my case. Anyway, below are the results
I've got and the database.
Thank you,
#
> kruskal.test(data$r
Thank you, Mr. Kane, for your time. I finally achieved my objective with the
reshape2 package.
n5dt<-last(dezdiff,5)
n5dt <- as.data.frame(t(n5dt))
n5dt$id <- c(1:35)
subm5dt<-melt(n5dt, id="id", c("2013-06-28", "2013-07-01", "2013-07-02",
"2013-07-03", "2013-07-04") )
names(subm5dt) <- c("Observat
Dear all,
I'm trying to perform a non-parametric multivariate repeated measures
analysis of 9 ordered variables (scale 0-3) at two time points. So
basically a multivariate repeated-measures GLM for ordered variables.
While the package repolr can model one ordered variable over time, I have
not fou
Hi Ellison
Thanks for the reply. The test package build I did using the skeleton package
was in the same folder as the one I'm using now for the custom 'slo' package,
so that is not the problem.
I just investigated whether it was an issue with program rights regarding
downloading and installin
On Jul 5, 2013, at 15:00 , Humber Andrade wrote:
> Thank you, Prof. José Iparraguirre. Maybe I am wrong, but I think the issues
> are not the same. His data didn't show significant differences after
> kruskal.test(), which was not my case. Anyway, below are the results
> I've got and the dat
Thank you, Dr. Dalgaard. I understood. I know that this list is not for
discussing statistics, but I would be very glad if you or someone else could
give me some opinion on how to proceed. The kruskal.test says there are
differences, but the multiple comparisons do not point out what the
differences are. Ca
> Thanks for the reply. The test package build I did using the
> skeleton package was in the same folder as the one I'm using
> now for the custom 'slo' package, so that is not the problem.
>
Have you checked the detailed logs to see exactly which file is causing the
trouble?
**
I've got a data set of about 5,000 observations with a zero-inflated count
response variable, two predictor variables and a variable which is effectively
an area of opportunity, which I want to use as an offset term (all continuous).
I want to explore the association between these variables, in
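The message is cut off above. For the record, a hedged sketch of one common option for zero-inflated counts with an exposure term: pscl::zeroinfl() with the area entered as an offset. This is an assumed package and model choice on simulated data, not the poster's analysis:
library(pscl)                     # zeroinfl(); assumed choice of package
set.seed(42)
n    <- 500
x1   <- rnorm(n); x2 <- rnorm(n); area <- runif(n, 1, 10)
mu   <- exp(0.2 + 0.5 * x1 - 0.3 * x2) * area
y    <- rbinom(n, 1, 0.7) * rpois(n, mu)          # roughly 30% structural zeros
dat  <- data.frame(y, x1, x2, area)
fit  <- zeroinfl(y ~ x1 + x2 + offset(log(area)) | x1 + x2, data = dat)
summary(fit)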
On Jul 4, 2013, at 8:14 PM, Eric Archer - NOAA Federal wrote:
> I have a character vector that I'm using to label ticks in a dotchart. Some
> of the elements in the vector have an asterisk (*) where a Greek Delta
> needs to be placed when the plot is generated. Here's a simple example:
>
> x <-
David,
That's perfect! I just didn't think to use 'parse'. Thanks!
Cheers,
eric
On Fri, Jul 5, 2013 at 8:20 AM, David Winsemius wrote:
>
> On Jul 4, 2013, at 8:14 PM, Eric Archer - NOAA Federal wrote:
>
> > I have a character vector that I'm using to label ticks in a dotchart.
> Some
> > of th
On Jul 5, 2013, at 1:04 AM, Pascal Oettli wrote:
> Hello,
>
> I'm not sure whether it is due to the HTML version or not, but there is a
> problem with quotation marks in your first line.
I'm guessing the OP was using Word as an editor, a practice that is doomed to
frustration.
--
David.
>
>
Thank you very much to all of you for your help. This model now works as it
should (I believe). This is the final code:
rm(list=ls())
library(deSolve)
ST <- function(time, init, parms) {
with(as.list(c(init, parms)),{
#functions to calculate activation m and inactivation h of the current
On 05-07-2013, at 17:37, Jannetta Steyn wrote:
> Thank you very much to all of you for your help. This model now works as it
> should (I believe). This is the final code:
>
> rm(list=ls())
>
> library(deSolve)
>
> ST <- function(time, init, parms) {
> with(as.list(c(init, parms)),{
>
>
Aah. Thanks. I remember someone mentioned a named vector and I meant to ask
what that was, but I forgot. I have now fixed it.
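For readers following along, a minimal sketch of why deSolve wants a named state vector; this is a toy exponential-decay model, not the poster's neuron model:
library(deSolve)
model <- function(time, state, parms) {
  with(as.list(c(state, parms)), {
    dV <- -k * V              # 'V' and 'k' resolve here only because the vectors are named
    list(c(dV))
  })
}
init  <- c(V = 1)             # named state vector: the names are what with() exposes
parms <- c(k = 0.5)
out   <- ode(y = init, times = seq(0, 10, by = 0.1), func = model, parms = parms)
head(out)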
On 5 July 2013 17:18, Berend Hasselman wrote:
>
> On 05-07-2013, at 17:37, Jannetta Steyn wrote:
>
> > Thank you very much to all of you for your help. This model now
Dear R community
I am currently trying to get post hoc multiple comparisons with the multcomp
package from a Cox mixed-effects model, where the survival is explained
by two variables (cover, with levels nocover and cover; treatment, with
levels tx, uv, meta), whose interaction is significant.
I read H
On Jul 5, 2013, at 16:34 , Humber Andrade wrote:
> Thank you Dr. Dalgaard. I understood. I know that this list is not to discuss
> statistics but I would be very glad if you or someone else can give me some
> opinion on how to proceed. The kruskal.test says there are differences but
> the mult
Hi Bert, Dennis,
I'll agree that using a barchart was a poor choice. I was in fact using a
notched bwplot to show the median and confidence interval of the median. In
this case it's the median and confidence interval that I want to highlight,
and I find that the visual noise of the box and whisker
On Jul 5, 2013, at 11:15 AM, Shaun Jackman wrote:
> Hi Bert, Dennis,
>
> I'll agree that using a barchart was a poor choice. I was in fact using a
> notched bwplot to show the median and confidence interval of the median. In
> this case it's the median and confidence interval that I want to high
Be careful!
You are talking about 2 different varieties of apples here. As I read
it, the CI's in the cancer data, which I know is just for example
purposes, are CI's for the **individual means**; the notches in
boxplots are nonparametric and for 2 groups with roughly equal sample
sizes, "The ide
Yes! Thank you, David. That's exactly what I'm looking for. For
the record, here are a couple of pages leading to this answer:
http://www.hep.by/gnu/r-patched/r-faq/R-FAQ_89.html
http://latticeextra.r-forge.r-project.org/man/segplot.html
http://rgm3.lab.nig.ac.jp/RGM/r_function?p=latticeExtra&f=seg
Hmm. Interesting point, Bert. I don't know whether the notches show
the 95% confidence interval of the median, or an interval such that two
non-overlapping notches indicate different medians with roughly 95%
confidence. You're saying it's the latter? Anyone know what the 95%
confidence interval of the median would be?
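For reference, the notch definition is documented in ?boxplot.stats: the notches extend to roughly median +/- 1.58 * IQR / sqrt(n) (with the IQR taken from the hinges), intended as a rough visual test that two medians differ when the notches do not overlap. A small sketch on made-up data:
set.seed(1)
x  <- rnorm(100)
st <- boxplot.stats(x)
st$conf                                                        # the notch endpoints
st$stats[3] + c(-1, 1) * 1.58 * diff(st$stats[c(2, 4)]) / sqrt(length(x))  # same, by hand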
Hi,
Maybe this helps.
If you had shown your solution, it would be easier to compare.
res <- data.frame(lapply(sapply(MyDF[, c(2, 4)], function(x) {
           x1 <- which(c(0, diff(x)) < 0)
           x1[length(x1) == 0] <- 0
           x1
         }), `[`, 1))
res
#   TNH BIX
# 1   3   9
# Speed:
set.seed(24)
MyDFNew <-
  data.frame(TNH = sample(0:1, 1e6,
On 7/3/2013 6:38 PM, Henrique Dallazuanna wrote:
> Try this:
>
> as.formula(sprintf(" ~ %s", do.call(paste, c(lapply(mutual(3), paste,
> collapse = ":"), sep = "+"
>
Thanks for this. I encapsulated this as a function, loglin2formula()
and a related function,
loglin2string() to give a chara
Hello,
I have a data frame with several columns.
I'd like to select some subset *and* order by another field at the same time.
Example:
a b c
1 2 3
3 3 4
2 4 5
1 3 4
etc
I want to select all rows where b=3 and then order by a.
To
Hello,
Maybe like this?
subset(x[order(x$a), ], b == 3)
Hope this helps,
Rui Barradas
On 05-07-2013 20:33, Noah Silverman wrote:
Hello,
I have a data frame with several columns.
I'd like to select some subset *and* order by another field at the same time.
Example:
a b c
1
That would work, but is painfully slow. It forces a new sort of the data with
every query. I have 200,000 rows and need almost a hundred queries.
Thanks,
-N
On Jul 5, 2013, at 12:43 PM, Rui Barradas wrote:
> Hello,
>
> Maybe like this?
>
> subset(x[order(x$a), ], b == 3)
>
>
> Hope thi
Hello,
If time is one of the problems, precompute an ordered index, and use it
every time you want the df sorted. But that would mean you can't do it
in a single operation.
iord <- order(x$a)
subset(x[iord, ], b == 3)
Rui Barradas
On 05-07-2013 20:47, Noah Silverman wrote:
That would w
It may be that single and efficient are opposing goals. Two steps
lets you create the subset and then just order each query.
Alternatively, if the data do not change often, create an ordered
version and query that.
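A small sketch of that two-step approach on toy data (the real data frame has 200,000 rows and close to a hundred queries, so the one-time sort pays for itself):
set.seed(1)
x <- data.frame(a = sample(1:50, 2000, TRUE),
                b = sample(1:10, 2000, TRUE),
                c = rnorm(2000))
x_sorted <- x[order(x$a), ]                    # sort once
queries  <- 1:10                               # the values of b to query
results  <- lapply(queries, function(v) x_sorted[x_sorted$b == v, ])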
David Carlson
Original Message-
From: r-help-boun...@r-project.org
[mailt
> Call:
> loglm(formula = form, data = x) ### I want formula = ~Hair:Eye + Sex
> here
Since your function made the call
loglm(form, data=x)
the 'call' component of the output is going to show 'form', not '~ Hair:Eye + Sex'.
You can use bquote to pre-evaluate the formula=form argument to get
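The message is truncated, but the bquote() idea can be shown with lm() and a built-in data set; the same pattern applies to MASS::loglm():
form <- mpg ~ wt + hp
fit1 <- lm(form, data = mtcars)
fit1$call                                   # lm(formula = form, data = mtcars)
fit2 <- eval(bquote(lm(.(form), data = mtcars)))
fit2$call                                   # lm(formula = mpg ~ wt + hp, data = mtcars)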
This is not an R question. Read the references.
Bert
Sent from my iPhone -- please excuse typos.
On Jul 5, 2013, at 12:15 PM, Shaun Jackman wrote:
> Hmm. Interesting point, Bert. I don't know whether the notches show
> the 95% confidence interval or the median, or the 95% confidence
> interva
Hello,
When I run the below syntax:
Trial<-read.table("Trial.txt",header=TRUE)
Trial
save.image(file="Trial.RData")
load("Trial.RData")
fit<-logistf(data=Trial, y~x1+x2)
summary(fit)
AIC(fit)
I am getting the below error:
> AIC(fit)
Error in UseMethod("logLik") :
no applicable method for 'log
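The error report is cut off; it says AIC() cannot find a logLik() method for the object returned by logistf(). A small diagnostic sketch (not a fix) to check which classes provide such a method:
methods("logLik")     # classes for which a logLik() method is registered
# class(fit)          # compare the class of the hypothetical logistf fit with that list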
I created a table like this:
Analysis of Variance Table
Response: dati
            Df Sum Sq Mean Sq F value    Pr(>F)
groups       2    114   57.00      76 4.134e-11 ***
Residuals   24     18    0.75
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
and saved
Off list I sent the OP a note that wrapnls() from nlmrt calls nls after
nlxb finishes. This is not, of course, super-efficient, but returns the
nls-structured answer.
JN
On 13-07-05 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 49
Date: Fri, 5 Jul 2013 08:30:39 +0700
From: Robbie Weter
How can I extract the Std.err and the estimated alpha value from the geeglm
function in R?
--
View this message in context:
http://r.789695.n4.nabble.com/geeglm-tp4670936.html
Sent from the R help mailing list archive at Nabble.com.
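A hedged sketch of one way to do this, assuming the geepack package and that the summary components are named as below in your version (check str(summary(fit)) on your own fit); the dietox data set ships with geepack:
library(geepack)
data(dietox)
fit <- geeglm(Weight ~ Time, id = Pig, data = dietox,
              family = gaussian, corstr = "exchangeable")
s <- summary(fit)
s$coefficients                 # regression estimates with a "Std.err" column
s$coefficients[, "Std.err"]
s$corr                         # estimated correlation parameter alpha and its Std.err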
Hi, here I have a data frame called MyDF.
a<-c(1,1,1,1,1,0,0,0,1,1)
b<-c(1,1,0,1,1,0,0,0,1,1)
c<-c(1,1,1,1,1,1,1,0,1,1)
d<-c(1,1,1,1,1,1,1,1,0,1)
MyDF<-data.frame(DWATT=a,TNH=b,CSGV=c,BIX=d)
My requirement is: I need a function to get, for particular row
number(s), when particular column(s)
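The question is cut off above; a hedged sketch based on the diff()-based answer that appears earlier in this digest: for chosen columns of the MyDF defined above, return the first row at which the value drops from 1 to 0 (0 if it never does):
firstDrop <- function(x) {
  i <- which(diff(x) < 0)      # rows where the series decreases (1 -> 0)
  if (length(i) == 0) 0 else i[1] + 1
}
sapply(MyDF[, c("TNH", "BIX")], firstDrop)
# TNH BIX
#   3   9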
I would suggest seeing if you can use the `lapply` or `sapply` function,
that way most of the details of creating the object are taken care of for
you. If you want the list named then you can use the `names` or `setNames`
function, or the "USE.NAMES" argument to `sapply`.
On Thu, Jul 4, 2013 at
I'm trying to format a given character vector as an expression with Greek
symbols to be used in labeling axis ticks. Thanks to some help from David
Winsemius, I've learned how to make the substitution and place the Greek
symbols in; however, I've run into another problem: some of my labels have
comm
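The message is truncated, but here is a minimal sketch of the parse(text = ...) route discussed earlier in this thread; the character vector below is made up, and single-quoting the label text inside the plotmath expression is what keeps commas and spaces from breaking the parse:
x    <- c("*G. species one", "plain, with comma", "*G. sp. two")
labs <- ifelse(grepl("^\\*", x),
               paste0("Delta ~ '", sub("^\\*", "", x), "'"),   # asterisk -> Greek Delta
               paste0("'", x, "'"))
labs <- parse(text = labs)
plot(seq_along(x), seq_along(x), yaxt = "n", ylab = "")
axis(2, at = seq_along(x), labels = labs, las = 1)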
David Carlson tamu.edu> writes:
>
> It may be that single and efficient are opposing goals. Two steps
> lets you create the subset and then just order each query.
> Alternatively, if the data do not change often, create an ordered
> version and query that.
>
I don't know the data.table pack
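The message above is truncated, but here is a minimal sketch of the data.table approach it appears to point at (toy data standing in for the 200,000-row data frame):
library(data.table)
set.seed(1)
x  <- data.frame(a = sample(1:5, 20, TRUE),
                 b = sample(1:4, 20, TRUE),
                 c = rnorm(20))
dt <- as.data.table(x)
setkey(dt, b, a)          # physically sorts by b, then a -- done once
dt[J(3)]                  # all rows with b == 3, already ordered by a
# without keys: dt[b == 3][order(a)]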
Hi,
Maybe this helps:
dat1<- read.table(text="
P1_prom Nom
1 -6.17 Pt_00187
2 -6.17 Pt_00187
3 -6.17 Pt_00187
4 -6.17 Pt_00187
5 -6.17 Pt_00187
6 -6.17 Pt_01418
7 -5.77 Pt_01418
8 -5.37 Pt_01418
9 -4.97 Pt_01418
10 -4.57 Pt_01418
",sep="",header=TRUE,stringsAsFactors=FALSE)
library(zoo)
dat1
Hi,
Maybe this helps.
dat1<- read.table(text="
Col1,Col2
High value,9
Low value,0
High value,7
Low value,0
Low value,0
No data,0
High value,8
No data,0
",sep=",",header=TRUE,stringsAsFactors=FALSE)
dat1$Col2[dat1$Col1=="No data"]<- NA
dat1
# Col1 Col2
#1 High value 9
#2 Low value
Hi,
I am attempting to evaluate the prediction error of a coxph model that was
built after feature selection with glmnet.
In the preprocessing stage I used na.omit(dataset) to remove NAs.
I reconstructed all my factor variables into binary variables with dummies
(using model.matrix).
I then used