Hi
> Hi All,
>
> I am a beginner in R programming, so please forgive me if my question
> seems silly or hard to understand.
>
> 1. We have a list of elements, say:
>
> ls<-list("N","E","E","N","P","E","M","Q","E","M")
>
> 2. We have another list of tables i
Hi all,
I am using integrate() to get the area below a function shaped like an
exponential, which becomes very small for larger values of the horizontal
coordinate y.
In the code below, I present the trick I use to improve the result
obtained. I compare this result with the one I get from Mathe
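For illustration, a hedged sketch (not the poster's code, which was
truncated): for an integrand that decays roughly exponentially, passing
upper = Inf to integrate() is usually more reliable than a very large finite
limit, because the adaptive quadrature can place all of its initial points
where the integrand is already essentially zero.
f <- function(y) exp(-0.01 * y)        # assumed stand-in, not the real integrand
integrate(f, lower = 0, upper = Inf)   # ~100, with a small absolute error
integrate(f, lower = 0, upper = 1e7)   # a huge finite limit can come back near 0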
Hi all,
I am trying to fit a generalized estimating equation (GEE) with the "geepack"
package and I am not 100% sure what exactly the "id" argument means. It
seems to be an important argument, because results differ considerably when
different clusters are defined.
I have a data set of counts (Poisson dist
> Gordon Robertson
> on Wed, 24 Aug 2011 22:21:22 -0700 writes:
> I'm fairly new to the silhouette functionality in the
> cluster package, so I apologize if I'm asking something
> naive. If I run the 'agnes(ruspini)' example from the
> silhouette section of the cluster
You need to tell us why you want to use a GEE model. From your use
of corstr = "ar1" I would surmise you think the counts are serially
correlated during a year (despite the presence of a 'month' main
effect), in which case the id is 'site'.
All 'id' does is partition the data into cluster
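For illustration, a hedged sketch of the kind of call this describes (the
data frame and variable names below are assumptions, not the poster's):
library(geepack)
## rows sharing an 'id' value form one cluster; within a cluster the working
## correlation (here AR-1, i.e. serial) is applied, so the data should be
## ordered with each cluster's rows contiguous
fit <- geeglm(count ~ month + treatment,
              id     = site,
              family = poisson,
              corstr = "ar1",
              data   = butterflies)
summary(fit)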
On 08/28/2011 04:07 AM, karthicklakshman wrote:
Dear R community,
With the advantage of being "NEW" to R, I would like to post a very basic
query here.
I need to represent gene expression data, which ranges from -0.09 to
+4, on plot "segments". Please find below the data df; the expressio
chuan_zl wrote:
> Dear All.
>
> I am Chuan. I am a beginner in R, and I am facing a problem removing
> elements from a vector. I have a vector of 238 elements, as follows (in part):
>
> [1] 0 18 24 33 44..[238] 255
>
> Let the vector be labelled "x"; I want to remove the element "0
Hi All,
1) Is it possible to set the options such that R opens a new script editor
every time I start R, and 2) can I specify the size of its windows?
Thanks for any suggestions, and best regards,
Krishna
thanks for your answer!
For the butterfly counts we used butterfly bait traps; they were not visual
counts. I read several ecological papers that treat species or individual
counts as Poisson, applying a GLM rather than e.g. repeated-measures ANOVA. I
assumed that the monthly collection out of a sp
Hi Experts,
I was trying to write a data frame, which has a header row,
from R to an Excel file on disk using the RODBC (RODBC_1.3-1) package. I ran
into an issue: if in sqlSave() I set the parameter colnames=FALSE, then I get
the first row as the header in the Excel file. If colnames=TRUE, then it giv
On 08/29/2011 08:03 PM, SNV Krishna wrote:
Hi All,
1) Is it possible to set the options such that R opens a new script editor
every time I start R, and 2) can I specify the size of its windows?
Hi Krishna,
You can start an editor like this:
system("my_editor",wait=FALSE)
where "my_editor" is the na
This can be of interest:
http://moderntoolmaking.blogspot.com/2011/08/25-more-ways-to-bring-data-into-r.html
On Sun, Aug 28, 2011 at 3:00 PM, R. Michael Weylandt <
michael.weyla...@gmail.com> wrote:
> "I have simplified the code only to download the sp500 index."
>
> Perhaps you have, but you h
I recommend reading the posting guide and providing a reproducible example.
---
Jeff Newmiller
I did a simple little simulation of a binary variable in a two-armed trial. I
was quite surprised by the number of p-values delivered by the fisher.test
function which were > 1 (!). Of course, under the null hypothesis you expect a
fair number of outcomes with the same number of events in both arms
By the "null distribution" do you mean that the assignment of each
observation to a column is equal? If so, the function sample() might
serve your needs. For example:
rows <- 3
cols <- 4
rowtot <- 100
m <- matrix(NA, nrow=rows, ncol=cols)
for(i in seq(rows)) {
m[i, ] <- tabulate(samp
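The tail of that call was cut off in the archive; a hedged reconstruction of
what the loop was presumably doing (assign rowtot observations to columns
uniformly at random, then count them):
rows   <- 3
cols   <- 4
rowtot <- 100
m <- matrix(NA, nrow = rows, ncol = cols)
for (i in seq(rows)) {
  ## draw a column index for each of the rowtot observations,
  ## then count how many landed in each column
  m[i, ] <- tabulate(sample(cols, rowtot, replace = TRUE), nbins = cols)
}
rowSums(m)   # each row sums to rowtot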
Dear R users
When I use optim() with BFGS, I get a result without any error
message. However, when I use optimx() with BFGS (or spg), I get the
following error message.
> optim
Thank you so much!!!
Could you tell me also how to change the size of the chart? There is not
enough space below the chart to add the arrows!
2011/8/28 Uwe Ligges-3 [via R] wrote:
>
>
> On 26.08.2011 15:50, Paola Tellaroli wrote:
> > I lied, that was
I've tried to create a heatmap from only a single row of data, but I got an
error saying that the data should have more than one row. Could you please
suggest how to create a single-row heat map?
Thanks in advance for your help.
Thitipong
Dear Jim,
Thank you very much for your code.
There is no problem with
df[df[,2]>0,3]<-color.scale(df[df[,2]>0,2],c(1,0),1,c(0,1)) but the other
has an error message if there is a negative value, like
> df[df[,2]<0,3]<-color.scale(df[df[,2]<0,2],1,c(1,0),c(1,0))
Error in rgb(reds, greens, blues) :
Hello all,
I am looking for theories and statistical analyses where the defaults
employed in R and SAS are different. As a result, the outputs under
the defaults should (at least slightly) differ for the same input.
Could anyone kindly point any such instance?
Thanks
Nikhil
Dear all,
I have encountered a problem when developing an application: my linear
regression gives different results depending on the architecture.
The following example reproduces my problem:
xxx <- data.frame(a=c(0.2,0.2,0.2,0.2,0.2), b=c(7,8,9,10,11))
lm(a~b, xxx)
summary(lm(a~b, xxx))
Dear R users
When I use optimx() with BFGS, I get the following error message.
-
> optimx(par=theta0, fn=obj.fy, gr=gr.fy, method="BFGS")
Error: Gradient function might be wrong - check it!
See ?par and its "mar" argument.
> Could you tell me also how to change the size of the chart? There is not
> enough space below the chart to add the arrows!
Hi All,
Here is a short description of my problem.
>mydata ### my data.frame
  age height weight
   12     97     30
   14     95     32
   17    120     50
I used the following method from the RODBC package (ver. 1.3.1) to save it as
an Excel file:
sqlSave(channel, mydata, tablename="Shee
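A hedged sketch of the kind of call being described (the connection setup and
sheet name below are assumptions, not taken from the post):
library(RODBC)
channel <- odbcConnectExcel("mydata.xls", readOnly = FALSE)  # Excel ODBC driver (Windows)
sqlSave(channel, mydata, tablename = "Sheet1",
        rownames = FALSE,   # don't add a column of row names
        colnames = FALSE)   # the variable names still become the header row;
                            # colnames = TRUE would write them again as a data row
odbcClose(channel)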
On 29/08/2011 8:54 AM, Smart Guy wrote:
Hi All,
Here is a short description of my problem.
>mydata ### my data.frame
  age height weight
   12     97     30
   14     95     32
   17    120     50
I used the following method from the RODBC package (ver. 1.3.1) to save it as
an Excel file:
Hi!
On 08/24/2011 07:46 PM, David Winsemius wrote:
I was looking for an elegant solution ;) In the real case I have double
values, and this would be quite inefficient then.
Still no R code:
Then what about rank(order(...), further ties.method argument)?
I think that, as order() always giv
Hello-
Sorry to ask a basic question, but I've spent many hours on this now
and seem to be missing something.
I have a loop that looks like this:
mainmat = data.frame(matrix(data=0, ncol=92, nrow=length(predata$Words_MH)))
for (i in 1:length(predata$Words_MH)) {
  for (j in 1:92) {
    mai
Jim et al.:
This is the second time I've seen this "advice" recently. Use logical
indexing; which(), though not wrong, is superfluous:
x[!x %in% c(0, 255)] will do, rather than:
> If you want to remove the specific values 0 and 255 from your vector, try:
>
> x<-x[-which(x %in% c(0,255))]
>
> J
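A minimal illustration (toy vector, not the poster's 238-element x):
x <- c(0, 18, 24, 33, 44, 255)
x[!x %in% c(0, 255)]              # 18 24 33 44
## Note: x[-which(x %in% c(0, 255))] gives the same result here, but if
## nothing matches, which() returns integer(0) and x[-integer(0)] drops
## everything, whereas the logical form keeps the whole vector.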
If you are talking about weights that are the frequencies in each cell, you
can use xtabs():
df <- data.frame(Var1=c("Absent", "Present", "Absent", "Present"),
Var2=c("Absent", "Absent", "Present", "Present"), Freq=c(17, 6, 3, 12))
df
xtabs(Freq~Var1+Var2, data=df)
-
Henrik,
Your last suggestion did not work for me. It seems like it does not allow me
to create a ClassB object with 3 arguments:
> setConstructorS3("ClassA", function(A=15, x=NA) {
+ extend(Object(), "ClassA",
+.size = A,
+.x=x
+ )
+ })
> setConstructorS3("ClassB", function(..., bData=
On 29-Aug-11 11:44:28, Öhagen Patrik wrote:
> I did a simple little simulation of a binary variable in a two armed
> trial. I was quite surprised by the number of p-values delivered by the
> fisher.test function which was >1(!). Of course, under the null
> hypothesis you expect a fair number of out
On 29.08.2011 13:11, Paola Tellaroli wrote:
Thank you so much!!!
Could you tell me also how to change the size of the chart? There is not
enough space below the chart to add the arrows!
Please read the whole help page for ?par.
You will find how to increase the size of the margins (usin
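A hedged sketch of that kind of adjustment (the values below are arbitrary,
not taken from the thread):
op <- par(mar = c(8, 4, 4, 2) + 0.1)   # enlarge the bottom margin (default is 5 lines)
barplot(1:5)
arrows(x0 = 1, y0 = -0.8, x1 = 3, y1 = -0.8, xpd = NA)  # xpd = NA allows drawing in the margin
par(op)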
On 29.08.2011 13:54, AdamMarczak wrote:
Dear all,
I have encountered problem when developing application. My linear regression
does give different results depending on architecture.
Following example describes my problem perfectly.
xxx<- data.frame(a=c(0.2,0.2,0.2,0.2,0.2),b=c(7,8,9,10,
Correction: my solution didn't work either; it didn't return the correct
values. Can you post an example that takes three arguments? I'm working on
how to do this now.
Thanks... sorry, I'm new to R and R.oo.
Ben
On Mon, Aug 29, 2011 at 8:35 AM, Ben qant wrote:
> Henrik,
>
> Your last suggestion
Why am I getting
Error in integrate(f, x1, x1 + dx) :
  maximum number of subdivisions reached
and how can I avoid this?
func <- function(y, a, rate, sad){
  f3 <- function(z){
    f1 <- function(y, a, n){
      dpois(y, a*n)
    }
    f2 <- function(n, rate){
      dexp(n, rate)
    }
    f <- function(n){
      f1(y, a, n
Can't help, code runs fine on my machine once you change "valu" to "value."
Are you sure it fails in a vanilla run of R and isn't caused by any other
choices you have made along the way?
Michael
PS -- Here's the code
func <- function(y, a, rate, sad){
f3 <- function(z){
f1 <- functio
Hi everybody,
I'm interested in evaluating the effect of a continuous variable on the mean
and/or the variance of my response variable. I have built functions that make
these explicit and used the 'mle2' function to estimate the coefficients,
as follows:
func.1 <- function(m=62.9, c0=8.84, c1=-1.6)
Thank you, friend, for the suggestion.
Hi,
Thank you Duncan, you showed me how to assign a specific color to NA values in
the levelplot. However, I'm still not satisfied with the result of the code
you provided: in the data frame I provided in the first post, there's one
plant with level=0 (at x=8, y=1), and many other plants have level=1
Hi all,
I am trying to make a barplot in ggplot2 and want to make sure that the legend
order is consistent with the bar order, i.e. both the legend and the bars are
ordered as orig, then match. It seems to me that I can only control one of
them. Any ideas?
library(ggplot2)
df <
Thank you very much, friend.
Petr, Jorge, Daniel,
Yes you could also use outer() instead of expand.grid().
This is quite useful to know.
Also I didn't know you could turn a matrix into a vector by setting its
dimensions to NULL like that. I always used as.vector( m ).
And (as I've just discovered) you can use it to reconfi
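A quick illustration of the two equivalent routes just mentioned:
m <- matrix(1:6, nrow = 2)
v1 <- as.vector(m)
dim(m) <- NULL      # drops the dim attribute, leaving a plain vector
identical(v1, m)    # TRUE: both are 1:6 in column-major order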
In fact by fiddling with the "at" and "colorkey" options, I was able to get
the result I expected. Now the colors are assigned correctly, as well as the
colorkey. Here's my code:
# see data in the original post
data$level[is.na(data$level)] <- 10  # assign a value above the scale to NA values
levelplo
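The actual call was truncated above; a hedged sketch of how explicit "at"
breakpoints (which also drive the colorkey) can be combined with a color
vector (toy data, assumed breakpoints):
library(lattice)
d <- expand.grid(x = 1:8, y = 1:4)
d$level <- sample(c(0:4, NA), nrow(d), replace = TRUE)
d$level[is.na(d$level)] <- 10                  # sentinel value for NA, as in the post
levelplot(level ~ x * y, data = d,
          at = c(seq(-0.5, 4.5, by = 1), 11),  # explicit breakpoints; last bin catches the sentinel
          col.regions = c(terrain.colors(5), "grey80"))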
Thank you very much, friend.
Hi there
I'm trying to configure R to get access to the internet.
Internet Explorer uses a proxy .pac script.
Reading some older threads, I found that I can use the --internet2 option.
When choosing a mirror I get the error: "407 Proxy Authentication Required".
This seems reasonable
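A hedged aside: R cannot evaluate a .pac script itself, so one common
workaround (assuming you can read the proxy host and port out of the .pac
file; the values below are placeholders) is to set the proxy environment
variables, ideally before the first download or in .Renviron:
Sys.setenv(http_proxy      = "http://proxy.example.com:8080/",
           http_proxy_user = "ask")   # "ask" makes R prompt for user:password
chooseCRANmirror()
install.packages("zoo")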
Thank you very much, friend.
Hello R users
I'm using MuMIn, but for some reason I'm not getting the adjusted confidence
interval and unconditional SE when I use model.avg().
I took into consideration the steps provided by Grueber et al. (2011),
"Multimodel inference in ecology and evolution: challenges and solutions",
JEB.
I cre
On Aug 29, 2011, at 9:15 AM, Campbell, Desmond wrote:
Petr, Jorge, Daniel,
Yes you could also use outer() instead of expand.grid().
This is quite useful to know.
Also I didn't know you could turn a matrix into a vector by setting
its dimensions to NULL like that. I always used as.vector( m
On 29/08/2011 9:23 AM, behave wrote:
Hi there
I'm trying to configure R to get access to the internet.
Using the Internet Explorer a proxy .pac script is used.
Reading some older threads I found that I can use the --internet2 option.
When choosing a mirror I get the error: "407 Proxy Authentica
It doesn't help to post this twice, but it may help to know why this is of
interest.
Frank
n wrote:
>
> Hello all,
>
> I am looking for theories and statistical analyses where the defaults
> employed in R and SAS are different. As a result, the outputs under
> the defaults should (at least sligh
Simon,
Though we're pleased to see another use of bigmemory, it really isn't
clear that it is gaining you anything in your example; anything like
as.big.matrix(matrix(...)) still consumes full RAM for both the inner
matrix() and the new big.matrix -- is the filebacking really
necessary? It also do
Hi,
comments below.
On Mon, Aug 29, 2011 at 8:12 AM, Ben qant wrote:
> Correction: my solution didn't work either; it didn't return the correct
> values. Can you post an example that takes three arguments? I'm working on
> how to do this now.
> Thanks... sorry, I'm new to R and R.oo.
>
> Ben
>
Ooops,
sorry!
The problem occurs when
func(1:2,0.1,0.1,sad=Exp)
On Mon, Aug 29, 2011 at 12:27 PM, R. Michael Weylandt
wrote:
> Can't help, code runs fine on my machine once you change "valu" to "value."
> Are you sure it fails in a vanilla run of R and isn't caused by any other
> choices you h
You are somewhere in Circles 3 and 4 of
'The R Inferno'.
If you have a function to apply over more
than one argument, then 'mapply' will do
that.
But you don't need to do that -- you can do
the operation you want efficiently:
*) create your resulting matrix with all zeros,
no reason for this to
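As a small illustration of the mapply() point (toy function, not the poster's
loop body):
f <- function(i, j) i * 10 + j
mapply(f, i = rep(1:3, each = 2), j = rep(1:2, times = 3))
## 11 12 21 22 31 32 -- one call per (i, j) pair, no explicit nested loop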
On Aug 27, 2011, at 3:37 PM, Simon Zehnder wrote:
Dear R users,
I am using R right now for a simulation of a model that needs a lot of
memory. Therefore I use the *bigmemory* package and - to make it
faster -
the *doMC* package. See my code posted on http://pastebin.com/dFRGdNrG
Now, if I
Hi David,
Unfortunately I need to use the frequencies the sample "should have had" if it
corresponded perfectly, in terms of some "reference" variables, to the
population.
That is, if in my sample I observe V1_R1=10%, V1_R2=50%, V1_R3=40% while the
known population distribution is V1_R1=20%,
Do you mean things like treatment of categorical variables in regression
procedures (which have different defaults in different procedures in SAS),
and different default as to the reference category in logistic regression?
Jeremy
On 29 August 2011 04:46, n wrote:
> Hello all,
>
> I am looking
Hi:
The bars *are* ordered in the same way, but when you use coord_flip(),
the left category goes on top and the right category goes on the
bottom. Is this what you want?
ggplot(df, aes(x = name, y = value, fill = type)) +
geom_bar(position = position_dodge()) +
coord_flip() +
scale_fill_ma
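A hedged follow-up sketch: if the goal is for the legend to read in the same
top-to-bottom order as the flipped bars, reversing the legend is one option
(df, name, value and type are the poster's column names; geom_col is the
modern equivalent of geom_bar(stat = "identity")):
library(ggplot2)
ggplot(df, aes(x = name, y = value, fill = type)) +
  geom_col(position = position_dodge()) +
  coord_flip() +
  guides(fill = guide_legend(reverse = TRUE))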
The grconvertX and grconvertY functions may be helpful in finding the endpoints
to use.
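A hedged sketch of what grconvertX/grconvertY do: convert a position given as
a fraction of the device ("ndc") into user coordinates, so arrows() can be
aimed at points outside the plot region (toy plot, arbitrary positions):
plot(1:10)
x_user <- grconvertX(c(0.2, 0.8), from = "ndc", to = "user")
y_user <- grconvertY(0.05, from = "ndc", to = "user")
arrows(x_user[1], y_user, x_user[2], y_user, xpd = NA)  # xpd = NA permits drawing in the margins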
--
Gregory (Greg) L. Snow Ph.D.
On Aug 29, 2011, at 15:39 , Sebastian Bauer wrote:
>>
>> > rr <- data.frame(a = c(1,1,1,1,2), b=c(1,2,2,3,1))
>>
>> > ave(order(rr$a, rr$b), rr$a, rr$b )
>> [1] 1.0 2.5 2.5 4.0 5.0
>
> Actually, this may be a solution I was looking for! Note that it assumes
> that rr is already sorted (hen
If your main goal is to look at a data frame and you are ok with scrolling,
then look at the View function (note capitalization) as an alternative to just
printing the data frame.
--
Gregory (Greg) L. Snow Ph.D.
Hi:
integrate() is not a vectorized function. This appears to work:
sapply(1:2, function(x) func(x, 0.1, 0.1, sad = Exp))
[1] 0.250 0.125
In this case, sapply() is a disguised for loop.
HTH,
Dennis
On Mon, Aug 29, 2011 at 9:45 AM, . . wrote:
> Ooops,
>
> sorry!
>
> The problem occurs when
>
>
Hi,
when I have made a decision tree with rpart, is it possible to "apply"
this tree to a new set of data in order to find out the distribution
of observations? Ideally I would like to plot my original tree, with
the counts (at each node) of the new data.
Regards,
Jay
Hi, I am a beginner to R and was having some problems scraping data from
tables in HTML using the XML package. I have included some code below.
I am trying to loop through a series of HTML pages, each of which contains a
single table from which I want to scrape data. However, some of the pages
are blank
Dear All
Sorry for this simple question; I could not solve it after spending days on it.
My data looks like this:
# data
set.seed(1234)
clvar <- c(rep(1, 10), rep(2, 10), rep(3, 10), rep(4, 10)) # in my real data this factor has 100 levels
yvar <- rnorm(40, 10, 6)
var1 <- rnorm(40, 10, 4); var2 <- rnorm(40,
Thanks!
"This problem isn't uniquely defined. Are you willing to generate more samples
than you need and then throw away extreme values? Or do you want to 'censor'
extreme values (i.e. set values <= 1 to 1 and values >=7 to 7)?"
I'd like to retain a normal distribution, so I wouldn't want to
?tryCatch
HTH,
Dennis
On Mon, Aug 29, 2011 at 9:04 AM, s1oliver wrote:
> Hi, beginner to R and was having some problems scraping data from tables in
> html using the XML package. I have included some code below.
>
> I am trying to loop through a series of html pages, each of which contains a
> s
Hello All,
I have a data frame consisting of 4 columns (id1, id2, y, pred),
where pred is the predicted value based on the glm function, and my data frame
is called "all". "data" is another data frame that has all the data, but I
want to put together some important columns from my original data frame
? predict.rpart
Weidong Gu
On Mon, Aug 29, 2011 at 12:49 PM, Jay wrote:
> Hi,
>
> when I have made a decision tree with rpart, is it possible to "apply"
> this tree to a new set of data in order to find out the distribution
> of observations? Ideally I would like to plot my original tree, with
>
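A hedged sketch of what ?predict.rpart offers (the fit and the "new" data
below are stand-ins, not the poster's):
library(rpart)
fit  <- rpart(Species ~ ., data = iris)               # stand-in classification tree
newd <- iris[sample(nrow(iris), 30), ]                 # stand-in new data
pred <- predict(fit, newdata = newd, type = "class")   # predicted class per new row
table(pred)                                            # distribution of the new observations
## plot(fit); text(fit, use.n = TRUE) shows per-node counts, but only for the training data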
On Aug 29, 2011, at 2:40 PM, Andra Isan wrote:
Hello All,
I have a data frame consisting of 4 columns (id1, id2, y, pred)
where pred is the predicted value based on the glm function and my
data frame is called "all". "data" is another data frame that has
all data but I want to put together
Dear all,
I'm forecasting health services utilization using the Lee-Carter method.
I have a routine to run the LC method in R, and I understand all the steps
to model and forecast the rates by this method, except two things:
1) how to adjust the estimated admission rates by the total number of
adm
Why doesn't this work?
x = zoo(1:5, as.Date('2001-01-01')+1:5)
x[as.Date('2001-01-05')]
x[as.Date('2001-01-05')] = 0
x
I think this is especially bad because it doesn't cause an error. It lets
you do something to x, but then you can't see x again to see what it did.
Billy.Requena gmail.com> writes:
>
> Hi everybody,
>
> I'm interested in evaluating the effect of a continuous variable on the mean
> and/or the variance of my response variable. I have built functions
> expliciting these and used the 'mle2' function to estimate the coefficients,
> as follows:
How exactly do you mean it doesn't work? Copied from my GUI:
> x = zoo(1:5, as.Date('2001-01-01')+1:5)
> x[as.Date('2001-01-05')]
2001-01-05
4
> x[as.Date('2001-01-05')] = 0
> x
2001-01-02 2001-01-03 2001-01-04 2001-01-05 2001-01-06
1 2 3 0 5
On Aug 29, 2011, at 2:45 PM, Gene Leynes wrote:
Why doesn't this work?
x = zoo(1:5, as.Date('2001-01-01')+1:5)
x[as.Date('2001-01-05')]
x[as.Date('2001-01-05')] = 0
x
I think this is especially bad because it doesn't cause an error.
It lets
you do something to x, but then you can't see x
Hello everyone,
I am working on a public health project and we have created a decision tree for
categorical variables using the rpart package. Our goal is to develop a model
(using the ROC tool) in order to predict presence/absence of diabetes and get a
better understanding of what are the import
Hi:
This is straightforward to do with the plyr package:
# install.packages('plyr')
library('plyr')
set.seed(1234)
df <- data.frame(clvar = rep(1:4, each = 10), yvar = rnorm(40, 10, 6),
var1 = rnorm(40, 10, 4), var2 = rnorm(40, 10, 4),
var3 = rnorm(40, 5, 2), var
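The rest of that example was truncated; a hedged sketch of the per-group step
it is presumably building up to (the model formula is a guess, not taken from
the post):
library(plyr)
## fit a regression within each level of clvar, then collect the coefficients
fits  <- dlply(df, .(clvar), function(d) lm(yvar ~ var1 + var2 + var3, data = d))
coefs <- ldply(fits, coef)
coefs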
Michael Parent ufl.edu> writes:
>
> Thanks!
>
> "This problem isn't uniquely defined. Are you
> willing to generate more samples than you need and then throw
> away extreme values? Or do you want to 'censor'
> extreme values (i.e. set values <= 1 to 1 and values >=7 to 7)?"
>
> I'd like the
You can do this using function lmList() from package nlme, without
having to split the data frames, e.g.,
library(nlme)
mlis <- lmList(yvar ~ . - clvar | clvar, data = df)
mlis
summary(mlis)
I hope it helps.
Best,
Dimitris
On 8/29/2011 5:37 PM, Nilaya Sharma wrote:
Dear All
Sorry for th
Although I'm not sure what you're talking about with pop-up windows...
Weird, this is what I'm getting in either R 2.13.0 or R 2.12.0:
> library(zoo)
Warning: package 'zoo' was built under R version 2.13.1
> x = zoo(1:5, as.Date('2001-01-01')+1:5)
> x[as.Date('2001-01-05')]
2001-01-05
4
On Aug 29, 2011, at 3:02 PM, Gene Leynes wrote:
Although I'm not sure what you're talking about with pop-up windows...
I got (as expected) assignment, so I assumed you were not expecting
assignment.
Weird, this is what I'm getting in either R 2.13.0 or R 2.12.0:
> library(zoo)
Warning
Hi:
Dimitris' solution is appropriate, but it needs to be mentioned that
the approach I offered earlier in this thread differs from the
lmList() approach. lmList() uses a pooled measure of error MSE (which
you can see at the bottom of the output from summary(mlis) ), whereas
the plyr approach subd
I would recommend using the new Bayesian package 'LaplacesDemon' available
on CRAN.
Ben Bolker wrote (Re: [R] Bayesian functions for mle2 object):
Billy.Requena gmail.com> writes:
>
> Hi everybody,
>
> I'm interested
Hi R-users,
I have a data frame test with columns such as test$newdataday24 and test$newdataday48.
I can plot one of them with
plot(test$newdataday24)
but now I want to plot different data by defining a variable to describe them:
dayno <- c(24, 48)
newnam <- paste("test$newdataday", dayno, sep="")
plot(newnam[1])
but I failed,
Hmm, I don't know what this means as trouble shooting, but I get the
following:
1) After library(zoo)
Attaching package: 'zoo'
The following object(s) are masked from 'package:base':
as.Date
and then for the first str(x)
zoo series from 2001-01-02 to 2001-01-06
Data: int [1:5] 1 2 3 4
try:
newnam <- paste('newdataday', dayno, sep='')
plot(test[[newnam[1]]])
On Mon, Aug 29, 2011 at 12:29 PM, Jie TANG wrote:
> hi, R-users
> I have a data.frame for example test$newdataday24 and test$newdataday48
> I can plot them by
> plot(test$newdataday24)
> but now i want to plot different
well, if a pooled estimate of the residual standard error is not
desirable, then you just need to set argument 'pool' of lmList() to
FALSE, e.g.,
mlis <- lmList(yvar ~ . - clvar | clvar, data = df, pool = FALSE)
summary(mlis)
Best,
Dimitris
On 8/29/2011 9:20 PM, Dennis Murphy wrote:
Hi:
Thank you, it works.
Another problem: can I define a variable to refer to the data frame itself?
For example:
datanam <- c("newdata","newdata2")
plot(datanam[1][[newnam[1]]])
2011/8/30 Justin Haynes
> try:
>
> newnam<-paste('newdatadat',dayno,sep='')
>
> plot(test[[newnam[1]]])
>
>
> On Mon,
On 29/08/2011 3:52 PM, Jie TANG wrote:
Thank you, it works.
Another problem: can I define a variable to refer to the data frame itself?
For example:
datanam <- c("newdata","newdata2")
plot(datanam[1][[newnam[1]]])
Use get():
plot(get(datanam[1])[[newnam[1]]])
Duncan Murdoch
2011/8/30 J
On Mon, Aug 29, 2011 at 2:45 PM, Gene Leynes wrote:
> Why doesn't this work?
>
> x = zoo(1:5, as.Date('2001-01-01')+1:5)
> x[as.Date('2001-01-05')]
> x[as.Date('2001-01-05')] = 0
> x
>
Make sure you have the most recent version of zoo which is this:
> packageVersion("zoo")
[1] ‘1.7.4’
I tried that, but I find the documentation a bit short, and the only
result I get from this is a probability distribution of my data (I'm
building a tree with 2 classes). How do I plot a tree where the counts
are shown in each step/node?
BR,
Jay
On Aug 29, 9:40 pm, Weidong Gu wrote:
> ? predict
This seems like a very strange error.
In trying to troubleshoot this further I looked at the structure of x. The
new x has the length of the Index (2001-01-05 = 11327).
> library(zoo)
> x = zoo(1:5, as.Date('2001-01-01')+1:5)
> str(x)
zoo series from 2001-01-02 to 2001-01-06
Data: int [1:5]
Michael,
By the way, although I replied to David's email, I was responding to you as
well. Your results were exactly what I was expecting, but I didn't get your
results.
On Mon, Aug 29, 2011 at 1:51 PM, R. Michael Weylandt <
michael.weyla...@gmail.com> wrote:
> How exactly do you mean it doesn
While R has library TSP to help solve traveling salesperson problems, does
anyone know if it has any libraries to help solve multiple traveling
salesperson problems? For instance, suppose one is planning school bus
routes and one has multiple buses. Thank you for your time.
Hi,
I have to deal with a huge .txt table (~485,577 rows and 469 columns, a > 1.5
GB file).
I used the read.table function:
> tmp = read.table("data.txt", header=TRUE, sep="\t", fill=TRUE,
    na.strings="NA", comment.char="", stringsAsFactors = FALSE)
However, I encounter troubles in interpreting some "\t"
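A hedged aside (the actual cause is unknown since the message is truncated):
two settings that often matter when tab-delimited files of this size are
misread are quote handling and per-column types, e.g.
tmp <- read.table("data.txt", header = TRUE, sep = "\t", fill = TRUE,
                  quote = "",               # stray quote characters otherwise merge rows
                  comment.char = "",
                  colClasses = "character",  # assumed; declaring types also speeds reading
                  stringsAsFactors = FALSE)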
Greetings,
I am having trouble getting the function reformulate_ATSP_as_TSP to work for
me. I have provided a simple example of some of the code I've been using.
In particular, I'm not sure why I'm getting the error
"Error in dimnames(tsp) <- list(lab, lab) :
length of 'dimnames' [1] not e
Dear Group,
Has anyone conducted CR analysis using survey data i.e. with sampling
weights. I've already looked up CRAN task view in survival analysis. Any
lead is appreciated. Thanks!
Kel
--
View this message in context:
http://r.789695.n4.nabble.com/Competing-Risk-with-survey-data-tp3777189
On 29/08/2011 3:29 PM, Imbeaud (Inserm U674) wrote:
Hi,
I have to deal with a huge .txt table (~485,577 rows and 469 columns, a > 1.5
GB file)
I used the read.table function
> tmp=read.table("data.txt", header=TRUE, sep="\t", fill=TRUE,
na.strings="NA", comment.char="", stringsAsFactors = FALSE)