On Jun 15, 2010, at 1:12 AM, James McCreight wrote:
Hi R'ers-
I'll believe this can be done and that I'm just not clever enough.
I'm trying to do this in 2D, but I think we have the same problem in
1D.
#Assume you have some 1D time series
d<-rnorm(100)
#Say you want to get the average ov
On Jun 14, 2010, at 11:46 PM, david hilton shanabrook wrote:
Basically I need to create a sliding window in a string. A way to
explain this is:
v <- c("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y")
window <- 5
shift <- 2
Hi R'ers-
I'll believe this can be done and that I'm just not clever enough.
I'm trying to do this in 2D, but I think we have the same problem in 1D.
#Assume you have some 1D time series
d<-rnorm(100)
#Say you want to get the average over intervals of different lengths
#like from time 10 to tim
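A hedged sketch of one way to get means over several intervals of a series like d (the interval end points below are made up, since the original question is cut off here):
d <- rnorm(100)
starts <- c(10, 25, 60)          # hypothetical interval starts
ends   <- c(20, 50, 100)         # hypothetical interval ends
mapply(function(s, e) mean(d[s:e]), starts, ends)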
Tena koe David
Something like:
matrix(v[1:window + rep(seq(0, (length(v)-window), shift), each=window)], ncol=window,
byrow=TRUE)
should work (I haven't tested it fully). Note it gives a different
answer to your m since I think the last line of your m is incorrect.
HTH
Peter Alspach
> -Original M
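A minimal sketch of the same sliding-window idea, using the window and shift values from the question (an illustration, not Peter's exact code):
v <- c("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q",
       "r","s","t","u","v","w","x","y")
window <- 5
shift  <- 2
starts <- seq(1, length(v) - window + 1, by = shift)   # first index of each window
matrix(v[rep(starts, each = window) + 0:(window - 1)], ncol = window, byrow = TRUE)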
On Jun 14, 2010, at 10:06 PM, skan wrote:
Hello
Can someone explain to me the difference between aggregate and merge,
please?
I've read the help on both commands but I don't understand the
difference.
Merge adds data from one dataframe to another based on a matching
process. Aggregate,
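A hedged toy illustration of that distinction (the data frames below are made up, not from the thread): merge() joins two data frames on a key, while aggregate() collapses one data frame by a grouping variable.
df1 <- data.frame(id = 1:3, x = c(10, 20, 30))
df2 <- data.frame(id = c(1, 2, 2, 3), y = c(5, 6, 7, 8))
merge(df1, df2, by = "id")                  # x matched onto every row of df2 with that id
aggregate(y ~ id, data = df2, FUN = mean)   # one mean of y per id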
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of James Rome
> Sent: Monday, June 14, 2010 2:27 PM
> To: Don MacQueen
> Cc: r-help@r-project.org
> Subject: Re: [R] Subtracting POSIXct data/times
>
> That fixed it. Dumb me. I had
Basically I need to create a sliding window in a string. A way to explain this
is:
> v <-
> c("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y")
> window <- 5
> shift <- 2
I want a matrix of characters with "window" columns filled with "v" by f
Maybe he wants to compile it to an exe file in order to make it faster.
True, but all he said was that he wanted to auto-launch his program by
double-clicking it.
I don't know of any ways to speed up R other than to write the slower
functions in C and then call them in your R programs. But I'm not
sure that's what he had in mind.
On Mon, Jun 14, 2010 at 9:19 PM,
Hello
Can someone explain to me the difference between aggregate and merge, please?
I've read the help on both commands but I don't understand the difference.
thanks
On 2010-06-14 13:25, Moimt wrote:
Hi,
I am a new user of R and want to analyse some data using npmc. My data have
several levels of factor (Site, Year and Season) and several variables
(Percentages).
I have tried to use npmc but I always get an error message. My data are in a
table following thi
On Jun 14, 2010, at 6:48 PM, Yesha Patel wrote:
Hi all,
I want to get results for cox proportional hazards on SNP data. I'm
trying
to get HRs, CI's, & p-values for each individual SNP - this is
represented
by cov[,i]. When I run my code, I get the following error: subscript
out of
bound
On Mon, Jun 14, 2010 at 6:13 PM, steven mosher wrote:
> The zoo package has a merge function which merges a set of zoo objects
> result<-merge(zoo1,zoo2,...)
>
> Assume your zoo objects are already collected in a list
>
> # make a phony list to illustrate the situation. ( hat tip to david W for
> c
That fixed it. Dumb me. I had assumed that the subtraction of the raw
POSIXcts would always give the same results.
Thanks,
Jim
On 6/14/10 5:22 PM, Don MacQueen wrote:
> See the help page for the difftime() function, which will tell you how
> to specify the units of the differences.
> (when you do
You need to learn how to debug your program. When the error occurs,
look at the value of the indices in the offending statement -- you
will find that the problem is subscript out of bounds. Put the
following statement in your code:
options(error=utils::recover)
and then do ?browser to learn ho
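A minimal sketch of that workflow (the function and index below are made up for illustration):
options(error = utils::recover)
f <- function(i) { m <- matrix(1:4, nrow = 2); m[i, 1] }
# f(5)  # stops with "subscript out of bounds"; recover() then lists the call
#       # frames, and ?browser documents the commands available inside them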
On Jun 14, 2010, at 6:46 PM, David Winsemius wrote:
On Jun 14, 2010, at 6:19 PM, Subodh Acharya wrote:
Hi everyone,
This might be a very petty thing, but it's not working for me.
I want to export an output to a txt file but without indexing.
Unfortunately we cannot see the structure of "outp
Hi all,
I want to get results for cox proportional hazards on SNP data. I'm trying
to get HRs, CI's, & p-values for each individual SNP - this is represented
by cov[,i]. When I run my code, I get the following error: subscript out of
bounds. I don't know why I am getting this error.
I have looked
On Jun 14, 2010, at 6:19 PM, Subodh Acharya wrote:
Hi everyone,
This might be a very petty thing, but it's not working for me.
I want to export an output to a txt file but without indexing.
Unfortunately we cannot see the structure of "output" but from the
result you are getting it looks like
On Mon, 14 Jun 2010, array chip wrote:
Thanks Charles for the reproducible code. I started this question
because I was asked to take a look at such a dataset, but I have doubts about
whether it's meaningful to do a LR with 50 variables. I haven't got the dataset
yet, thus have not tried any code. But again
Hi everyone,
This might be a very petty thing, but it's not working for me.
I want to export an output to a txt file but without indexing.
Here is what I have tried to do
outfile<- function(Time, var.names, output) {
var.names = c(names(para))
for(i in 1: ncol(output)){
cat(length(var.names)
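The cat()-based function above is cut off in the archive; as a hedged alternative sketch (with stand-in data, not the poster's output object), write.table() writes a table without the leading index column:
output <- data.frame(Time = 1:3, a = rnorm(3), b = rnorm(3))   # stand-in data
write.table(output, file = "out.txt", sep = "\t",
            row.names = FALSE, quote = FALSE)   # no row indices in the file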
The zoo package has a merge function which merges a set of zoo objects
result<-merge(zoo1,zoo2,...)
Assume your zoo objects are already collected in a list
# make a phony list to illustrate the situation. ( hat tip to david W for
constructing a list in a loop)
ddat <- as.list(rep("", 20))
ytd<-s
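A hedged sketch of the same idea with toy data: when the zoo objects already sit in a list, do.call() hands them all to merge() at once.
library(zoo)
z1 <- zoo(1:5,  as.Date("2010-06-01") + 0:4)
z2 <- zoo(6:10, as.Date("2010-06-03") + 0:4)
zlist <- list(z1, z2)
result <- do.call(merge, zlist)   # one multivariate zoo object on the union of the indices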
Thanks very much, Duncan. I understand this better now. It takes a bit
of getting used to but the prospect of some day getting graphic elements
to help should make it worthwhile. (And I guess it saves on disk space
by only generating the help pages as needed.)
Cheers, Murray
On 15/06/2010 4:
On Jun 14, 2010, at 4:45 PM, Douglas M. Hultstrand wrote:
Hello,
I am currently splitting a file into individual files (time series each
separated into one file), the file I read in skips the first four
lines and extracts the data columns I need. I was wondering if
there is a way for R to
See the help page for the difftime() function, which will tell you
how to specify the units of the differences.
(when you don't specify, it chooses the units according to some rules)
-Don
At 4:24 PM -0400 6/14/10, James Rome wrote:
I have two dataframe columns of POSIXct date/times that includ
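A minimal sketch of the advice above, with made-up timestamps: asking difftime() for the units explicitly avoids the automatic choice made by plain subtraction.
eta <- as.POSIXct("6/14/2010 14:30:05", format = "%m/%d/%Y %H:%M:%S")
etd <- as.POSIXct("6/14/2010 14:28:00", format = "%m/%d/%Y %H:%M:%S")
difftime(eta, etd, units = "secs")   # always reported in seconds
difftime(eta, etd, units = "mins")   # always reported in minutes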
HI,
I am a new user of R and want to analyse some data using npmc. My data have
several levels of factor (Site, Year and Season) and several variables
(Percentages).
I have tried to use npmc but I always get an error message. My data are in a
table following this example:
Site    Year    Season
Hello,
I have an ncdf file with different variables for dimensions and dates
where the dimensions are as follows, where X=i and Y=j creating an 88
by 188 set of cells. For each cell there are 12 readings for DO taken
at 2 hour intervals and recorded according to the Julian calendar
under
Josef,
I think all you need to do is use the transpose of your data matrix. So if
your dataset is called mydata:
barplot(t(as.matrix(mydata)), beside=TRUE)
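A hedged illustration of that call, using the first rows of the matrix quoted later in this digest (the month and column names are assumptions):
x <- matrix(c(8.258754, 13.300710,
              10.180953, 10.760465,
              11.012184, 13.954887),
            ncol = 2, byrow = TRUE,
            dimnames = list(month.abb[1:3], c("baseflow", "runoff")))
barplot(t(as.matrix(x)), beside = TRUE, legend.text = TRUE)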
Hello,
I am currently splitting a file into individual files (time series each
separated into one file), the file I read in skips the first four lines
and extracts the data columns I need. I was wondering if there is a way
for R to automatically scan and separate the files based on the head
inf
josef.kar...@phila.gov wrote:
> I have a matrix with 12 rows (one for each month), 2 columns (baseflow,
> runoff). I would like to make a barplot similar to Excel's "clustered
> column chart".
> Here is my matrix 'x'
>
> 8.258754  13.300710
> 10.180953 10.760465
> 11.012
Ottar Kvindesland wrote:
Hi,
I am collecting replies from a survey and counting replies with the table()
function. The function below carries two
data frames and counts the observations of the findings in the first
parameter vector given the value of the second as shown in the code below.
My trou
Ottar Kvindesland wrote:
> Hi,
>
> I am collecting replies from a survey and counting replies with the table()
> function. The function below carries two
> data frames and counts the observations of the findings in the first
> parameter vector given the value of the second as shown in the code below.
I have two dataframe columns of POSIXct date/times that include seconds.
I got them into this format using for example
zsort$ETA <- as.POSIXct(as.character(zsort$ETA), format="%m/%d/%Y %H:%M:%S")
My problem is that when I subtract the two columns, sometimes the
difference is given in seconds, and
I have a matrix with 12 rows (one for each month), 2 columns (baseflow,
runoff). I would like to make a barplot similar to Excel's "clustered
column chart".
Here is my matrix 'x'
8.258754  13.300710
10.180953 10.760465
11.012184 13.954887
10.910870 13.839839
Hi,
I am collecting replies from a survey and counting replies with the table()
function. The function below carries two
data frames and counts the observations of the findings in the first
parameter vector given the value of the second as shown in the code below.
My trouble is that the vector kp_vec
You can normally get through the firewall by using
the internet2 option.
Use ??internet for the exact function name. I am not at my computer now so I
can't check for you.
Rich
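For what it's worth, the function being alluded to on Windows builds of R of that era is setInternet2(); treat this as a hedged sketch rather than a checked answer:
setInternet2(TRUE)   # Windows only: use the Internet Explorer internals, so the
                     # system proxy/firewall settings are picked up
# R can also be started with the --internet2 command-line flag for the same effect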
Was wondering if anyone has any experience installing the RExcel package
by hand. I think I have all the files needed, but our firewall here
prevents RExcelInstaller from going through the internet to get them
like it "wants" to do, and it just gives up. Any ideas? Thanks.
--Sam
Thanks Charles for the reproducible code. I started this question because I
was asked to take a look at such a dataset, but I have doubts about whether it's
meaningful to do a LR with 50 variables. I haven't got the dataset yet, thus have not
tried any code. But again for sharing some simulation code.
have
On Mon, 14 Jun 2010, Patrick Burns wrote:
On 14/06/2010 17:50, jim holtman wrote:
load('adresse/filename.R')
Or:
attach('adresse/filename.R')
The difference between 'load' and 'attach'
is that 'load' puts the contents of the file
into your workspace (global environment, first
Not necessar
In Python, it is literally this easy:
import rpy2.robjects as robjects
robjects.r("""
source("C:/YOUR R FILE GOES HERE ")
""")
Type the name of your R source code into this script and save it as a Python
script (add the suffix .py), and then you can run it by double-clicking. If
Try this:
transform(x, DELTA = NULL, value = rev(c(5, 5 - cumsum(rev(DELTA[-1])
On Mon, Jun 14, 2010 at 12:29 PM, n.via...@libero.it wrote:
>
> Dear list,
> I have the following problem. What I'm trying to do is to build a function
> which does the following calculation in a recursive way:
On 14/06/2010 17:50, jim holtman wrote:
load('adresse/filename.R')
Or:
attach('adresse/filename.R')
The difference between 'load' and 'attach'
is that 'load' puts the contents of the file
into your workspace (global environment, first
location on the search list), while 'attach'
creates a new
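A minimal sketch of that difference:
x <- 1:10
save(x, file = "mydata.RData")
rm(x)
load("mydata.RData")      # x reappears directly in the global environment
attach("mydata.RData")    # instead adds the file as a new entry on the search path
search()                  # shows an entry like "file:mydata.RData"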
Hi!
Do you mean something like this (df is your original data frame):
--- cut here ---
df1<-df
df1[[1]]<-paste("R",df[[1]],sep="_")
colnames(df1)<-c("SERIES","YEAR","value")
df1$value[ df1$YEAR==2009 ]<-5
for (i in c(2009:2007)) { df1$value[ df1$YEAR==(i-1) ]<-( df1$value[
df1$YEAR==i ]-df$DELTA
On Mon, Jun 14, 2010 at 7:16 AM, Red Roo wrote:
> Looking for a recommended package that handles prime number computations.
>
The gmp package (http://crantastic.org/packages/gmp) has some good tools for
prime numbers. I've used the is.prime function before; it's stochastic (in
the sense that it
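A hedged sketch using the gmp package mentioned above (the function is spelled isprime() there):
library(gmp)
isprime(101)        # 2 = provably prime, 1 = probably prime, 0 = composite
factorize(123456)   # prime factorisation returned as big integers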
Thanks a lot!
Huapeng
-Original Message-
From: foolish.andr...@gmail.com [mailto:foolish.andr...@gmail.com] On Behalf Of
Felix Andrews
Sent: Friday, June 11, 2010 8:23 PM
To: Chen, Huapeng FOR:EX
Cc: r-help@r-project.org
Subject: Re: [R] Overlay of barchart and xyplot
Hi,
I have an exa
On Jun 14, 2010, at 1:10 PM, David Winsemius wrote:
On Jun 14, 2010, at 12:32 PM, Assa Yeroslaviz wrote:
I thought unique deleted the whole line.
I don't really need the row names, but I thought of it as a way of
getting
the unique items.
Is there a way of deleting whole lines completely
On Jun 14, 2010, at 12:32 PM, Assa Yeroslaviz wrote:
I thought unique deleted the whole line.
I don't really need the row names, but I thought of it as a way of
getting
the unique items.
Is there a way of deleting whole lines completely according to their
identifiers?
What I really need are
No.
Binary "workspace" data are saved by default with the .Rdata extension and
are "opened" (actually have their contents added to the current workspace)
by load().
.R files are text files and would need to be sourced:
source('adresse/filename.R')
Bert Gunter
Genentech Nonclinical Biostatistics
load('adresse/filename.R')
On Mon, Jun 14, 2010 at 12:41 PM, wrote:
> Hi all
> I saved the result of my code as a file, like
>> save(namefunction,file="adresse/filename.R").
> I want to open the file. Could you please tell me how I can open the
> file and see the result?
>
> best
> Kha
Hi all
I saved the result of my code as a file, like
> save(namefunction,file="adresse/filename.R").
I want to open the file. Could you please tell me how I can open the
file and see the result?
best
Khazaei
Another possibility:
rowSums(table(x) > 0)
On Sun, Jun 13, 2010 at 3:08 PM, Erik Iverson wrote:
> I think ?tapply will help here. But *please* read the posting guide and
> provide minimal, reproducible examples!
>
>
> Birdnerd wrote:
>
>> I have a data frame with two factors (sampling 'unit'
If you want to keep only the rows that are unique in the first column
then do the following:
workComb1 <- subset(workComb, !duplicated(ProbeID))
On Mon, Jun 14, 2010 at 11:20 AM, Assa Yeroslaviz wrote:
> well, the problem is basically elsewhere. I have a data frame with
> expression data and dou
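A self-contained toy version of that subset() line (ProbeID and the values are made up):
workComb <- data.frame(ProbeID = c("p1", "p1", "p2"),
                       expr    = c(1.2, 1.3, 0.7))
workComb1 <- subset(workComb, !duplicated(ProbeID))   # keeps the first row per ProbeID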
I thought unique deleted the whole line.
I don't really need the row names, but I thought of it as a way of getting
the unique items.
Is there a way of deleting whole lines completely according to their
identifiers?
What I really need are unique values on the first column.
Assa
On Mon, Jun 14, 2
First start by putting it in a function so you can specify the
parameters you want to change.
On Mon, Jun 14, 2010 at 11:54 AM, wrote:
>
> Hello,
>
> I'd like to automate this script a bit more and cycle several
> parameters(both the species and the metric). For example where AnnualDepth
> occu
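A hedged sketch of that advice; every name below is hypothetical, since the original script is not shown:
summarise_metric <- function(metric, species, data) {
  sub <- data[data$Species == species, ]           # hypothetical column name
  mean(sub[[metric]], na.rm = TRUE)
}
# metrics <- c("AnnualDepth", "MeanFlow")           # the ~12 metrics to cycle over
# sapply(metrics, summarise_metric, species = "SpeciesA", data = mydata)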
Hi Silvano,
Silvano wrote:
Hi,
I'm using Sweave to prepare a descriptive report.
There are at least 20 tables built with the xtable command, of this kind:
<<>>=
q5 = factor(Q5, label=c("Não", "Sim"))
(q5.tab = cbind(table(q5)))
@
<<>>=
xtable(q5.tab, align="l|c", caption.placement = "top", table.placement='H')
@
Hi Dennis,
Thanks for this suggestion (which I got to run!), as this code makes
intuitive sense, whereas not all the other suggestions were that
straightforward. I'm relatively new to programming in R and am very
appreciative that you and others take time to help out where you can.
Sincerely,
Sa
Hello,
I'd like to automate this script a bit more and cycle several
parameters (both the species and the metric). For example where AnnualDepth
occurs, I need to process about 12 metrics so instead of writing this
entire script 12 times once for each metric I'd like to be able to
automatically
On Jun 14, 2010, at 8:09 AM, Silvano wrote:
> Hi,
>
> I'm using Sweave to prepare a descriptive report.
> There are at least 20 tables built with the xtable command, of this kind:
>
> <<>>=
> q5 = factor(Q5, label=c("Não", "Sim"))
> (q5.tab = cbind(table(q5)))
> @
>
> <<>>=
> xtable(q5.tab, align="l|c", caption.pl
Murray Jorgensen wrote:
I have just installed R 2.11.1 on my XP laptop.
I like html help for browsing but text help for on-the-fly look-ups. I
was a bit surprised when I was asked to choose between them during the
installation. I chose text, thinking I could fix the html help later,
which is
I write about R every weekday at the Revolutions blog:
http://blog.revolutionanalytics.com
and every month I post a summary of articles from the previous month
of particular interest to readers of r-help.
http://bit.ly/dn7DgR linked to 13 videos for learning R, from the
basics ("What is R?") to m
I don't think that I would use a barplot as the base, but rather just set up
the graph and add the lines where I wanted them. I still don't understand what
you want your graph to look like, or what question you are trying to answer
with it (part may be a language barrier). If you can give us a
On Mon, 14 Jun 2010, Joris Meys wrote:
Hi,
Marc's explanation is valid to a certain extent, but I don't agree with
his conclusion. I'd like to point out "the curse of
dimensionality" (Hughes effect), which starts to play rather quickly.
Ahem!
... minimal, self-contained, reproducible code ...
Your process does remove all the duplicate entries based on the
content of the two columns. After you do this, there are still
duplicate entries in the first column that you are trying to use as
rownames and therefore the error. Why do you want to use non-unique
entries as rownames? Do you reall
> Looking for a recommended package that handles prime number computations.
I'm not sure whether this would be helpful to you, but Sage
(http://www.sagemath.org) has excellent number theory support and
several ways to interface with R (which is included in the
distribution of Sage). I use it myse
Hello Enrico,
One thing I notice between your two calls is that in the second you
specify data=dados, but you do not in the first. When I try to do
something similar to your formulae using one of my longitudinal
datasets, I get the same results whether or not I put the formula for
random in a lis
Dear list,
I have the following problem. What I'm trying to do is to build a function
which does the following calculation in a recursive way:
I have a data frame more or less like this:
variable  year  DELTA
EC01      2006  /
EC01      2007  10
Hi R users,
I am estimating a multilevel model using lmer. My dataset has missing
values and I am using the MICE package to make multiple imputations.
Everything works well until I reach the pooling stage using the pool()
function. I am able to get a summary of the pooled fixed effects but not the
Hi,
First of all, thank you for your reply. It was very helpful.
I have another problem: I have changed the locale to pt_pt.iso885...@euro. Now
the problem that I reported earlier doesn't occur.
print("dúvida")
[1] "dúvida"
My system information now is the following:
Sys.getl
Hi,
I am doing a longitudinal data set fit using lme.
I used two forms of the lme command and I am
getting two different outputs.
FIRST
out<-lme(Altura~Idade+Idade2+sexo+status+Idade:sexo+Idade:status+Idade2:sexo+Idade2:status,
random=(list(ident=~Idade+Idade2)))
SECOND
out<-lme(Altura~Idade+Ida
Hi,
I'm using Sweave to prepare a descriptive report.
There are at least 20 tables built with the xtable command, of this kind:
<<>>=
q5 = factor(Q5, label=c("Não", "Sim"))
(q5.tab = cbind(table(q5)))
@
<<>>=
xtable(q5.tab, align="l|c", caption.placement = "top",
table.placement='H')
@
I'm getting the following
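The reply is cut off above; a hedged guess at the usual fix is that caption.placement and table.placement are arguments of print.xtable(), not of xtable(), so they would go in a print() call:
library(xtable)
q5.tab <- cbind(table(factor(c("Não", "Sim", "Sim"))))   # stand-in for the real q5
print(xtable(q5.tab, align = "l|c"),
      caption.placement = "top", table.placement = "H")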
Thank you, with the matrix for the responses (here my 101 timepoints), it takes
less than 30 minutes for 1000 permutations, whereas before it took 2h30!
Best regards,
Mélissa
> Message of 10/06/10 18:52
> From: "Douglas Bates"
> To: "melissa"
> Cc: r-help@r-project.org
> Subject: Re
On Mon, 2010-06-14 at 07:16 -0700, Red Roo wrote:
> Looking for a recommended package that handles prime number computations.
>
> Tried the following unsuccessfully:
> primeFactors() in the R.basic package failed to install.
>
> primes() and primlist are broken in Schoolmath pkg on CRAN.
> My ana
Joris,
There are two separate issues here:
1. Can you consider an LR model with 50 covariates?
2. Should you have 50 covariates in your LR model?
The answer to 1 is certainly yes, given what I noted below as a general working
framework. I have personally been involved with the development and
I think the real issue is why the fit is being
done. If it is solely to interpolate and condense
the dataset, the number of variables is not an important issue.
If the issue is developing a model that will
capture causality, it is hard to believe that can
be accomplished with 50+ variables. W
On Jun 14, 2010, at 10:42 AM, SHANE MILLER, BLOOMBERG/ 731 LEXIN wrote:
Hi,
Suppose I analyze a log to create a histogram:
event E1 occurred N1 times
event E2 occurred N2 times
...
... for m total events
...
event Em occurred Nm times
The total number of occurrences is: T = sum_{j=1..m} N_j
Hi,
Suppose I analyze a log to create a histogram:
event E1 occurred N1 times
event E2 occurred N2 times
...
... for m total events
...
event Em occurred Nm times
The total number of occurrences is: T = sum_{j=1..m} N_j
I want to give this histogram
Ah, I overlooked that possibility.
You can do following :
not <- attr(fm$model, "na.action")
if (!is.null(not)) {  # only drop the NA values if any were left out of the model
  cluster <- cluster[-not]
  dat <- dat[-not, ]
}
with(dat, {
On Mon, Jun 14, 2010 at 4:30 PM, edmund jones
Dear all,
(this first part of the email I sent to John earlier today, but forgot to put it
to the list as well)
Dear John,
> Hi, this is not an R technical question per se. I know there are many excellent
> statisticians on this list, so here are my questions: I have a dataset with ~1800
> observations
On Mon, Jun 14, 2010 at 6:24 PM, Martin Maechler
wrote:
> Dear Deepayan,
>
> this is in reply to a message almost 6 months ago :
>
>> Deepayan Sarkar
[...]
> > Thanks, I was going to say the same thing, except that it would be (1)
> > conceptually simpler just to add the 'i' and 'j'
The first thing that I would recommend is to avoid the "formula
interface" to models. The internals that R uses to create matrices
from a formula+data set are not efficient. If you had a large number
of variables, I would have automatically pointed to that as a source
of issues. cforest and ctree o
Looking for a recommended package that handles prime number computations.
Tried the following unsuccessfully:
primeFactors() in the R.basic package failed to install.
primes() and primlist are broken in Schoolmath pkg on CRAN.
My analysis can be found here http://j.mp/9BNI9q
Not sure what the pro
Try:
gsub(".$", "", c('01asap05a', '02ee04b'))
On Mon, Jun 14, 2010 at 10:47 AM, glaporta wrote:
>
> Dear R experts,
> is there a simple way to remove the last char of a text string?
> the substr() function takes only start and end as parameters... but my strings are of
> different lengths...
> 01asap05a
On 14.06.2010 15:39, Katya Mauff wrote:
Hi - I have tried that; offline, my browser opens and says: "Offline mode
Firefox is currently in offline mode and can't browse the Web.
Uncheck "Work Offline" in the File menu, then try again"
Well, go "online" with Firefox which means firefox can acces
Sure. You can use nchar() to find out how long the string is.
> teststring <- "01asap05a"
> substr(teststring, 1, nchar(teststring)-1)
[1] "01asap05"
On Mon, Jun 14, 2010 at 9:47 AM, glaporta wrote:
>
> Dear R experts,
> is there a simple way to remove the last char of a text string?
> substr()
On Mon, Jun 14, 2010 at 3:47 PM, glaporta wrote:
>
> Dear R experts,
> is there a simple way to remove the last char of a text string?
> the substr() function takes only start and end as parameters... but my strings are of
> different lengths...
> 01asap05a -> 01asap05
> 02ee04b -> 02ee04
> Thank you all,
> G
Dear R experts,
is there a simple way to remove the last char of a text string?
the substr() function takes only start and end as parameters... but my strings are of
different lengths...
01asap05a -> 01asap05
02ee04b -> 02ee04
Thank you all,
Gianandrea
If the IP number is something like 127.0.0.1:x then you are on
your local computer.
Cheers
Joris
On Mon, Jun 14, 2010 at 1:33 PM, Murray Jorgensen wrote:
> I have just installed R 2.11.1 on my XP laptop.
>
> I like html help for browsing but text help for on-the-fly look-ups. I was a
> bit su
Hi - I have tried that; offline, my browser opens and says: "Offline mode
Firefox is currently in offline mode and can't browse the Web.
Uncheck "Work Offline" in the File menu, then try again"
I can access ?help that way, or pages I've been to before having gone offline,
but nothing new.
>>> Uwe
Hi,
Marc's explanation is valid to a certain extent, but I don't agree with
his conclusion. I'd like to point out "the curse of
dimensionality" (Hughes effect), which starts to play rather quickly.
The curse of dimensionality is easily demonstrated looking at the
proximity between your datapoints. S
On Mon, 14 Jun 2010, Katya Mauff wrote:
Hi all
Apologies if this is a trivial question- I have searched the lists
and the online help files etc but have not managed to find anything.
I recently downloaded the latest version of R, which has the help
type set to htmlhelp as default (according
Put the rownames as another column in your dataframe so that they
remain with the data. After merging, you can then use that column as the
row names.
On Mon, Jun 14, 2010 at 9:25 AM, Assa Yeroslaviz wrote:
> Hi,
>
> is it possible to merge two data frames while preserving the row names of
> the bigger dat
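A hedged toy sketch of that suggestion (both data frames below are made up):
big   <- data.frame(gene = c("g1", "g2", "g3"), expr = c(1.2, 0.7, 2.1),
                    row.names = c("r1", "r2", "r3"))
small <- data.frame(gene = c("g1", "g3"), anno = c("x", "y"))
big$rn <- rownames(big)                        # carry the row names along as a column
m <- merge(big, small, by = "gene", all.x = TRUE)
rownames(m) <- m$rn                            # restore them after the merge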
One thing you might do is to transform the data into a format that is
easier to combine; I like using 'merge':
> mynames=cbind(c('a','b'),c(11,22))
> lst=list(a=c(1,2), b=5)
> mynames
[,1] [,2]
[1,] "a" "11"
[2,] "b" "22"
> lst
$a
[1] 1 2
$b
[1] 5
> mynames.df <- as.data.frame(mynames)
>
On 14.06.2010 14:11, Katya Mauff wrote:
Hi all
Apologies if this is a trivial question- I have searched the lists and the
online help files etc but have not managed to find anything. I recently
downloaded the latest version of R, which has the help type set to htmlhelp as
default (according
Hi,
is it possible to merge two data frames while preserving the row names of
the bigger data frame?
I have two data frames which I would like to combine. While doing so I
always lose the row names. When I try to append this, I get the error
message that I have non-unique names. This although
I think I found the solution!
> cc<-factor(cars)
> dd<-factor(driver)
> MODEL<-y~cc+dd+additive
> summary(aov(MODEL,data=DATA))
On 14 Jun, 2010, at 2:52 PM, Andrea Bernasconi DG wrote:
> Hi R help,
>
> Hi R help,
>
> Which is the easiest (most elegant) way to force "aov" to treat numerical
>
Hi,
See ?factor
e.g.: DATA$driver <- factor(DATA$driver)
See also the levels= argument if you want to change the order of your levels.
HTH,
Ivan
On 6/14/2010 14:52, Andrea Bernasconi DG wrote:
> Hi R help,
>
> Hi R help,
>
> Which is the easiest (most elegant) way to force "aov" to treat numer
On Jun 13, 2010, at 10:20 PM, array chip wrote:
> Hi, this is not an R technical question per se. I know there are many excellent
> statisticians on this list, so here are my questions: I have a dataset with ~1800
> observations and 50 independent variables, so there are about 35 samples per
> variable.
Dear Deepayan,
this is in reply to a message almost 6 months ago :
> Deepayan Sarkar
> on Sun, 17 Jan 2010 01:39:21 -0800 writes:
> On Sat, Jan 16, 2010 at 11:56 PM, Peter Ehlers wrote:
>> Marius Hofert wrote:
>>>
>>> Dear ExpeRts,
>>>
>>> I have the scatt
Hi R help,
Hi R help,
Which is the easiest (most elegant) way to force "aov" to treat numerical
variables as categorical?
Sincerely, Andrea Bernasconi DG
PROBLEM EXAMPLE
I consider the Latin squares example described on page 157 of the book:
Statistics for Experimenters: Design, Innovation,
Try this:
cbind(mynames[rep(seq(nrow(mynames)), sapply(lst, length)),], unlist(lst))
On Mon, Jun 14, 2010 at 9:06 AM, Yuan Jian wrote:
> Hello,
>
> I could not find a clear solution for the following question. Please allow me
> to ask. Thanks.
>
> mynames=cbind(c('a','b'),c(11,22))
> lst=list(a=c(1
http://www.google.com/#hl=en&source=hp&q=R+big+data+sets&aq=f&aqi=g1&aql=&oq=&gs_rfai=&fp=686584f57664
Cheers
Joris
On Mon, Jun 14, 2010 at 12:07 PM, Meenakshi
wrote:
>
> Hi,
>
> I want to import a 1.5 GB CSV file into R.
> But the following error comes up:
>
> 'Victor allocation 12.4 size'
>
> How t
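One commonly suggested approach, offered here only as a hedged sketch (the file name and column layout are hypothetical, and this is not from the replies shown): declare the column classes so read.csv() does not have to guess them while the file grows in memory.
dat <- read.csv("bigfile.csv",
                colClasses = c("character", rep("numeric", 4)),  # hypothetical layout
                nrows = 3e6,           # a generous row estimate helps allocation
                comment.char = "")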