Hi,
Try:
data_m <- read.table(text="Abortusovis07918 Agona08561 Anatum08125 Arizonae65S Braenderup08488
1 S5305B_IGR S5305B_IGR S5305B_IGR S5305B_IGR S5305B_IGR
2 S5305A_IGR S5300A_IGR S5305A_IGR S5300A_IGR S5300A_IGR
3 S5300A_IGR S5300B_IGR S5300A_IGR S5300B_IGR S5300B_IGR
4
On Nov 27, 2013, at 2:39 PM, yetik serbest wrote:
> Hi Everyone,
>
> I am trying to import many CSV files into their own matrices. Example:
> alaska_93.csv into alaska. When I execute the following for each CSV file
> separately, it is successful.
>
> singleCSVFile2Matrix <- function(x,path) {
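(The function body above is cut off; purely as a generic sketch of this kind of task -- the directory path, the name-cleanup rule, and the use of as.matrix()/read.csv() are assumptions, not the poster's actual code:)
csvfiles <- list.files("path/to/csvs", pattern = "\\.csv$", full.names = TRUE)
for (f in csvfiles) {
  objname <- sub("_\\d+$", "", tools::file_path_sans_ext(basename(f)))  # "alaska_93" -> "alaska"
  assign(objname, as.matrix(read.csv(f)))                               # creates an object named 'alaska'
}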
Hi,
One way would be:
set.seed(42)
dat1 <- as.data.frame(matrix(sample(c(1:5, NA), 50, replace=TRUE,
          prob=c(10, 15, 15, 20, 30, 10)), ncol=5))
set.seed(49)
dat1[!is.na(dat1)][match(sample(seq(dat1[!is.na(dat1)]),
          length(dat1[!is.na(dat1)]) * 0.20), seq(dat1[!is.na(dat1)]))] <- NA
length(dat1[is.na(dat1)])
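(Equivalent, and perhaps easier to read -- this is only a rephrasing of the same idea, not part of the original reply: pick 20% of the non-NA cells at random and blank them out.)
m <- !is.na(dat1)                                # TRUE where a value is present
drop <- sample(which(m), round(0.20 * sum(m)))   # pick 20% of those cells at random
m[] <- FALSE
m[drop] <- TRUE
dat1[m] <- NA                                    # a logical matrix index also works on a data frame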
Hi!
I'm new to R and I'm writing to ask for some guidance. I have
analyzed comparative genomic microarray data of /56 Salmonella/
strains to identify absent genes in each of the serovars, and finally I
got a matrix that looks like this:
> data[1:5,1:5]
Abortusovis07918 Agona08561 Anat
Hi,
If it is like:
vec1 <-
c("10.20.30.01","10.20.30.02","10.20.30.40","10.20.30.41","10.20.30.45","10.20.30.254","10.20.30.255","10.20.30.256","10.20.30.313")
vec2 <-
as.numeric(paste0(gsub("^\\d{2}\\.\\d{2}\\.(\\d{2}\\.).*","\\1",vec1),sprintf("%03d",as.numeric(gsub("^\\d{2}\\.\\d{2}\\.\\
Hi there,
I'm generally more of a Stata user than an R user, but I need to compute
something, and I am not able to do it with Stata 13. So, here I am!
I have a database with multiple imputations (the imputations are already
done) and a complex sample design (strata and weights).
Is it possible, i
Hi,
From the dput() version of df.1, it looks like you want:
cumsum(df.1[,4]=="Yes")/seq_len(nrow(df.1))
[1] 0.0000000 0.5000000 0.3333333 0.2500000 0.4000000 0.3333333 0.4285714
[8] 0.5000000 0.4444444 0.5000000
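(For a self-contained check, here is a toy df.1 with just the event column -- the Yes/No pattern is read off the table quoted below, so the data and column name are illustrative, not the actual dput() output:)
df.1 <- data.frame(Sample.Number = 1:10,
        Did.E.occur = c("No","Yes","No","No","Yes","No","Yes","Yes","No","Yes"))
cumsum(df.1$Did.E.occur == "Yes")/seq_len(nrow(df.1))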
A.K.
On Thursday, November 28, 2013 11:26 AM, Burhan ul haq wrote:
Hi,
My obj
Hi Halim,
For the first two questions, you may try:
colsum1 <- colSums(volyrdc1)
min(which(colsum1>=18))
#[1] 29
#or
head(which(colsum1>=18),1)
#140
# 29
colsum1[substr(colsum1,6,7)=="00"] ## this is not very clear
#     305
#45.37004
#or
colsum1[colsum1>=18][substr(colsum1[colsum1>=18],6,7)
#Or
paste(dat[,3],dat[,2],dat[,1],sep=".")
#[1] "4.1.2011" "5.2.2012" "6.3.2013"
#
as.character(interaction(dat[,3:1]))
paste(sprintf("%02d",dat[,3]),sprintf("%02d",dat[,2]),dat[,1],sep=".")
#[1] "04.01.2011" "05.02.2012" "06.03.2013"
A.K.
On Thursday, November 28, 2013 10:18 AM, Rui Bar
Hi,
Try:
dat1 <- data.frame(years=rep(1991:1992,12), months=rep(1:12,2),days= rep(1,24))
dat1$day <-
format(as.Date(paste(dat1[,1],sprintf("%02d",dat1[,2]),sprintf("%02d",dat1[,3]),sep="."),"%Y.%m.%d"),"%d.%m.%Y")
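(For reference, the first few values this produces would be:)
head(dat1$day)
#[1] "01.01.1991" "01.02.1992" "01.03.1991" "01.04.1992" "01.05.1991" "01.06.1992"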
A.K.
On Thursday, November 28, 2013 8:56 AM, eliza botto
wrote:
Dear Users of
Hi,
My objective is to calculate "Relative (Cumulative) Frequency of Event
Occurrence" - something as follows:
Sample.Number  1st.Fly  2nd.Fly  Did.E.occur?  Relative.Cum.Frequency.of.E
            1        G        B            No                        0.000
            2        B        B           Yes                        0.500
            3        B        G            No                        0.333
            4        G        B            No                        0.250
            5        G        G           Yes                        0.400
            6        G        B            No                        0.333
            7        B        B           Yes                        0.429
            8        G        G
Thanks Rui,
Eliza
> Date: Thu, 28 Nov 2013 15:16:35 +
> From: ruipbarra...@sapo.pt
> To: eliza_bo...@hotmail.com; r-help@r-project.org
> Subject: Re: [R] date format
>
> Hello,
>
> Maybe something like the following.
>
> dat <- data.frame(yyyy = 2011:2013, mm = 1:3, dd = 4:6)
>
> apply(dat,
Hello,
Maybe something like the following.
dat <- data.frame(yyyy = 2011:2013, mm = 1:3, dd = 4:6)
apply(dat, 1, function(x) paste(rev(x), collapse = "."))
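(With dat defined as above, this would give:)
#[1] "4.1.2011" "5.2.2012" "6.3.2013"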
Hope this helps,
Rui Barradas
On 28-11-2013 13:54, eliza botto wrote:
Dear Users of R,
I have a data frame with three columns, the f
Dear Bert, Arun and Philipps, thanks for your help. It worked perfectly
fine for me. :D
Eliza
> Date: Thu, 28 Nov 2013 16:09:58 +0100
> From: wev...@web.de
> To: eliza_bo...@hotmail.com; r-help@r-project.org
> Subject: Re: [R] date format
>
> Hi Eliza,
>
> # you can use paste to create a new vect
eliza botto hotmail.com> writes:
>
> Dear Users of R,
> I have a data frame with three columns: the first column contains years,
> the second one months, and the third one the days (cbind(yyyy mm dd)).
> I want to combine them so that I have one column with the date format
> (dd.mm.yyyy).
> Is there a
Jim, et al.:
rowSums(a, na.rm=TRUE) ## Fast!
tells you whether you have 0, 1, or >= 1 TRUE in each row.
This can then be combined with the ifelse() conditions to get what the
OP seems to want. As you said, it's clunky, and is just a minor
simplification. But, then again, her logic seemed somewhat
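(A toy illustration of the rowSums()/ifelse() combination; the matrix and the labels below are invented, not the OP's data:)
a <- matrix(c(TRUE, FALSE, NA, TRUE, TRUE, FALSE), nrow = 2)
ntrue <- rowSums(a, na.rm = TRUE)                 # number of TRUEs per row: 2 1
ifelse(ntrue == 0, "none", ifelse(ntrue == 1, "one", "several"))
#[1] "several" "one"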
M Elo luukku.com> writes:
>
> Dear All,
>
> I'm using betadisper {vegan} and I'm interested not only in the dispersion
> within the group but also the distances between the groups. With betadisper
> I get distances to group centroids but is it possible to get distances to
> other groups centroi
Dear Users of R,
I have a data frame with three columns: the first column contains years, the
second one months, and the third one the days (cbind(yyyy mm dd)). I want to
combine them so that I have one column with the date format (dd.mm.yyyy).
Is there a way of doing that?
Thanks in advance,
Eliza
Dear All,
I'm using betadisper {vegan} and I'm interested not only in the dispersion
within the group but also the distances between the groups. With betadisper
I get distances to group centroids, but is it possible to get distances to
other groups' centroids?
It might be possible to do it by hand
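(One rough by-hand sketch along those lines, using vegan's built-in varespec data and an invented grouping; note that betadisper() stores the group centroid coordinates in PCoA space, so plain Euclidean distances between them ignore any negative-eigenvalue correction:)
library(vegan)
data(varespec)
d <- vegdist(varespec)                                   # Bray-Curtis by default
grp <- factor(rep(c("A", "B"), length.out = nrow(varespec)))
mod <- betadisper(d, grp)
dist(mod$centroids)                                      # pairwise distances between group centroids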
Dear all,
please follow the link to the question that I posted on StackOverflow about
my R code with ODE
http://stackoverflow.com/questions/20218065/ode-does-not-reach-steady-state-and-increase-exponentially
I am trying to write code for a differential equation that should give me
the biomass of
Hi everybody,
first, I'm not highly skilled with R, so please be understanding!!
I would like to create an artificial neural network with R, but I don't know
its parameters yet (number of layers, number of neurons, ...).
I downloaded the package ANN and I use the function "ANNGA", but I'm afraid
I
Hi Andrea,
A "cleaner" alternative to Jim's suggestion is something like
a.df <- as.data.frame(a)
group1 <- (a.df$col1 == 1) & apply(a.df[,c("col2","col3","col4")], 1,
function(x) any(x == 1 | is.na(x)))
group2 <- (a.df$col1 == 1) & apply(a.df[,c("col2","col3","col4")], 1,
function(x) all(x ==
On 11/28/2013 04:33 AM, Andrea Lamont wrote:
Hello:
This seems like an obvious question, but I am having trouble answering it.
I am new to R, so I apologize if it's too simple to be posting. I have
searched for solutions to no avail.
I have data that I am trying to set up for further analysis ("
Hi,
Sorry to keep bothering you. This continues the previous problem...
I have the following matrices and vectors,
dcmat<-matrix(c(0.13,0.61,0.25,0.00,0.00,0.00,0.52,0.37,0.09,0.00,0.00,0.00,
0.58,0.30,0.11,0.00,0.00,0.00,0.46,0.22,0.00,0.00,0.00,0.00,
0.09),nr
See in-line below.
On 11/28/13 20:50, jpm miao wrote:
Hi,
I would like to fit my data with a 4th-order polynomial. Since I have only
5 data points, I should get a polynomial that passes exactly through the five points.
Then I would like to compute the "fitted" or "predicted" values with a
relatively l
Hi,
Just tried ncvTest() and durbinWatsonTest() from library(car)
f4 <- function(meanmod, dta, varmod) {
assign(".dta", dta, envir=.GlobalEnv)
assign(".meanmod", meanmod, envir=.GlobalEnv)
m1 <- lm(.meanmod, .dta)
ans <- ncvTest(m1, varmod)
remove(".dta", envir=.GlobalEnv)
remove(".meanmod", env
Hi,
No problem,
You could try:
library(tseries)
res6 <- do.call(rbind, lapply(lst1[sapply(lst1, function(x) !(all(rowSums(is.na(x)) > 0)))],
    function(x) {resid <- residuals(lm(rate ~ ., data = x)); unlist(jarque.bera.test(resid)[1:3])}))
A.K.
On Wednesday, November 27, 2013 7:47 PM, Tomasz Schabe
Hi,
2. You need to tell which package you are using.
3. Does this work for you?
capture.output(lst2,file="nooldor.txt")
4.
lst2 <- lapply(lst1[sapply(lst1, function(x) !(all(rowSums(is.na(x)) > 0)))],
    function(x) print(summary(lm(rate ~ ., data = x))))  ### prints the output on the R console
A.K.
Hi,
Hi,
Try:
set.seed(49)
dat1 <- as.data.frame(matrix(sample(c(NA,1:50),41082*15,replace=TRUE),ncol=15))
dat1$indx <- as.numeric(gl(334*123,123,334*123))
names(dat1)[1] <- "rate"
lst1 <- split(dat1[,-16],dat1[,16])
any(sapply(lst1,nrow)!=123)
#[1] FALSE
lst2 <- lapply(lst1,function(x) summary(lm(rat
Hi,
You may try something like:
set.seed(49)
dat1 <- as.data.frame(matrix(sample(1:300,41082*15,replace=TRUE),ncol=15))
#created only 15 columns as shown in your model
dat1$indx <- as.numeric(gl(334*123,123,334*123))
names(dat1)[1] <- "rate"
lst1 <- split(dat1[,-16],dat1[,16])
any(sapply(lst1,nr
Hi,
lst1[[1]][,2] <- NA
lst2 <- lapply(lst1,function(x) summary(lm(rate~.,data=x)))
Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
0 (non-NA) cases
lst2 <- lapply(lst1[sapply(lst1,function(x)
!(all(rowSums(is.na(x))>0)))],function(x) summary(lm(rate~.,data=x)) )
A.
On Thu, 28 Nov 2013, jpm miao wrote:
Hi,
I would like to fit my data with a 4th-order polynomial. Since I have only
5 data points, I should get a polynomial that passes exactly through the five points.
Then I would like to compute the "fitted" or "predicted" values with a
relatively large x dataset. How ca
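(A minimal sketch of one way to do this; the x and y values below are made up:)
x <- 1:5
y <- c(2, 3, 7, 6, 10)
fit <- lm(y ~ poly(x, 4))                         # degree-4 fit passes exactly through 5 points
newx <- seq(0, 6, by = 0.01)                      # a relatively large/fine grid of x values
pred <- predict(fit, newdata = data.frame(x = newx))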