Thank you so much for this elegant solution, Jeff.
Philip
On 2020-11-12 02:20, Jeff Newmiller wrote:
I am not a data.table aficionado, but here is how I would do it with
dplyr/tidyr:
library(dplyr)
library(tidyr)
do_per_REL <- function( DF ) {
rng <- range( DF$REF1 ) # watch out for missing months?
DF <- ( data.frame( REF1 = seq( rng[ 1 ], rng[ 2 ], by = "month" ) )
%>% left_join
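Jeff's code is cut off above. A minimal sketch of how the gap-filling idea might be completed, assuming REF1 is a Date column of month starts and REL is the grouping column; the group_modify() wrapper is an assumption, not from the thread:

library(dplyr)

do_per_REL <- function(DF) {
  rng <- range(DF$REF1)                                # earliest and latest month present
  data.frame(REF1 = seq(rng[1], rng[2], by = "month")) %>%
    left_join(DF, by = "REF1")                         # missing months appear as NA rows
}

# applied once per REL group, e.g.:
# dat %>% group_by(REL) %>% group_modify(~ do_per_REL(.x)) %>% ungroup()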
There is no "perhaps" about it. Nonsense phrases like "similar to logit, where
I dont [sic] lose normality of the data" that lead into off-topic discussions
of why one introduces transformations in the first place are perfect examples
of why questions like this belong on a statistical theory dis
this might work for you
newy <- sign(oldy)*f(abs(oldy))
where f() is a monotonic transformation, perhaps a power function.
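A quick illustration of that sign-preserving recipe; the data and the choice of sqrt() for f() are invented for the example:

oldy <- c(-0.9, -0.25, 0, 0.3, 0.8)   # made-up values in [-1, 1]
f <- sqrt                             # any monotonic transformation on [0, 1]
newy <- sign(oldy) * f(abs(oldy))
newy                                  # negatives stay negative, positives stay positive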
On Sun, Jan 20, 2019 at 11:08 AM Adrian Johnson
wrote:
>
> I apologize, I forgot to mention another key operation.
> in my matrix -1 to <0 has a different meaning while va
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Adrian Johnson
Sent: Sunday, January 20, 2019 10:08 AM
To: r-help
Subject: Re: [R] data transformation
I apologize, I forgot to mention another key operation.
In my matrix, values from -1 to <0 have one meaning, while values from >0
to 1 have a different set of meanings. So if I do a logit transformation,
some of the positives become negative (values < 0.5, etc.). In such a
case, the resulting transformed ma
),function(x) which(!!x[,3]))
}
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf
> Of arun
> Sent: Tuesday, November 12, 2013 2:13 PM
> To: R help
> Sub
Hi Anindya,
You may try:
dat1 <- read.table(text="ID Week Event_Occurence
A 1 0
A 2 0
A 3 1
A 4 0
B 1 1
B 2 0
B 3 0
B 4 1",sep="",header=TRUE,stringsAsFactors=FALSE)
with(dat1,tapply(as.logical(Event_Occurence),ID,FUN=which ))
#or
lapply(split(dat1,dat1$ID),function(x) which(!!x[,3]))
A.K
On 09/28/2011 01:13 PM, pip56789 wrote:
Hi,
I have a few methodological and implementation questions for y'all. Thank
you in advance for your help. I have a dataset that reflects people's
preference choices. I want to see if there's any kind of clustering effect
among certain preference choices
Seems your questions belong to rule mining for frequent item sets.
check arules package
Weidong Gu
On Tue, Sep 27, 2011 at 11:13 PM, pip56789 wrote:
> Hi,
>
> I have a few methodological and implementation questions for ya'll. Thank
> you in advance for your help. I have a dataset that reflects
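For anyone following up, a minimal arules sketch for mining frequent choice combinations; the 0/1 person-by-choice matrix here is invented, since the original data are not shown:

library(arules)

set.seed(1)
prefs <- matrix(rbinom(100 * 6, 1, 0.3), ncol = 6,
                dimnames = list(NULL, paste0("choice", 1:6)))  # hypothetical data

trans <- as(prefs == 1, "transactions")      # logical incidence matrix -> transactions
sets  <- eclat(trans, parameter = list(supp = 0.1, minlen = 2))
inspect(head(sort(sets, by = "support")))    # most frequent choice combinations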
On a methodological level, if the choices are not on a cardinal or at
least an ordinal scale, you don't want to use correlations. Instead you
should probably use Cramer's V, in particular if the choices are
multinomial. Whether the wide format is necessary will depend on the format
the funct
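That reply is cut off; for reference, a bare-bones Cramer's V can be computed from a contingency table like this (the example column names are invented):

cramers_v <- function(x, y) {
  tab  <- table(x, y)
  chi2 <- suppressWarnings(chisq.test(tab, correct = FALSE)$statistic)
  k    <- min(nrow(tab), ncol(tab))
  unname(sqrt(chi2 / (sum(tab) * (k - 1))))   # V = sqrt(chi^2 / (n * (k - 1)))
}
# e.g. cramers_v(prefs$choiceA, prefs$choiceB)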
Dear Stuart,
See ?bcPower and ?powerTransform in the car package, the latter for
univariate and multivariate conditional and unconditional ML Box-Cox.
I hope this helps,
John
John Fox
Senator William McMaster
Professor of Social Statistics
Department of Sociol
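A short sketch of the car functions John mentions; the data vector is made up for illustration:

library(car)

y  <- rexp(200) + 0.1            # some positive, right-skewed data (invented)
pt <- powerTransform(y)          # ML estimate of the Box-Cox lambda
summary(pt)
y_bc <- bcPower(y, coef(pt))     # apply the estimated transformation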
There is the bct function in the TeachingDemos package that does Box-Cox
transforms (though you could also write your own fairly simply). The
lapply/sapply functions will apply a function to each column of a data frame.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healt
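A sketch of the "write your own" route Greg mentions, applied column-wise with sapply; the lambda value and the example data frame are assumptions:

box_cox <- function(y, lambda) {
  if (abs(lambda) < 1e-8) log(y) else (y^lambda - 1) / lambda
}
dat <- data.frame(a = rexp(50) + 0.1, b = runif(50) + 0.1)   # invented positive data
transformed <- as.data.frame(sapply(dat, box_cox, lambda = 0.5))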
Try this:
> t(apply(x, 1, function(r) table(factor(r, levels = seq_len(max(x))))))
1 2 3 4 5 6 7 8 9 10
[1,] 1 0 1 0 0 0 0 0 0 0
[2,] 0 2 0 0 0 0 0 0 0 0
[3,] 0 0 0 1 0 0 1 0 0 0
[4,] 0 0 0 0 0 1 0 1 0 0
[5,] 0 0 0 0 1 0 0 0 0 1
If you use aaply in the plyr package instead of apply then
r-help-boun...@r-project.org wrote on 01/25/2010 02:39:32 PM:
> x <- read.table(textConnection("col1 col2
> 3 1
> 2 2
> 4 7
> 8 6
> 5 10"), header=TRUE)
>
> I want to rewrite it as below:
>
> var1 var2 var3 var4 var5 var6 var7 var8 var9 var10
> 1 0 1 0 0 0 0
Thank you so much.
Lisa
Hi,
On Mon, Jan 25, 2010 at 5:39 PM, Lisa wrote:
>
> Dear all,
>
> I have a dataset that looks like this:
>
> x <- read.table(textConnection("col1 col2
> 3 1
> 2 2
> 4 7
> 8 6
> 5 10"), header=TRUE)
>
> I want to rewrite it as below:
>
> var1 var2 var3 var4 var5 var6 var7 var8 var9 var10
> 1
Well, I have no idea how to get from one to the other. There's
col1 and col2 but no var1 var2 var3, etc. I thought perhaps col1
was the row index and col2 was the column index, but that doesn't
match up either, and not all the cell values are 1.
So you will need to explain more clearly what you in
>> (x.n <- cast(x.m, id ~ var, function(.dat){
> +   if (length(.dat) == 0) return(0)  # test for no data; return zero if that is the case
> + mean(.dat)
> + }))
Or fill = 0.
Hadley
--
http://had.co.nz/
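For completeness, Hadley's suggestion as a single call, assuming the same x.m molten data and the reshape package as in the quoted code:

x.n <- cast(x.m, id ~ var, mean, fill = 0)   # empty cells become 0 instead of NaN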
That's what I want. Many thanks for your help.
Legen
jholtman wrote:
>
> Try this:
>
>> x <- read.table(textConnection("id code1 code2 p
> + 1 4 8 0.1
> + 1 5 7 0.9
> + 2 1 8 0.4
> + 2 6 2
Your script works very well. Thank you very much.
Legen
Henrique Dallazuanna wrote:
>
> Try this also:
>
> xtabs(rep(p, 2) ~ rep(id, 2) + sprintf("var%d", c(code1, code2)), data =
> x)
>
> On Wed, Nov 11, 2009 at 2:10 AM, legen wrote:
>>
>> Thank you for your kind help. Your script works v
Try this:
> x <- read.table(textConnection("id code1 code2 p
+ 1 4 8 0.1
+ 1 5 7 0.9
+ 2 1 8 0.4
+ 2 6 2 0.2
+ 2 4 3 0.6
+ 3 5 6 0.7
+
Try this also:
xtabs(rep(p, 2) ~ rep(id, 2) + sprintf("var%d", c(code1, code2)), data = x)
On Wed, Nov 11, 2009 at 2:10 AM, legen wrote:
>
> Thank you for your kind help. Your script works very well. Would you please
> show me how to change NaN to zero and column variables 1, 2, ..., 8 to var1,
Thank you for your kind help. Your script works very well. Would you please
show me how to change NaN to zero and column variables 1, 2, ..., 8 to var1,
var2, ..., var8? Thanks again.
Legen
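The NaN-to-zero and renaming part of that question isn't answered in this excerpt; a base-R sketch, assuming the result is a numeric matrix m whose column names are the codes 1-8:

m[is.nan(m)] <- 0                          # replace NaN cells with zero
colnames(m) <- paste0("var", colnames(m))  # 1, 2, ..., 8  ->  var1, var2, ..., var8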
jholtman wrote:
>
> Is this what you want:
>
>> x <- read.table(textConnection("id code1 code2
Is this what you want:
> x <- read.table(textConnection("id code1 code2 p
+ 1 4 8 0.1
+ 1 5 7 0.9
+ 2 1 8 0.4
+ 2 6 2 0.2
+ 2 4 3 0.6
+ 3 5 6
Roslina,
this code performs what you need:
dt = matrix((1:(58*12))/58/12,58) # some numbers
# if dt is a data.frame use dt = as.matrix(dt)
a = (1:12)/12 # some a coef
b = (12:1)/12 # some b coef
dtgam = matrix(pgamma(dt,a,b),58)
# dtgam is the transformation you're looking for
on 07/22/2008 11:24 AM Christian Hof wrote:
Dear all,
how can I, with R, transform a presence-only table (with the names of
the species (1st column), the lat information of the sites (2nd column)
and the lon information of the sites (3rd column)) into a
presence-absence (0/1) matrix of specie
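Christian's question is cut off, but the presence-only to presence-absence direction can be sketched in base R; the example data below are invented:

obs <- data.frame(species = c("sp1", "sp1", "sp2", "sp3"),   # 1st column: species
                  lat     = c(10, 20, 20, 30),               # 2nd column: lat
                  lon     = c(11, 21, 21, 31))               # 3rd column: lon

pa <- table(site = paste(obs$lat, obs$lon), species = obs$species)
pa[pa > 1] <- 1     # collapse any duplicate records to 0/1 presence-absence
pa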
Try this:
newx <- with(x, cbind(stack(x, select = grep("spec", names(x))), lat, lon))
newx[newx$values > 0, -1]
On 5/2/08, Christian Hof <[EMAIL PROTECTED]> wrote:
>
> Dear all,
> how can I, with R, transform a presence-absence (0/1) matrix of species
> occurrences into a presence-only table (3
Hi Christian,
Here's a way using the reshape package:
> dfr
site lat lon spec1 spec2 spec3 spec4
1 site1 10 11 1 0 1 0
2 site2 20 21 1 1 1 0
3 site3 30 31 0 1 1 1
> library(reshape)
> dfr <- melt(dfr[, -1], id=1:2, variable_name='species')
Christian,
You need to use reshape to convert to the 'long' format.
Check the help page ?reshape for details.
>dat <- read.table('clipboard', header=TRUE)
>dat
site lat lon spec1 spec2 spec3 spec4
1 site1 10 11 1 0 1 0
2 site2 20 21 1 1 1 0
3 site3 30 31
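That excerpt ends before the reshape() call; a sketch of the wide-to-long step with base reshape(), with the call details assumed rather than taken from the thread:

spec_cols <- grep("^spec", names(dat), value = TRUE)   # spec1, spec2, ...
long <- reshape(dat, direction = "long",
                varying = spec_cols, v.names = "present",
                timevar = "species", times = spec_cols,
                idvar = c("site", "lat", "lon"))
subset(long, present == 1)    # keep the presence rows only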