At 20:31 on 11/12/2024, Sorkin, John wrote:
I am trying to use the aggregate function to run a function, catsbydat2, that
produces the mean, minimum, maximum, and number of observations of the values
in a dataframe, inJan2Test, by levels of the dataframe variable MyDay. The
output should be
On Mon, 4 Sep 2023, Ivan Calandra wrote:
Thanks Rui for your help; that would be one possibility indeed.
But am I the only one who finds that behavior of aggregate() completely
unexpected and confusing? Especially considering that dplyr::summarise() and
doBy::summaryBy() deal with NAs differently, even though they all use
mean(na.rm = TRUE).
Ivan:
Just one perhaps extraneous comment.
You said that you were surprised that aggregate() and group_by() did not
have the same behavior. That is a misconception on your part. As you know,
the tidyverse recapitulates the functionality of many base R functions; but
it makes no claims to do so in
Haha, got it now, there is an na.action argument (which defaults to
na.omit) to aggregate() which is applied before calling mean(na.rm =
TRUE). Thank you Rui for pointing this out.
So running it with na.pass instead of na.omit gives the same results as
dplyr::group_by()+summarise():
aggregate(
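A runnable sketch of the difference (toy data invented here, not Ivan's original my_data): with the default na.action = na.omit, rows containing any NA in the aggregated columns are dropped before FUN is ever called, so mean(na.rm = TRUE) never sees the NAs it was meant to skip; na.pass keeps the rows, matching dplyr::group_by() + summarise().

```r
# Toy data (invented)
my_data <- data.frame(group = c("A", "A", "B", "B"),
                      x = c(1, NA, 3, 5),
                      y = c(2, 4, NA, 6))

# Default na.action = na.omit: rows 2 and 3 are dropped entirely first
res_omit <- aggregate(cbind(x, y) ~ group, data = my_data,
                      FUN = mean, na.rm = TRUE)

# na.pass keeps the NAs, so mean(na.rm = TRUE) works per column,
# as dplyr::group_by() + summarise() does
res_pass <- aggregate(cbind(x, y) ~ group, data = my_data,
                      FUN = mean, na.rm = TRUE, na.action = na.pass)
```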
At 10:44 on 04/09/2023, Ivan Calandra wrote:
Dear useRs,
I have just stumbled across a behavior in aggregate() that I cannot
explain. Any help would be appreciated!
Sample data:
my_data <- structure(list(ID = c("FLINT-1", "FLINT-10", "FLINT-100",
"FLINT-101", "FLINT-102", "HORN-10", "HORN
Thanks Iago for the pointer.
It then means that na.rm = TRUE is not applied in the same way within
aggregate() as opposed to dplyr::group_by() + summarise(), right? Within
aggregate(), it behaves like na.omit(), that is, it excludes the
incomplete cases (whole rows), whereas with group_by() + summarise() the
NAs are dropped within each column.
It seems that the issue is the missing values. If in #1 you use the dataset
na.omit(my_data) instead of my_data, you get the same output as in #2 and in
#4, where all observations with missing data are removed since you are
including all the variables.
The second dataset has no issue since it has no missing values.
From: Bill Dunlap
Sent: Saturday, 13 May 2023 22:38
To: Stefano Sofia
Cc: r-help@R-project.org
Subject: Re: [R] aggr
You don't have to bother with the subtracting from pi/2 bit ... just assume the
cartesian complex values are (y,x) instead of (x,y).
On May 13, 2023 1:38:51 PM PDT, Bill Dunlap wrote:
I think that using complex numbers to represent the wind velocity makes
this simpler. You would need to write some simple conversion functions
since wind directions are typically measured clockwise from north and the
argument of a complex number is measured counterclockwise from east. E.g.,
wind
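A minimal sketch of that complex-number idea (the helper names and the example winds are invented here, not Bill's exact code): compass wind direction is degrees clockwise from north, while Arg() measures radians counterclockwise from east, so one conversion in each direction is needed; averaging then happens on the complex values. (Whether wd means the direction the wind blows *from* or *toward* only flips a sign, as the thread's my_fun does with its minus signs.)

```r
# Sketch only: helper names are invented for illustration.
wind_to_complex <- function(wd_deg, ws) {
  theta <- (90 - wd_deg) * pi / 180        # compass degrees -> math radians
  complex(modulus = ws, argument = theta)
}
complex_to_wind <- function(z) {
  list(wd = (90 - Arg(z) * 180 / pi) %% 360,  # back to compass degrees
       ws = Mod(z))
}

# Vector-average two winds along the 90-degree axis at 2 and 4 m/s:
avg <- complex_to_wind(mean(wind_to_complex(c(90, 90), c(2, 4))))
```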
Sorry Rui; if you run your code you will get:
Error in FUN(X[[i]], ...) : object 'ws' not found
Moreover, even if you did this:
aggregate(wd ~ day + month, data=df, FUN = my_fun, ws1 = df$ws)
the answer would be wrong if you need to include only the subsets of ws1
corresponding to the split defin
At 15:51 on 13/05/2023, Stefano Sofia wrote:
Dear list users,
I have to aggregate wind direction data (wd) using a function that requires
also a second input variable, wind speed (ws).
This is the function that I need to use:
my_fun <- function(wd1, ws1){
u_component <- -ws1*sin(2*pi*
From: Eric Berger [ericjber...@gmail.com]
Sent: Tuesday, 22 September 2020 11:00
To: Jeff Newmiller
Cc: Stefano Sofia; r-help mailing list
Subject: Re: [R] aggregate semi-hourly data not 00-24 but 9-9
Thanks Jeff.
Stefano, per Jeff's comment, you can replace the line
df1$data_POSIXminus9 <- df1$data_POSIX - lubridate::hours(9)
by
df1$data_POSIXminus9 <- df1$data_POSIX - as.difftime(9,units="hours")
On Mon, Sep 21, 2020 at 8:06 PM Jeff Newmiller wrote:
The base R as.difftime function is perfectly usable to create this offset
without pulling in lubridate.
On September 21, 2020 8:06:51 AM PDT, Eric Berger wrote:
Hi Stefano,
If you mean from 9am on one day to 9am on the following day, you can
do a trick. Simply subtract 9hrs from each timestamp and then you want
midnight to midnight for these adjusted times, which you can get using
the method you followed.
I googled and found that lubridate::hours() can be
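A self-contained sketch of that trick (df1 and its columns are invented here to mirror the thread): shift every timestamp back 9 hours, then a calendar day of the shifted clock runs 09:00-09:00 of the original one, so an ordinary by-date aggregation does the rest.

```r
# Invented data: readings every 6 hours over two-and-a-bit days
df1 <- data.frame(
  data_POSIX = as.POSIXct("2020-09-20 08:00", tz = "UTC") +
    as.difftime(seq(0, 48, by = 6), units = "hours"),
  value = 1:9)

# Base-R offset, no lubridate needed (per Jeff's comment)
df1$data_POSIXminus9 <- df1$data_POSIX - as.difftime(9, units = "hours")

# Group by the calendar date of the shifted times
daily <- aggregate(value ~ as.Date(data_POSIXminus9, tz = "UTC"),
                   data = df1, FUN = sum)
```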
Thank you!
This is exactly what I was looking for!
Cheers!
On Wed, Feb 12, 2020 at 11:29 PM Jim Lemon wrote:
Hi Stefan,
How about this:
sddf<-read.table(text="age x
45 1
45 2
46 1
47 3
47 3",
header=TRUE)
library(prettyR)
sdtab<-xtab(age~x,sddf)
sdtab$counts
Jim
On Thu, Feb 13, 2020 at 7:40 AM stefan.d...@gmail.com
wrote:
>
> Dear All,
>
> I have a seemingly standard problem to which I someh
Thank you, this is already very helpful.
But how do I get it in the form
age  var_x=1  var_x=2  var_x=3
 45        1        1        0
 46        1        0        0
So it would be a data frame with 4 variables.
Cheers!
On Wed, Feb 12, 2020 at 10:25 PM William Dunlap wrote:
You didn't say how you wanted to use it as a data.frame, but here is one way
d <- data.frame(
check.names = FALSE,
age = c(45L, 45L, 46L, 47L, 47L),
x = c(1L, 2L, 1L, 3L, 3L))
with(d, as.data.frame(table(age,x)))
which gives:
  age x Freq
1  45 1    1
2  46 1    1
3  47 1    0
4  45 2
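A base-R sketch of getting that wide layout (as.data.frame.matrix() is my suggestion here, not from the thread; the "var_x=" column names come from the question):

```r
d <- data.frame(age = c(45L, 45L, 46L, 47L, 47L),
                x = c(1L, 2L, 1L, 3L, 3L))
# 2-d contingency table, then one row per age, one column per x value
wide <- as.data.frame.matrix(table(d$age, d$x))
names(wide) <- paste0("var_x=", names(wide))
wide <- cbind(age = as.integer(rownames(wide)), wide)
```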
Well, if I think about it, it's actually a simple frequency table grouped
by age, but it should be usable as a matrix or data frame.
On Wed, Feb 12, 2020 at 9:48 PM wrote:
>
> So a pivot table?
>
> On 12 Feb 2020 20:39, stefan.d...@gmail.com wrote:
>
> Dear All,
>
> I have a seemingly standard problem t
You can also use 'dplyr'
library(tidyverse)
result <- pcr %>%
group_by(Gene, Type, Rep) %>%
summarise(mean = mean(Ct),
sd = sd(Ct),
oth = sd(Ct) / sqrt(sd(Ct))
)
Jim Holtman
Hi Cyrus,
Try this:
pcr<-data.frame(Ct=runif(66,10,20),Gene=rep(LETTERS[1:22],3),
Type=rep(c("Std","Unkn"),33),Rep=rep(1:3,each=22))
testagg<-aggregate(pcr$Ct,c(pcr["Gene"],pcr["Type"],pcr["Rep"]),
FUN=function(x){c(mean(x), sd(x), sd(x)/sqrt(sd(x)))})
nxcol<-dim(testagg$x)[2]
newxs<-paste("x",1
Hi,
if you are willing to use dplyr, you can do all in one line of code:
library(dplyr)
df<-data.frame(id=1:10,A=c(123,345,123,678,345,123,789,345,123,789))
df%>%group_by(unique_A=A)%>%summarise(list_id=paste(id,collapse=", "))->r
cheers
On 06.06.2018 at 10:13, Massimo Bressan wrote:
> #given
which() is unnecessary. Use logical subscripting:
... t$id[t$A ==x]
Further simplification can be gotten by using the with() function:
l <- with(t, sapply(unique(A), function(x) id[A ==x]))
Check this though -- there might be scoping issues.
Cheers,
Bert
On Thu, Jun 7, 2018, 6:49 AM Massimo
#ok, finally this is my final "best and more compact" solution to the problem,
merging different contributions (thanks to all, indeed)
t<-data.frame(id=c(18,91,20,68,54,27,26,15,4,97),A=c(123,345,123,678,345,123,789,345,123,789))
l<-sapply(unique(t$A), function(x) t$id[which(t$A==x)])
r<-dat
vals<- lapply(idx, function(index) x$id[index])
data.frame(unique_A = uA, list_vals = unlist(lapply(vals, paste, collapse = ", ")))
best
From: "Ben Tupper"
To: "Massimo Bressan"
Cc: "r-help"
Sent: Thursday, 7 June 2018 14:47:55
Subject: Re: [
Hi,
Does this do what you want? I had to change the id values to something more
obvious. It uses tibbles which allow each variable to be a list.
library(tibble)
library(dplyr)
x <- tibble(id=LETTERS[1:10],
A=c(123,345,123,678,345,123,789,345,123,789))
uA <- unique(x$
Using which() to subset t$id should do the trick:
sapply(levels(t$A), function(x) t$id[which(t$A==x)])
Ivan
--
Dr. Ivan Calandra
TraCEr, laboratory for Traceology and Controlled Experiments
MONREPOS Archaeological Research Centre and
Museum for Human Behavioural Evolution
Schloss Monrepos
56567
sorry, but by further looking at the example I just realised that the posted
solution is not completely what I need, because in fact I do not need to get
back the 'indices' but instead the corresponding values of column A
#please consider this new example
t<-data.frame(id=c(18,91,20,68,54,27
thanks for the help
I'm posting here the complete solution
t<-data.frame(id=1:10,A=c(123,345,123,678,345,123,789,345,123,789))
t$A <- factor(t$A)
l<-sapply(levels(t$A), function(x) which(t$A==x))
r<-data.frame(list_id=unlist(lapply(l, paste, collapse = ", ")))
r<-cbind(unique_A=row.names(r)
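The thread's pieces can be merged into one runnable sketch with split() and paste() (same data as the later example in the thread):

```r
t <- data.frame(id = c(18, 91, 20, 68, 54, 27, 26, 15, 4, 97),
                A = c(123, 345, 123, 678, 345, 123, 789, 345, 123, 789))
# one comma-separated string of ids per unique value of A
ids_by_A <- sapply(split(t$id, t$A), paste, collapse = ", ")
r <- data.frame(unique_A = names(ids_by_A), list_id = unname(ids_by_A))
```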
Hi Massimo,
Something along those lines could help you I guess:
t$A <- factor(t$A)
sapply(levels(t$A), function(x) which(t$A==x))
You can then play with the output using paste()
Ivan
Thank you again Pikal and Bert. Using lapply, as Bert suggested, was
the first thing that I thought of for dealing with this question, and it
was mentioned in my original posting. I just did not know how to implement
it to get the results/form I want. Below is what I did but could not
get it to give me th
Then you need to rethink your data structure. Use a list instead of a data
frame. The components of a list can have different lengths, and the "apply"
family of functions (lapply(), etc.) can operate on them. Consult any good
R tutorial for details.
Cheers,
Bert
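A small sketch of both options mentioned in this exchange (vectors invented): keep the ragged "columns" in a list, where lapply()/sapply() operate per component, or pad with NA to equal length if a data.frame is really required.

```r
ragged <- list(a = 1:5, b = 1:3, c = 1:8)   # hypothetical unequal columns
sapply(ragged, length)                      # the apply family handles this

# Pad with NAs to the longest component if a data.frame is needed
n <- max(lengths(ragged))
padded <- data.frame(lapply(ragged, function(x) c(x, rep(NA, n - length(x)))))
```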
1 0 0
I believe that in your dfn there is a typo in the second row and first column,
and that with your 3 data.frames the result should be 1.
Cheers
Petr
> -----Original Message-----
> From: Ek Esawi [mailto:esaw...@gmail.com]
> Sent: Tuesday, February 27, 2018 2:54 PM
> To: PIKAL Petr ; r-help@r-pr
Thank you Pikal and Bert. My apologies for posting parts of my previous
email in HTML. Bert's suggestion will work, but I am wondering if there
is an alternative, especially in the case where the data frames are big;
that is, the difference in lengths among them is large. Below is a list
of sample data
Hi
Your example is rather confusing - partly because of HTML formatting,
partly because of weird coding.
You probably could concatenate your data frames e.g. by rbind or merge and
after that you could try to aggregate them somehow.
I could construct example data.frames myself but most probably they w
All columns in a data.frame **must** have the same length. So you cannot do
this unless empty values are filled with missings (NA's).
-- Bert
Thank you for your response. Note that with R 3.4.3, I get the same
result with simplify=TRUE or simplify=FALSE.
My problem was that the behaviour was different if I defined my columns as
character or as numeric, but a few minutes ago I discovered there is also
a stringsAsFactors option in the function.
Don't use aggregate's simplify=TRUE when FUN() produces return
values of various dimensions. In your case, the shape of table(subset)'s
return value depends on the number of levels in the factor 'subset'.
If you make B a factor before splitting it by C, each split will have the
same number of levels.
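A small sketch of that advice (data invented): fixing B's level set up front makes table() return equal-length counts in every group, so simplify=TRUE can build a regular matrix column instead of a ragged list.

```r
d <- data.frame(B = factor(c("x", "y", "x", "z")), C = c(1, 1, 2, 2))
# Each group's table() now covers all three levels x, y, z
res <- aggregate(d["B"], by = d["C"], FUN = table)
```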
The normal input to a factory that builds cars is car parts. Feeding whole
trucks into such a factory is likely to yield odd-looking results.
Both aggregate and table do similar kinds of things, but yield differently
constructed outputs. The output of the table function is not well-suited to be
Hi again,
Here is a version cleaned up a bit. Too tired to do it last night.
mndf<-data.frame(st=seq(1483360938,by=1700,length=10),
et=seq(1483362938,by=1700,length=10),
store=c(rep("gap",5),rep("starbucks",5)),
zip=c(94000,94000,94100,94100,94200,94000,94000,94100,94100,94200),
store_id=seq(5
Hi Mark,
I think you might want something like this:
mndf<-data.frame(st=seq(1483360938,by=1700,length=10),
et=seq(1483362938,by=1700,length=10),
store=c(rep("gap",5),rep("starbucks",5)),
zip=c(94000,94000,94100,94100,94200,94000,94000,94100,94100,94200),
store_id=seq(50,59))
# orders the time
Milu,
To get the quickest help and keep everyone in the loop, you should cc the
help list.
I don't understand your question. If you want the mean GDP use the mean
function, if you want the sum of the GDP use the sum function.
Jean
On Fri, Jan 13, 2017 at 5:33 PM, Miluji Sb wrote:
> Dear Jean
>>> same result can be achieved by
>>>
>>> dat.ag<-aggregate(dat[ , c("DCE","DP")], by= list(dat$first.Name,
>>> dat$Name, dat$Department) , "I")
>>>
>>> Sorting according to the first row seems to be quite tricky. You could
>>> probably get closer by using some combination of split and order and
>>> arranging back chunks of data
>>>
>>> ooo1<-order(split(dat$DCE,interaction(dat$first.Name, dat$Name,
>>> dat$Department, drop=T))[[1]])
>>> data.frame(sapply(split(dat$DCE,interaction(dat$first.Name, dat$Name,
>>> dat$Department, drop=T)), rbind))[ooo1,]
>>>   Ancient.Nation.QLH Amish.Wives.TAS Auction.Videos.YME
>>> 2                 NA              NA                 NA
>>> 4               0.28              NA                 NA
>>> 1               0.54            0.59               0.57
>>> 3               0.54            0.59               0.57
>
> however I wonder why the order according to the first row is necessary if all
> NAs are on correct positions?
>
> Cheers
> Petr
>
> > -----Original Message-----
> > From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David
> > Winsemius
> > Sent: Friday, November 18, 2016 9:30 AM
> > To: Karim Mezhoud
> > Cc: r-help@r-project.org
> > Subject: R
> On Nov 17, 2016, at 11:27 PM, Karim Mezhoud wrote:
>
> Dear all,
>
> the dat has missing values NA,
>
>    first.Name   Name Department  DCE   DP       date
> 5     Auction Videos        YME 0.57 0.56 2013-09-30
> 18      Amish  Wives        TAS 0.59 0.56 2013-09-30
> 34    Ancient  Natio
38 3.2 S2 A
> S2B 22 3.2 S2 B
>
> David C
>
> -----Original Message-----
> From: Gang Chen [mailto:gangch...@gmail.com]
> Sent: Wednesday, August 24, 2016 2:51 PM
> To: David L Carlson
> Cc: r-help mailing list
> Subject: Re: [R] aggregate
>
Subject: Re: [R] aggregate
Thanks again for patiently offering great help, David! I just learned
dput() and paste0() now. Hopefully this is my last question.
Suppose a new dataframe is as below (one more numeric column):
myData <- structure(list(X = c(1, 2, 3, 4, 5, 6, 7, 8), Y = c(8, 7, 6,
5
paste0() function:
>
>> sapply(split(myData, paste0(myData$S, myData$Z)), function(x) crossprod(x[, 1], x[, 2]))
> S1A S1B S2A S2B
> 22 38 38 22
>
> David C
>
> -----Original Message-----
> From: Gang Chen [mailto:gangch...@gmail.com]
> Sent: Wedne
David C
-----Original Message-----
From: Gang Chen [mailto:gangch...@gmail.com]
Sent: Wednesday, August 24, 2016 11:56 AM
To: David L Carlson
Cc: Jim Lemon; r-help mailing list
Subject: Re: [R] aggregate
Thanks a lot, David! I want to further expand the operation a little
bit. With a ne
], x[, 2])))
> Z CP
> A A 10
> B B 10
>
> David C
>
>
> -----Original Message-----
> From: Gang Chen [mailto:gangch...@gmail.com]
> Sent: Wednesday, August 24, 2016 10:17 AM
> To: David L Carlson
> Cc: Jim Lemon; r-help mailing list
> Subject: Re: [R] aggregate
Sent: Wednesday, August 24, 2016 10:17 AM
To: David L Carlson
Cc: Jim Lemon; r-help mailing list
Subject: Re: [R] aggregate
Thank you all for the suggestions! Yes, I'm looking for the cross
product between the two columns of X and Y.
A follow-up question: what is a nice way to merge the o
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Jim Lemon
Sent: Tuesday, August 23, 2016 6:02 PM
To: Gang Chen; r-help mailing list
Subject: Re: [R] aggregate
Hi Gang Chen,
If I have the right idea:
for(zval in levels(myData$Z))
crossprod(as.matrix(myData[myData$Z==zval,c("X","Y")]))
Jim
On Wed, Aug 24, 2016 at 8:03 AM, Gang Chen wrote:
> This is a simple question: With a dataframe like the following
>
> myData <- data.frame(X=c(1, 2, 3, 4), Y=c(4, 3
> On Aug 23, 2016, at 3:03 PM, Gang Chen wrote:
>
> This is a simple question: With a dataframe like the following
>
> myData <- data.frame(X=c(1, 2, 3, 4), Y=c(4, 3, 2, 1), Z=c('A', 'A', 'B',
> 'B'))
>
> how can I get the cross product between X and Y for each level of
> factor Z? My difficu
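David's split()/sapply() approach from later in the thread, restated as a self-contained sketch for the original two-column example:

```r
myData <- data.frame(X = c(1, 2, 3, 4), Y = c(4, 3, 2, 1),
                     Z = c('A', 'A', 'B', 'B'))
# one X'Y cross product per level of Z, simplified to a named vector
cp <- sapply(split(myData, myData$Z), function(d) crossprod(d$X, d$Y))
# cp["A"] is 1*4 + 2*3 = 10; cp["B"] is 3*2 + 4*1 = 10
```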
Hi Jeff,
many thanks, that one is the Speedy Gonzales of them all. Can also do some FUN
stuff.
aggregate.nx.ny.array.aperm <- function( dta, nx = 2, ny = 2, FUN=colMeans, ...
) {
# number of rows in result
nnr <- nrow( dta ) %/% ny
# number of columns in result
nnc <- ncol( dta ) %/% nx
#
If you don't need all that FUN flexibility, you can get this done way
faster with the aperm and colMeans functions:
tst <- matrix( seq.int( 1440 * 360 )
, ncol = 1440
, nrow = 360
)
tst.small <- matrix( seq.int( 8 * 4 )
, ncol = 8
For the record, the array.apply code can be fixed as below, but then it is
slower than the expand.grid version.
aggregate.nx.ny.array.apply <- function(dta,nx=2,ny=2, FUN=mean,...)
{
a <- array(dta, dim = c(ny, nrow( dta ) %/% ny, nx, ncol( dta ) %/% nx))
apply( a, c(2, 4), FUN, ... )
}
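For reference, a self-contained check of the fixed function above on a small matrix (the test matrix is invented; each output entry should be the mean of one ny-by-nx block):

```r
aggregate.nx.ny.array.apply <- function(dta, nx = 2, ny = 2, FUN = mean, ...) {
  # fold the matrix into a 4-d array: (row-in-block, block-row,
  # col-in-block, block-col), then reduce over the within-block dims
  a <- array(dta, dim = c(ny, nrow(dta) %/% ny, nx, ncol(dta) %/% nx))
  apply(a, c(2, 4), FUN, ...)
}

tst <- matrix(seq.int(8 * 4), nrow = 4, ncol = 8)
out <- aggregate.nx.ny.array.apply(tst, nx = 2, ny = 2)
```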
--
Hi all,
thanks for the suggestions, I did some timing tests, see below.
Unfortunately the aggregate.nx.ny.array.apply does not produce the expected
result.
So the fastest seems to be the aggregate.nx.ny.expand.grid, though the double
for loop is not that much slower.
many thanks
Peter
> tst=m
> On Jul 27, 2016, at 12:02 PM, Jeff Newmiller wrote:
An alternative (more compact, not necessarily faster, because apply is still a
for loop inside):
f <- function( m, nx, ny ) {
# redefine the dimensions of my
a <- array( m
, dim = c( ny
, nrow( m ) %/% ny
, ncol( m ) %/% nx )
)
This should be faster. It uses apply() across the blocks.
> ilon <- seq(1,8,nx)
> ilat <- seq(1,4,ny)
> cells <- as.matrix(expand.grid(ilat, ilon))
> blocks <- apply(cells, 1, function(x) tst[x[1]:(x[1]+1), x[2]:(x[2]+1)])
> block.means <- colMeans(blocks)
> tst_2x2 <- matrix(block.means, 2, 4)
>
Dear Jean,
Thank you so much for your reply and the solution. This does work. I was
wondering, is this similar to 'rasterFromXYZ'? Thanks again!
Sincerely,
Milu
On Fri, Jul 22, 2016 at 3:06 PM, Adams, Jean wrote:
> Milu,
>
> Perhaps an approach like this would work. In the example below, I
>
Milu,
Perhaps an approach like this would work. In the example below, I
calculate the mean GDP for each 1 degree by 1 degree.
temp$long1 <- floor(temp$longitude)
temp$lat1 <- floor(temp$latitude)
temp1 <- aggregate(GDP ~ long1 + lat1, temp, mean)
  long1 lat1        GDP
1   -69  -55 0.90268640
Hi David,
Thank you so much for your help and others. Here is the code.
balok <- read.csv("G:/A_backup 11 mei 2015/DATA (D)/1 Universiti Malaysia
Pahang/ISM-3 2016 UM/Data/Hourly Rainfall/balok2.csv",header=TRUE)
head(balok, 10); tail(balok, 10)
str(balok)
## Introduce NAs for
balok$Rain.mm2 <-
> On Jul 13, 2016, at 3:21 AM, roslinazairimah zakaria
> wrote:
>
> Dear David,
>
> I got your point. How do I remove the data that contain "0.0?".
>
> I tried : balok <- cbind(balok3[,-5], balok3$Rain.mm[balok3$Rain.mm==0.0?] <-
> NA)
If you had done as I suggested, the items with factor
Behalf Of
> roslinazairimah zakaria
> Sent: Wednesday, July 13, 2016 12:22 PM
> To: David Winsemius
> Cc: r-help mailing list
> Subject: Re: [R] Aggregate rainfall data
use `gsub()` after the `as.character()` conversion to remove
everything but valid numeric components from the strings.
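A sketch of that suggestion (the sample strings are invented; the real column is a character/factor vector like balok3$Rain.mm with stray entries such as "0.0?"):

```r
raw <- c("12.5", "0.0?", "7.0")   # hypothetical rain readings
# keep only digits, decimal point and minus sign before converting,
# so "0.0?" becomes 0 instead of NA
clean <- as.numeric(gsub("[^0-9.-]", "", as.character(raw)))
```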
On Wed, Jul 13, 2016 at 6:21 AM, roslinazairimah zakaria
wrote:
Dear David,
I got your point. How do I remove the data that contain "0.0?".
I tried : balok <- cbind(balok3[,-5], balok3$Rain.mm[balok3$Rain.mm==0.0?]
<- NA)
However, all of the Rain.mm column became NA.
day month year Time balok3$Rain.mm[balok3$Rain.mm == "0.0?"] <- NA
1 30 7 200
> On Jul 12, 2016, at 3:45 PM, roslinazairimah zakaria
> wrote:
>
> Dear R-users,
>
> I have these data:
>
> head(balok, 10); tail(balok, 10)
>Date Time Rain.mm
> 1 30/7/2008 9:00:00 0
> 2 30/7/2008 10:00:00 0
> 3 30/7/2008 11:00:00 0
> 4 30/7/2008 12:00:00
> On May 1, 2016, at 9:30 AM, Miluji Sb wrote:
>
> Dear Dennis,
>
> Thank you for your reply. I can use the dplyr/data.table packages to
> aggregate - its the matching FIPS codes to their states that I am having
> trouble. Thanks again.
So post some example code that demonstrates you paid atten
Dear Dennis,
Thank you for your reply. I can use the dplyr/data.table packages to
aggregate - its the matching FIPS codes to their states that I am having
trouble. Thanks again.
Sincerely,
Milu
On Sun, May 1, 2016 at 6:20 PM, Dennis Murphy wrote:
> Hi:
>
> Several such packages exist. Given t
Hello,
I'm cc'ing R-Help.
Sorry but your question was asked 3.5 years ago, I really don't
remember it. Can you please post a question to R-Help, with a
reproducible example that describes your problem?
Rui Barradas
Quoting catalin roibu:
> Dear Rui,
>
> You helped me some time ago with
Using column names where you used column numbers would work:
example <- data.frame(
check.names = FALSE,
Nuclei = c(133L, 96L, 62L, 60L),
`Positive Nuclei` = c(96L, 70L, 52L, 50L),
Slide = factor(c("A1", "A1", "A2", "A2"), levels = c("A1", "A2")))
aggregate(example["Nuclei"], by=ex
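Completing that idea as a sketch (the by= continuation is my assumption of how the truncated line ended, not necessarily Bill's exact code); selecting columns by name also copes with the space in "Positive Nuclei":

```r
example <- data.frame(
  check.names = FALSE,
  Nuclei = c(133L, 96L, 62L, 60L),
  `Positive Nuclei` = c(96L, 70L, 52L, 50L),
  Slide = factor(c("A1", "A1", "A2", "A2"), levels = c("A1", "A2")))

# sum both count columns per slide, naming columns rather than numbering them
res <- aggregate(example[c("Nuclei", "Positive Nuclei")],
                 by = example["Slide"], FUN = sum)
```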
So that's how that works! Thanks.
On Fri, Jan 22, 2016 at 1:32 PM, Joe Ceradini wrote:
> Does this do what you want?
>
> aggregate(Nuclei ~ Slide, example, sum)
>
> On Fri, Jan 22, 2016 at 12:20 PM, Ed Siefker wrote:
>>
>> Aggregate does the right thing with column names when passing it
>> nume
Does this do what you want?
aggregate(Nuclei ~ Slide, example, sum)
On Fri, Jan 22, 2016 at 12:20 PM, Ed Siefker wrote:
> Aggregate does the right thing with column names when passing it
> numerical coordinates.
> Given a dataframe like this:
>
>   Nuclei Positive Nuclei Slide
> 1    133
On Fri, Jan 22, 2016 at 01:20:59PM -0600, Ed Siefker wrote:
> Aggregate does the right thing with column names when passing it
> numerical coordinates.
> Given a dataframe like this:
>
>   Nuclei Positive Nuclei Slide
> 1    133              96    A1
> 2     96              70    A1
> 3     62
Hi Jim,
Thanks a lot! It works now. I didn't remember how to access the
datetimes in w10min. names(...) is the solution!
Rolf
Jim Lemon wrote:
Hi Rolf,
If I get the above right, perhaps if you change the names of w10min after
applying the calculation:
raindata<-data.frame(value=round(runif(60,0,
Thanks everybody!
On Thu, Sep 17, 2015 at 6:57 PM, Rui Barradas wrote:
> In package reshape2
>
> Hope this helps,
>
> Rui Barradas
In package reshape2
Hope this helps,
Rui Barradas
On 17-09-2015 17:03, Frank Schwidom wrote:
Hi
where can I find 'melt' and 'dcast'?
Regards
On Thu, Sep 17, 2015 at 08:22:10AM +, PIKAL Petr wrote:
Hi
-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] O
Hi
where can I find 'melt' and 'dcast'?
Regards
On Thu, Sep 17, 2015 at 08:22:10AM +, PIKAL Petr wrote:
> Hi
>
> > -----Original Message-----
> > From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Kai Mx
> > Sent: Wednesday, September 16, 2015 10:43 PM
> > To: r-help mailing
Hi
res <- sapply( df1[ , -1], function( x) table(x)[as.character( 0:5)])
rownames( res) <- paste( sep='', 'result', 0:5)
res[ is.na( res)] <- 0
res
        item1 item2 item3 item4 item5
result0     1     0     1     1     0
result1     1     2     0     0     0
result2     1     2     1     1
Hi
> -----Original Message-----
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Kai Mx
> Sent: Wednesday, September 16, 2015 10:43 PM
> To: r-help mailing list
> Subject: [R] aggregate counting variable factors
>
> Hi everybody,
>
> From a questionnaire, I have a dataset like t
Hi,
Please use dput() to show the datasets, as one of the rows (Id "four") in the
first dataset didn't show 11 elements.
df1 <- structure(list(Id = c("one", "one", "two", "two", "three", "three",
"three", "four", "five", "five"), col1 = c("a1", NA, "b1", "b1",
NA, NA, "c1", "d1", "e1", NA), col2
Thanks, Rui.
It works great.
Atem.
On Saturday, April 5, 2014 4:46 AM, Rui Barradas wrote:
Hello,
Maybe the following will do.
dat <- structure(...)
aggregate(dat[5:8], dat[c(1, 2, 4)], FUN = mean)
Hope this helps,
Rui Barradas
On 05-04-2014 06:37, Zilefac Elvis wrote:
Hi,
I have daily data arranged by date and site. Keeping the number of columns as
there are, I will like t
Hi,
I have daily data arranged by date and site. Keeping the number of columns as
they are, I would like to aggregate (FUN=mean) from daily to monthly the
following data (only part is shown here), which starts in 1971 and ends in 1980.
structure(list(Year = c(1971, 1971, 1971, 1971, 1971, 1971,
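Not Rui's code, just a minimal sketch of the requested daily-to-monthly aggregation with invented column names (the real data has Year/Month/Day plus site columns); aggregate() keeps the grouping columns in the result:

```r
daily <- data.frame(Year = 1971, Month = rep(1:2, each = 3),
                    Site = "S1", Value = c(1, 2, 3, 10, 20, 30))
# one mean per Year/Month/Site combination
monthly <- aggregate(Value ~ Year + Month + Site, data = daily, FUN = mean)
```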
You have been around long enough that we should not have to tell you how to
provide data in a reproducible manner... read ?dput.