You could use:
library(dplyr)
library(tidyr)
x.df %>% group_by(Year, Group, Eye_Color) %>% summarize(n=n()) %>%
spread(Eye_Color,n, fill=0)
Source: local data frame [6 x 5]
   Year Group blue brown green
1  2000     1    2     1     0
2  2000     2    0     0     2
3  2001     1    1     0     0
4  2001     2    1     1     0
5  2001     3    1     0     0
6  2002     1    1     0     0
Use one of the various functions out there that generate markdown or html
tables, and set the chunk option results='asis'.
Google for "kable", "xtable", "pander", or "ascii"... there are probably others
as well.
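As a minimal sketch of the kable route (assuming knitr is installed): kable() returns the markdown source of the table, which a chunk with results='asis' passes through to the rendered document.

```r
library(knitr)
# kable() produces the markdown table source; inside an R Markdown chunk
# with results='asis' this text is rendered as an actual table
kable(head(mtcars[, 1:3]), format = "markdown")
```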
---
Jeff Newmiller
Thank you everyone for your help so far.
I am still working on the problem of getting a merged new dataframe which fills
in new rows with NA values for each year that is missing, for plotting with gaps
(in the example the item BARTLEY: years 1984 to 1987 should be filled with a row
containing NA v
Hello
I have the following chunk of code within a rmarkdown document:
```{r infoA, echo=FALSE, tidy=FALSE}
print(pAinfo,quote=FALSE,justify="center")
```
pAinfo is a data.frame with 21 rows and 4 columns. Unfortunately the resulting
html (or pdf) file does not show all the four columns together (t
library(reshape2)
?dcast
Nice example. So nice that it looks like it could be homework... thus the
pointer to docs rather than a full solution. Please read the Posting Guide, and
note that HTML email is not necessarily a what-you-see-is-what-we-see format,
so you should post in plain text.
One way to accomplish this is to assign a new environment to the
formula, an environment which inherits from the formula's original
environment but one that you can add things to without affecting the
original environment. Also, since you do this in a function only the
copy of the formula in the f
Hi all,
I need to loop over "lm" within a function using "weights". For example:
mydata = data.frame(y=rnorm(100, 500, 100), x= rnorm(100),
group=rep(c(0,1), 50), myweight=1/runif(100))
reg.by.wt <- function(formula, wt, by, data) {
if(missing(by)) {
summary(lm(formula=formula, data=data,
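The environment trick described in the reply above might be sketched like this (a sketch with assumed names, not the full reg.by.wt solution): lm() evaluates its weights argument first in data and then in the formula's environment, so attaching the weights to a child environment of the formula makes them visible without touching the original environment.

```r
mydata <- data.frame(y = rnorm(20, 500, 100), x = rnorm(20))
f <- y ~ x
# new environment that inherits from the formula's original environment
env <- new.env(parent = environment(f))
env$w <- 1/runif(20)   # add the weights here, not to the original environment
environment(f) <- env
# lm() now finds 'w' via the formula's environment
fit <- lm(f, data = mydata, weights = w)
```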
If I have a dataframe x.df as follows:
> x.df <- data.frame(Year = c(2000, 2000, 2000, 2000, 2000, 2001, 2001,
2001, 2001, 2002), Group = c(1, 1, 1, 2, 2, 1, 2, 2, 3, 1), Eye_Color =
c("blue", "blue", "brown", "green", "green", "blue", "brown", "blue",
"blue", "blue"))
> x.df
Year Group Eye_Col
Do you know how to extract some rows of a data.frame? A short answer
is with subscripts, either integer,
first10 <- 1:10
dFirst10 <- d[first10, ] # I assume your data.frame is called 'd'
or logical
plus4 <- d[, "Col_4"] == "+"
dPlus4 <- d[ plus4, ]
If you are not familiar with that sor
On Aug 1, 2014, at 1:58 PM, Stephen HK Wong wrote:
> Dear ALL,
>
> I have a dataframe that contains 4 columns and several tens of millions of
> rows like below! I want to extract, say, 1 million rows at random; can you
> tell me how to do that in R using base packages? Many thanks
>
> Col_1
Dear ALL,
I have a dataframe that contains 4 columns and several tens of millions of rows
like below! I want to extract, say, 1 million rows at random; can you tell me
how to do that in R using base packages? Many thanks
Col_1 Col_2   Col_3   Col_4
chr1  3000215 3000250 -
chr1  3000909 3
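In base R this kind of random subset is usually drawn with sample() on the row indices. A sketch on a small mock-up of the data (scale the sizes up to 1e6 rows for the real dataframe):

```r
set.seed(1)
# mock-up standing in for the real multi-million-row dataframe
d <- data.frame(Col_1 = paste0("chr", sample(1:22, 1e4, replace = TRUE)),
                Col_2 = sample.int(3e6, 1e4),
                Col_3 = sample.int(3e6, 1e4),
                Col_4 = sample(c("+", "-"), 1e4, replace = TRUE))
# sample row indices without replacement, then subset
idx <- sample(nrow(d), 1e3)
d.sub <- d[idx, ]
```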
I think it may be time for you to rethink your process. Yes there are
ways to do what you are asking, but when you start wanting to combine
graphs, tables, r output and descriptions and annotations then it is
time to look into tools like knitr. With knitr you can create a
template file with R cod
How about:
x <- as.numeric(sub("^S([0-9]+):([0-9]+)$", "\\1", xx))
y <- as.numeric(sub("^S([0-9]+):([0-9]+)$", "\\2", xx))
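Applied to input of the shape the thread implies (the exact xx values are an assumption, reconstructed from the digits shown in the str_extract output later in the thread):

```r
xx <- c("S24:57", "S24:86", "S24:119")  # assumed shape of the input vector
x <- as.numeric(sub("^S([0-9]+):([0-9]+)$", "\\1", xx))  # digits before ':'
y <- as.numeric(sub("^S([0-9]+):([0-9]+)$", "\\2", xx))  # digits after ':'
x  # 24 24 24
y  # 57 86 119
```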
2014-08-01 16:46 GMT+02:00 Doran, Harold :
> I have done an embarrassingly bad job using a mixture of gsub and strsplit
> to solve a problem. Below is sample code showing
On Aug 1, 2014, at 9:46 AM, Doran, Harold wrote:
> I have done an embarrassingly bad job using a mixture of gsub and strsplit to
> solve a problem. Below is sample code showing what I have to start with (the
> vector xx) and I want to end up with two vectors x and y that contain only
> the dig
Forgot about as.numeric.
sapply(str_extract_all(xx, perl('(?<=[A-Z]|\\:)\\d+')),as.numeric)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 24 24 24 24 24 24
[2,] 57 86 119 129 138 163
On Friday, August 1, 2014 10:59 AM, arun wrote:
You could try:
library(stringr)
simplify
You could try:
library(stringr)
simplify2array(str_extract_all(xx, perl('(?<=[A-Z]|\\:)\\d+')))
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] "24" "24" "24" "24" "24" "24"
[2,] "57" "86" "119" "129" "138" "163"
A.K.
On Friday, August 1, 2014 10:49 AM, "Doran, Harold" wrote:
I have done an
On 01/08/2014 9:50 AM, Prof Brian Ripley wrote:
On 01/08/2014 14:31, Duncan Murdoch wrote:
> On 01/08/2014 7:48 AM, Roy Sasson wrote:
>> hello R community,
>> i am trying to install rqpd package on windows, using the following
>> command:
>>
>> install.packages("rqpd",repos="http://R-Forge.R-proj
On Fri, Aug 1, 2014 at 10:46 AM, Doran, Harold wrote:
> I have done an embarrassingly bad job using a mixture of gsub and strsplit to
> solve a problem. Below is sample code showing what I have to start with (the
> vector xx) and I want to end up with two vectors x and y that contain only
> the
Here's another approach:
# First put the data in a format that is easier to transmit using dput():
dta <- structure(list(Country = structure(c(1L, 3L, 2L, 1L, 3L, 2L,
1L, 2L, 1L, 3L, 2L, 1L, 3L), .Label = c("AE", "CN", "DE"), class = "factor"),
Product = c(1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 1L,
I have done an embarrassingly bad job using a mixture of gsub and strsplit to
solve a problem. Below is sample code showing what I have to start with (the
vector xx) and I want to end up with two vectors x and y that contain only the
digits found in xx.
Any regex users with advice most welcome
Use ?split()
split(dat[,-4], dat$Year_Month) #dat is the dataset.
A.K.
Country Product Price Year_Month
AE 1 20 201204
DE 1 20 201204
CN 1 28 201204
AE 2 28 201204
DE 2
On Fri, Aug 1, 2014 at 6:41 AM, Lingyi Ma wrote:
> I have the following data set:
>
> Country Product Price Year_Month
> AE 1 20 201204
> DE 1 20 201204
> CN 1 28 201204
> AE 2 28 201204
>
Please remember the 'reply all' for the r-help page.
First Question: How can i use Pearson correlation with dichotomous data? i
want to use a correlation between dichotomous variables like spearman
correlation in ordered categorical variables?
cor(variable1, variable2, method = "pearson")
Seco
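On the first question: for 0/1 coded variables, Pearson's r as computed by cor() equals the phi coefficient, so the call above is legitimate for dichotomous data. A small check on made-up vectors:

```r
a <- c(1, 1, 0, 0, 1, 0, 1, 0)
b <- c(1, 0, 0, 0, 1, 0, 1, 1)
r <- cor(a, b, method = "pearson")   # Pearson on 0/1 data
# same value from the 2x2 table (phi coefficient)
tab <- table(a, b)
phi <- (tab[2, 2] * tab[1, 1] - tab[2, 1] * tab[1, 2]) /
       sqrt(prod(rowSums(tab)) * prod(colSums(tab)))
```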
On 01/08/2014 14:31, Duncan Murdoch wrote:
On 01/08/2014 7:48 AM, Roy Sasson wrote:
hello R community,
i am trying to install rqpd package on windows, using the following
command:
install.packages("rqpd", repos="http://R-Forge.R-project.org")
however, i get the following message:
Warning: una
It is possible to do without loops if you start by calculating the totals. Then
is just aggregating and merging data.
Best regards,
Thierry
set.seed(21)
n.country <- 5
average.price <- runif(n.country, max = 200)
price <- expand.grid(
Product = 1:10,
Country = factor(LETTERS[seq_len(n.coun
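A generic base-R sketch of the "totals first, then aggregate and merge" idea (mock data; column names are assumptions, not Thierry's actual code):

```r
price <- data.frame(Country = rep(c("AE", "DE", "CN"), 2),
                    Product = rep(1:2, each = 3),
                    Price   = c(20, 20, 28, 28, 28, 30))
# total price per country ...
totals <- aggregate(Price ~ Country, data = price, FUN = sum)
names(totals)[2] <- "Total"
# ... merged back onto the original rows
price2 <- merge(price, totals, by = "Country")
```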
On 01/08/2014 7:48 AM, Roy Sasson wrote:
hello R community,
i am trying to install rqpd package on windows, using the following command:
install.packages("rqpd", repos="http://R-Forge.R-project.org")
however, i get the following message:
Warning: unable to access index for repository
http://R-
Hi
I do not see any solution without loops, but maybe others will find one.
I think you can do it in one loop. The best structure for the result will be a
list. In each cycle you will compute a matrix with NA on the diagonal
mat<-matrix(1,nrow=number of items, ncol=number of items)
diag(mat) <- NA
apply(price chunk
Hi,
I have been using the R/fast99 library to work through the Extended FAST paper
by Saltelli et al 1999 [1].
In trying to repeat some examples with R, I have encountered the following
'bug' in the implementation.
Synopsis: tell.fast99 gives NA first order sensitivity indices when 'M *
omega[
hello R community,
i am trying to install rqpd package on windows, using the following command:
install.packages("rqpd", repos="http://R-Forge.R-project.org")
however, i get the following message:
Warning: unable to access index for repository
http://R-Forge.R-project.org/bin/windows/contrib/2.1
I have the following data set:
Country Product Price Year_Month
AE 1 20 201204
DE 1 20 201204
CN 1 28 201204
AE 2 28 201204
DE 2 28 201204
CN 2
I wonder if anyone has written some additional R code to perform
weighted logistic regression in the way of the SAS PROC LOGISTIC WEIGHT
statement. I want to weight the sample using a vector of probabilities
generated from Dirichlet distribution.
It is known that the R function glm has a WEIGHT op
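For reference, glm's weights argument accepts a numeric vector, so one round of the Dirichlet-weighted fit might be sketched as below (a sketch on mock data; quasibinomial sidesteps the non-integer-successes warning that family = binomial would give with fractional weights):

```r
set.seed(7)
d <- data.frame(y = rbinom(200, 1, 0.4), x = rnorm(200))
# Dirichlet(1,...,1) weights: iid exponentials normalized to sum to 1,
# then rescaled so they sum to n
w <- rexp(nrow(d))
w <- w / sum(w) * nrow(d)
fit <- glm(y ~ x, family = quasibinomial, data = d, weights = w)
```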
You should check out the animint package.
https://github.com/tdhock/animint
On Mon, Jul 28, 2014 at 5:48 PM, Shi, Tao wrote:
> hi list,
>
> I'm comparing the changes of ~100 analytes in multiple treatment
> conditions. I plotted them in several different xy scattter plots. It
> would be nic
On Fri, 1 Aug 2014 07:25:05 AM barbara tornimbene wrote:
> HI.
> I have a set of disease outbreak data. Each observation has a
> location (spatial coordinates) and a start date. Outbreaks that occur
> in the same location within a two-week period have to be merged.
> Basically I need to delete
On 01.08.2014 07:28, arun wrote:
Try:
If dat is the dataset.
library(stringr)
res <- str_extract(dat$Gene.Symbol, perl('[[:alnum:]]+(?= \\/)'))
res[!is.na(res)]
#[1] "CDH23"
Or without additional packages and if you want to keep all information
from the other rows of your dat
Hi
Maybe others will disagree, but for this type of task I find a for loop better
than sapply.
for (i in seq_along(ind)) {
  if (<there are more than 3 date items>) {  # pseudo-condition from the original
    postscript(ind[i])
    # ... do all the plotting ...
    dev.off()
  }
}
If you want to plot with gaps you need to add all relevant YEARs for x axis
with mis
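The padding idea described above might be sketched as follows (mock data; merging against the full year range inserts the NA rows that make plot() leave gaps):

```r
d <- data.frame(Year = c(1980, 1981, 1983, 1985), value = c(1, 2, 4, 3))
# merge against the full year range; missing years get NA values
full <- merge(data.frame(Year = min(d$Year):max(d$Year)), d, all.x = TRUE)
# plot(full$Year, full$value, type = "l") would now break at the NA years
```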
HI.
I have a set of disease outbreak data. Each observation has a location
(spatial coordinates) and a start date.
Outbreaks that occur in the same location within a two-week period have to be
merged.
Basically I need to delete duplicates that have the same spatial coordinates and
start dates co
Hi Dario,
The maintainer of that package is:
Benjamin Auder
and I have copied him on this message. Usually it is best to include
the maintainer in this sort of situation.
Jim
On Fri, 1 Aug 2014 01:00:21 AM Dario Strbenac wrote:
> Hello,
>
> I would like to provide a helpful bug report to the
Read ?maintainer and the Posting Guide mentioned below.
---
Jeff Newmiller