Dear Gunter / Heiberger,
Thanks for the help. This is what I was looking for:
> ... and here is a non-dplyr solution:
>
>> z <-gsub("[^[:digit:]]"," ",dd$Lower)
>
>> sapply(strsplit(z," +"),function(x)sum(as.numeric(x),na.rm=TRUE))
> [1] 105 67 60 100 80
And that would explain why one coul
Hi Michael,
At a guess, try this:
iqr<-function(x) {
return(paste(round(quantile(x,0.25),0),round(quantile(x,0.75),0),sep="-"))
}
.col3_Range=iqr(data$tenure)
Jim
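Not part of Jim's message: a minimal runnable sketch of that helper inside the
ddply() call from the question; the toy data frame and the 'groupColumn'/'tenure'
names below are placeholders, not the poster's real data.

library(plyr)

iqr <- function(x) {
  paste(round(quantile(x, 0.25), 0), round(quantile(x, 0.75), 0), sep = "-")
}

## toy data standing in for the real 'data' object
d <- data.frame(groupColumn = rep(c("a", "b"), each = 50),
                tenure = rpois(100, 24))

ddply(d, ~groupColumn, summarise, col3_Range = iqr(tenure))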
On Tue, Apr 19, 2016 at 11:15 AM, Michael Artz wrote:
> Hi,
> I am trying to show an interquartile range while grouping values
Hi Si Jie,
Again, please send questions to the list, not me.
Okay, I may have worked out what you are doing. The program runs and
produces what I would expect in the rightmost columns of the result
"g".
You are storing the number of each test for which the p value is less
than 0.05. It looks to m
Hi,
I am trying to show an interquartile range while grouping values using
the function ddply(). So my function call now is like
groupedAll <- ddply(data
,~groupColumn
,summarise
,col1_mean=mean(col1)
,col2_mode=Mode(col2) #Fun
Hi Jeem,
First, please send questions like this to the help list, not me.
I assume that you are in a similar position to sjtan who has been
sending almost exactly the same questions.
The problem is not in the loops (which look rather familiar to me) but
in your initial assignments at the top. For
Hi Ansley,
Without your data file (or a meaningful subset) we can only guess, but
you may be trying to define groups on the columns rather than the rows
of the data set. Usually rows represent cases and each case must have
a value for the grouping variable.
Jim
On Tue, Apr 19, 2016 at 6:33 AM, A
Dear All,
Many thanks for bailing me out.
Ogbos
On Apr 18, 2016 9:07 PM, "David Winsemius" wrote:
>
> > On Apr 18, 2016, at 10:44 AM, Ogbos Okike
> wrote:
> >
> > Dear ALL,
> > Thank you so much for your contributions.
> > I have made some progress. Below is a simple script I gleaned from
> > yo
Hello,
Error in tx %*% comb : non-conformable arguments
Suggestions greatly appreciated. I am a beginner and this is my first time
posting.
I would like to get the summary for indicator species analysis, using
package indicspecies with multipatt. I am getting errors, I believe, due to
my dat
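A quick illustration (not taken from the original post) of what triggers this
error: %*% needs the column count of the left matrix to match the row count of
the right one, which likely means the objects passed to multipatt have
mismatched dimensions.

a <- matrix(1:6, nrow = 2, ncol = 3)
b <- matrix(1:6, nrow = 2, ncol = 3)
try(a %*% b)   # Error in a %*% b : non-conformable arguments (3 columns vs 2 rows)
a %*% t(b)     # conformable: 2x3 times 3x2 gives a 2x2 result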
You can always add those names to the list: is this what you are after?
> example.names <- c("con1-1-masked-bottom-green.tsv",
"con1-1-masked-bottom-red.tsv"
+ , "con1-1-masked-top-green.tsv","con1-1-masked-top-red.tsv")
> example.list <- strsplit(example.names, "-")
> names(example.l
They aren't being stored; they are being generated on the fly. You can
create the same names using make.names()
example.names <- c("con1-1-masked-bottom-green.tsv",
"con1-1-masked-bottom-red.tsv", "con1-1-masked-top-green.tsv",
"con1-1-masked-top-red.tsv")
example.list <- strsplit(example.names,
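A sketch of the rest of that idea (the naming step is an assumption, since the
message is cut off here): as.data.frame() on an unnamed list generates syntactic
column names on the fly, and make.names() lets you build equivalent names yourself.

example.names <- c("con1-1-masked-bottom-green.tsv", "con1-1-masked-bottom-red.tsv",
                   "con1-1-masked-top-green.tsv", "con1-1-masked-top-red.tsv")
example.list <- strsplit(example.names, "-")
example.df <- as.data.frame(example.list)        # column names are generated from the contents
names(example.df)
names(example.df) <- make.names(example.names)   # or assign sanitized versions of the file names
names(example.df)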
Hi,
I am trying to add a vertical arrow (from top to bottom or from bottom
to top) to a time series plot using ggplot2 and xts. It seems that the
vertical line command "geom_vline" does not work for this purpose (correct
me if I am wrong). I tried the command "geom_segment" as follows, but I got
a
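A hedged sketch of one way to do this (the dates, values, and arrow position
below are invented): draw the arrow with annotate("segment", ...) plus
grid::arrow(), since geom_vline() spans the whole panel and has no arrowhead option.

library(ggplot2)
library(grid)   # for arrow() and unit()

d <- data.frame(date = seq(as.Date("2016-01-01"), by = "day", length.out = 30),
                y = cumsum(rnorm(30)))

ggplot(d, aes(date, y)) +
  geom_line() +
  annotate("segment",
           x = as.Date("2016-01-15"), xend = as.Date("2016-01-15"),
           y = max(d$y), yend = min(d$y),                 # top-to-bottom arrow
           arrow = arrow(length = unit(0.25, "cm")))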
Please don't crosspost. You already posted this question to
r-sig-mixedmodels which is the appropriate list for your question.
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assuranc
I'm doing some string manipulation on a vector of file names, and noticed
something curious. When I strsplit the vector, I get a list of
character vectors.
The list is numbered, as lists are. When I cast that list as a data
frame with 'as.data.frame()', the resulting columns have names derived
fr
Hi,
I am Olga Viedma. I am running a zero-inflated negative binomial (ZINB)
multi-level model using the MCMCglmm package. I have a question: can I use the "Liab"
outputs as fitted data, instead of the predicted values from "predict"? The
liab outputs fit very well with the observed data, whereas the
> On Apr 18, 2016, at 10:44 AM, Ogbos Okike wrote:
>
> Dear ALL,
> Thank you so much for your contributions.
> I have made some progress. Below is a simple script I gleaned from
> your kind responses:
> Sys.setenv(TZ="Etc/GMT")
> dates <- c("02/27/92", "02/27/92", "01/14/92", "02/28/92", "02/01/
Dear ALL,
Thank you so much for your contributions.
I have made some progress. Below is a simple script I gleaned from
your kind responses:
Sys.setenv(TZ="Etc/GMT")
dates <- c("02/27/92", "02/27/92", "01/14/92", "02/28/92", "02/01/92")
times <- c("23:0:0", "22:0:0", "01:00:00", "18:0:0", "16:0
Hi there,
I have a training dataset and a test dataset. My aim is to visually
place the test data within the calibrated space spanned by the
PCs of the training data set, and furthermore to keep the training data set
coordinates fixed, so they can serve as a ruler for measuring
additional te
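A minimal sketch (with made-up data) of that workflow: calibrate prcomp() on the
training set only, keep its scores as the fixed coordinates, and project the test
set into the same PC space with predict().

set.seed(1)
train <- as.data.frame(matrix(rnorm(200), ncol = 4))
test  <- as.data.frame(matrix(rnorm(40),  ncol = 4))

pca <- prcomp(train, center = TRUE, scale. = TRUE)   # calibrated on training data only
train_scores <- pca$x                                # fixed training coordinates
test_scores  <- predict(pca, newdata = test)         # test data in the same PC space

plot(train_scores[, 1:2], xlab = "PC1", ylab = "PC2")
points(test_scores[, 1:2], pch = 19, col = "red")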
... and a slightly more efficient non-dplyr 1-liner:
> sapply(strsplit(dd$Lower,"[^[:digit:]]"),
function(x)sum(as.numeric(x), na.rm=TRUE))
[1] 105 67 60 100 80
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Op
> On Apr 18, 2016, at 12:48 AM, Angelo Varlotta
> wrote:
>
> Hi,
> I'm trying to install from source code the 'nlme' package in
> RStudio. When I try, I get the following error message:
>
> ld: warning: directory not found for option
> '-L/usr/local/lib/gcc/x86_64-apple-darwin13.0.0/4.8.2'
>
... and here is a non-dplyr solution:
> z <-gsub("[^[:digit:]]"," ",dd$Lower)
> sapply(strsplit(z," +"),function(x)sum(as.numeric(x),na.rm=TRUE))
[1] 105 67 60 100 80
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it
> On Apr 18, 2016, at 9:48 AM, Burhan ul haq wrote:
>
> Hi,
>
> I request help with the following:
>
> INPUT: A data frame where column "Lower" is a character containing numeric
> values (different count or occurrences of numeric values in each row,
> mostly 2)
>
>> dput(dd)
> structure(list(
## Continuing with your data
AA <- stringr::str_extract_all(dd[[2]],"[[:digit:]]+")
BB <- lapply(AA, as.numeric)
## I think you are looking for one of the following two expressions
sum(unlist(BB))
sapply(BB, sum)
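A reproducible sketch of the same approach; since the dput() output in the thread
is truncated, the dd below is invented, but it has the same shape (a character
column that mixes text and digits).

library(stringr)

dd <- data.frame(State = c("Alabama", "Alaska"),
                 Lower = c("72 Dem, 33 Rep", "23 Rep, 17 Dem"),
                 stringsAsFactors = FALSE)

AA <- str_extract_all(dd$Lower, "[[:digit:]]+")   # digit runs per row, still character
BB <- lapply(AA, as.numeric)
sapply(BB, sum)    # one sum per row
sum(unlist(BB))    # grand total over all rows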
On Mon, Apr 18, 2016 at 12:48 PM, Burhan ul haq wrote:
> Hi,
>
> I request help wi
Hi,
I request help with the following:
INPUT: A data frame where column "Lower" is a character containing numeric
values (different count or occurrences of numeric values in each row,
mostly 2)
> dput(dd)
structure(list(State = c("Alabama", "Alaska", "Arizona", "Arkansas",
"California"), Lower =
Date data cannot represent hour data. You need to use POSIXct or perhaps the
chron class from the chron package.
To use POSIXct, use ISOdatetime instead of ISOdate. Also be careful which
timezone you have set as default (in most operating systems calling
Sys.setenv(TZ="Etc/GMT") or similar wil
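A sketch along those lines using the dates/times vectors from the thread; the
"%m/%d/%y %H:%M:%S" format string and the last (truncated) time value are
assumptions about how the values are written.

Sys.setenv(TZ = "Etc/GMT")
dates <- c("02/27/92", "02/27/92", "01/14/92", "02/28/92", "02/01/92")
times <- c("23:0:0", "22:0:0", "01:00:00", "18:0:0", "16:0:0")

## one POSIXct vector holding both the date and the time of day
dt <- as.POSIXct(paste(dates, times), format = "%m/%d/%y %H:%M:%S", tz = "Etc/GMT")
dt

## ISOdatetime(year, month, day, hour, min, sec, tz) is the piecewise alternative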
Hi
AFAIK as.Date does not accept hours. Although it is not explicitly written in
the help page, the name as.Date seems to me clear enough that it works only with
dates.
If you want to use hours, minutes ... you should use strptime to convert
your values to a valid date-time object.
And you shou
The most important thing is that Date objects by definition do not include time
of day. You want to look at ISOdatetime() and as.POSIXct() instead. And beware
daylight savings time issues.
-pd
On 18 Apr 2016, at 15:09 , Ogbos Okike wrote:
> Dear All,
>
> I have a data set containing year, mo
Dear All,
I have a data set containing year, month, day and counts as shown below:
data <- read.table("data.txt", col.names = c("year", "month", "day", "counts"))
Using the formula below, I converted the data to as date and plotted.
new.century <- data$year < 70
data$year <- ifelse(new.century,
This is explained in the "Details" section of the help page for partialPlot.
Best
Andy
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Jesús Para
> Fernández
> Sent: Tuesday, April 12, 2016 1:17 AM
> To: r-help@r-project.org
> Subject: [R] Random For
Yes, I think there must be some mistake. I just noticed that it ran for the nine
sample sizes with the column filled in with "1" in the result.
And I am still trying to figure out what is happening.
From: Thierry Onkelinx
Sent: Monday, April 18, 2016 10:03 AM
People,
I thought I needed to have some familiarity with NNs for some of my
current (non-profit, brain-related) projects, so I started looking at
various programming environments, including R, and I got this working:
http://gekkoquant.com/2012/05/26/neural-networks-with-r-simple-example
howev
Always keep the mailing list in cc.
The code runs for each row in the data. However, I get the feeling that
there is a mismatch between what you think is in the data and the
actual data.
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
and Forest
team
Hello,
I am currently using the sensitivity package's standardized regression coefficients in
order to rank variable importance in a model. I am new to using R, so there may
be some obvious things I am unaware of; apologies in advance as I am still
learning.
I am using the following, which I have taken
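Not from the original message: a hedged sketch of standardized regression
coefficients with sensitivity::src(); the inputs X and y and the bootstrap
settings below are invented for illustration.

library(sensitivity)

set.seed(42)
X <- data.frame(x1 = runif(100), x2 = runif(100), x3 = runif(100))
y <- with(X, 2 * x1 + 0.5 * x2 + rnorm(100, sd = 0.1))

s <- src(X, y, nboot = 100)   # bootstrap replicates give confidence intervals on the SRCs
print(s)                      # larger absolute SRC = more influential input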
Hi, I am sorry, the output should be values between 0 and 0.1 and is not supposed
to be 1.00, because they are type 1 error rates. But now I get an output of 1.00
for several samples; this is not correct. The loop does not run for every row. I do
not know where my mistake is. As I use the same concept
Hi,
I'm trying to install from source code the 'nlme' package in
RStudio. When I try, I get the following error message:
ld: warning: directory not found for option
'-L/usr/local/lib/gcc/x86_64-apple-darwin13.0.0/4.8.2'
ld: library not found for -lgfortran
clang: error: linker command failed wit
Dear Professor Haenlein,
Have you solved this issue yet? I found this a really interesting problem.
I was wondering if it is possible to wrap an "objective function"
around igraph's 'sample_pa' and
'sample_smallworld'. If you have an example data set, I can have a look at this.
Best regards from London
You can make this much more readable with apply functions.
result <- apply(
  all_combine1,
  1,
  function(x){
    p.value <- sapply(
      seq_len(nSims),
      function(sim){
        gamma1 <- rgamma(x["m"], x["sp(skewness1.5)"], x["scp1"])
        gamma2 <- rgamma(x["n"], x["scp1"], 1)