> On 17 Sep 2015, at 23:11, Dimitri Liakhovitski
> wrote:
>
> (x <- c("q10_1", "q10_2", "q10_11", "q12_1", "q12_2", "q13_1", "q13_11"))
>
> # Which strings start with "q10" or "q12"? - WORKS
> x[grep("^q10|q12", x)]
>
> # Which strings end with "1"? - WORKS
> x[grep("1$", x)]
>
> # Which stri
Hi all,
I'm learning how to cluster clients.
I've found this nice presentation on the subject, but the data is not
available to use. I've contacted the author; hope he'll answer soon.
https://ds4ci.files.wordpress.com/2013/09/user08_jimp_custseg_revnov08.pdf
Someone knows similar
On Sep 17, 2015, at 3:36 PM, Farnoosh Sheikhi via R-help wrote:
> Hello,
> I'm trying to get the distances between two Zipcode variables, but for some
> reason I get this error:
> "matching was not perfect, returning what was found.Error: no such index at
> level 1"
> Here is my code:
>
> lib
No data. See dput() (?dput) as the preferred way to send data
John Kane
Kingston ON Canada
> -----Original Message-----
> From: alfadia...@mac.com
> Sent: Thu, 17 Sep 2015 16:41:46 -0400
> To: r-help@r-project.org
> Subject: [R] best data storage format?
>
> Hello -
>
> I’m working on dataset
On 17/09/2015 5:46 PM, Dimitri Liakhovitski wrote:
> Duncan,
> Of course my verbal descriptions and my code don't match my regexp -
> otherwise I wouldn't be asking the question, would I?
> Please assume my verbal descriptions are correctly describing what I want.
Sorry, I interpreted "works" and
Hello,
I'm trying to get the distances between two Zipcode variables, but for some
reason I get this error:
"matching was not perfect, returning what was found.Error: no such index at
level 1"
Here is my code:
library(ggmap)
mapdist(data$Zip.A, data$Zip.B, mode = "driving")
The Zip codes are all
On Sep 17, 2015, at 2:11 PM, Dimitri Liakhovitski wrote:
> (x <- c("q10_1", "q10_2", "q10_11", "q12_1", "q12_2", "q13_1", "q13_11"))
>
> # Which strings start with "q10" or "q12"? - WORKS
> x[grep("^q10|q12", x)]
>
> # Which strings end with "1"? - WORKS
> x[grep("1$", x)]
>
> # Which strings e
For the last one, looks like this one works:
x[grep("^(q10|q12).*\\_1$", x)]
On Thu, Sep 17, 2015 at 5:46 PM, Dimitri Liakhovitski
wrote:
> Duncan,
> Of course my verbal descriptions and my code don't match my regexp -
> otherwise I wouldn't be asking the question, would I?
> Please assume my ver
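Putting the thread's pieces together: the original pattern "^q10|q12" has an alternation-precedence problem, since "|" splits the whole expression, it matches strings that *start with* "q10" or merely *contain* "q12". Grouping with parentheses anchors both alternatives, and the same grouped prefix combines with the "_1$" suffix test. A small sketch against the thread's sample vector:

```r
x <- c("q10_1", "q10_2", "q10_11", "q12_1", "q12_2", "q13_1", "q13_11")

# Grouping makes "^" apply to both alternatives, not just "q10".
x[grep("^(q10|q12)", x)]        # "q10_1" "q10_2" "q10_11" "q12_1" "q12_2"

# Start with "q10" or "q12" AND end with "_1":
x[grep("^(q10|q12).*_1$", x)]   # "q10_1" "q12_1"
```

The backslash-escaped `\\_` in the posted answer is harmless but unnecessary; `_` is not a regex metacharacter.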
On 17/09/2015 5:11 PM, Dimitri Liakhovitski wrote:
> (x <- c("q10_1", "q10_2", "q10_11", "q12_1", "q12_2", "q13_1", "q13_11"))
>
> # Which strings start with "q10" or "q12"? - WORKS
> x[grep("^q10|q12", x)]
>
> # Which strings end with "1"? - WORKS
> x[grep("1$", x)]
>
> # Which strings end with
Duncan,
Of course my verbal descriptions and my code don't match my regexp -
otherwise I wouldn't be asking the question, would I?
Please assume my verbal descriptions are correctly describing what I want.
Thank you!
On Thu, Sep 17, 2015 at 5:42 PM, Duncan Murdoch
wrote:
> On 17/09/2015 5:11 PM,
Dear R users,
I'm attempting to override the base function runif() with a function,
custom_runif(), that I've written and tested in Rcpp - with the aim of using my
own RNG in DEoptim.
I've attempted setting the name in a namespace changing locking to no avail,
e.g.
assignInNamespace(runif, cu
Thanks everybody!
On Thu, Sep 17, 2015 at 6:57 PM, Rui Barradas wrote:
> In package reshape2
>
> Hope this helps,
>
> Rui Barradas
>
>
> Em 17-09-2015 17:03, Frank Schwidom escreveu:
>
>> Hi
>>
>> Where can I find 'melt' and 'dcast'?
>>
>> Regards
>>
>>
>> On Thu, Sep 17, 2015 at 08:22:10AM +00
Hello -
I’m working on dataset that will eventually be used in an xyz-plot.
I’m having trouble figuring out the best way to store the data (see an attached
.csv sheet exported from Excel). Some information on the data:
- Columns B - F are labels that describe the z data points
- Rows above x an
Hi Matt,
you could use matrix indexing. Here is a possible solution, which could
be optimized further (probably).
# The old matrix
(old.mat <- matrix(1:30,nrow=3,byrow=TRUE))
# matrix of indices
index <- matrix(c(1,1,1,4,
1,3,5,10,
2,2,1,3,
(x <- c("q10_1", "q10_2", "q10_11", "q12_1", "q12_2", "q13_1", "q13_11"))
# Which strings start with "q10" or "q12"? - WORKS
x[grep("^q10|q12", x)]
# Which strings end with "1"? - WORKS
x[grep("1$", x)]
# Which strings end with "_1"? - WORKS
x[grep("\\_1$", x)]
# Which strings start with "q10" A
Hi all,
Sorry for the title here, but I find this difficult to describe succinctly.
Here's the problem.
I want to create a new matrix where each row is a composite of an old
matrix, but where the row & column indexes of the old matrix change for
different parts of the new matrix. For example, the
Note that:
>> Also not sure about efficiency but somewhat shorter...
>> unlist(lapply(5:1, seq))
>>
>>> Peter
>>>
is almost exactly sequence(5:1)
which is
unlist(lapply(5:1,seq_len))
which is "much preferred". See ?seq for details.
Cheers,
Bert
>>> On Thu, Sep 17, 2015 at 11:19 AM, Dan D
If you are interested in speed for long input vectors try the following,
which should give the same result as sequence().
mySequence <-function (nvec)
{
nvec <- as.integer(nvec)
seq_len(sum(nvec)) - rep(cumsum(c(0L, nvec[-length(nvec)])),
nvec)
}
E.g.,
> n <- rpois(1e6, 3)
> system
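For reference, the function above can be checked against base R's `sequence()`, which it reimplements arithmetically: it builds one running index over all blocks, then subtracts each block's starting offset so every block restarts at 1. A minimal sketch:

```r
# Same function as quoted above, restated for a self-contained check.
mySequence <- function(nvec) {
  nvec <- as.integer(nvec)
  # Offsets: 0, n1, n1+n2, ... repeated once per element of each block.
  seq_len(sum(nvec)) - rep(cumsum(c(0L, nvec[-length(nvec)])), nvec)
}

identical(mySequence(5:1), sequence(5:1))  # TRUE
mySequence(5:1)  # 1 2 3 4 5 1 2 3 4 1 2 3 1 2 1
```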
Hi Rosa,
I don't think the problem is with the split.screen command, for you are
getting the eight plots and the screen at the right as you requested. It
looks like your margins for each plot need adjusting, and I also think you
should have about a 2.2 to 1 width to height ratio in the graphics dev
Very nice variety of solutions to create c(1:n, 1:(n-1), 1:(n-2), ... , 1)
#Testing the methods with n=1000 (microbenchmark)
n<-1000
# by far the nicest-looking, easiest to follow, and fastest is Frank
Schwidom's:
# it also requires the minimum amount of memory (as do several of the
others)
# 2.7
optimx does nothing to speed up optim or the other component optimizers.
In fact, it does a lot of checking and extra work to improve reliability
and add KKT tests that actually slow things down. The purpose of optimx
is to allow comparison of methods and discovery of improved approaches
to a p
R Help -
I am trying to use a grid search for a 2 free parameter reinforcement
learning model and the grid search is incredibly slow. I've used optimx but
can't seem to get reasonable answers. Is there a way to speed up this grid
search dramatically?
dat <- structure(list(choice = c(0, 1, 1, 1,
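The data and model above are cut off, so as a generic illustration only: a two-free-parameter grid search is usually much faster when the whole grid is built once with `expand.grid()` and evaluated in a single pass, rather than in nested loops. All names below (`obj`, `alpha`, `beta`) are hypothetical stand-ins, not the poster's reinforcement-learning model:

```r
# Hypothetical objective standing in for the model's negative log-likelihood.
obj <- function(alpha, beta) (alpha - 0.3)^2 + (beta - 2)^2

# Build the full parameter grid once, then evaluate row-wise.
grid <- expand.grid(alpha = seq(0, 1, by = 0.05),
                    beta  = seq(0, 5, by = 0.25))
grid$nll <- mapply(obj, grid$alpha, grid$beta)

best <- grid[which.min(grid$nll), ]
best  # row with alpha near 0.3, beta = 2
```

A coarse grid like this is often best used just to find starting values for `optim()` rather than as the final fit.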
How about a more complicated one?
outer( 1:5, 1:5, '-')[ outer( 1:5, 1:5, '>')]
[1] 1 2 3 4 1 2 3 1 2 1
On Thu, Sep 17, 2015 at 11:52:27AM -0700, David Winsemius wrote:
> You can add this to the list of options to be tested, although my bet would
> be placed on `sequence(5:1)`:
>
> > Reduce
I'm not too sure this is any better:
n<-5
c<-0; # establish result as numeric
for(i in seq(n,1,-1)){ c<-c(c,seq(1,i)); str(c); }; #generate array
c<-c[2:length(c)]; #remove the leading 0
If you're a fan of recursive programming:
> mklist <- function(x) { if (x==1) return(1) else return(
c(seq(1,
You can add this to the list of options to be tested, although my bet would be
placed on `sequence(5:1)`:
> Reduce( function(x,y){c( 1:y, x)}, 1:5)
[1] 1 2 3 4 5 1 2 3 4 1 2 3 1 2 1
On Sep 17, 2015, at 11:40 AM, Achim Zeileis wrote:
> On Thu, 17 Sep 2015, Peter Langfelder wrote:
>
>> Not sur
On Thu, 17 Sep 2015, Peter Langfelder wrote:
Not sure if this is slicker or easier to follow than your solution,
but it is shorter :)
do.call(c, lapply(n:1, function(n1) 1:n1))
Also not sure about efficiency but somewhat shorter...
unlist(lapply(5:1, seq))
Peter
On Thu, Sep 17, 2015 at 11:
sequence( 5:1)
Regards.
On Thu, Sep 17, 2015 at 11:19:05AM -0700, Dan D wrote:
> Can anyone think of a slick way to create an array that looks like c(1:n,
> 1:(n-1), 1:(n-2), ... , 1)?
>
> The following works, but it's inefficient and a little hard to follow:
> n<-5
> junk<-array(1:n,dim=c(n,n)
Not sure if this is slicker or easier to follow than your solution,
but it is shorter :)
do.call(c, lapply(n:1, function(n1) 1:n1))
Peter
On Thu, Sep 17, 2015 at 11:19 AM, Dan D wrote:
> Can anyone think of a slick way to create an array that looks like c(1:n,
> 1:(n-1), 1:(n-2), ... , 1)?
>
>
Can anyone think of a slick way to create an array that looks like c(1:n,
1:(n-1), 1:(n-2), ... , 1)?
The following works, but it's inefficient and a little hard to follow:
n<-5
junk<-array(1:n,dim=c(n,n))
junk[((lower.tri(t(junk),diag=T)))[n:1,]]
Any help would be greatly appreciated!
-Dan
John,
The intersect() function may help you. For example:
listA <- sort(sample(10, 5))
listB <- sort(sample(10, 5))
both <- intersect(listA, listB)
> listA
[1] 2 4 7 8 9
> listB
[1] 1 2 3 8 10
> both
[1] 2 8
Jean
On Wed, Sep 16, 2015 at 9:43 PM, John Sorkin
wrote:
> I have two structure
In package reshape2
Hope this helps,
Rui Barradas
Em 17-09-2015 17:03, Frank Schwidom escreveu:
Hi
Where can I find 'melt' and 'dcast'?
Regards
On Thu, Sep 17, 2015 at 08:22:10AM +, PIKAL Petr wrote:
Hi
-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] O
Best advice I can give: Find a local statistical expert to work with.
You appear to be asking for help understanding statistical methodology
when you do not have the necessary background to do so.
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is c
On Thu, 17 Sep 2015, Berend Hasselman wrote:
On 17 Sep 2015, at 01:42, Dénes Tóth wrote:
On 09/16/2015 04:41 PM, Bert Gunter wrote:
Yes! Chuck's use of mapply is exactly the split/combine strategy I was
looking for. In retrospect, exactly how one should think about it.
Many thanks to all
Hi
Where can I find 'melt' and 'dcast'?
Regards
On Thu, Sep 17, 2015 at 08:22:10AM +, PIKAL Petr wrote:
> Hi
>
> > -----Original Message-----
> > From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Kai Mx
> > Sent: Wednesday, September 16, 2015 10:43 PM
> > To: r-help mailing
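To connect the answer with the question: `melt()` and `dcast()` live in the reshape2 package. A minimal round trip on a built-in data set, as an illustration:

```r
library(reshape2)

# Wide -> long: keep Month/Day as identifiers, stack the measurements.
long <- melt(airquality, id.vars = c("Month", "Day"), na.rm = TRUE)
head(long)

# Long -> wide again: one column per measured variable.
wide <- dcast(long, Month + Day ~ variable)
head(wide)
```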
Hi
res <- sapply( df1[ , -1], function( x) table(x)[as.character( 0:5)])
rownames( res) <- paste( sep='', 'result', 0:5)
res[ is.na( res)] <- 0
res
item1 item2 item3 item4 item5
result0 1 0 1 1 0
result1 1 2 0 0 0
result2 1 2 1 1
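A variant of the same idea: passing the values through `factor()` with fixed levels makes `table()` emit zero counts directly, so the separate NA-replacement step is not needed. The data frame below is hypothetical, shaped like the question's (an id column plus item columns scored 0-5):

```r
# Hypothetical questionnaire data: first column is an id, the rest are items.
df1 <- data.frame(id    = 1:4,
                  item1 = c(0, 1, 2, 5),
                  item2 = c(1, 1, 2, 2))

# factor() with explicit levels makes table() count absent scores as 0.
res <- sapply(df1[, -1], function(x) table(factor(x, levels = 0:5)))
rownames(res) <- paste0("result", 0:5)
res
```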
Hello, I have created a boxplot with the data points overlayed on top using
the below code. I am happy with the way the datapoints are jittered, however
I cannot figure out how to get the labels to jitter along with the
datapoints. The labels remain in the center and are unreadable. I have tried
a
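One way out of the jitter/label mismatch (the data below is simulated, since the original wasn't posted): compute the jittered x positions once, store them, and pass the same vector to both `points()` and `text()`, so labels land exactly on their points.

```r
set.seed(42)                     # simulated data; the original wasn't posted
y  <- rnorm(20)
xj <- jitter(rep(1, length(y)), amount = 0.1)  # jitter once, reuse everywhere

boxplot(y, outline = FALSE)
points(xj, y, pch = 16)
text(xj, y, labels = seq_along(y), pos = 4, cex = 0.7)
```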
Hello everyone,
I am going to ask this certainly tricky question here not (yet) with the
intention of getting a definitive answer, as I need to deepen my questions
much more, but just to get an approximate idea of which direction to take next.
I have a dataset where the potential respo
Hello,
Try the following.
Q1 <- matrix(c(sample(4, 200, replace = TRUE), rbinom(200,1,0.7)), ncol = 2)
Q1
Hope this helps,
Rui Barradas
Em 17-09-2015 15:37, thanoon younis escreveu:
Dear R - Users
I have a small problem when I generated two columns with 200 rows as
follows
Q1[i,1]=sample
Dear R - Users
I have a small problem when I generated two columns with 200 rows as
follows:
Q1[i,1]=sample(4, 200, replace = TRUE); Q1[i,2]=rbinom(200,1,0.7)
The first vector is an ordered categorical variable and the second is a
dichotomous variable, but when I run this code I get this err
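The error in the fragment above comes from assigning a 200-element vector into a single cell `Q1[i,1]`; building the columns whole, as in Rui's reply, avoids the per-cell indexing entirely. A sketch (the sampling calls are the poster's; the seed is added only for reproducibility):

```r
set.seed(1)  # for reproducibility only
Q1 <- cbind(sample(4, 200, replace = TRUE),  # ordered categorical, levels 1-4
            rbinom(200, 1, 0.7))             # dichotomous, 0/1
dim(Q1)   # 200 x 2
```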
Vectors have no columns or rows.
rep( NA, 200 )
If you need a matrix, you have to turn it into one:
matrix( rep( NA, 200 ), ncol=1 )
Jeff Newmiller
... and, less explicitly, but more compactly:
y <- array(dim=c(200,1))
B.
On Sep 17, 2015, at 10:07 AM, Boris Steipe wrote:
> x <- rep(NA, 200)
>
> For all cases I can think of, that is enough. If you MUST have a matrix with
> one column and two hundred rows, set:
>
> dim(x) <- c(200,1)
>
x <- rep(NA, 200)
For all cases I can think of, that is enough. If you MUST have a matrix with
one column and two hundred rows, set:
dim(x) <- c(200,1)
B.
On Sep 17, 2015, at 9:40 AM, thanoon younis wrote:
> Dear all users
>
> I want to write a vector with one column and just NA values and
Dear all users
I want to create a vector with one column, just NA values, and nrow=200.
When I write X=numeric(NA) it is not correct. How can I do this, please?
Regards
Dear list members,
Apologies for cross-posting. Please, find below the information of a
course: "Bayesian Data Analysis with R and WinBUGS".
The course takes place in a nice area near the city of Essen. The
course's price includes teaching material (The BUGS Book and my
slides),
an ex
Hi John,
This will not be the complete answer, but it can probably help you in
the right direction.
First, I would subset your data.frame to include only subjects with one
observation at each time point (and I'm not sure how to do that easily).
But then, the aggregate() function is what you
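As a sketch of the `aggregate()` step on a long file with the question's columns (the data below is invented to match the described layout: subject, group, time, value):

```r
# Invented long-format data matching the described layout.
long <- data.frame(subject = rep(1:4, each = 3),
                   group   = rep(c("A", "B"), each = 6),
                   time    = rep(c(0, 4, 8), times = 4),
                   value   = c(1, 2, 3, 2, 3, 4, 5, 6, 7, 6, 7, 8))

# Mean value per group at each time point.
agg <- aggregate(value ~ group + time, data = long, FUN = mean)
agg
```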
On 17/09/2015 7:06 AM, John Sorkin wrote:
> I have a long (rather than wide file), i.e. the data for each subject is on
> multiple lines rather than one line. Each line has the following layout:
> subject group time value
> I have two groups, multiple subjects, each subject can be seen up to three
Hi Rosa,
Try this:
# do the first split, to get the rightmost screen for the legend
split.screen(figs=matrix(c(0,0.84,0,1,0.84,1,0,1),nrow=2,byrow=TRUE))
# now split the first screen to get your eight screens (numbered 3 to 10)
for the plots
split.screen(figs=matrix(c(0,0.25,0.5,1,
I have a long (rather than wide file), i.e. the data for each subject is on
multiple lines rather than one line. Each line has the following layout:
subject group time value
I have two groups, multiple subjects, each subject can be seen up to three
times a time 0, and at most once at times 4 and
Hi
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Kai Mx
> Sent: Wednesday, September 16, 2015 10:43 PM
> To: r-help mailing list
> Subject: [R] aggregate counting variable factors
>
> Hi everybody,
>
> > From a questionnaire, I have a dataset like t
> On 17 Sep 2015, at 01:42, Dénes Tóth wrote:
>
>
>
> On 09/16/2015 04:41 PM, Bert Gunter wrote:
>> Yes! Chuck's use of mapply is exactly the split/combine strategy I was
>> looking for. In retrospect, exactly how one should think about it.
>> Many thanks to all for a constructive discussion .