On Tue, 2 Oct 2012, lieslpe wrote:
Dear SamiC,
I am also attempting to plot my zero inflated model on my data. Did you
find a solution? Does anyone else on this list have a solution?
If you want to compare observed and fitted frequencies for the counts 0,
1, 2, ..., then a common approach is to tabulate the observed counts and compare them with the expected frequencies implied by the fitted model.
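Purely as an illustration of that comparison (not from the original thread), a minimal sketch using pscl::zeroinfl on its bundled bioChemists data; the model formula is made up:
library(pscl)
fit <- zeroinfl(art ~ fem + ment | 1, data = bioChemists, dist = "negbin")  # made-up formula
obs <- table(factor(bioChemists$art, levels = 0:10))        # observed frequencies of counts 0..10
exp <- colSums(predict(fit, type = "prob")[, 1:11])         # model-implied expected frequencies
round(rbind(observed = obs, expected = exp), 1)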
platform x86_64-apple-darwin9.8.0
arch x86_64
os darwin9.8.0
system x86_64, darwin9.8.0
version.string R version 2.13.1 (2011-07-08)
I am trying to write a function that takes a few objects as input.
test <- function(directory, num = 1:100) {
}
the argument
I am using -mice- for multiple imputation and would like to use the gelman
diagnostic in -coda- to assess the convergence of my imputations. However,
gelman.diag requires an mcmc list as input. van Buuren and
Groothuis-Oudshoorn (2011) recommend running mice step-by-step to assess
convergence (e.
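One hedged route (the layout of chainMean is an assumption; check str(imp$chainMean) on your version) is to turn the chain means for one variable into an mcmc.list and pass that to gelman.diag:
library(mice)
library(coda)
imp <- mice(nhanes, m = 5, maxit = 20, seed = 1)    # 'nhanes' ships with mice
# chainMean assumed to be variables x iterations x chains
cm <- imp$chainMean["bmi", , ]                      # iterations x chains for one variable
chains <- lapply(seq_len(ncol(cm)), function(i) mcmc(cm[, i]))
gelman.diag(mcmc.list(chains), autoburnin = FALSE)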
Apologies for a silly question: I am wondering how to test goodness of fit with a single sample, as follows.
I have read _Fitting Distributions with R_, but that doesn't answer my specific
question. I am inclined to use the Kolmogorov-Smirnov D statistic and its associated p-value.
much appreciation!
X20.001 232 93 84
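A minimal one-sample Kolmogorov-Smirnov sketch (the data and the normal reference are invented); note that estimating the parameters from the same sample makes the usual KS p-value optimistic:
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)                 # stand-in for your sample
ks.test(x, "pnorm", mean = mean(x), sd = sd(x))   # one-sample KS against a fitted normal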
Dear Anthony,
Thank you very much for helping me resolve the issues. I now have all the
results that I intended to generate.
Pradip Muhuri
From: Anthony Damico [ajdam...@gmail.com]
Sent: Tuesday, October 02, 2012 9:50 PM
To: Muhuri, Pradip (SAMHSA/CBHS
File operations are not vectorizable. About the only thing you can do for the
file-iteration part is to use lapply instead of a for loop, but that is mostly
a style change.
Once you have read the dbf files there will probably be vectorized functions you
can use (e.g. quantile()). Off the to
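A sketch of the lapply pattern, assuming foreign::read.dbf and a made-up directory and column name:
library(foreign)
files <- list.files("data", pattern = "\\.dbf$", full.names = TRUE)   # hypothetical directory
dats  <- lapply(files, read.dbf)                                      # one data.frame per file
quants <- lapply(dats, function(d) quantile(d$value, c(0.5, 0.9), na.rm = TRUE))  # 'value' is a made-up column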
please double-check that you've got all of your parameters correct by
typing ?svymean ?svyby and ?make.formula before you send questions to
r-help :)
# You spelled 'design' wrong and probably need to throw out your NA values.
Try this:
# percentile by SPD status
svyby(~dthage, ~xspd2, design=nhis
Hello,
Maybe this time you've got it wrong, Arun. The OP wants to sum the
areas, not just label them.
Using your code,
Range=cut(dat1$Percent, breaks=c(-Inf,0, 25, 50, 75, 100),
labels=c("<=0", "0-25", "25-50", "50-75", ">75"))
aggregate(Area ~ Range, data = dat1, FUN = sum)
#   Range Area
# 1   <=0 2043
# 2  0-25 3535
# 3 50-75   67
# 4   >75 4321
Hi,
In fact, there is no need for aggregate().
dat1$range <- cut(dat1$Percent, breaks=c(-Inf,0,25,50,75,100), labels=c("<=0","0-25","25-50","50-75",">75"))
dat1
# Area Percent range
#1 456 0 <=0
#2 3400 10 0-25
#3 79 25 0-25
#4 56 18 0-25
#5 467 0 <=0
#6 67
Hello,
There are more R-friendly ways to do what you want; it seems to me easy
to avoid loops, but you need to tell us how you know which rows
correspond to the 50th and 90th quantiles. Maybe this comes from the
value in some other column?
Give a more complete description and we'll s
Hi,
I guess this is what you wanted:
dat1<-read.table(text="
Area Percent
456 0
3400 10
79 25
56 18
467 0
67 67
839 85
1120 0
3482 85
",sep="",header=TRUE)
aggregate(dat1$Percent, list(Area = dat1[,"Area"], Range=cut(dat1$Pe
On Oct 2, 2012, at 3:38 PM, Ben Harrison wrote:
> On 28 September 2012 16:38, David Winsemius wrote:
>>
>> ?text # should be fairly clear.
>
> Thank you. I was stupid to ask such a trivial question along with a
> not-so-trivial one. The second part of the question was probably more
> importan
This is a solution for UNIX/Linux-type OS users and a lot of it is only
related to R in the sense that it can allow you to reconnect to an R
session. I managed to do it and to save history, thanks to helpful
messages from Ista Zahn and Brian Ripley.
To explain this, I will call my two Linux b
Hi Thomas,
Thank you so much for your help.
Pradip
From: Thomas Lumley [tlum...@uw.edu]
Sent: Monday, October 01, 2012 6:45 PM
To: Muhuri, Pradip (SAMHSA/CBHSQ)
Cc: Anthony Damico; R help
Subject: Re: [R] svyboxplot - library (survey)
The documentation sa
Hello,
Although my R code for the svymean() and svyquantile() functions works fine,
I am stuck with the svyby() and make.formula() functions. I got the
following error messages:
- Error: object of type 'closure' is not subsettable # svyby()
- Error in xx[[1]] : subscript out of bounds
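For reference, a minimal working pattern with the api data shipped in survey (not the poster's NHIS design):
library(survey)
data(api)                                       # example data shipped with survey
des  <- svydesign(id = ~1, strata = ~stype, weights = ~pw, fpc = ~fpc, data = apistrat)
vars <- c("api00", "api99")
f    <- make.formula(vars)                      # builds ~api00 + api99 from a character vector
svymean(f, des, na.rm = TRUE)
svyby(f, ~stype, des, svymean, na.rm = TRUE)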
Agreed -- very cool trick. Thanks Prof Ripley
Michael
On Oct 2, 2012, at 9:59 PM, steven mosher wrote:
> thanks Dr. R. this will come in handy in the future as I have a knack for
> hanging R.
> On Oct 2, 2012 12:01 PM, "Prof Brian Ripley" wrote:
>
>> On 02/10/2012 18:29, Bert Gunter wrote:
>>
Hello,
Sorry if this process is too simple for this list. I know I can do it, but
I keep reading online that when using R one should try to avoid loops and
use vectorized operations. I am wondering if there is a more "R friendly"
way to do this than a for loop.
I have a dataset that has
Thanks to all for the responses and suggestions.
I was primarily proposing a more detailed change log for packages on CRAN. To
my mind, repositories like R-forge host packages more 'raw' than those on CRAN
(i.e. CRAN seems to me to contain more 'finished' packages which occasionally
are updated
Hi,
My dataframe has two columns, one with area and the other with percent. How can I
add up the areas that fall within a given range of percentages?
My dataframe looks like
Area Percent
456 0
3400 10
79 25
56 18
467 0
67 67
839 85
1120 0
3482
thanks Dr. R. this will come in handy in the future as I have a knack for
hanging R.
On Oct 2, 2012 12:01 PM, "Prof Brian Ripley" wrote:
> On 02/10/2012 18:29, Bert Gunter wrote:
>
>> ?history
>>
>> in a fresh R session, to see what might be possible. I'll bet the
>> answer is, "No, you're screwe
On 28 September 2012 16:38, David Winsemius wrote:
>
> ?text # should be fairly clear.
Thank you. I was stupid to ask such a trivial question along with a
not-so-trivial one. The second part of the question was probably more
important: is there a way to obtain the location of segments produced
b
On 2 October 2012 at 20:18, Søren Højsgaard wrote:
| I am making some comparisons of two versions of the lme4 package: The CRAN
version and the R-Forge version. For the moment I have two different R
installations, each with a different version of lme4. However it would be
convenient if I could
On Tue, Oct 2, 2012 at 4:18 PM, Søren Højsgaard wrote:
> Dear list,
>
> I am making some comparisons of two versions of the lme4 package: The CRAN
> version and the R-Forge version. For the moment I have two different R
> installations, each with a different version of lme4. However it would be
One "other similar location" is Github, where you can "watch" a
package, and this is how I keep track of changes in the packages that
I'm interested in.
Just for the interest of other R package developers, the NEWS file can
be written in Markdown and I have a Makefile
(https://github.com/yihui/kni
On 02/10/2012 21:38, Søren Højsgaard wrote:
I don't know if it would work but a kludgy attempt would be to install lme4
from CRAN, rename the lme4 directory in library to lme4cran; then install lme4
from R-forge and rename the lme4 directory to lme4forge. Then create a
flexible script t
Thanks, will go ahead
Install the two versions in two different *libraries* and update
.libPaths() to prioritize one over the other. /Henrik
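A minimal sketch of that setup (library paths are hypothetical):
cran_lib  <- "~/R/lme4-cran"
forge_lib <- "~/R/lme4-forge"
dir.create(cran_lib,  recursive = TRUE, showWarnings = FALSE)
dir.create(forge_lib, recursive = TRUE, showWarnings = FALSE)
install.packages("lme4", lib = cran_lib)
install.packages("lme4", lib = forge_lib, repos = "http://R-Forge.R-project.org")
# per session, pick one: either prepend that library to the search path ...
.libPaths(c(forge_lib, .libPaths()))
library(lme4)
# ... or load directly from a specific library:
# library(lme4, lib.loc = cran_lib)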
On Tue, Oct 2, 2012 at 1:18 PM, Søren Højsgaard wrote:
> Dear list,
>
> I am making some comparisons of two versions of the lme4 package: The CRAN
> version and the R-Forge vers
I don't know if it would work but a kludgy attempt would be to install lme4
from CRAN, rename the lme4 directory in library to lme4cran; then install lme4
from R-forge and rename the lme4 directory to lme4forge. Then create a flexible
script that would copy one of the directories to a dir
So if you have both loaded in the same instance of R, how will R know
which version of lmer or other functions you want to run?
It seems cleanest to me to have the 2 different instances of R running
like you do now. The other option would be to change all the names
(exported ones anyways) in
Dear list,
I am making some comparisons of two versions of the lme4 package: The CRAN
version and the R-Forge version. For the moment I have two different R
installations, each with a different version of lme4. However it would be
convenient if I could have the same version within the same R in
On 10/2/2012 10:01 AM, Starkweather, Jonathan wrote:
I'm relatively new to R and would first like to sincerely thank all
those who contribute to its development. Thank you.
I would humbly like to propose a rule which creates a standard (i.e.,
strongly encouraged, mandatory, etc.) for authors to
Dear R friends.
After some trouble learning how to create an ffdf object, I now find
myself having problems saving it.
This is the data I'd like to save:
str(DATA)
List of 3
$ virtual: 'data.frame': 6 obs. of 7 variables:
.. $ VirtualVmode : chr "double" "short" "integer" "integer"
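No answer appears in this digest; assuming DATA is the ffdf whose str() is shown above, ffbase's save.ffdf()/load.ffdf() pair is one route (the directory path is hypothetical):
library(ff)
library(ffbase)
save.ffdf(DATA, dir = "~/ffdb/mydata")   # writes the ff files plus the ffdf wrapper
# later, in a fresh session:
# load.ffdf(dir = "~/ffdb/mydata")       # restores DATA and reopens its ff files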
There is already support in the packaging system for a NEWS file which
can be accessed within R using the 'news' function. What would the
changelog that you are proposing contribute or contain beyond what the
NEWS file already does?
Creating and updating NEWS is not mandatory, but is encouraged.
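For reference, a minimal query against R's own NEWS database:
news(query = Version == "2.15.1")    # R's NEWS entries for one release
# packages that ship a NEWS file can be queried the same way, e.g.
# news(package = "somePackage")      # 'somePackage' is a placeholder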
On 02-10-2012, at 20:50, Dereje Bacha wrote:
> Hi
>
> I am facing a problem restricting an intercept in a system of equations.
> Y1=f(X1,X2,X3)
> Y2=f(X1,X2,X4)
>
> I want to restrict the intercept of equation 2 to equal the coefficient of X2
> in equation 1.
>
Please do not hijack a thread
Hello!
Can anyone give a tip on how to plot parametric effects in a Generalized
Additive Model from the mgcv package?
Thanks,
PM
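No answer appears in this digest; as a hedged sketch (simulated data and a made-up factor covariate), plot.gam's all.terms argument adds the parametric effects to the plot:
library(mgcv)
set.seed(1)
d   <- gamSim(eg = 1, n = 200)                             # simulated example data from mgcv
d$g <- factor(sample(letters[1:3], 200, replace = TRUE))   # invented parametric (factor) term
m   <- gam(y ~ g + s(x0) + s(x1), data = d)
plot(m, pages = 1, all.terms = TRUE)   # all.terms = TRUE includes the parametric effects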
Hi
I am facing a problem restricting an intercept in a system of equations.
Y1=f(X1,X2,X3)
Y2=f(X1,X2,X4)
I want to restrict the intercept of equation 2 to equal the coefficient of X2
in equation 1.
Please help
Dereje
From: Naser Jamil
To: r-help@r-project
I'm relatively new to R and would first like to sincerely thank all those who
contribute to its development. Thank you.
I would humbly like to propose a rule which creates a standard (i.e., strongly
encouraged, mandatory, etc.) for authors to include a `change log' documenting
specific changes
Dear SamiC,
I am also attempting to plot my zero inflated model on my data. Did you find
a solution? Does anyone else on this list have a solution?
Thanks,
Liesl
Message from SamiC Jun 30, 2011:
I am fitting a zero inflated negative binomial model to my data. I have
pretty much got my selected
I'm using R 2.15.1 on a 64-bit machine with Windows 7 Home Premium.
Sample problem (the screwy subscripted syntax is a relic of editing down a
more complex script):
> N <- 25
> s <- rlnorm(N, 0, 1)
> require("boot")
Loading required package: boot
> v <- NULL # hold sample variance estimates
> i <-
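The transcript stops here; purely as an illustration of the boot() pattern it appears to be setting up (the statistic and replicate count are my choices, not the poster's):
library(boot)
set.seed(1)
N <- 25
s <- rlnorm(N, 0, 1)
bvar <- boot(s, statistic = function(d, idx) var(d[idx]), R = 2000)  # bootstrap the sample variance
bvar
boot.ci(bvar, type = c("perc", "bca"))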
Thank you all for your help and advice. This wasn't quite the answer
I was looking for, but these concepts make more sense to me now and I
think I should be able to resolve the issues I've been having.
Thanks again!
On Sun, Sep 30, 2012 at 6:26 PM, David Winsemius wrote:
>
> On Sep 30, 2012, at
On 02/10/2012 18:29, Bert Gunter wrote:
?history
in a fresh R session, to see what might be possible. I'll bet the
answer is, "No, you're screwed," though. Nevertheless, maybe Linux
experts can save you.
Maybe not. On a Unix-alike see ?Signals. If you can find the pid of
the R process and i
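A sketch of that route from a second session (the pid is hypothetical; see ?Signals and ?tools::pskill):
library(tools)
pid <- 12345L                # hypothetical: the pid of the hung R process
pskill(pid, SIGUSR1)         # on a Unix-alike, asks R to save the workspace and quit (see ?Signals)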
Thank you!
I just wanted to know how one goes from the values returned by kmeans to a
distance metric. You have shown me that it is simply the squared distance from the
centers! Thanks again.
John
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of
On Tue, 2 Oct 2012 14:32:12 -0400 John Sorkin
wrote:
> Ranjan,
> Thank you for your help. What eludes me is how one computes the distance from
> each cluster for each subject. For my first subject, datascaled[1,], I have
> tried to use the following:
> v1 <- sum(fit$centers[1,]*datascaled[1,])
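For reference, a sketch of the squared-distance computation the thread converges on (the data here are simulated; 'datascaled' and 'fit' mirror the names in the post):
set.seed(1)
datascaled <- scale(matrix(rnorm(100 * 4), ncol = 4))   # simulated stand-in for the real data
fit <- kmeans(datascaled, centers = 3)
# squared Euclidean distance from every subject to every cluster centre
d2 <- apply(fit$centers, 1, function(cc) rowSums(sweep(datascaled, 2, cc)^2))
head(d2)                            # one row per subject, one column per cluster
table(max.col(-d2), fit$cluster)    # the nearest centre should agree with the assignment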
Another nitpick: don't use return() in the last statement.
It isn't needed, it looks like some other language, and
dropping it saves 8% of the time for the uncompiled code
(the compiler seems to get rid of it).
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -Original Message-
> Fro
Ranjan,
Thank you for your help. What eludes me is how one computes the distance from
each cluster for each subject. For my first subject, datascaled[1,], I have
tried to use the following:
v1 <- sum(fit$centers[1,]*datascaled[1,])
v2 <- sum(fit$centers[2,]*datascaled[1,])
v3 <- sum(fit$centers[
Hello,
On 02-10-2012 19:18, Berend Hasselman wrote:
On 02-10-2012, at 20:01, Rui Barradas wrote:
Hello,
Yes, it's possible to remove the loop. Since the loop is used to compute a
running product and all we want is the final result, use the vectorized
behavior of R and a final ?prod().
S
On 02-10-2012, at 20:01, Rui Barradas wrote:
> Hello,
>
> Yes, it's possible to remove the loop. Since the loop is used to compute a
> running product and all we want is the final result, use the vectorized
> behavior of R and a final ?prod().
> Speedup: another 2x. And 4x2 == 8 == 1 [decimal]
On Oct 2, 2012, at 3:59 AM, Christof Kluß wrote:
> Hi
>
> xyplot(y ~ x | subject) plots a separate graph of y against x for each
> level of subject. But I would like to have an own function for each
> level. Something like
>
> xyplot(y ~ x | subject,
> panel = function(x,y) {
> pa
Hello,
Yes, it's possible to remove the loop. Since the loop is used to compute
a running product and all we want is the final result, use the
vectorized behavior of R and a final ?prod().
Speedup: another 2x. And 4x2 == 8 == 1 [decimal] order of magnitude.
lf2 <- function(x) {
  v <- 1
x1
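The function above is cut off, so the following is a generic reconstruction of the pattern rather than the poster's exact code:
lf_loop <- function(x) {
  v <- 1
  for (xi in x) v <- v * (1 + xi)   # running product kept in a loop
  v
}
lf_vec <- function(x) prod(1 + x)    # same result, vectorized
x <- runif(1e5)
all.equal(lf_loop(x), lf_vec(x))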
John,
On Tue, 2 Oct 2012 11:35:12 -0400 John Sorkin
wrote:
> Window XP
> R 2.15
>
> I am running a cluster analysis in which I ask for three clusters (see code
> below). The analysis nicely tells me what cluster each of the subjects in my
> input dataset belongs to. I would like two pieces o
?history
in a fresh R session, to see what might be possible. I'll bet the
answer is, "No, you're screwed," though. Nevertheless, maybe Linux
experts can save you.
May the Force be with you.
-- Bert
On Tue, Oct 2, 2012 at 10:17 AM, Mike Miller wrote:
> I connected from my desktop Linux box to
On 02-10-2012, at 17:23, Naser Jamil wrote:
> Dear R-users,
> I am facing a problem with integrating, in R, a likelihood function of four
> parameters. It gives me the result in the end but takes more than half an
> hour to run. I'm wondering whether there is a more efficient wa
I connected from my desktop Linux box to a Linux server using ssh in an
xterm, but that xterm was running in Xvnc. I'm running R on the server in
that xterm (over ssh). Something went wrong with Xvnc that has caused it
to hang, probably this bug:
https://bugs.launchpad.net/ubuntu/+source/vnc
> cool, I'm none the wiser now, but thanks anyway
That would be because your IT questionnaire requires telepathy to understand
what the writer wanted to know.
My take on these, in case it helps:
1. Is R client software / server software / system software?
R can run either on a client or on a serv
Dear Professor Fox,
Apologies for my oversight relating to the polychor command and thank you for
your advice. I turned to the polychor command when trying to find an equivalent
for the polychoric command found in the psych package (I am following a
procedure outlined in Gadermann, Guhn & Zumbo
This is not primarily an R question, although I grant you that it
might intersect packages in R that do what you want. Nevertheless, I
think you would do better posting on a statistical list, like
stats.stackexchange.com. Maybe once you've figured out what you want
there, you can come back to R to
Dear R users,
I would like to employ count data as covariates while fitting a
logistic regression model. My question is:
do I violate any assumption of the logistic (and, more generally, of
the generalized linear) models by employing count, non-negative
integer variables as independent variables?
require("fortunes")
Loading required package: fortunes
> fortune("<<-")
I wish <<- had never been invented, as it makes an esoteric and dangerous
feature of the language *seem* normal and reasonable. If you want to dumb down
R/S into a macro language, this is the operator for you.
-- Bill Venabl
Dear R-users,
I am facing a problem with integrating, in R, a likelihood function of four
parameters. It gives me the result in the end but takes more than half an
hour to run. I'm wondering whether there is a more efficient way to deal with
this. The following is my code. I am ready to prov
Window XP
R 2.15
I am running a cluster analysis in which I ask for three clusters (see code
below). The analysis nicely tells me what cluster each of the subjects in my
input dataset belongs to. I would like two pieces of information
(1) for every subject in my input data set, what is the prob
Hello,
See if this is it.
Nx <- rep(0,length(x))
Ny <- rep(0,length(y))
n <- (x+1)*(y+1)
results <- array(0, dim=c(2,2,64,7))
# l <- 1 # <--- This changed place
for(i in 1:length(x)){
Nx[i] <- length(1:(x[i]+1))
Ny[i] <- length(1:(y[i]+1))
l <- 1 # <--
On 02-10-2012, at 16:20, Loukia Spineli wrote:
> I want to make a multi-dimensional array. To be specific I want to make the
> following array
>
> results<-array(0,dim=c(2,2,64,7))
>
> This is the code I have created but it gives no result due to the error
> "subscript out of bound".
>
> x<-r
On Oct 2, 2012, at 13:35 , Hadley Wickham wrote:
>>> What is the special meaning for the method name start with a dot?
>>
>> It means nothing in particular, except that such objects don't show up in
>> ls() by default. The _intention_ is usually that the function is only to be
>> used internally and not for end-user use.
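An illustration of the hidden-from-ls() behaviour described above (the helper name is invented):
.square <- function(x) x^2   # invented helper with a leading dot
ls()                         # .square is not listed
ls(all.names = TRUE)         # now it is
.square(3)                   # but it is perfectly usable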
Another base-R-only solution uses a 2-column matrix of subscripts
to fill a matrix. E.g.,
> f <- function(data) {
+ mat <- matrix(NA_real_, nrow=max(data[[1]]), ncol=max(data[[2]]))
+ mat[cbind(data[[1]], data[[2]])] <- data[[3]]
+ mat
+ }
>
> f(dat1)
[,1] [,2] [,3] [,4]
[1,]
On 10/2/2012 6:08 AM, Steven Backues wrote:
I have a piece of code (from Xie et al. 2009 Autophagy 5:217) that
runs in R and requires the glpk package. A year or so ago, I was able
to download and install the glpk package directly from inside the R
program (for Windows), and everything worked
I want to make a multi-dimensional array. To be specific I want to make the
following array
results<-array(0,dim=c(2,2,64,7))
This is the code I have created but it gives no result due to the error
"subscript out of bound".
x<-rep(7,7) # Missingness in intervention
y<-rep(7,7) # Missingness in
Christof:
You are aware, I assume, that the subject level name can be
incorporated into the strip label via the "strip" function argument;
e.g.
xyplot(...,
strip = strip.custom(style = 1, strip.levels=c(TRUE,TRUE)),
...)
Cheers,
Bert
On Tue, Oct 2, 2012 at 6:45 AM, Christof Kluß wrote:
> subj
subj <- levels(subject)
subj[panel.number()]
seems to be a good solution
is there something like panel.legend (instead of panel.text)?
On 02-10-2012 12:59, Christof Kluß wrote:
> Hi
>
> xyplot(y ~ x | subject) plots a separate graph of y against x for each
> level of subject. But I would like
It's hard to know what's wrong since you did not supply your code.
Please supply a small working example and some data. To supply data use the
dput()
function, see ?dput() for details.
Welcome to R.
John Kane
Kingston ON Canada
> -Original Message-
> From: mbhpat...@gmail.com
> Sen
It doesn't seem possible to index an ff-vector using a logical
ff-vector. You can use subset (also in ffbase) or first convert 'a' to
a normal logical vector:
library(ff)
library(ffbase)
data1 <- as.ffdf(data.frame(a = letters[1:10], b=1:10))
data2 <- as.ffdf(data.frame(a = letters[5:26
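The example above is cut off; a hedged sketch of the "convert to a normal vector first" route, re-stating the data (column names assumed):
library(ff)
library(ffbase)
data1 <- as.ffdf(data.frame(a = letters[1:10],  b = 1:10))
data2 <- as.ffdf(data.frame(a = letters[5:26], b = 5:26))
keep <- data1$a[] %in% data2$a[]   # [] pulls the ff columns into RAM as ordinary vectors
data1[which(keep), ]               # an integer row index works on an ffdf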
It's hard to know what's wrong with your code since you did not supply it.
Please supply a small working example and some data. To supply data use the
dput() function, see ?dput() for details.
John Kane
Kingston ON Canada
> -Original Message-
> From: zhyjiang2...@hotmail.com
> Se
I have a piece of code (from Xie et al. 2009 Autophagy 5:217) that runs
in R and requires the glpk package. A year or so ago, I was able to
download and install the glpk package directly from inside the R
program (for Windows), and everything worked fine. Now I have installed
R for Windows on
Thank you very much for your answer Rolf. It helped.
I am trying to simulate a trade indicator model from market microstructure, where
1 or -1 indicates a buyer- or seller-initiated trade, respectively.
I use a Gaussian copula for simulation, so I can put in some correlation if I
want to. So I gene
On 10/02/2012 02:50 PM, Leung Chen wrote:
How do I draw a pie chart for each categorical variable in a dataset?
Hi Leung,
It depends upon your definition of "categorical". If a character variable
has 100 possible values, you won't get a very informative pie chart out
of it. Therefore I suggest the
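The message is cut off above; purely as an illustration of the basic mechanics (the data frame and its columns are invented):
d <- data.frame(colour = factor(sample(c("red", "green", "blue"), 60, TRUE)),
                size   = factor(sample(c("S", "M", "L"), 60, TRUE)),
                weight = rnorm(60))
op <- par(mfrow = c(1, 2))
for (nm in names(d)[sapply(d, is.factor)]) {
  pie(table(d[[nm]]), main = nm)   # one pie chart per factor column
}
par(op)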
In this case you could use the apply function.
Say your k*l matrix is named y. Then, to standardize the values
within each column, use the following:
aver<-apply(y,2,mean) # calculate the mean within each column
std<-apply(y,2,sd) # calculate the standard deviation within each column
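The message is cut off; a guess at the step it was building towards, shown alongside base R's scale() (the matrix here is invented):
y    <- matrix(rnorm(20), ncol = 4)
aver <- apply(y, 2, mean)
std  <- apply(y, 2, sd)
ystd <- sweep(sweep(y, 2, aver, "-"), 2, std, "/")      # centre, then scale
all.equal(ystd, scale(y), check.attributes = FALSE)     # same as scale()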
Hi Achim:
Excellent points. Thank you so much for your prompt reply.
Tudor
Jacob,
Try increasing the size of the pdf. For example, I can read all 919
labels in this plot ...
pdf(width=200, height=200)
plot(1:919, 1:919, axes=FALSE)
axis(1)
axis(2, at=1:919, las=1, cex=0.01)
box()
graphics.off()
Jean
JIMonroe wrote on 10/01/2012 03:42:24 PM:
>
> Hello,
> I have a
>> What is the special meaning for the method name start with a dot?
>
> It means nothing in particular, except that such objects don't show up in
> ls() by default. The _intention_ is usually that the function is only to be
> used internally and not for end-user use.
But these days, if you're w
Hi
xyplot(y ~ x | subject) plots a separate graph of y against x for each
level of subject. But I would like to have an own function for each
level. Something like
xyplot(y ~ x | subject,
panel = function(x,y) {
panel.xyplot(x,y)
panel.curve(x,y) {
# something
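A hedged sketch of one way to get per-panel behaviour, using panel.number() as suggested elsewhere in this thread (the data and the per-panel fit are invented):
library(lattice)
set.seed(1)
d <- data.frame(x = rep(1:10, 3),
                subject = factor(rep(c("a", "b", "c"), each = 10)))
d$y <- d$x * as.numeric(d$subject) + rnorm(30)
xyplot(y ~ x | subject, data = d,
       panel = function(x, y, ...) {
         panel.xyplot(x, y, ...)
         subj <- levels(d$subject)[panel.number()]   # which subject this panel shows
         panel.abline(lm(y ~ x))                     # e.g. a per-panel fit
         panel.text(min(x), max(y), labels = subj, pos = 4)
       })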
Hello,
What is a dataset? In R there are lots of types, including 'data.frame'
and 'list'. And R's type 'factor' corresponds to categorical variables.
You must be more specific, show us an example dataset using dput().
dput( head(x, 30) ) # paste the output of this in a post
Hope this helps,
Hello,
Try the following.
fun <- function(x){
a <- min(x)
b <- max(x)
(x - a)/(b - a)
}
mat <- matrix(rnorm(12), ncol=3)
apply(mat, 2, fun)
Hope this helps,
Rui Barradas
On 02-10-2012 10:51, Rui Esteves wrote:
Hello,
I have a matrix with values, with columns c1..cn.
I need th
Just a note that we intend to have a second patch release version on October
26. The nickname will be "Trick or Treat".
Details of current changes can be found at
http://stat.ethz.ch/R-manual/R-devel/doc/html/NEWS.html
(scroll past the R-devel stuff and look at 2.15.1 patched.)
--
Peter Dalg
Hello,
?scale in "base" package.
Best Regards,
Pascal
On 12/10/02 18:51, Rui Esteves wrote:
Hello,
I have a matrix with values, with columns c1..cn.
I need the values to be normalized between 0 and 1 by column.
Therefore, the 0 should correspond to the minimum value in the column c1 and
1
Hi Rui,
It doesn't really need one...
doit <- function(x) {(x - min(x, na.rm=TRUE))/(max(x,na.rm=TRUE) -
min(x, na.rm=TRUE))}
# use lapply to apply doit() to every column in a data frame
# mtcars is built into R
normed <- as.data.frame(lapply(mtcars, doit))
# verify that the range of all is [0, 1]
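That check might look like this (this line is not part of the original message):
sapply(normed, range)   # each column should come out as 0 and 1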
On 12-10-02 5:51 AM, Rui Esteves wrote:
Hello,
I have a matrix with values, with columns c1..cn.
I need the values to be normalized between 0 and 1 by column.
Therefore, the 0 should correspond to the minimum value in the column c1 and
1 should correspond to the maximum value in the column c1.
Th
Hello,
I have a matrix with values, with columns c1..cn.
I need the values to be normalized between 0 and 1 by column.
Therefore, the 0 should correspond to the minimum value in the column c1 and
1 should correspond to the maximum value in the column c1.
The remaining columns should be organized in