You need recent enough cairo libraries in your OS. Your OS appears to
date from mid 2006, so likely you need a later cairo: my system is using
cairo 1.4.10 and found all the backends.
Your kernel is quite old: does your OS have kernel and cairo updates you
have not applied?
On Fri, 26 Oct 200
This is beautiful, thank you!
Greetings
Johannes
On Fri, 26 Oct 2007, Charles C. Berry wrote:
> On Fri, 26 Oct 2007, kevinchang wrote:
>
>>
>> Hi all,
>>
>> I am coding for finding the root of f(x) = phi(x) - alpha, where phi(x) is the
>> cumulative distribution function and alpha is a constant. The problem right now is
>> I can't get the "initialX" rep
On Fri, 26 Oct 2007, kevinchang wrote:
>
> Hi all,
>
> I am coding for finding the root of f(x) = phi(x) - alpha, where phi(x) is the
> cumulative distribution function and alpha is a constant. The problem right now is
> I can't get the "initialX" representing the root out of the while loop when
> ending
Hi all,
I am coding for finding the root of f(x) = phi(x) - alpha, where phi(x) is the
cumulative distribution function and alpha is a constant. The problem right now is
I can't get the "initialX" representing the root out of the while loop when it
ends; it seems to me that it disappears when the loop ends acc
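For what it's worth, a minimal sketch of the usual pattern, assuming phi() is the standard normal CDF (pnorm) and using a Newton update; the key point is that the loop modifies a variable which is then returned (or assigned) after the loop, so its value does not disappear:

find_root <- function(alpha, tol = 1e-8) {
  initialX <- 0                       # starting guess
  while (abs(pnorm(initialX) - alpha) > tol) {
    # Newton step for f(x) = pnorm(x) - alpha, with f'(x) = dnorm(x)
    initialX <- initialX - (pnorm(initialX) - alpha) / dnorm(initialX)
  }
  initialX                            # returned after the while loop ends
}
find_root(0.975)                      # about 1.959964, i.e. qnorm(0.975)

For this particular f, uniroot(function(x) pnorm(x) - alpha, c(-10, 10)) or simply qnorm(alpha) would of course also do.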
Oops. It looks like your 'date' is not a true date object, but either a
factor or a character string vector. You may have to change this a bit
to
Year <- function(date)
as.POSIXlt(as.Date(date, format = "%m/%d/%Y"))$year + 1900
M1 <- with(tab, tapply(measurement1, Year(date), max, na.rm = TRUE))
This is one way to implement the idea
the other responses point at:
> myfun <- function(funlist, vec){
+ tmp <- lapply(funlist, function(x)do.call(x, args = list(vec)))
+ names(tmp)[names(tmp) == ""] <-
+ sapply(match.call()[[2]], deparse)[-1][names(tmp) == ""]
+ tmp
+ }
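A small usage sketch (my own toy data, not from the original post); unnamed elements of funlist get named from the deparsed call, while explicit names are kept:

> myfun(list(m = mean, sd), 1:10)
$m
[1] 5.5

$sd
[1] 3.02765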
Here's a possibility.
Year <- function(date) as.POSIXlt(date)$year + 1900
M1 <- with(tab, tapply(measurement1, Year(date), max, na.rm = TRUE))
(assuming, as always, that you have missing values you wish to ignore...)
Pity about your busted shift key. Keyboards are pretty cheap these
days, though...
Bill Venables
After a short exchange with the original questioner, I wrote the following
function to calculate the elasticities of lower-level variables in population
transition matrices (Leslie matrices, etc.). Perhaps it will be of use to others.
There is no error-checking, so use with care. Users should cons
Hello,
Please, can anyone help me out? I am a new user of the R
program. I am having a problem
with the code below and am not getting the expected
results.
1. For each m, the cumulative sum should be 1.000, but the
2nd and 3rd m returned 2.000 and 3.000
instead of 1.000.
2. To get the LCL(m) and UCL(m) for each m b
Dear R Gurus -
Please help me to make a multiple-panel plot using xYplot in which
the y-axis of each panel is dependent only on the data of that plot.
Here is the dataset:
   response  date     mean    Lower  Upper
1   density     1  271.000  249.250    289
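Not having the full data, here is a sketch of the usual lattice way to give each panel its own y-axis, with the data frame name and formula guessed from the snippet above; xYplot (Hmisc) is lattice-based, so the same scales argument should apply, but check ?xYplot:

library(lattice)
# 'dat' is assumed to hold the data shown above, with columns response, date,
# mean, Lower and Upper
xyplot(mean ~ date | response, data = dat,
       scales = list(y = list(relation = "free")))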
only one out of many many
dt = cbind(sim$mean$u2+sim$mean$beta1,sim$sd$u2)
dt = dt[order(dt[,1]),]
bounds = cbind(c(dt[,1] - 1.96*dt[,2], dt[,1] + 1.96*dt[,2], dt[,1]),
               rep(1:length(sim$mean$u2), 3))
bounds = bounds[order(bounds[,2]),]
plot(bounds)
T
Matthew Krachey wrote:
> I'm trying to compare the
On Fri, 26 Oct 2007, Ken Termiso wrote:
> Hello all,
>
> I'm using the following scan() parameters on a tab-separated text file that
> was generated by R.
>
> temp_file <- scan(file = outfile, sep="\t", what = character(), skip = 1,
> nlines = 1)
>
> The problem is that within some cells, there
Hi All
Whatever inits I specify, they have no effect on the estimation. I am
replicating a textbook example. The result is complete garbage, with
estimates
of -58.7 (sd = 59.3), where it should be closer to an ML estimate of 0.585
(SE = 0.063).
The two chains within one run are different, but
Here are a few ways:
with(z, y[x %in% w])
subset(z, x %in% w)$y
z[z$x %in% w, "y"]
z$y[z$x %in% w]
# see sqldf.googlecode.com for more info
library(sqldf)
sqldf("select y from z where x in (2, 3, 5)")
# but if you know that x is 1:n and the components of w are in
# that set, then you can index directly: z$y[w]
System: 2.6.0
Linux kernel 2.6.15 Ubuntu dapper
R version 2.5.1
ESS 5.2.11 on Emacs 21.4.1
Colleagues
I am still struggling to produce SVG file output in R.
I initially started with RSvgDevice package. I produced a simple graphic
from an example in the documentation and it imported into Inkscape
z$y[z$x %in% w]
b
On Oct 26, 2007, at 4:19 PM, Em C wrote:
> Hi all,
>
> I'm trying to find
> something like the "==" operator that will work on vectors or
> something
> equivalent to SQL's "IN" operator. For example, if I have:
>
> x <- c(1,2,3,4,5)
> y <- c("apples", "oranges", "grapes", "banan
Hi all,
I'm trying to find
something like the "==" operator that will work on vectors, or something
equivalent to SQL's "IN" operator. For example, if I have:
x <- c(1,2,3,4,5)
y <- c("apples", "oranges", "grapes", "bananas", "pears")
z <- data.frame (x,y)
w <- c(2,4,5)
I want R to return the values
Does a package exist that allows using frequency weights (non-whole numbers) in
an ANOVA? I understand that ANOVA is not implemented in the "survey" package.
Hello all,
I'm using the following scan() parameters on a tab-separated text file that was
generated by R.
temp_file <- scan(file = outfile, sep="\t", what = character(), skip = 1,
nlines = 1)
The problem is that within some cells there are cases where there are three
forward slashes ( /// ).
That is due to the tip server that shows you the parameters available
in functions, so there is a lot of overhead in parsing what you are
currently typing. It's a nice feature, but I turned it off because of
the slowness.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTE
I have not noticed this but I am still using the old
Tinn-R 1.17.2.4.
--- Jeff Miller <[EMAIL PROTECTED]> wrote:
> While on the Tinn-R topic...
>
> I absolutely love Tinn-R but I have noticed one
> quirk and am wondering if
> anyone else experiences it.
>
> In the middle of typing a line of code,
While on the Tinn-R topic...
I absolutely love Tinn-R but I have noticed one quirk and am wondering if
anyone else experiences it.
In the middle of typing a line of code, sometimes everything will slow down. I
will have to type a letter, wait a second, type a letter, wait a second,
etc., until I ge
Thanks! That worked.
Silvia.
Bos, Roger wrote:
>
> Silvia,
>
> Option / Main / Application then click on the 'R' tab. At the bottom of
> the window you can browse to the new location.
>
> HTH, Roger
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Be
What version of Tinn-R (not sure it matters, though)?
In 1.19.2.3 you have to tell Tinn-R where the GUI is by giving its path in
the "path to preferred GUI" slot in the Options --> Main --> Application --> R
tab dialog box.
(which can be entered by browsing after clicking on the "Path to Preferred
On Fri, 26 Oct 2007, Ravi Varadhan wrote:
> Please pardon my non-R related response, but I couldn't resist this!
>
> I have always felt that the phrase "steep learning curve" is incorrectly
> used. If one plots "learning" on Y-axis and effort (or time) on the X-axis,
> then the (instantaneous) sl
Silvia,
Option / Main / Application then click on the 'R' tab. At the bottom of
the window you can browse to the new location.
HTH, Roger
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Silvia Lomascolo
Sent: Friday, October 26, 2007 2:31 PM
To: r-help
Hi all,
Can anyone please tell me how to start R from Tinn-R's "Toggle start/close
Rgui" button, after I've updated to a new version of R? It seems like Tinn-R
keeps looking for the previous version of R. I have updated R twice already
since I started using Tinn-R and I haven't been able to make t
On Fri, 26 Oct 2007, Tomas Vaisar wrote:
> Hi Chuck,
>
> I finally got to install v 2.6.0 and tried your initial suggestions - with
> the new version the
>
> dat <- as.data.frame( matrix( scan('tmp.txt'), nr=19) )
>
> did not make the list in the desired format, however the other two worked.
Tom
Dear kind helper,
I would like to know how to find the annual maximum for a table that
basically looks like this:
date        time      measurement1  measurement2  measurement3
mm/dd/yyyy  hh:mm:ss  m1            m2            m3
There are about 9000 measurements for ea
On 10/26/07, Frank Thomas <[EMAIL PROTECTED]> wrote:
...
> BTW: Contrary to some ideas both R & SPSS can be programmed and the
> algorithms for both have been published. So, no matter whether open
> source or private property you know what you do (if you want).
This is off the point of Matt's ori
Try using:
FortranForm[e]
in Mathematica as the Fortran output may be easier to translate.
On 10/26/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi,
> I need to import some expressions form Mathematica into R editor for
> coding purposes. If I just copy the expression from Mathematica into
Hi,
I need to import some expressions from Mathematica into the R editor for
coding purposes. If I just copy the expression from Mathematica into the
R editor, I obtain an unusable string. For instance, if I want to copy the
expression beta/sigma from Mathematica, I get \!\(β\/σ\) in the R
editor. Does
On Fri, Oct 26, 2007 at 10:02:23AM -0600, Eric Fuchs wrote:
> Hello:
>
> I am using R mainly on Windows XP, version 2.5. I'm a biologist
> with a medium-level statistics background. I have a problem specifying a
> two-way factorial design where both factors are random. I'm using the
> lmer() fu
Hello,
See the command ?expand.grid
If possible, please post questions in English, since the list is
international; alternatively, you could use the Portuguese-language list:
http://br.groups.yahoo.com/group/R_STAT/
On 26/10/2007, Pedro Raposeiro <[EMAIL PROTECTED]> wrote:
>
> I am writing to this list for the firs
Ravi Varadhan sent the following at 26/10/2007 17:29:
> Please pardon my non-R related response, but I couldn't resist this!
>
> I have always felt that the phrase "steep learning curve" is incorrectly
> used. If one plots "learning" on Y-axis and effort (or time) on the X-axis,
> then the (in
I am using the following construct and so far I am OK with it...
b <- barplot(data, ylim = c(0, max(data) + max(data)/20))
text(b, data + max(data)/30, data)
Thanks, for the comments.
-B
|-Original Message-
|From: hadley wickham [mailto:[EMAIL PROTECTED]
|Sent: Friday, October 26, 2007 11:47 AM
By "routines" I assume that you mean "underlying numerical algorithms."
Two part answer:
1) R has a lot more of them than SPSS, and more in most data-analysis
areas than Matlab (but Matlab is the right tool for, say,
differential equation solving).
2) A detailed answer to the question
Please pardon my non-R related response, but I couldn't resist this!
I have always felt that the phrase "steep learning curve" is incorrectly
used. If one plots "learning" on Y-axis and effort (or time) on the X-axis,
then the (instantaneous) slope of the learning curve for R should be
shallowe
Hi All,
I've been working with the cover.design function in the fields package
for space filling. I'm wondering though if there is a way, or if anyone
has written scripts to include weights at candidate locations in the
space filling algorithm. This would be similar to maximum attendance and
p
I am writing to this list for the first time. I have been reading the R manual,
but even so I have not been able to solve my problem.
Let me explain: I have several samples (each sample with several readings),
and I want to form all possible combinations of those samples. That is,
Pedra 1 (x1,1;
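For reference, a minimal sketch of what ?expand.grid (and combn) give, using made-up sample labels:

samples <- c("Pedra1", "Pedra2", "Pedra3")   # hypothetical sample labels
expand.grid(a = samples, b = samples)        # all ordered pairs of samples
combn(samples, 2)                            # all unordered pairs, one per column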
I "think" I understand what you want.
This seems to work for the test data you supplied
below. At least it gives the expected answer.
df1 <- unique(test.data[,c(1,4)])
names(df1) <- c("id.mother", "yy"); df1
df2 <- merge(test.data, df1) ; df2
I'm just curious . . . if effect sizes are so important, and possibly a better
way of looking at results than p-values, since they don't depend on sample size
(Kline, 2004; Murphy and Myors, 2004), why don't any of the classical tests,
like t.test or glht specified for Tukey's post hocs, return e
Hello:
I am using R mainly on Windows XP, version 2.5. I'm a biologist
with a medium-level statistics background. I have a problem specifying a
two-way factorial design where both factors are random. I'm using the
lmer() function implemented in the Matrix package, version 0.99.
My design is a
Some major differences between R and SPSS:
1/ The learning curve of R is steep, while that of SPSS is largely flat,
a difference any student will rapidly understand.
2/ The user interface of R is underdeveloped in comparison to SPSS.
3/ In R, unless you love spending time programming, you do not get
On 10/25/07, Bernd Jagla <[EMAIL PROTECTED]> wrote:
> Again me.
>
>
>
> I want to plot the numbers on the bars of a barplot.
This is usually a bad idea, as the size of the numbers will distort
the shape of the bars, worsening perceptual accuracy. Do you want a
table or a graph?
Hadley
--
http
See also the "nameargs" function on p. 46 of V&R's S PROGRAMMING . As
previous posts indicated, further fiddling would be necessary to get exactly
what you want, and there's probably no universal clean solution.
Bert Gunter
Nonclinical Statistics
7-7374
-Original Message-
From: [EMAIL PR
Hi,
Thanks for your help, Thibaut. It was helpful to look at the graphs.
I think that the problem is that when the lower boundary is 0 and the upper
boundary is arbitrarily high it takes all connections into account (which
shouldn't be a problem, right?). If I make the lower boundary slightly hi
Thanks a lot for all the comments and suggestions. It has helped me
solve the problem. I find the "wide" to "long" transformation of the
data especially helpful. I used this in Stata but was not aware that I
could do the same in R.
Deepankar
On Fri, 2007-10-26 at 08:44 -0500, Douglas Bates wrot
Geertje Van der Heijden wrote:
>Hi,
>
>I am trying to calculate Moran's I test for the residuals for a
>regression equation, but I have trouble converting my coordinates into
>nb format.
>
>I have used the dnearneigh() function now with an arbitrarily high upper
>distance to make it include all pl
Hello R
I have some data from a number of gels showing the distance of bands in
each gel and lane. My problem is to align these values by some method.
It has been suggested to me (by a Matlab expert) that I could use
spectral methods such as COW (correlation-optimised warping) or PAGA
(peak al
That is not a well-defined concept. To define 'character' you need to
know the encoding, since that determines how to split the bytes into
characters. So only whole strings can be UTF-8 or not. You can say which
bytes in a stream of bytes would be valid in UTF-8, but if not all of them
are t
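One workable approach (a sketch, assuming the stray bytes coming from the XLS file are latin1/Windows-1252): iconv() returns NA for strings that are not valid UTF-8, which lets you find them, and converting from latin1 always yields valid UTF-8:

x <- c("plain ASCII", "caf\xe9")                        # hypothetical input, one latin1 byte
bad <- is.na(iconv(x, from = "UTF-8", to = "UTF-8"))    # TRUE where not valid UTF-8
x[bad] <- iconv(x[bad], from = "latin1", to = "UTF-8")  # reinterpret those as latin1
x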
Michael,
Where can we read your document that includes "various ideas
going far beyond simply embedding R"? What about Julian's
opinion that Tinn-R is more stable and loads more quickly
than jEdit? Can that be true in a Windows environment?
-John Thaden
On Thu, 25 Oct 20
All,
I am trying to post text from an XLS spreadsheet to my wiki and I need to
remove any characters that are not UTF-8. Is there an easy gsub command
that can do this?
(I previously sent this same email to r-sig-gui. That was a mistake and
I apologize for the duplication.)
Thanks, Roger J. Bos
Hi
There is the kruskalmc function in the pgirmess package.
N
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Etienne Toffin
Sent: 26 October 2007 12:52
To: r-help@r-project.org
Subject: [R] Post-hoc test for Kruskal-Wallis
Hi there,
I've got a small qu
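Besides kruskalmc, a base-R alternative that is often used as a Kruskal-Wallis follow-up (not the Nemenyi test itself) is pairwise Wilcoxon tests with a multiplicity adjustment; a minimal sketch with made-up data:

g <- factor(rep(c("A", "B", "C"), each = 10))   # hypothetical groups
y <- rnorm(30) + as.numeric(g)                  # hypothetical response
kruskal.test(y ~ g)
pairwise.wilcox.test(y, g, p.adjust.method = "holm")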
Hi Chuck,
I finally got to install v 2.6.0 and tried your initial suggestions -
with the new version the
dat <- as.data.frame( matrix( scan('tmp.txt'), nr=19) )
did not make the list in the desired format, however the other two worked.
Thanks a lot again.
Tomas
Charles C. Berry wrote:
>
> To
You have (unimportant lines omitted)
> c2= survdiff(Surv(act.surv.time,censoring)~treatgrp ,data=b)
> plot(c2)
The problem is that you are using the wrong function. It is survfit that
creates plottable survival curves, survdiff only does the log-rank test.
Terry Therneau
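For example, a sketch of the usual pattern (variable names copied from the quoted code, untested here):

library(survival)
c2 <- survdiff(Surv(act.surv.time, censoring) ~ treatgrp, data = b)  # log-rank test
c2                                                                   # print the test
fit <- survfit(Surv(act.surv.time, censoring) ~ treatgrp, data = b)  # Kaplan-Meier curves
plot(fit, lty = 1:2)                                                 # these are plottable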
Another approach is to convert the data frame that you have in what is
sometimes called the "wide" format to the "long" format. See ?reshape
for details on this transformation.
In the process of doing the conversion I would also convert the sex of
the child to a factor with meaningful levels and
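A minimal sketch of the wide-to-long step with base R's reshape(); the column names (sex.1, wt.1, ...) are made up, since the original data were not shown:

wide <- data.frame(family = 1:3,
                   sex.1 = c("m", "f", "m"), wt.1 = c(30, 28, 32),
                   sex.2 = c("f", "f", "m"), wt.2 = c(25, 27, 24),
                   stringsAsFactors = FALSE)
long <- reshape(wide, direction = "long", idvar = "family", timevar = "child",
                varying = list(c("sex.1", "sex.2"), c("wt.1", "wt.2")),
                v.names = c("sex", "wt"))
# convert sex to a factor with meaningful levels
long$sex <- factor(long$sex, levels = c("f", "m"), labels = c("female", "male"))
long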
Dear Ralf,
I've now had a chance to take a closer look at the problems that you
reported with Anova.mlm(), and uploaded a new version of the car package to
CRAN that deals with them. Please see below:
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
Thanks a lot,
Rafael Barros de Rezende
Master's student in Economics - Currículo Lattes:
http://lattes.cnpq.br/9826095609825249
Cedeplar - Centro de Desenvolvimento e Planejamento Regional
Face, UFMG (http://www.cedeplar.ufmg.br)
-- Original Message ---
From: Mo
Is anyone aware of a procedure to apply Newey-West corrections for
autocorrelation to a SUR regression model? The sandwich package seems to be
applicable only to lm or glm models.
Thanks,
Richard Saba
Department of Economics
Auburn University
Email: [EMAIL PROTECTED]
Hi,
I am trying to calculate Moran's I test for the residuals for a
regression equation, but I have trouble converting my coordinates into
nb format.
I have used the dnearneigh() function now with an arbitrarily high upper
distance to make it include all plots. However, when I do the
lm.morantest
Hi there,
I've got a small question:
is there any post-hoc test for the Kruskal-Wallis rank sum test integrated in R?
I know that the Nemenyi test is one of the post-hoc tests that can be used,
but there's no (to my knowledge) R function for it.
What should I do?
Thanks,
Etienne
Version 2.1-0 of the pls package is now available on CRAN.
The pls package implements partial least squares regression (PLSR) and
principal component regression (PCR). Features of the package include
- Several plsr algorithms: orthogonal scores, kernel pls, wide kernel pls, and simpls
- Flexi
You could simply create your own function, to avoid repeating the
paste part each time:
scriptdir <- path.to.scripts
my.source <- function(file) {
source(file.path(scriptdir,file))
}
my.source("file.r")
my.source("anotherfile.r")
...
(You'll have to watch out for the correct number of slashes
Sorry that I was unclear. For an individual to qualify for my analysis I
want both of the following two criteria to be fulfilled:
First, I want to select measurements taken at a certain age: for the
focal individual, the year of measurement (year) should be the same as
year.hatch.
Second, I want th
On 10/25/07, Deepayan Sarkar <[EMAIL PROTECTED]> wrote:
> On 10/24/07, Paul Murrell <[EMAIL PROTECTED]> wrote:
> > Hi
> >
> >
> > Gustaf Rydevik wrote:
> > > Hi all,
> > >
> > > I'm trying to generate a plot containing a scatterplot, with marginal
> > > densityplots for x and y.
> > > However, when
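For completeness, a base-graphics sketch of the same idea (not the lattice approach discussed in this thread), using layout() and made-up data; the margins are only roughly matched:

x <- rnorm(200); y <- rnorm(200)
layout(matrix(c(2, 0,
                1, 3), nrow = 2, byrow = TRUE),
       widths = c(3, 1), heights = c(1, 3))
par(mar = c(4, 4, 0.5, 0.5))
plot(x, y)                                                   # main scatterplot
par(mar = c(0.5, 4, 0.5, 0.5))
plot(density(x), xlim = range(x), axes = FALSE,              # top marginal density
     main = "", xlab = "", ylab = "")
par(mar = c(4, 0.5, 0.5, 0.5))
dy <- density(y)
plot(dy$y, dy$x, type = "l", ylim = range(y), axes = FALSE,  # right marginal density
     main = "", xlab = "", ylab = "")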
Hi!
In the example:
hc <- hclust(dist(USArrests), "ave")
dend1 <- as.dendrogram(hc)
dend2 <- cut(dend1, h=70)
Do the branches "Branch 1", "Branch 2", "Branch 3", ... in dend2$upper
str(dend2$upper)
--[dendrogram w/ 2 branches and 4 members at h = 152]
|--[dendrogram w/ 2 branches and 2 members
On Fri, 26 Oct 2007, Jonas Malmros wrote:
> Hello,
>
> My response variable seems to be distributed according to Student t
> with df=4. I have 320 observations and about 20 variables.
> I am wondering whether there is a way to fit glm with Student t for
> error distribution. Student t is not one o
Hello,
My response variable seems to be distributed according to Student t
with df=4. I have 320 observations and about 20 variables.
I am wondering whether there is a way to fit glm with Student t for
error distribution. Student t is not one of the family choices in glm
function.
How should I pr
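One option, since glm() offers no t family, is to maximise the t likelihood directly; a minimal base-R sketch (my own illustration with a simulated response y and design matrix X, df fixed at 4). Contributed packages for heavy-tailed regression exist too, but this avoids any dependency:

tlm <- function(y, X, df = 4) {
  negloglik <- function(par) {
    beta  <- par[seq_len(ncol(X))]
    sigma <- exp(par[ncol(X) + 1])              # optimise log(sigma) so sigma > 0
    -sum(dt((y - X %*% beta) / sigma, df = df, log = TRUE) - log(sigma))
  }
  start <- c(lm.fit(X, y)$coefficients, logsigma = 0)   # least-squares start values
  optim(start, negloglik, method = "BFGS")
}
# Illustration with simulated data:
X <- cbind(1, matrix(rnorm(320 * 3), ncol = 3))
y <- drop(X %*% c(1, 2, -1, 0.5)) + 0.3 * rt(320, df = 4)
tlm(y, X)$par                                   # coefficients and log(sigma)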
Hello,
I have R 2.6.0 installed on my laptop (Vista), and I create my scripts in WinEdt.
Today I installed R on a desktop PC which runs Windows XP Professional, SP2.
For some reason, the script I wrote on my laptop does not run on the
desktop machine. I reinstalled R, but to no avail.
Problem
Hello,
I have a question regarding resampling of data (matrices). Imagine
having 2 data sets, each organized as a matrix with two columns, the first
column being, e.g., traveled distance, the second column being some
dependent variable. I now have 2 data sets from two subjects covering
the same distanc