Guillaume,
Have a look at the ggplot book on p. 29
(http://had.co.nz/ggplot2/book.pdf).
HTH,
Thierry
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
and Forest
Cel biometri
Hi
I am carrying out some logit regressions, so I have a (0,1) dependent variable
and various explanatory variables, each in the (-inf, inf) range. As well as
the usual output, I need to report the marginal effects of each of the
explanatory variables. How can this be done? I've seen a couple of si
Thierry
First, thank you for the celerity of your response.
Second, I use ggplot2 like this:
>ggplot(data, aes(x,y,fill)) + geom_point() + etc.
Where do you put your xlab and ylab when using ggplot2 like that?
Guillaume
2008/3/5, ONKELINX, Thierry <[EMAIL PROTECTED]>:
>
> Guillaume,
>
> Have a look a
Guillaume,
You'll have to add the appropriate scales.
ggplot(data, aes(x,y,fill)) + geom_point() + scale_x_continuous("your
xlabel") + scale_y_continuous("your ylabel")
I suppose you can add a main title in a similar way, but I haven't found
that yet. But I'm sure that Hadley will answer this.
On Wed, Mar 05, 2008 at 02:27:21AM -0500, Charilaos Skiadas wrote:
[...]
>
> Btw, you will likely want to take the betweenness call out, and call
> it once and store the result, instead of calling it twice (well,
> assuming the graph is largish). Or even better, use which.max:
>
> which.max(b
Hi
I have a 3 x 2 contingency table:
10 20
30 40
50 60
I want to update the frequencies to new marginal totals:
100 130
40 80 110
I want to use the ipf (iterative proportional fitting) function which
is apparently in the cat package.
Can somebody please advise me how to input this data and invoke
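In the meantime, the algorithm itself is short enough to sketch in base R (this is a generic IPF loop, not the cat package's ipf() interface, whose exact arguments I won't guess; the data just restate the table above):

```r
# Plain iterative proportional fitting: alternately rescale rows and
# columns until the margins match the new targets.
tab <- matrix(c(10, 30, 50, 20, 40, 60), nrow = 3)  # the 3 x 2 table
row.target <- c(40, 80, 110)   # new row totals
col.target <- c(100, 130)      # new column totals
for (iter in 1:100) {
  tab <- tab * (row.target / rowSums(tab))              # match row totals
  tab <- sweep(tab, 2, col.target / colSums(tab), "*")  # match column totals
}
round(tab, 2)
```

After convergence the cell counts keep the original odds-ratio structure while fitting both new margins.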
Dear all,
I am using "coxme" function in Kinship library to fit random treatment effect
nested within centre. I got 3 treatments (0,1,2) and 3 centres. I used
following commands, but got an error.
> ugroup=paste(rep(1:3,each=3),rep(0:2,3),sep='/')
> mat1=bdsmatrix(rep(c(1,1,1,1,1,1,1,1,1),3),b
Mark, graph.adjacency always preserves the order of the vertices,
so the vertex at row/column 1 will be vertex #0 in the igraph graph,
etc. I'll document this in a minute.
This means that you can always do
g <- graph.adjacency(A)
V(g)$name <- colnames(A)
But i completely agree that this should
ONKELINX, Thierry inbo.be> writes:
>
> Guillaume,
>
> You'll have to add the appropriate scales.
>
> ggplot(data, aes(x,y,fill)) + geom_point() + scale_x_continuous("your
> xlabel") + scale_y_continuous("your ylabel")
>
> I suppose you can add a main title in a similar way, but I haven't f
Hi,
I have some questions about p.adjust.
"The false discovery rate is a less stringent condition than the family wise
error rate, so these methods are more powerful than the others.", these
methods refer to FDR methods or FWER methods. Simply what are the
differences/pros/cons of both classes of
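The difference is easy to see numerically (made-up p-values; "bonferroni" and "holm" control the family-wise error rate, "BH" controls the false discovery rate and is less stringent):

```r
# FWER vs FDR adjustment with p.adjust(); BH never rejects fewer
# hypotheses than Bonferroni at the same cutoff.
p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216)
bonf <- p.adjust(p, method = "bonferroni")
bh   <- p.adjust(p, method = "BH")
round(rbind(raw = p, bonferroni = bonf, BH = bh), 3)
```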
Hi,
There are different types of tiff methods in bitmap(), which one should be
used for publication-quality pictures ? '"tiffcrle"',
'"tiffg3"', '"tiffg32d"', '"tiffg4"', '"tifflzw"', '"tiffpack"',
'"tiff12nc"',
'"tiff24nc"',
Thanks
Dear All,
In a package, I want to use some C code where I am using a structure
(as the basic element of a linked list) with flexible array members.
Basically, this is a structure where the last component is an
incomplete array type (e.g., Harbison & Steel, "C, a reference
manual, 5th ed.", p. 159
Thierry and Ingo,
Thanks for these smart responses. It works fine.
Guillaume
2008/3/5, Ingo Michaelis <[EMAIL PROTECTED]>:
>
>
>
> ONKELINX, Thierry inbo.be> writes:
>
> >
> > Guillaume,
> >
> > You'll have to add the appropriate scales.
> >
> > ggplot(data, aes(x,y,fill)) + geom_point() + scal
On 05-Mar-08 07:14:28, Chandra Shah wrote:
> Hi
> I have a 3 x 2 contingency table:
> 10 20
> 30 40
> 50 60
> I want to update the frequencies to new marginal totals:
> 100 130
> 40 80 110
> I want to use the ipf (iterative proportional fitting) function
> which is apparently in the cat package.
>
Ng Stanley wrote:
> Hi,
>
> There are different types of tiff methods in bitmap(), which one should be
> used for publication-quality pictures ?
I'd avoid bitmap altogether and try vector formats such as PostScript or PDF,
or similar formats depending on your publication.
Uwe Ligges
> '"tiffcr
On Wednesday 05 March 2008 09:25:11 am Paul Sweeting wrote:
PS> I am carrying out some logit regressions, so have a (0,1) dependent
PS> , I need to report the marginal effects of
Have a look at lrm in the Design package. It reports Dxy, which may be what
you want...
Stefan
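One hand-rolled way to get average marginal effects from a logit (my own sketch with simulated data, not something from lrm/Design): average the logistic density p(1-p) over the sample and multiply by each coefficient.

```r
# Average marginal effects for a logit: d p / d x = p * (1 - p) * beta,
# averaged over the sample (simulated data for illustration only).
set.seed(5)
x1 <- rnorm(200); x2 <- rnorm(200)
y  <- rbinom(200, 1, plogis(0.5 + x1 - 0.5 * x2))
fit <- glm(y ~ x1 + x2, family = binomial)
p   <- fitted(fit)
ame <- mean(p * (1 - p)) * coef(fit)[-1]   # drop the intercept
round(ame, 3)
```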
Goodmorning Jim,
My file has not only more than a million values, but more than a million
rows and roughly 30 columns (it is a production dataset for cows); in fact,
with read.table I'm not able to import it.
It is an xls file.
How do you import your files with a million rows and 4-5 columns?
thank you
r
Thanks to Mr. Liviu Androvic and Mr. Richard Rowe, who helped me with PCA.
Because I have only just learned the R language in the last few days, I still have many problems.
1) I don't know why the PCA rotation function does not run, although I have tried many times.
Would you please help me and explain how to read the PCA map (both of
rotated a
A parallel coordinate plot would do fine. Load the package iplots
and then use the command ipcp(x1, x2,...)
Antony Unwin
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-p
That's really great, but now sink() also uses this width. Is it
possible to make sink() use another width (i.e. 80 characters)?
Thanks,
Martin
Am Dienstag, den 04.03.2008, 13:23 +0100 schrieb Martin Elff:
> On Tuesday 04 March 2008 (12:34:47), Peter Dalgaard wrote:
> > Martin Kaffanke wrote:
Hi,
In my discipline, it is common to plot one acoustic property on a
positive scale but from top to bottom on the ordinate, and the same for
another measurement on the abscissa.
So the origin of the plot is at the top right, with values
increasing to the left/down. This is to highli
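For the archive: base graphics handles this if you reverse both axis limits (the data and labels below are invented formant-style values, not the poster's):

```r
# Origin at the top right: reverse xlim and ylim so values increase
# to the left and downward.
f1 <- c(300, 500, 700, 400)     # hypothetical ordinate values
f2 <- c(2200, 1800, 1200, 900)  # hypothetical abscissa values
plot(f2, f1,
     xlim = rev(range(f2)),     # increasing to the left
     ylim = rev(range(f1)),     # increasing downward
     xlab = "F2 (Hz)", ylab = "F1 (Hz)")
```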
Hi all!
This is a rather statistical question;
Which effect sizes (parametric or not) could I use in order to estimate
the amount of non-linear correlation between 2 variables?
Is it possible to correct for auto-correlation within the correlated
times series?
Any suggestions for the ap
Hello list,
I am trying to apply the paired t.test between diseased and non-diseased
patients to identify genes that are more expressed in one condition
than in the other. In order to retrieve the genes that are more expressed in
the positive disease state I do:
p.values<-c()
for(i in 1:length(S
On Wed, Mar 05, 2008 at 12:32:19PM +0100, Erika Frigo wrote:
> My file has not only more than a million values, but more than a million
> rows and roughly 30 columns (it is a production dataset for cows); in fact,
> with read.table I'm not able to import it.
> It is an xls file.
read.table() expe
On Wednesday 05 March 2008 (12:56:17), Martin Kaffanke wrote:
> Thats really great, but now the sink() uses also this width. Is it
> possible to make sink using another width (i.e. 80 characters)?
# auto width adjustment
.adjustWidth <- function(...){
options(width=Sys.getenv("COLUMNS"))
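One way to give the sink a narrower width than the console is simply to reset options(width) around it (a sketch; a text connection stands in for a file so the effect is easy to check):

```r
# Wide console, 80-character sink: flip options(width) while the sink
# is active, then restore it.
options(width = 200)                 # wide interactive width
con <- textConnection("out", "w")    # stand-in for a file
sink(con)
op <- options(width = 80)            # narrow width for the sink only
print(seq_len(60))
options(op)                          # back to the wide width
sink()
close(con)
max(nchar(out))                      # no line wider than 80
```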
I am sorry, the test is unpaired... But my question remains.
Thanks,
Eleni
On Wed, Mar 5, 2008 at 2:33 PM, Eleni Christodoulou <[EMAIL PROTECTED]>
wrote:
> Hello list,
>
> I am trying to apply the paired t.test between diseased and not diseased
> patients to identify genes that are more expressed
Hello
I've stumbled upon a problem with the inversion of a matrix with large
values, and I haven't found a solution yet... I wondered if someone could
lend a hand. (It is about automatic optimisation of a calibration process,
which involves the inverse of the information matrix.)
code:
***
> I had the same problem and found a solution in some forums. Try this:
>
> p<-ggplot(data, aes(x,y,fill)) + geom_point() + scale_x_continuous("your
> xlabel") + scale_y_continuous("your ylabel")
The new (more ggplot-like way) is to do:
ggplot(data, aes(x,y,fill)) + ... + opts(title = "my title")
On 3/5/2008 8:21 AM, gerardus vanneste wrote:
> Hello
>
> I've stumbled upon a problem for inversion of a matrix with large values,
> and I haven't found a solution yet... I wondered if someone could give a
> hand. (It is about automatic optimisation of a calibration process, which
> involves the
Sorry, I meant to send this to the whole list.
On Mar 5, 2008, at 8:46 AM, Charilaos Skiadas wrote:
> The problem doesn't necessarily have to do with the range of data.
> At first level, it has to do with the simple fact that dfdb has
> rank 6 at most, (7 at most in general, though in your ca
On Wed, Mar 5, 2008 at 2:05 PM, ian white <[EMAIL PROTECTED]> wrote:
> Don't you need to make some allowance for multiple testing? E.g. to get
> a experiment-wise significance level of 0.01 you need
>
> which(p.values < very small number)
>
> where the very small number is approximately 0.01/(tota
Dear all,
I did a non-linear least square model fit
y ~ a * x^b
(a) > nls(y ~ a * x^b, start=list(a=1,b=1))
to obtain the coefficients a & b.
I did the same with the linearized formula, including a linear model
log(y) ~ log(a) + b * log(x)
(b) > nls(log10(y) ~ log10(a) + b*log10(x), start=l
On 3/4/2008 2:45 PM, Jarrett Byrnes wrote:
> Hello, R-i-zens. I'm working on an data set with a factorial ANOVA
> that has a significant interaction. I'm interested in seeing whether
> the simple effects are different from 0, and I'm pondering how to do
> this. So, I have
>
> my.anova<-lm
Dear all,
I try to import a SPSS.por dataset with about 6000 cases and 650
variables with R commander
and got error messages:
'Error: /temp/t.por" use.value.labels=true,
max.value.label=inf,
'Error: data is not data frame and cannot be attached'
Any comment?
Thanks in advan
Write out the objective functions that they are minimizing and it
will be clear they are different so you can't expect the same
results.
On Wed, Mar 5, 2008 at 8:53 AM, Wolfgang Waser <[EMAIL PROTECTED]> wrote:
> Dear all,
>
> I did a non-linear least square model fit
>
> y ~ a * x^b
>
> (a) > nls
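A quick simulation makes the point concrete (toy data; the two fits minimize squared error on the raw scale and on the log scale respectively, so the coefficients differ):

```r
# Fit y ~ a * x^b directly and via the log-linearized model; the two
# objective functions weight the errors differently.
set.seed(3)
x <- 1:20
y <- 2 * x^1.5 * exp(rnorm(20, sd = 0.1))     # multiplicative noise
fit.nls <- nls(y ~ a * x^b, start = list(a = 1, b = 1))
fit.lm  <- lm(log10(y) ~ log10(x))
coef(fit.nls)                                       # (a, b) on the raw scale
c(a = 10^coef(fit.lm)[[1]], b = coef(fit.lm)[[2]])  # back-transformed
```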
The CPU on my computer 'died' and I have had to purchase a new
computer. I have just installed v 2.6.2. My previous computer had v
2.5.1 and a large number of files in the library folder. These files
have been copied to a partition on my new hard drive, along with the
old R installation.
Can
included message
Dear all,
I am using "coxme" function in Kinship library to fit random treatment effect
nested within centre. I got 3 treatments (0,1,2) and 3 centres. I used
following
commands, but got an error.
> ugroup=paste(rep(1:3,each=3),rep(0:2,3),sep='/')
>
mat1=bdsmatrix
Hi,
I am trying to generate a figure of 9 plots that are contained in one
device by using
par(mfrow = c(3,3))
I would like to have 1 common legend for all 9 plots somewhere outside
of the plotting area (as opposed to one legend inside each of the 9
plots, which the function legend() seems to ge
On Wednesday 05 March 2008 (14:53:27), Wolfgang Waser wrote:
> Dear all,
>
> I did a non-linear least square model fit
>
> y ~ a * x^b
>
> (a) > nls(y ~ a * x^b, start=list(a=1,b=1))
>
> to obtain the coefficients a & b.
>
> I did the same with the linearized formula, including a linear model
>
> l
Bob,
You can copy the files from the packages to your new computer. Then run
update.packages(checkBuilt = TRUE).
That should do.
HTH,
Thierry
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research In
See ?.libPaths for info on setting the location of your libraries.
If you do want to copy them the copydir.bat utility in
batchfiles.googlecode.com
can do that.
On Wed, Mar 5, 2008 at 9:07 AM, Bob Green <[EMAIL PROTECTED]> wrote:
>
> The CPU on my computer 'died' and I have had to purchase a new
On Wed, Mar 5, 2008 at 7:43 AM, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 3/5/2008 8:21 AM, gerardus vanneste wrote:
> > Hello
> >
> > I've stumbled upon a problem for inversion of a matrix with large values,
> > and I haven't found a solution yet...
Someone with experience in numerical l
I have strings containing a postcode and letters, some separated by a blank, some
by a comma, and some not separated at all, e.g. "2324gz" "2567 HK" "3741,BF".
I want to separate the numbers and letters into two new variables.
I know this should be quite a basic question, but I searched on regex syntax and
th
Looks like I turned an "off by one error" into an "off by two error" by
adding rather than subtracting. Clearly a logic error on my part.
Also, which.max is clearly superior, as it results in half as many
function calls.
Thanks guys!
As an aside, although igraph may use the C indexing conventio
Hi all
I am analyzing a data set containing information about the behaviour of
marine molluscs on a vertical wall. Since I have replicate observations
on the same individuals, I was thinking of using the geepack library.
The data are organised in a dataframe with the following variables
Date = da
I would like to acknowledge the answers I received from Tom Filloon, Mike
Cheung and Berwyn Turlach.
Berwyn's response was exactly what I needed. Use solve.QP from the quadprog
package in R. S-Plus has the equivalent function solveQP in the NuOpt
module.
Berwyn's response is below
G'day Carlos,
the comma separated file is 37Mb, and I get the below message:
it is a zoo object, read in this way:
# chron
> library(chron)
> fmt.chron <- function(x) {
+   chron(sub(" .*", "", x), gsub(".* (.*)", "\\1:00", x))
+ }
> z1 <- read.zoo("all.csv", sep = ",", header = TRUE, FUN = fmt.chron)
and then t
Try this:
> library(gsubfn)
> x <- c("2324gz", "2567 HK", "3741,BF")
> strapply(x, "[[:digit:]]+|[[:alpha:]]+")
[[1]]
[1] "2324" "gz"
[[2]]
[1] "2567" "HK"
[[3]]
[1] "3741" "BF"
On Wed, Mar 5, 2008 at 9:51 AM, sun <[EMAIL PROTECTED]> wrote:
> I have strings contain postcode and letters, so
This should do it for you:
> x <- c("2564gc", "2367,GH", "2134 JHG")
> x.sep <- gsub("([[:digit:]]+)[ ,]*([[:alpha:]]+)", "\\1 \\2", x)
> # now create separate values
> strsplit(x.sep, " ")
[[1]]
[1] "2564" "gc"
[[2]]
[1] "2367" "GH"
[[3]]
[1] "2134" "JHG"
>
On 3/5/08, sun <[EMAIL PROTECTED]>
> Which effect sizes (parametric or not) could I use in order to estimate
> the amount of non-linear correlation between 2 variables?
>
> Is it possible to correct for auto-correlation within the correlated
> times series?
>
I think the starting point is to develop a model, even conceptual,
You could try plotting it in pieces to use less RAM.
library(zoo)
library(chron)
z <- zoo(1:10, chron(1:10))
# same as plot(z)
plot(z[1:5], ylim = range(z), xlim = range(time(z)))
lines(z[5:10])
On Wed, Mar 5, 2008 at 10:00 AM, stephen sefick <[EMAIL PROTECTED]> wrote:
> the comma seperated file
Our March-April 2008 R/S+ course schedule is now available. Please check
out this link for additional information and direct enquiries to Sue
Turner [EMAIL PROTECTED] Phone: 206 686 1578
--Can't see your city? Please email us! --
Ask for Group Discount
Hi Sun,
vec <- c("2324gz","2567 HK","3741,BF")
vec1 <- gsub('[^[:digit:]]','',vec)
vec2 <- gsub('[^[:alpha:]]','',vec)
> vec1
[1] "2324" "2567" "3741"
> vec2
[1] "gz" "HK" "BF"
Cheers
Vincenzo
---
Vincenzo Luc
I would like to analyse a binary crossover design using the random
effects model. The probability of success is assumed to follow a logistic model.
Suppose as an example, we have 4 subjects undergoing a crossover design,
where the outcome is either success or failure. The first two subjects
receive treatme
Huh. Very interesting. I haven't really worked with manipulating contrast
matrices before, except to do a priori contrasts. Could you explain the matrix
you laid out a bit more so that I can generalize it to my case?
Chuck Cleland wrote:
>
>
>One approach would be to use glht() in t
Hello all,
I am trying to use
m <- seq(-1,1,0.1)
x1 <- vector()
x2 <- vector()
for(i in m){
x1[i] <- i
x2[i] <- i^2
}
dat <- data.frame(x1,x2)
But I get the wrong result:
>dat
x1 x2
1 1 1
Could someone tell me how to do this properly?
Thank you!
Hi,
I have a survey dataset of about 2 observations
where for 2 factor variables I have about 200 missing
values each. I want to impute these using 10 possibly
explanatory variables which are a mixture of integers
and factors.
Since I was quite intrigued by the concept of rrp I
wanted to use
Hi all,
I would like to know whether there is a function in R with which I can
find the cross-correlation of two or more multivariate (time series) datasets.
I tried the function ccf(), but it seems to take two univariate datasets.
Please let me know.
sincerely,
sandeep
--
Sandeep Joseph PhD
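For the record (an observation of my own, not a reply from the thread): acf() accepts a multivariate series and returns the full set of cross-correlations.

```r
# Cross-correlations of a multivariate time series via acf();
# the result is a lag x series x series array.
set.seed(2)
x <- ts(matrix(rnorm(200), ncol = 2))
r <- acf(x, lag.max = 5, plot = FALSE)
dim(r$acf)
```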
On Wed, 5 Mar 2008, Wolfgang Waser wrote:
> Dear all,
>
> I did a non-linear least square model fit
>
> y ~ a * x^b
>
> (a) > nls(y ~ a * x^b, start=list(a=1,b=1))
>
> to obtain the coefficients a & b.
>
> I did the same with the linearized formula, including a linear model
>
> log(y) ~ log(a) + b
Davood Tofighi wrote:
> Thanks for your reply. For each condition, I will have a matrix or data
> frames of 1000 rows and 4 columns. I also have a total of 64 conditions for
> now. So, in total, I will have 64 matrices or data frames of 1000 rows and 4
> columns. The format of data I would like to
> Date: Wed, 05 Mar 2008 15:59:59 +0100 (CET)
> From: Neuer Arkadasch <[EMAIL PROTECTED]>
> Sender: [EMAIL PROTECTED]
> Precedence: list
>
> Hello all,
>
> I am trying to use
>
> m <- seq(-1,1,0.1)
> x1 <- vector()
> x2 <- vector()
> for(i in m){
> x1[i] <- i
> x2[i] <- i^2
Try this:
m <- seq(-1,1,0.1)
x1 <- vector(length=length(m))
x2 <- vector(length=length(m))
for(i in seq_along(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
Ravi.
---
Ravi Varadhan, Ph.D.
Assistant Pro
m <- seq(-1,1,0.1)
x1 <- c()
x2 <- c()
for(i in 1:length(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Neuer Arkadasch
Sent: March 5, 2008 10:00 AM
To: [EMAIL PROTECTED]
Subject: [R
Thanks all for the prompt answers!!! All works perfectly!
up and running! Thanks!
"jim holtman" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> This should do it for you:
>
>> x <- c("2564gc", "2367,GH", "2134 JHG")
>> x.sep <- gsub("([[:digit:]]+)[ ,]*([[:alpha:]]+)", "\\1 \\2",
Why not simply?
m <- seq(-1, 1, by = 0.1)
dat <- data.frame(m, m^2)
- Erik Iverson
Neuer Arkadasch wrote:
> Hello all,
>
> I am trying to use
>
> m <- seq(-1,1,0.1)
> x1 <- vector()
> x2 <- vector()
> for(i in m){
> x1[i] <- i
> x2[i] <- i^2
> }
> dat <- data.frame(x1,
Ramon Diaz-Uriarte wrote on 03/05/2008 04:25 AM:
> Dear All,
>
> In a package, I want to use some C code where I am using a structure
> (as the basic element of a linked list) with flexible array members.
> Basically, this is a structure where the last component is an
> incomplete array type (e.g
rrp is working!
Sorry, it was my mistake... fiddling around to find
out what the problem was, I forgot to re-include the
variables which are to be imputed. It seems this
case is not caught, and the algorithm finishes with the
mentioned error.
Anyway, I am still a little fuzzy about imputation a
On Wed, 5 Mar 2008, Chandra Shah wrote:
> Hi
> I have a 3 x 2 contingency table:
> 10 20
> 30 40
> 50 60
> I want to update the frequencies to new marginal totals:
> 100 130
> 40 80 110
> I want to use the ipf (iterative proportional fitting) function which
> is apparently in the cat package.
> C
On Tue, Mar 4, 2008 at 9:48 PM, John Sorkin <[EMAIL PROTECTED]> wrote:
> Prof. Bates was correct to point out the lack of specifics in my original
> posting. I am looking for a package that will allow we to choose among link
> functions and account for repeated measures in a repeated measures ANO
Hi there!
In my case,
cor(d[1:20])
makes me a good correlation matrix.
Now I'd like to have it one-sided, meaning only the lower left triangle is
printed (the other entries are the same), and I'd like to have * where the
p-value is below 0.05 and ** where it is below 0.01.
How can I do this?
And another
On 3/5/2008 10:09 AM, jebyrnes wrote:
> Huh. Very interesting. I haven't really worked with manipulating contrast
> matrices before, except to do a priori contrasts. Could you explain the matrix
> you laid out a bit more so that I can generalize it to my case?
Each column corresponds to
Well well well...
To summarize: let us assume that A is a class (slot x) and C is a class
containing A (slot x and slot y). as(c,"A") calls new("A"), so new("A")
HAS TO work; you cannot decide to forbid empty objects (unless you define
setAs("C","A")?). In addition, any test that you would like to
On 3/5/08, Fredrik Karlsson <[EMAIL PROTECTED]> wrote:
> Hi,
>
> In my discipline, it is common to plot one acoustic property on a
> positive scale but from top to bottom on the ordinate and the same for
> another measurement on the abscissa.
> So, the origin of the plot is on the top right of
Thank you Yinghai, that's what I need :-)!
Yinghai Deng <[EMAIL PROTECTED]> wrote:
m <- seq(-1,1,0.1)
x1 <- c()
x2 <- c()
for(i in 1:length(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Be
Hi,
I'm trying to create a density plot which I used to do in geneplotter
using the following code. Unfortunately I can't find the combination of
R release and geneplotter that works.
Can anyone suggest a fix or an alternative to smoothScatter that will
plot depth of one dive vs depth of the nex
On Wed, 5 Mar 2008, Boikanyo Makubate wrote:
> I will like to analyse a binary cross over design using the random
> effects model. The probability of success is assumed to be logistic.
> Suppose as an example, we have 4 subjects undergoing a crossover design,
> where the outcome is either success
Try this:
On 05/03/2008, Martin Kaffanke <[EMAIL PROTECTED]> wrote:
> Hi there!
>
> In my case,
>
> cor(d[1:20])
>
> makes me a good correlation matrix.
>
> Now I'd like to have it one sided, means only the left bottom side to be
> printed (the others are the same) and I'd like to have * wher
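One possible sketch (my own, with simulated data): blank out the upper triangle and append stars taken from pairwise cor.test() p-values.

```r
# Lower-triangle correlation matrix with significance stars:
# * for p < 0.05, ** for p < 0.01 (pairwise cor.test p-values).
set.seed(1)
d <- data.frame(a = rnorm(30), b = rnorm(30), c = rnorm(30))
r <- cor(d)
p <- outer(seq_along(d), seq_along(d),
           Vectorize(function(i, j) cor.test(d[[i]], d[[j]])$p.value))
stars <- ifelse(p < 0.01, "**", ifelse(p < 0.05, "*", ""))
out <- matrix(paste0(round(r, 2), stars), ncol = ncol(r),
              dimnames = dimnames(r))
out[upper.tri(out)] <- ""            # keep only the lower triangle
print(out, quote = FALSE)
```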
Hi Eleni,
Check *"Computing Thousands of Test Statistics Simultaneously in R" *in
http://stat-computing.org/newsletter/v181.pdf
Other alternative could be the multtest package.
HTH
Jorge
On Wed, Mar 5, 2008 at 8:55 AM, Eleni Christodoulou <[EMAIL PROTECTED]>
wrote:
> On Wed, Mar 5, 2008 a
Thank you everybody.
Phil, your expand.grid works very nicely and I will use it for
non-vectorized functions.
Yet I am a bit confused about "vectorization". For me it is synonymous with
"no loop". :-(
I wrote a toy example (with a function which is not my log-likelihood).
FIRST PART
nir=1:10
log
Dear Jeff,
Thanks for the suggestion. However, something is still not working.
This is a simple example:
*** start C
#include
struct Sequence {
int len;
unsigned int state_count[];
};
int main(void) {
struct Sequence *A;
int n = 4;
// First li
Dear List,
I am looking for an efficient method for replacing values in a
data.frame conditional on the values of a separate data.frame. Here is
my scenario:
I have a data.frame (A) with say 1000 columns, and 365 rows. Each cell
in the data.frame has either valid value, or NA. I have an additional
Try
bb[is.na(aa)] <- NA
It may be simple but it is not necessarily obvious :)
--- Carson Farmer <[EMAIL PROTECTED]> wrote:
> Dear List,
>
> I am looking for an efficient method for replacing
> values in a
> data.frame conditional on the values of a separate
> data.frame. Here is
> my scenario:
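A tiny demonstration of that one-liner (made-up data frames):

```r
# Set bb to NA wherever the corresponding cell of aa is NA.
aa <- data.frame(x = c(1, NA, 3), y = c(NA, 5, 6))
bb <- data.frame(x = c(10, 20, 30), y = c(40, 50, 60))
bb[is.na(aa)] <- NA
bb
```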
Folks,
A nice new data resource has come up -- http://data.un.org/
I thought it would be wonderful to setup an R function like
tseries::get.hist.quote() which would be able to pull in some or all
of this data.
I walked around a bit of it and I'm not able to map the resources to
predictable URLs w
m <- seq(-1,1,0.1)
x1 <- vector()
x2 <- vector()
# the loop statement was incorrect.
for(i in 1:length(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
# But why not something like this? There is no need
for a loop.
x1 <- seq(-1,1,0.1)
mdat <- data.frame(x1, x2=x1^2
Ah. I see. So, if I want to test to see whether each simple effect is
different from 0, I would do something like the following:
cm2 <- rbind(
"A:L" = c(1, 0, 0, 0, 0, 0),
"A:M" = c(1, 1, 0, 0, 0, 0),
"A:H" = c(1, 0, 1, 0, 0, 0),
"B:L" = c(1, 0, 0, 1, 0, 0),
"B:M" = c(1, 1, 0, 1, 1, 0)
Hello everybody,
I have a question about box-constrained optimization. I've done some
research and I found that optim could do that. Are there other ways in R ?
Is the following correct if I have a function f of two parameters belonging
for example to [0,1] and [0,Infinity] ?
optim(par=param, fn=
?mtitle should do it.
--- Georg Otto <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am trying to generate a figure of 9 plots that are
> contained in one
> device by using
>
> par(mfrow = c(3,3))
>
> I would like to have 1 common legend for all 9 plots
> somewhere outside
> of the plotting area (as op
Thanks to All,
The comments were very helpful; however, the simulation is running very
slowly. I reduced the number of loops (conditions) so I have 36 loops, and the
data generation occurs 1000 times within each loop. At the end of each 1000
reps, I save the summary (e.g., mean) of the reps to
On 3/5/2008 1:32 PM, jebyrnes wrote:
> Ah. I see. So, if I want to test to see whether each simple effect is
> different from 0, I would do something like the following:
>
> cm2 <- rbind(
> "A:L" = c(1, 0, 0, 0, 0, 0),
> "A:M" = c(1, 1, 0, 0, 0, 0),
> "A:H" = c(1, 0, 1, 0, 0, 0),
> "B:L" =
Dear All,
Can R perform n-way ANOVA, i.e., with 3 or more factors?
Thanks in advance,
Paul
On Wed, 2008-03-05 at 15:28 +0100, Georg Otto wrote:
> Hi,
>
> I am trying to generate a figure of 9 plots that are contained in one
> device by using
>
> par(mfrow = c(3,3))
>
> I would like to have 1 common legend for all 9 plots somewhere outside
> of the plotting area (as opposed to one leg
Try to download something in IE and look at the bottom of your browser
where the URL is displayed or look at the Javascript in:
http://data.un.org/_Scripts/SeriesActions.js
and its apparent that the format is as follows:
http://data.un.org/Handlers/DownloadHandler.ashx?DataFilter=srID:1000&dataMar
Hi,
Let me make the following points in response to your questions:
1. Your call to optim() with "L-BFGS-B" as the method is correct. Just
make sure that your function "f" is defined as negative log-likelihood,
since optim is by default a minimizer. The other option is to define
log-likelihood
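A minimal L-BFGS-B sketch along those lines (the objective here is a toy function standing in for the negative log-likelihood):

```r
# Box-constrained minimization: first parameter in [0, 1],
# second in [0, Inf).
f <- function(p) (p[1] - 0.3)^2 + (p[2] - 2)^2
fit <- optim(par = c(0.5, 1), fn = f, method = "L-BFGS-B",
             lower = c(0, 0), upper = c(1, Inf))
fit$par
```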
On Wed, 5 Mar 2008, Paul Smith wrote:
> Dear All,
>
> Can R perform n-way ANOVA, i.e., with 3 or more factors?
Yes. There are even examples on the help page!
>
> Thanks in advance,
>
> Paul
>
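A minimal three-factor example in the spirit of those help-page examples (balanced, simulated data):

```r
# Three-way factorial ANOVA with aov(); 2 x 2 x 3 design, 4 replicates.
set.seed(4)
dat <- expand.grid(f1 = gl(2, 1), f2 = gl(2, 1), f3 = gl(3, 1), rep = 1:4)
dat$y <- rnorm(nrow(dat))
fit <- aov(y ~ f1 * f2 * f3, data = dat)
summary(fit)
```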
[EMAIL PROTECTED] writes:
> Well well well...
You're partly misunderstanding...
> To summarize : let assume that A is a class (slot x) and C is a class
> containing A (A and slot y) - as(c,"A") calls new("A"). So new("A")
> HAS TO works, you can not decide to forbid empty object (unless you
> de
On Wed, 5 Mar 2008, Ramon Diaz-Uriarte wrote:
> Dear Jeff,
>
> Thanks for the suggestion. However, something is still not working.
> This is a simple example:
>
> *** start C
> #include
>
> struct Sequence {
> int len;
> unsigned int state_count[];
> };
>
>
Hello,
I am an advanced user of R. Recently I found out that apparently I do
not fully understand vectors and lists.
Take this code snippet:
T = c("02.03.2008 12:23", "03.03.2008 05:54")
Times = strptime(T, "%d.%m.%Y %H:%M")
Times # OK
class(Times) # OK
is.list(Tim
On 6/03/2008, at 2:53 AM, Wolfgang Waser wrote:
> Dear all,
>
> I did a non-linear least square model fit
>
> y ~ a * x^b
>
> (a) > nls(y ~ a * x^b, start=list(a=1,b=1))
>
> to obtain the coefficients a & b.
>
> I did the same with the linearized formula, including a linear model
>
> log(y) ~ log
Hello,
Given a list with all elements having identical layout, e.g.:
l = NULL
l[[1]] = list(4, "hello")
l[[2]] = list(7, "world")
l[[3]] = list(9, " ")
is there an easy way to collapse this list into a data.frame with each
row being the elements of the list ?
I.e. in this case I want to
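One common idiom (this restates the list above and assumes the desired output is one row per list element):

```r
# Collapse a list of two-element lists into a data.frame,
# one row per list element.
l <- list(list(4, "hello"), list(7, "world"), list(9, " "))
df <- data.frame(num  = sapply(l, `[[`, 1),
                 text = sapply(l, `[[`, 2),
                 stringsAsFactors = FALSE)
df
```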