Hi Lauri,
I see two possibilities.
Let's say that you have
a <- c(1:5)
b <- c(1:7)
c <- c(1:4)
l <- list(a,b,c)
and you want to create an Excel file with column A (1)
containing a (5 rows), column B (2) containing b, and
column C containing c.
One possibility would be to write each ROW of the
output
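One possibility (a sketch of a padding approach, different from the row-by-row
idea above; out.csv is a made-up file name):
a <- 1:5
b <- 1:7
c <- 1:4
l <- list(a = a, b = b, c = c)
## pad the shorter vectors with NA so they can share a data.frame
n <- max(sapply(l, length))
padded <- lapply(l, function(v) c(v, rep(NA, n - length(v))))
## Excel will open the CSV with a, b, c in columns A, B, C
write.csv(as.data.frame(padded), "out.csv", row.names = FALSE, na = "")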
On Sunday 02 December 2007 06:01:58 pm Deepayan Sarkar wrote:
> On 12/2/07, Dylan Beaudette <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I have noticed an odd inconsistency when plotting a 'step' function
> > (type='s') in xyplot() vs. plot().
> >
> > For example, given the following data:
> >
> > ##
Hi, All;
I've been trying to install 64-bit R in leopard for the past several days
but no luck.
I thought everything worked out fine but when I tried to run it, I got the
following error:
/Library/Frameworks/R.framework/Versions/2.6/Resources/bin/R: line 173:
/Library/Frameworks/R.framework/Reso
I believe we need to know the following about packages:
(1) Does the package do what it purports to do, i.e. are the results valid?
(2) Have the results generated by the package been validated against some other
statistical package, or a hand-worked example?
(3) Are the methods used in the soundly ba
Following Gabor's suggestion, if x is your data.frame
you can do
y <- x[x$month %in% c(3,4,5),]
aggregate(y[,4:6],list(y$hour),mean)
--- Sherri Heck <[EMAIL PROTECTED]> wrote:
> Hi Gabor,
>
> Thank you for your help. I think I need to clarify
> a bit more. I am
> trying to say
>
> average
library(spatstat)
?ripras
Also, ``with user interaction'': ?clickpoly
HTH
cheers,
Rolf Turner
(Having said that, let me point out that it is a pretty dubious
practice to
``let the data choose the window''. The observation window is
*always* determined
by separate
Given a set of coordinates that form an irregular sampling area, is
there an R function to determine boundary points (coordinates defining
the limits of the area), either with or without user interaction ?
# for example, given the following irregular sampling area, how could
I define the boundary
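A minimal sketch of the ripras() suggestion, with made-up coordinates standing
in for the real sampling locations:
library(spatstat)
set.seed(42)
x <- runif(50)
y <- runif(50)
## Ripley-Rasson estimate of the observation window from the points alone
w <- ripras(x, y)
plot(w, main = "estimated boundary")
points(x, y)
## with user interaction instead: plot(x, y); w2 <- clickpoly(add = TRUE)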
Use format:
f <- function(x) {
format(match.call())
}
f(pi + 3)
On Dec 2, 2007 8:46 PM, tom sgouros <[EMAIL PROTECTED]> wrote:
>
> Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
>
> > Try this:
> >
> > > survey.write <- function(x) {
> > +print(match.call())
> > +x
> > + }
> > >
Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> Try this:
>
> > survey.write <- function(x) {
> +print(match.call())
> +x
> + }
> > out <- survey.write(pi+3)
> survey.write(x = pi + 3)
That's exactly what I need. Thank you.
But now, another question. I can't seem to get the value ret
On 12/2/07, Dylan Beaudette <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have noticed an odd inconsistency when plotting a 'step' function
> (type='s') in xyplot() vs. plot().
>
> For example, given the following data:
>
> ## generate some profile depths: 0 - 150, in 10 cm increments
> depth <- seq(0,150
One thing I would suggest is to run Rprof on a subset
of the data that takes 10-15 minutes and see where some of the hot
spots are. Since you have not provided commented, minimal,
self-contained, reproducible code, it is hard to determine where the
inefficiencies are since we d
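A minimal sketch of the Rprof workflow being suggested; slow.fn is a toy
stand-in for the poster's likelihood code:
slow.fn <- function(n) {
    s <- 0
    for (i in 1:n) s <- s + sum(rnorm(1000))   # deliberately loop-heavy
    s
}
Rprof("profile.out")
slow.fn(5000)
Rprof(NULL)
summaryRprof("profile.out")   # shows which calls dominate the run time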
Dear Joerg van den Hoff,
I tried your suggestion and got the following error:
> nls(y~B*x^A + C, start = c(A=3.2, B=0.002, C=0))
Error in nls(y ~ B * x^A + C, start = c(A = 3.2, B = 0.002, C = 0)) :
singular gradient
>
> sessionInfo()
R version 2.6.0 (2007-10-03)
i386-pc-mingw32
locale:
LC_
Hi,
I have noticed an odd inconsistency when plotting a 'step' function
(type='s') in xyplot() vs. plot().
For example, given the following data:
## generate some profile depths: 0 - 150, in 10 cm increments
depth <- seq(0,150, by=10)
## generate some property: random numbers in this case
prop <
jimbib webber <[EMAIL PROTECTED]> wrote in
news:[EMAIL PROTECTED]:
> Each number in the list below resides in a quantile. When put in
> order, there are 10 numbers, so the first is in the 0.1 quantile, the
> second in the 0.2, etc.
>
> Let's say we have 10 examples of systolic blood pressure from 3
Try this:
> survey.write <- function(x) {
+print(match.call())
+x
+ }
> out <- survey.write(pi+3)
survey.write(x = pi + 3)
On Dec 2, 2007 6:55 PM, tom sgouros <[EMAIL PROTECTED]> wrote:
>
>
> Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
>
> > If what you mean is that you have a file, te
Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
> If what you mean is that you have a file, test.R, of R commands
> and you are using source("test.R") and you wish to discover the
> name "test.R" without hard coding it in your file, then place this in
> test.R:
>
> ofile <- parent.frame(2)$ofile
If what you mean is that you have a file, test.R, of R commands
and you are using source("test.R") and you wish to discover the
name "test.R" without hard coding it in your file, then place this in
test.R:
ofile <- parent.frame(2)$ofile
and ofile will be set to "test.R". Note that the line shown
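A minimal sketch of what test.R could contain, as described above (the cat()
line is only illustrative; the trick applies when the file is run via
source(), not when pasted at the prompt):
## contents of test.R
ofile <- parent.frame(2)$ofile
cat("this script was sourced from:", ofile, "\n")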
Hello all:
I have a function that writes a fairly elaborate report based on some
survey data. For documentation and bookkeeping purposes, I'd like to
write out in the report the function call that produced the report, or
at least enough information to help me recreate the steps that led to
that
Also the fitted values satisfy Const = -B = 33 (approximately) so we could try:
> plot(richness ~ area)
> nls(richness ~ C * (1 - 1/sqrt(area)), start = c(C = 33))
Nonlinear regression model
model: richness ~ C * (1 - 1/sqrt(area))
data: parent.frame()
C
32.85
residual sum-of-squares:
Package review is a nice idea. But you raise a worrying point.
Are any of the 'downright dangerous' packages on CRAN?
If so, er... why?
>>> <[EMAIL PROTECTED]> 12/01/07 7:21 AM >>>
>I think the need for this is rather urgent, in fact. Most packages are
>very good, but I regret to say some are pr
On 3/12/2007, at 9:26 AM, Joerg van den Hoff wrote:
> and, contrary to other assessments you've received, I definitely
> would prefer `nls'
> for least squares fitting instead of using `optim' or other general
> minimization routines.
Clearly you are far cleverer than I at getting
OK. Since the model is linear except for A, let's use brute force to
repeatedly evaluate the sum of squares for values of A between
-2 and 2, proceeding in steps of .01 and solving for the other parameters
using lm. That will give us better starting values, and we should be able
to use nls after that.
> x <- seq
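A minimal sketch of the brute-force idea just described; area and richness
below are made-up placeholders, not the poster's data:
set.seed(1)
area <- runif(30, 1, 100)
richness <- 33 * (1 - 1/sqrt(area)) + rnorm(30)
## grid over A, fitting Const and B by lm at each step
A.grid <- seq(-2, 2, by = 0.01)
rss <- sapply(A.grid, function(A) sum(resid(lm(richness ~ I(area^A)))^2))
A0 <- A.grid[which.min(rss)]
fit0 <- lm(richness ~ I(area^A0))
## use the grid/lm results as starting values for nls
nls(richness ~ Const + B * area^A,
    start = c(Const = coef(fit0)[[1]], B = coef(fit0)[[2]], A = A0))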
This message reports a problem, and its solution. I found the solution while
posting. Since others may have the same problem, I am continuing with the
post.
PROBLEM
I am having a different problem than the one that others have reported (e.g.
http://www.nabble.com/Problem-Installing-2.6.0-on-Mac
Dear Gabor,
Thank you for your reply.
In fact I am fitting several models at the same time: linear, log-linear,
log-log, piecewise, etc. One of them is the power model. I really need to
fit a power model because it is one of the hypotheses that have been
suggested in the literature.
In addit
Is that really the model we want? When we have problems, sometimes
it's just a sign that the model is not very good in the first place.
plot(richness ~ area)
shows most of the points crowded to the left and just a few points out to
the right. This
does not seem like a very good pattern for model fit
On Sun, Dec 02, 2007 at 11:08:01AM -0800, Milton Cezar Ribeiro wrote:
> Dear all,
> I am still fighting against my "power model".
> I tried several times to use nls() but I can't run it.
> I am sending my variables and also the model which I would like to fit.
> As you can see, this "power model"
It seems I didn't get by with your previous solutions after all... I
would still need some more advice on the subject. I edited the DF so
that now all variables contain missing values (NAs).
y1 <- rnorm(10) + 6.8
y2 <- rnorm(10) + (1:10*1.7 + 1)
y3 <- rnorm(10) + (1:10*6.7 + 3.7)
y <- c(y1,y2,y3)
On 02/12/2007 2:16 PM, Allen McIntosh wrote:
> Version: Observed in 2.5.1
>
>> x <- 1:10
>> y <- 1
>> z <- array(1:10,dim=c(10,1))
>> persp(x,y,z)
> Error in persp(x, y, z, xlim, ylim, zlim, theta, phi, r, d, scale, expand, :
> invalid 'x' argument
>
>
> The problem isn't 'x'. It's 'y'
On 02/12/2007 1:31 PM, Ben Bolker wrote:
>
>
> Duncan Murdoch-2 wrote:
>> On 02/12/2007 11:49 AM, Allen McIntosh wrote:
>>> Version: 2.5.1
>> That's an obsolete version, but the issue is still present in R-devel.
>>
>>> array() is inconsistent when given non-integral dimensions:
>>>
zz <- ar
Rule number 1: Read the help for the function you are using.
You must supply starting values for the fit --- which the code you gave
doesn't do.
Rule number 2: Don't use nls()! Endless grief results.
Instead try:
foo <- function(par,x,y){
Const <- par[1]
B <- par[2]
A <- par[3]
sum((
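The function above is cut off at the sum() line. A sketch of what the
sum-of-squares objective and the optim() call could look like, assuming the
power model y = Const + B * x^A from this thread (the data are made-up
placeholders):
foo <- function(par, x, y) {
    Const <- par[1]
    B     <- par[2]
    A     <- par[3]
    sum((y - (Const + B * x^A))^2)   # residual sum of squares
}
set.seed(1)
x <- runif(30, 1, 100)
y <- 33 - 33/sqrt(x) + rnorm(30)
optim(c(Const = 30, B = -30, A = -0.5), foo, x = x, y = y)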
Version: Observed in 2.5.1
> x <- 1:10
> y <- 1
> z <- array(1:10,dim=c(10,1))
> persp(x,y,z)
Error in persp(x, y, z, xlim, ylim, zlim, theta, phi, r, d, scale, expand, :
invalid 'x' argument
The problem isn't 'x'. It's 'y'.
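For comparison, a sketch of a call that persp() accepts: it effectively needs
a surface of at least 2 x 2, with length(x) == nrow(z) and
length(y) == ncol(z):
x <- 1:10
y <- 1:2
z <- matrix(1:20, nrow = 10, ncol = 2)
persp(x, y, z)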
Greetings,
I am trying to run a logistic regression model for binary data with a random
intercept and slope in R 2.6.1. When I use the code:
lmer1<-lmer(infect ~ time+gender + (1+time|id), family=binomial, data=ichs,
method="Laplace")
Then from:
summary(lmer1)
I get the message:
Error in if
Hello all,
I am trying to use the package 'odfWeave'
and I get the following error:
### error message
#
...
Removing content.xml
Post-processing the contents
Error in .Call("RS_XML_Parse", file, handlers, as.logical(addContext), :
E
Dear all,
I am still fighting against my "power model".
I tried several times to use nls() but I can't run it.
I am sending my variables and also the model which I would like to fit.
As you can see, this "power model" is not the best model to fit, but I
really need to fit it as well.
The model
Thank you, Prof Ripley, for your solutions; I'll get by with these.
Lauri
2007/12/2, Prof Brian Ripley <[EMAIL PROTECTED]>:
> On Sun, 2 Dec 2007, Lauri Nikkinen wrote:
>
> > #Dear R-users,
> > #I have a data.frame like this:
> >
> > y1 <- rnorm(10) + 6.8
> > y2 <- rnorm(10) + (1:10*1.7 + 1)
> > y3
On Thu, 29-Nov-2007 at 01:22PM -0800, Nathan Vandergrift wrote:
|>
|> I'm trying to get my graphics so that I can use them in LaTeX to create (via
|> ) a pdf presentation.
|>
|> I've tried controlling inner and outer margins and figure size using par(),
|> to no avail. The ps output keeps appear
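One common approach, offered only as a sketch and not necessarily what the
poster ended up doing: set the size on the graphics device itself and trim
the margins, rather than fighting par() afterwards.
pdf("slide-figure.pdf", width = 5, height = 4)   # size in inches
par(mar = c(4, 4, 1, 1), oma = c(0, 0, 0, 0))    # inner and outer margins
plot(rnorm(100))
dev.off()
postscript() takes the same width and height arguments if a ps/EPS route is
preferred.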
On Sun, 2 Dec 2007, Lauri Nikkinen wrote:
> #Dear R-users,
> #I have a data.frame like this:
>
> y1 <- rnorm(10) + 6.8
> y2 <- rnorm(10) + (1:10*1.7 + 1)
> y3 <- rnorm(10) + (1:10*6.7 + 3.7)
> y <- c(y1,y2,y3)
> x <- rep(1:3,10)
> f <- gl(2,15, labels=paste("lev", 1:2, sep=""))
> g <- seq(as.Date(
Duncan Murdoch-2 wrote:
>
> On 02/12/2007 11:49 AM, Allen McIntosh wrote:
>> Version: 2.5.1
>
> That's an obsolete version, but the issue is still present in R-devel.
>
>> array() is inconsistent when given non-integral dimensions:
>>
>>> zz <- array(0,dim=c(4,3.01))
>>> dim(zz)
>> [1] 4 3
>
On 02/12/2007 11:49 AM, Allen McIntosh wrote:
> Version: 2.5.1
That's an obsolete version, but the issue is still present in R-devel.
> array() is inconsistent when given non-integral dimensions:
>
>> zz <- array(0,dim=c(4,3.01))
>> dim(zz)
> [1] 4 3
>> zz <- array(0,dim=c(201,4.05))
> Error in
#Dear R-users,
#I have a data.frame like this:
y1 <- rnorm(10) + 6.8
y2 <- rnorm(10) + (1:10*1.7 + 1)
y3 <- rnorm(10) + (1:10*6.7 + 3.7)
y <- c(y1,y2,y3)
x <- rep(1:3,10)
f <- gl(2,15, labels=paste("lev", 1:2, sep=""))
g <- seq(as.Date("2000/1/1"), by="day", length=30)
DF <- data.frame(x=x,y=y, f=
R Users:
I am trying to estimate a model of fertility behaviour using birth history data
with maximum likelihood. My code works but is extremely slow (because of
several for loops and my programming inefficiencies); when I use the genetic
algorithm to optimize the likelihood function, it takes
Each number in the list below resides in a quantile. When put in order,
there are 10 numbers, so the first is in the 0.1 quantile, the second
in the 0.2, etc.
Let's say we have 10 examples of systolic blood pressure from 30-year-olds:
104,95,106,105,110,150,101,98,85,104
This is a random sam
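A minimal sketch of the bookkeeping described above, using the ten values
given:
bp <- c(104, 95, 106, 105, 110, 150, 101, 98, 85, 104)
## sorted values paired with their 0.1, 0.2, ..., 1.0 positions
data.frame(quantile = seq_along(bp) / length(bp), value = sort(bp))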
Version: 2.5.1
array() is inconsistent when given non-integral dimensions:
> zz <- array(0,dim=c(4,3.01))
> dim(zz)
[1] 4 3
> zz <- array(0,dim=c(201,4.05))
Error in dim(data) <- dim : dim<- : dims [product 804] do not match the length
of object [814]
[IMHO the code that did this is broken. M
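A sketch of one way to sidestep the inconsistency in the meantime: make the
truncation explicit before calling array().
dims <- c(201, 4.05)
zz <- array(0, dim = as.integer(dims))   # 201 x 4, no length mismatch
dim(zz)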
Ah, so! Thank you.
--- Gabor Grothendieck <[EMAIL PROTECTED]>
wrote:
> It's a bit tricky if you want to get it to work
> exactly the same as
> Excel even in the presence of runs, but in terms of
> the R approx function
> I think percentrank corresponds to ties = "min" if
> the value is among those
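A sketch of the ties = "min" idea on made-up numbers, using rank() rather
than approx(): the percent rank is the proportion of values strictly below
each observation, Excel-PERCENTRANK style (that correspondence is the
poster's claim, not verified here).
z <- c(3, 1, 4, 1, 5, 9, 2, 6)
prank <- (rank(z, ties.method = "min") - 1) / (length(z) - 1)
cbind(z, prank)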
[EMAIL PROTECTED] wrote:
> Hi Jim,
> Thanks for getting back to me so quickly.
>
> I did look at color.legend, but that seems to plot colored blocks for
> the observations (in this case the mean) and not for the color.scale
> (which represents variance in this case). Unless there is a
> functiona