On Thu, May 03, 2012 at 09:07:39PM -0400, li li wrote:
> Hi all,
> I have a 100 by 100 matrix and I divided this matrix into 100 groups,
> each a 10 by 10 submatrix. I want to find out the means of each group.
> I know we can use the apply function for means by margins. Is there a
> function i
Hello,
In the zoo package, if I would like the time frame to be 1981M01 to 1982M12,
then I write
time_0<-as.yearmon("1981-01")+(0:23)/12
However, if the time frame of interest becomes 1981M01 to 2011M12, it is
relatively hard to calculate the number of months. Is there any faster way
to do it?
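A sketch of one way to avoid counting the months by hand, relying only on
yearmon arithmetic from the zoo package (not taken from the original thread):
library(zoo)
start <- as.yearmon("1981-01")
end   <- as.yearmon("2011-12")
n_months <- round(12 * (end - start))   # yearmon differences are in years
time_0 <- start + (0:n_months) / 12
length(time_0)                          # 372 months, Jan 1981 .. Dec 2011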
Dear All,
I am using the LIMMA package to create 2 contrasts for my data and then
calculating the vennCounts of the decideTests from the contrast.fit to be able
to create Venn diagrams.
The code works fine, but summary(results) shows zeros throughout, i.e. no genes
were upregulated or downregula
Dear Experienced R Practitioners,
I have a 4GB .txt file called "dataset.txt" and have attempted to use the ff,
bigmemory, filehash and sqldf packages to import it, but have had no
success. The readLines output of this data is:
readLines("dataset.txt",n=20)
[1] " "
(1) There's something funny about the data that you present below.
At least in my mailer (Thunderbird) what looks like a space between
the X and Y values turns into a NULL character when I try to copy
and paste.
(2) Create a data frame, say "M" with the X and Y values as given
in your email.
(3
This is correct, now working. I did not have the preceding comma in
"Data2[, i] . . ."
Thanks very much! Very much appreciated. Ben Neal
-Original Message-
From: Peter Ehlers [mailto:ehl...@ucalgary.ca]
Sent: Thu 5/3/2012 3:04 PM
To: Ben Neal
Cc: Jim Lemon; r-help@r-project.org
Subject
Hi,
I have a tab-separated file with 206 rows and 30 columns.
I read the file into R using the read.table() function. I checked the dim()
of the resulting data frame; it had only 103 rows (exactly half) and 30
columns. Then I tried reading in the file using the read.delim() function and
this time the
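The message is cut off here, but a frequent cause of read.table() returning
exactly half the rows is an unmatched quote or comment character in the file.
A hedged check (the file name is a placeholder):
dat <- read.table("myfile.txt", sep = "\t", header = TRUE,
                  quote = "", comment.char = "")
dim(dat)                                  # should now report all 206 rows
count.fields("myfile.txt", sep = "\t")    # also useful for spotting ragged lines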
Dear R users,
I am applying the augmented-Dickey-Fuller Unit Root Test
(ur.df function of the urca package) to a time series of
approximately 50 values.
To be sure I understood what was going on with the ur.df
function, I checked the critical values of the 3 test
statistics (tau, phi2 and phi3 if
Dear all,
I open a bmp device with the bmp() function (from the grDevices package), but I
don't know how to write colour values pixel by pixel into the file. Any help or hint?
What I want to do is to create a 512*512 bmp file with certain dots
being red and others black. I have all the pixel coordinate
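The coordinates themselves are cut off above, so the sketch below uses made-up
pixel positions; it builds a 512 x 512 colour matrix and writes it to a bmp
device with rasterImage():
px <- matrix("black", nrow = 512, ncol = 512)               # one colour per pixel
red_pts <- cbind(x = c(10, 200, 300), y = c(50, 256, 400))  # assumed coordinates
px[cbind(red_pts[, "y"], red_pts[, "x"])] <- "red"          # matrix row = y, column = x
bmp("dots.bmp", width = 512, height = 512)
par(mar = c(0, 0, 0, 0))
plot.new()
plot.window(xlim = c(0, 1), ylim = c(0, 1), xaxs = "i", yaxs = "i")
rasterImage(px, 0, 0, 1, 1, interpolate = FALSE)
dev.off()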
Hi,
For some reason I have been unable to use the predict function when I
desire the standard error to be calculated too. For example, when I try
the following:
l<- loess(d~x+y, span=span, se=TRUE)
p<- predict(l, se=TRUE)
I get the following error message:
Error in vector("double", length
Hi,
I am working on a capacity planning task for a socket server. As
part of the research I read "Quickly Generating Billion-Record
Synthetic Databases", which is about benchmarking databases. I understand
from that paper that there are specific datasets with statistical
properties that can stre
Hello,
Hui Du wrote
>
> Hi All,
>
> Suppose I have the following code:
>
> x = data.frame(A = rnorm(20), B = rnorm(20), C = rnorm(20))
> a = list()
> a[["1.1"]] = x
> a[["1.2"]] = x
>
> b = list()
> b[["1.1"]] = c("A", "B")
> b[["1.2"]] = c("B", "C")
>
>
> Now I want to apply b to a like t
Hello,
>
> I have a vector wherein the cases are either uniform or mixed-strings (so
> "AAA" vs "ABABABABA"). Different parts > of the vector apply to
> different users, so [1:29] is one guy, [30:50] is another, and [51:70] is
> another. There are about
> 100,000 users, and I have an objec
Thanks for that - too bad there isn't a simple workaround!
greetings
Remko
Dear R gurus,
I am trying to overload some operators in order to let these work with the
ff package by registering the S3 objects from the ff package and
overloading the operators as shown below in a reproducible example where
the "*" operator is overloaded.
require(ff)
setOldClass(Classes=c("ff_
If a run a LOESS model and then produce a smoothed surface: Is there any
way to determine the coordinates of the local maxima on the surface?
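Not from the thread, but one workable sketch: evaluate the fitted surface on a
regular grid and keep the grid points that beat all eight neighbours (the data
frame and variable names below are assumptions):
fit  <- loess(z ~ x + y, data = dat, span = 0.5)            # 'dat' is assumed
gx   <- seq(min(dat$x), max(dat$x), length.out = 100)
gy   <- seq(min(dat$y), max(dat$y), length.out = 100)
zhat <- matrix(predict(fit, newdata = expand.grid(x = gx, y = gy)),
               nrow = length(gx))
is_max <- matrix(FALSE, length(gx), length(gy))
for (i in 2:(length(gx) - 1)) {
  for (j in 2:(length(gy) - 1)) {
    is_max[i, j] <- zhat[i, j] == max(zhat[(i - 1):(i + 1), (j - 1):(j + 1)])
  }
}
data.frame(x = gx[row(is_max)[is_max]],                     # local maxima
           y = gy[col(is_max)[is_max]],
           z = zhat[is_max])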
If I am not mistaken, the quantile function and the percentile function are the
same other than the way they are expressed in R. I think where I am losing
it is just how to express the function.
myvec <- c(5, 4, 3, 2, 1, 10, 9, 8, 7, 6)
PercentileFinder <- function(vec, p) {
sortedlist <- sort(vec)
count <- length(
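The function above is cut off; for reference, base R's quantile() already does
this, and a bare-bones version (shown only to illustrate the mechanics, without
interpolation) could be:
quantile(myvec, probs = 0.25)                # built-in, with interpolation
PercentileFinder <- function(vec, p) sort(vec)[max(1, ceiling(p * length(vec)))]
PercentileFinder(myvec, 0.25)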
On 04/05/2012 00:43, William Dunlap wrote:
> class(10)
[1] "numeric"
> class(10L)
[1] "integer"
> class(10i)
[1] "complex"
Why not 10I for integer? Perhaps because "I" and "l"
look too similar, perhaps because "i" and "I" sound
too similar. The "L" does not mean "long": in
On 5/3/2012 9:28 PM, Joshua Wiley wrote:
How are you using R? Any special front ends that might be causing
this? Can you try it in unsuffered consequences?
I'm running R 2.15.0; sessionInfo() appears below. I get this
from Rgui i386 and x64 plus when calling Rterm x64 via GNU Emacs
How are you using R? Any special front ends that might be causing
this? Can you try it in unsuffered consequences?
Josh
On Thu, May 3, 2012 at 9:07 PM, Spencer Graves
wrote:
> Hello All:
>
>
> I'm still unable to get Rprofile.site to set, e.g.,
> options(max.print=222), as I did with
Hello All:
I'm still unable to get Rprofile.site to set, e.g.,
options(max.print=222), as I did with previous versions of R.
I just found similar questions posed by Trevor Miles and Ross Bowden
with replies by Uwe Ligges and Duncan Murdoch.
In addition to the things I tried docu
I have tried to replicate Mohammed's problem using synthetic data and
used the parameter estimates from his Stata fit for generating the
data. The data has the following notable features:
- sample size is rather small: 30 groups with 3 observations each
- residual variance is high relative to the
On May 3, 2012, at 7:39 PM, Hui Du wrote:
Hi All,
Suppose I have the following code:
x = data.frame(A = rnorm(20), B = rnorm(20), C = rnorm(20))
a = list()
a[["1.1"]] = x
a[["1.2"]] = x
b = list()
b[["1.1"]] = c("A", "B")
b[["1.2"]] = c("B", "C")
Now I want to apply b to a like this, for
Hello,
When I generate data with the code below, NA values appear as part of the
generated data; I prefer to have zero (0) instead of NA in my data.
Is there a command I can issue to replace the NA with zero (0) even if it is after generating the data?
Thank you
?Surv
time2: ending time of the
Hi all,
I have a 100 by 100 matrix and I divided this matrix into 100 groups,
each a 10 by 10 submatrix. I want to find out the means of each group.
I know we can use the apply function for means by margins. Is there a
function in R for
means by groups also? Thanks.
Hannah
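One way to get the 100 block means without explicit loops (a sketch, not from
the thread): tag every cell with its block row and block column, then tapply().
M <- matrix(rnorm(100 * 100), nrow = 100)    # stand-in for the real matrix
block_row <- (row(M) - 1) %/% 10
block_col <- (col(M) - 1) %/% 10
block_means <- tapply(M, list(block_row, block_col), mean)
dim(block_means)                             # 10 x 10: one mean per submatrix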
I just realized that I have sent this only to Mohammed. So here it is
to the list:
These two model fits should yield the same results. In fact, if we use
some simulated data generated by the model
y_ik = 2 + 0.5*x_ik1 + 0.25*x_ik2 + u_k + e_ik, with u_k ~ N(0, 1) and e_ik ~ N(0, 1)
and compare the results between S
On 5/3/2012 11:23 AM, Richard M. Heiberger wrote:
Michael,
I normally do this with the panel.bwplot.intermediate.hh function in the HH package.
This function works by plotting each box with its own call to the
underlying panel.bwplot function.
Thanks; I'll check that out.
Also,
On Thu, May 3, 2012 at 1:57 PM, J Toll wrote:
> On Thu, May 3, 2012 at 10:43 AM, Christopher Kelvin
> wrote:
>
>> Is there a command I can issue to replace the NA with zero (0) even if it is
>> after generating the data?
>
> Chris,
>
> I didn't try your example code, so this suggestion is
> class(10)
[1] "numeric"
> class(10L)
[1] "integer"
> class(10i)
[1] "complex"
Why not 10I for integer? Perhaps because "I" and "l"
look too similar, perhaps because "i" and "I" sound
too similar. The "L" does not mean "long": integers
are 4 bytes long.
Bill Dunlap
Spotfire, TIBCO
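A quick console illustration of the point (not part of the original mail):
x <- 10                          # double (8 bytes per element)
y <- 10L                         # integer (4 bytes per element)
typeof(x); typeof(y)             # "double"  "integer"
identical(10, 10L)               # FALSE: different storage types
object.size(1:1e6)               # integer sequence
object.size(as.numeric(1:1e6))   # roughly twice the size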
Hi All,
Suppose I have the following code:
x = data.frame(A = rnorm(20), B = rnorm(20), C = rnorm(20))
a = list()
a[["1.1"]] = x
a[["1.2"]] = x
b = list()
b[["1.1"]] = c("A", "B")
b[["1.2"]] = c("B", "C")
Now I want to apply b to a like this, for each element of 'a', only select the
corresp
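The question is truncated here, but if the goal is to keep, for each data frame
in 'a', only the columns named in the matching element of 'b', one sketch (an
assumption about the intent) is:
result <- Map(function(d, cols) d[, cols, drop = FALSE], a, b)
str(result)    # result[["1.1"]] keeps A and B; result[["1.2"]] keeps B and C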
On 04/05/12 03:43, Christopher Kelvin wrote:
Hello,
When I generate data with the code below, NA values appear as part of the
generated data; I prefer to have zero (0) instead of NA in my data.
Is there a command I can issue to replace the NA with zero (0) even if it is
after generating the dat
Good Evening
We have been searching through the R documentation manuals without success on
this one.
What is the purpose or result of the "L" in the following?
n=10
and
n=10L
or
c(5,10)
versus
c(5L,10L)
Thanks
Joe
Ben,
I think that your original for-loop would work if you just
replaced the 'i' in the lines() call with 'Data2[,i]':
for (i in 2:length(Data2)) {
lines(MONTH, Data2[, i], type="o", pch=22, lty=2, col="blue")
}
Peter Ehlers
On 2012-05-03 07:04, Ben Neal wrote:
Jim, thanks for the
Hello,
I don't understand what went wrong or how to fix this. How do I set qr=TRUE
for gam?
When I produce a fit using gam like this:
fit = gam(y~s(x),data=as.data.frame(l_yx),family=family,control =
list(keepData=T))
...then try to use predict:
(see #1 below in the traceback() )
> traceback()
Dear José,
Here is one way:
# aux. function
foo <- function(x, ...){
m <- mean(x, ...)
S <- sd(x, ...)
x > m + S
}
# result
iris$rule <- with(iris, ave(Petal.Width, list(Species), FUN = foo))
head(iris)
HTH,
Jorge.-
On Thu, May 3, 2012 at 5:19 P
Seattle May 17-18
XLSolutions May-July 2012 R/S-PLUS course schedule is now
available online at 9 USA cities with 13 new courses. Suggest a
future course date/city
(1) R-PLUS: A Point-and-Click Approach to R
(2) S-PLUS / R : Programming
On Thursday, May 3, 2012 at 07:37 -0700, agent dunham wrote:
> Dear community,
>
> I'm having this silly problem.
>
> I've a linear model. After fitting it, I wanted to know which observations had
> studentized residuals larger than 3, so I tried this:
>
> d1 <- cooks.distance(lmmodel)
> r <- sqrt(abs(rs
Hi everyone,
I would like to identify, for each group, the case that is just bigger than the
mean plus one standard deviation. For example, using Species as the group and
Petal.Width as my variable in the iris data.
What's the best way to do it? Creating a function?
So, the question is to identify the single element of each species
You can do the following to allow others to recreate your problem.
yourFileBytes <- readBin("yourFile", what = "integer", size = 1, n = 300)
# is 300 bytes enough to see the problem?
dput(yourFileBytes)
Put the output of dput(yourFileBytes) in your mail. Someone can (and you
should)
recreate the
Dear John,
Thank you very much for your response, I appreciate your input.
I am able to subtract the two columns (B - C); the subset information I
need is how many "A"s there are and which "A"s they are. For example, persons
P, Q, R, S, T earned $7, 2, 3, 6, 9 in the 1st month and $4, 6, 9, 2, 5 in the 2nd month. I
On Thu, May 3, 2012 at 11:50 PM, Mohammed Mohammed
wrote:
> Hi folks
>
> I am using the lmer function (in the lme4 library) to analyse some data where
> individuals are clustered into sets (using the SetID variable) with a single
> fixed effect (cc - 0 or 1). The lmer model and output is shown b
On Fri, May 4, 2012 at 2:02 AM, R. Michael Weylandt
wrote:
> Note that print.testclass(testlist) also works as expected, so it might
> be that there's a forced evaluation somewhere inside S3 dispatch
Indeed. Without evaluating the argument, the S3 method dispatch can't
work out which method to dispatch
I did mention in my initial email that I tried little, big, swap and
.Platform$endian without any success; I keep getting the same very small
numbers.
Thanks
On Thu, May 3, 2012 at 2:36 PM, Duncan Murdoch wrote:
> On 03/05/2012 1:57 PM, kapo coulibaly wrote:
>
>> I believe here is the structure
Thanks Jeff and Sarah.
I was thinking mainly of using a base path and the paste routine, which is
something I do in Windows.
It will take me a while to figure out relative paths.
John Kane
Kingston ON Canada
> -Original Message-
> From: sarah.gos...@gmail.com
> Sent: Thu, 3 May 2012
I'm sorry, it's still not clear what you are doing but perhaps this is
close?
mydata <- data.frame(a = c(1, 2, 3, 4, 5),
                     b = c(7, 2, 3, 6, 9),
                     c = c(4, 6, 9, 2, 5))
mydata$d <- mydata$b - mydata$c
mydata
subset(mydata, d == max(d))
John Kane
Kin
Have you read "An Introduction to R?" If I understand correctly, your
question is very basic and suggests that you have not yet made even a
minimal effort on your own to learn R before asking for help from this
list.
-- Bert
answer: A[C>B]
On Thu, May 3, 2012 at 11:14 AM, Shankar Lanke wrote:
>
On 03/05/2012 1:57 PM, kapo coulibaly wrote:
I believe here is the structure of the file I'm trying to read:
record marker (4 bytes), 2 integers (4 bytes each), 2 doubles (8 bytes
each), one string (16 bytes or 16 characters), 3 integers (4 bytes each), 1
record marker (4 bytes) and a big array
On Wed, May 2, 2012 at 3:32 PM, Michal Figurski
wrote:
> Dear R-Helpers,
>
> I'm working with immunoassay data and 5PL logistic model. I wanted to
> experiment with different forms of weighting and parameter selection, which
> is not possible in instrument software, so I turned to R.
>
> I am usin
The others have made suggestions for csv output.
For xlsx output go to CRAN, click on the Packages link at the left, then
select packages sorted by name. Then use your browser to search for the
string "xls". There are several packages that offer this capability.
-Don
--
Don MacQueen
Lawrence L
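For instance, with the xlsx package (one of several such packages; the package
choice and the object name are placeholders, not a recommendation from the
original mail):
library(xlsx)
write.xlsx(mydata, file = "mydata.xlsx", sheetName = "results", row.names = FALSE)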
Dear All,
Thank you very much in advance.
I have a data set as shown below: A (patient ID), B and C are the
concentrations of drug in blood on day 1 and day 4, and D is the difference in
concentration. To do this in R I have written code as follows and identified the
number of patients who have more concentrati
Your post suggests statistical confusion on several levels. But as
this concerns statistics, not R, consult your local statistician or
post on a statistical help list (e.g. stats.stackexchange.com).
Incidentally, the short answer to your question about the overlap is:
why shouldn't they? You will
On Thu, May 3, 2012 at 1:53 PM, Jeff Newmiller wrote:
> "I like the idea of staying with absolute paths."
>
> Before you write too much R code that builds in absolute paths, please
> consider how difficult it will be to adjust all of those paths if you need to
> run on a different computer or yo
On Thu, May 3, 2012 at 10:43 AM, Christopher Kelvin
wrote:
> Is there a command I can issue to replace the NA with zero (0) even if it is
> after generating the data?
Chris,
I didn't try your example code, so this suggestion is far more
general, but you might try something along the lines of:
Hello,
Don't cross post, please.
You could even have saved yourself some time: the answer has already been given
on R-devel.
Rui Barradas
Hi everyone,
I have encountered this problem while using the 'segmented' package with R i386
2.15.0 (for Windows, 32-bit) and I cannot find either an explanation or a
solution for it.
I am trying to run this data:
gpp temp
1.661 5
5.028 10
9.772 15
8.692 20
5.693 25
6.293 30
7.757
I believe here is the structure of the file I'm trying to read:
record marker (4 bytes), 2 integers (4 bytes each), 2 doubles (8 bytes
each), one string (16 bytes or 16 characters), 3 integers (4 bytes each), 1
record marker (4 bytes) and a big array of doubles (8 bytes each).
Everything in the fi
I have the following data from an image analysis program, in which the x and
y co-ordinates are locations of the centroids of shapes on a 2 dimensional
plot. The Y co-ordinates were positive, but I changed them to negative as
the resulting scatterplot was upside down (the image analysis program rea
Dear list,
I'm a bit perplexed why the 95% confidence bands for the predicted
probabilities for units where x=0 and x=1 overlap in the following instance.
I've simulated binary data to which I've then fitted a simple logistic
regression model, with one covariate, and the coefficient on x is sta
Your code did not run on my computer; however, the "atop" function should do
what you are looking for, I guess. This is more or less your axis, only
changed a bit in your formula; I think it looks better this way:
e.g.
par(mar=c(5,7,.5,.5), las=1, adj=.5, cex.lab=1.5)
plot(1, type="n"
, xlab="Wee
"I like the idea of staying with absolute paths."
Before you write too much R code that builds in absolute paths, please consider
how difficult it will be to adjust all of those paths if you need to run on a
different computer or you need to reorganize your overall directory structure.
If you k
Hello all,
I have a data frame with column names s1, s2, s3, ..., s11.
I have a function that gets two parameters, one is used as a subscript for
the column names and another is used as an index into the chosen column.
For example:
my_func <- function(subscr, index)
{
if (subscr == 1)
{
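The function body is cut off above; if the intent is 'pick column s<subscr>,
then element <index>', the if/else chain can be avoided by building the column
name directly (a sketch; the extra data-frame argument is my addition):
my_func <- function(dat, subscr, index) {
  dat[[paste0("s", subscr)]][index]   # e.g. subscr = 3, index = 10 -> dat$s3[10]
}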
Hi guys,
I really like the package adegenet for population structure analysis. I used
the function "Fst" from the pegas package (wrapped within adegenet) in order
to generate F-statistic estimates for my data set. However, this does not
provide me with confidence intervals or p-values. I was wonde
Hello,
When I generate data with the code below, NA values appear as part of the
generated data; I prefer to have zero (0) instead of NA in my data.
Is there a command I can issue to replace the NA with zero (0) even if it is
after generating the data?
Thank you
library(survival)
p1<-0.8;b<-1.5;
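The survival code is cut off above; independent of it, the usual pattern for
replacing NAs after the data have been generated is:
x <- c(1.2, NA, 3.4, NA)
x[is.na(x)] <- 0
x                                # 1.2 0.0 3.4 0.0
dat <- data.frame(a = c(1, NA), b = c(NA, 2))
dat[is.na(dat)] <- 0             # works column-wise for a whole data frame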
On 03/05/2012 12:41 PM, kapo coulibaly wrote:
I'm trying to read a binary file created by a fortran code using readBin
and readChar. Everything reads fine (integers and strings) except for
double precision numbers; they are read as huge or very small numbers
(1E-250, ...). I tried various endiannes
Thanks Sarah,
I suspected something like that but am still groping around in Linux. I
vaguely remember how to cd to someplace. Shades of DOS 3.2! Or was that Unix, or
both!
Also I think I was trying to be a bit too smart-alecky in where I was placing
my data folder so I moved it to my home fo
Thanks.
I had not realised there were relative paths until Sarah mentioned them.
It's working now: see my post to Sarah.
John Kane
Kingston ON Canada
> -Original Message-
> From: jdnew...@dcn.davis.ca.us
> Sent: Thu, 03 May 2012 09:30:10 -0700
> To: jrkrid...@inbox.com, r-help@r-proj
Thanks, Keith.
I failed to cc the following reply to John Nash to the list. Your
email persuaded me that it might be useful to do so.
None of this changes the fact that the model is overfitted. You may be
able to get convergence to some set of parameter estimates, but they
won't have much meaning s
I'm trying to read a binary file created by a Fortran code using readBin
and readChar. Everything reads fine (integers and strings) except for
double precision numbers; they are read as huge or very small numbers
(1E-250, ...). I tried various endianness and swap settings, but nothing has
worked so far.
I also t
> ?nls.control
> fit<- nls(MFI~a + b/((1+(nom/c)^d)^f), data=x, weights=x$weights,
+ start=c(a=100, b=1, c=100, d=-1, f=1),
control=nls.control(warnOnly=TRUE))
Warning message:
In nls(MFI ~ a + b/((1 + (nom/c)^d)^f), data = x, weights = x$weights, :
step factor 0.000488281 reduced below '
Thank you Steve,
that's the thing I was looking for
/Johannes
Original Message
> Date: Thu, 3 May 2012 08:20:51 -0400
> From: "Steven Wolf"
> To: "'David Winsemius'" , "'Johannes Radinger'"
>
> CC: R-help@r-project.org
> Subject: RE: [R] Two ecdf with log-scales
>
All of your tests are with relative paths. Use getwd() to identify your starting
directory, and if it isn't the right one you can use setwd() to start in the right place.
---
Jeff Newmiller
Hi John,
You're probably messing up the path, just as you suspect.
If you use a relative path, like you are doing, then R looks for that
location starting at R's current working directory, visible with
getwd(). For Linux, that's the location at which you started R if you
started it from a termina
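A tiny illustration of the point (the path is a placeholder for wherever the
data actually live):
getwd()                          # where relative paths are resolved from
setwd("~/R_data")                # placeholder path; adjust as needed
dat <- read.csv("results.csv")   # now looked up inside ~/R_data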
I was bored and tried doing it with the console and TeXworks (which also uses
pdflatex).
The TeXworks preview shows it properly colored, but in Acrobat Reader it is
black and white again. Still scratching my head...
On 03.05.2012 at 17:14, Jessica Streicher wrote:
> Hi there!
>
> I have found a strange
I am the proud owner of a new laptop since my old one died the other day.
Currently I have a dual-boot Windows 7 Home and Ubuntu 12.04 . I'll leave the
Windows problems for another post.
I know practically nothing about Linux so I am probably doing something stupid
but ... at the moment I cann
carol white yahoo.com> writes:
>
> Hi,
> I split a data set into two partitions (80 and 42), use the first as the
training set in glm and the second as
> testing set in glm predict. But when I call glm.predict, I get the warning
message:
>
> Warning message:
> 'newdata' had 42 rows but variabl
You do not seem to have supplied either code or data. Please supply both.
John Kane
Kingston ON Canada
> -Original Message-
> From: shankarla...@gmail.com
> Sent: Wed, 2 May 2012 22:06:54 -0400
> To: r-help@r-project.org
> Subject: [R] Identifying the particular X or Y in a sorted list
Hi all,
I am trying to run an lme4 model (logistic regression with mixed effects) in
MCMCglmm but am unsure how to implement it properly.
Currently, my lme4 model formula looks as follows: "outcome ~ (1 + var1 +
var2 | study) + var1 + var2"
In English, this means that I am fitting a random effec
Thank you very much, though I still don't quite understand the
explanation :)
Nevertheless, I just found a seemingly simple (at least quicker to type)
solution after trial and error:
eval(mapply(function(x) {x; function() x}, c("a", "b")))
I hope it helps future readers.
On Thu, May 03, 2012 at
On Thu, May 03, 2012 at 03:08:00PM +0200, Kehl Dániel wrote:
> Dear List-members,
>
> I have a problem where I have to estimate a mean or a sum of a
> population, but for some reason it contains a huge number of zeros.
> I cannot give real data but I constructed a toy example as follows
>
> N1 <
Thanks, all! I'll try these out. I'm trying to work up something that is
platform independent (if possible) for use with mmap. I'll do some tests
on these suggestions and see which works best. I'll try to report back in a
few days. Cheers!
--j
2012/5/3 "Jens Oehlschlägel"
> Jonathan,
>
>
Dear Jeff,
thank you for the response.
Of course I know this is a theory question; still, I hope to get some
comments on it
(someone who has already dealt with similar problems might suggest a package,
and it would not take longer than saying this is a theoretical question).
The values are counts, so 0 m
Michael,
I normally do this with the panel.bwplot.intermediate.hh function in the HH package.
This function works by plotting each box with its own call to the
underlying panel.bwplot function.
This example from ?HH::position uses the "positioned" class to
determine where to
place the box.
> library(HH)
Hi there!
I have found a strange problem with getting pairs() plots to show properly in
LaTeX \subfloat environments.
If I generate images of these plots with pdf() and include them in subfloats,
they will either show up in grayscale, or sometimes the data points of the
pairplots are missing. Mi
Hi David,
Thanks for your input. My first thought was to look for missing
values, but I can tell you there are no missing values in the input.
The error is occurring somewhere deep inside coxpenal.fit, so I can't
identify how any NAs might be created. Also, the if/else syntax is
from c
Dear All,
I have a data set as shown below: A (patient ID), B and C are the
concentrations of drug in blood on day 1 and day 4, and D is the difference in
concentration. To do this in R I have written code as follows and identified the
number of patients who have more concentration on day 4. Here I want to
ide
Jim, thanks for the reply. I tried what you recommend, but I still get an error
when running it, just as I did with the similar loops I was trying. Here is the
error:
Error in xy.coords(x, y) : 'x' and 'y' lengths differ
That comes from this code:
#Get file
library(zoo)
setwd("/Users/benjami
Hello,
Shankar Lanke wrote
>
> Dear All,
>
> I have a data set as shown below: A (patient ID), B and C are the
> concentrations of drug in blood on day 1 and day 4, and D is the difference in
> concentration. To do this in R I have written code as follows and identified the
> number of patients who have more
Dear All,
I'm using AIC-weighted model averaging to calculate model averaged
parameter estimates and partial r-squares of each variable in a
10-variable linear regression.
I've been using the MuMIn package to calculate parameter estimates, but
am struggling with partial r-squares. There does
Dear community,
I'm having this silly problem.
I've a linear model. After fitting it, I wanted to know which observations had
studentized residuals larger than 3, so I tried this:
d1 <- cooks.distance(lmmodel)
r <- sqrt(abs(rstandard(lmmodel)))
rstu <- abs(rstudent(lmmodel))
a <- cbind( mydata, d1, r,
Although you have provided R code to illustrate your problem, it is
fundamentally a statistics theory question, and belongs somewhere else like
stats.stackexchange.com.
When you post there, I recommend that you spend more effort to identify why the
zeros are present. If they are indicators of u
[Env: R 2.14.2 / Win Xp]
In the examples below, I'm using lattice::bwplot to plot boxplots of 4
variables, grouped by a factor 'epoch'
which also corresponds to a numeric year. I'd like to modify the plots
to position the boxplots according to
the numeric value of year, but I can't figure out
I am trying to estimate a covariance matrix from the Hessian of a posterior
mode. However, this Hessian is indefinite (possibly because of
numerical/roundoff issues), and thus, the Cholesky decomposition does not
exist. So, I want to use a modified Cholesky algorithm to estimate a Cholesky
of
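One common workaround, offered only as an assumption about what might help:
replace the indefinite Hessian by the nearest positive-definite matrix
(Matrix::nearPD) before taking the Cholesky.
library(Matrix)
H <- matrix(c(2, 1, 1, -0.1), 2, 2)   # toy indefinite "Hessian"
H_pd <- as.matrix(nearPD(H)$mat)      # nearest positive-definite matrix
R <- chol(H_pd)                       # Cholesky factor now exists
crossprod(R)                          # reproduces H_pd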
Dear Philipp,
this is just a tentative answer because debugging is really not possible
without a reproducible example (or, at a very bare minimum, the output
from traceback()).
Anyway, thank you for reporting this interesting numerical issue; I'll
try to replicate some similar behaviour on a simi
So, to get back to mapply:
eval(mapply(function(x) substitute(function() z,list(z=x)), c("a", "b"))$a)()
or like this:
mapply(function(x) eval(substitute(function(i) z*i,list(z=x))), c(2,3))[[1]](2)
On 03.05.2012 at 16:02, Jessica Streicher wrote:
> Now... I just tried around and this might b
Note that print.testclass(testlist) also works as expected, so it might
be that there's a forced evaluation somewhere inside S3 dispatch -- I was
playing around with this the other day, having gotten myself confused
by it and stumbled across the internal logic (or at least something
that made sense to me
Now... I just tried around, and this might be a bit of a strange way to do things...
createFunc <- function(v) {
  v_out <- NULL
  for (i in v) {
    v_out[[i]] <- substitute(function() {x}, list(x = i))
  }
  return(v_out)
}
> y<-createFunc(c("a","b"))
> y
$a
function() {
"a"
On May 2, 2012, at 3:02 PM, Jessica Myers wrote:
Hi,
I am using coxph from the survival package to fit a large model
(100,000 observations, ~35 covariates) using both ridge regression
(on binary covariates) and penalized splines (for continuous
covariates).
In fitting, I get a strange
Dear R users,
For the moment, I have a script and a function which calculate correlation
matrices between all my data files. Then, it chooses the best correlation
for each data file and takes it in order to fill missing data in the analysed
file (so the data from the best correlation file is put automa
Dear Michael and Sarah,
the superfluous points arose as an error (e.g. a double-click) in the
measurement process. Thus, looking at the image I recognize them
easily, and all I need is to write down their numbers.
readline() serves well for that purpose. Thanks a lot!
Ondřej
On 2 May 2012 18
As I see it, you will save the actual "text" of the function, and when you call
it later on it takes the last value of x it has encountered as the value. I
guess you want x not to be saved as x, but as "a" or "b", i.e. as its value.
I am not sure how to do that yet, however.
Am 03.05.2