Thank you very much Duncan Murdoch and bbolker.
I see the process and will put those functions into my own package.
Best
ozgur
-
Ozgur ASAR
Research Assistant
Middle East Technical University
Department of Statistics
06531, Ankara Turkey
Ph: 90-312-210530
2012/3/4, westland:
> I am still/again having trouble getting PLSR to recognize the input data
> frames. Here is what I have done:
>
> I read in a 1 x 8 table of data to 'pls'
>
> assign the first four columns to matrix 'dep' and the second four to matrix
> 'ind' with the following comman
Hi Fredrik,
A reproducible example would help. We have neither your data nor your
functions.
It is not clear to me what your problem is; I have no difficulty
passing arguments from a higher function to ddply().
mtcars[1,1] <- NA
f <- function(data, factors, f1) {
ddply(.data = data, as.quote
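A minimal sketch of what works for me (not Fredrik's actual function, and the name `summarise_col` is mine): pass the measurement column by name as a string and index with `[[` inside the summarising function.

```r
library(plyr)

# Group by the given factor columns and take the mean of the named column.
summarise_col <- function(data, factors, col) {
  ddply(data, factors, function(d) c(mean = mean(d[[col]], na.rm = TRUE)))
}

mtcars[1, 1] <- NA
summarise_col(mtcars, "cyl", "mpg")   # one row of means per value of 'cyl'
```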
Hi Michael,
No, sorry - that is neither the problem nor the solution.
> suspicious.vowels(pb,c("Type","Sex","Vowel"),F1,F2)
Error in mean(y, na.rm = na.rm) : object 'f1' not found
/Fredrik
On Sat, Mar 3, 2012 at 7:04 PM, R. Michael Weylandt <
michael.weyla...@gmail.com> wrote:
> Untested, but
?plot.formula
... and note the 'data' argument, which you appear to have
omitted. Presumably you received an error message that informed you
that R could not find your variables, but as you have failed to follow
the posting guide and provide us an error message, one cannot be sure.
One mig
That code doesn't seem to run (somewhat to my surprise) as is; try this:
d <- structure(list(REMOVED = c(0.07, 0.1, 0.11, 0.12, 0.15, 0.19,
0.28, 0.31, 0.3, 0.34, 0.35, 0.39, 0.38, 0.4, 0.42, 0.4, 0.41,
0.42, 0.48, 0.48, 0.47, 0.49, 0.5, 0.51, 0.53, 0.58, 0.59, 0.65,
0.6, 0.6, 0.69, 0.7,
Hi there,
I'm trying to make a scatterplot of removed versus duration for each type of
bee. No matter what I try, I can't seem to get my code to work.
Any help would be appreciated. Thanks!
My r-code:
dat$BEE <- with(dat, factor(BEE, c(1,2)))
plot(REMOVED~DURATION,pch=BEE, col=BEE) REMOVED DURATI
Hi,
Sorry I didn't know the original post in this thread was not included. I'm
using R version 2.14.0 (2011-10-31).
This is the program:
fitLME4 <- lme(iadl ~ obstime,
random = ~ obstime | id, data = iadl.long.df, na.action=na.omit)
fitSURV <- coxph(Surv(Time, death) ~ agew1, data = last_aa.df,
Whoops. I spoke too soon. That last approach does not survive shrinking the
graphics window. I'll go back to the axis and par approach.
Frank
Frank Harrell wrote
>
> Thanks very much for the ideas Baptiste and Greg. I think this is a way
> to go:
>
> # Right justifies (if adj=1) a vector of
Thanks very much for the ideas Baptiste and Greg. I think this is a way to
go:
# Right justifies (if adj=1) a vector of strings against the right margin
(side=4) or against the y-axis (side=2)
outerText <-
function(string, y, side=4, cex=par('cex'), adj=1, ...) {
if(side %nin% c(2,4)) stop(
"R. Michael Weylandt" writes:
> It'd be doubly helpful if you could post desired output as well.
I beg everyone's pardon; I suddenly realized that in my case the solution is
trivial. Here is an example with mock-up data.
Let's generate some data
#+begin_src R
qq <-
expand.grid(
Untested, but it might be simpler than that:
suspicious.vowels(pb,c("Type","Sex","Vowel"),"F1",F2)
Note that "F1" is in quotes but F2 isn't.
Michael
On Sat, Mar 3, 2012 at 5:46 PM, Fredrik Karlsson wrote:
> Dear list,
>
> Sorry, but I cannot get my head around how I could pass arguments al
On Sat, Mar 3, 2012 at 5:48 PM, Nathan Lemoine wrote:
> It appears that the subscripts are only passing two values, the center of
> each group. There should be six values, one for the center of each bar
> (correct?),
No. That's also why your code doesn't work. x[subscripts] are not the
centroi
Hello everybody,
I don't give up the fight, but it's hard. I have found a solution for the
ligature problem with a better converter, which translates PDF to plain
text more precisely. But a new problem has occurred. In French particularly,
but it should be the case in English too, I have a big problem with ' " bracke
It'd be doubly helpful if you could post desired output as well.
If you haven't seen it before, the easiest way to post R data is to
use the dput() function to get a plain-text (mailing list friendly)
representation. If your data is large, dput(head(DATA, 30)) should
suffice.
(We wouldn't want to
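To show the advice in action (the object `d` here is just an illustration): dput() prints a structure() call that recreates the object exactly when pasted back into R.

```r
d <- head(mtcars[, 1:2], 3)
dput(d)   # prints a structure(...) call, safe to paste into an email

# evaluating the printed representation gives back an identical object
d2 <- eval(parse(text = paste(deparse(d), collapse = "\n")))
identical(d, d2)
```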
Can you post dput(head(eqn, 30)) so we can take a look at your data?
It's something of a cryptic error and that would go a long way in
helping us help you.
Without that though, I'm not sure you need the I(as.matrix(dep)) and
I(as.matrix(ind)), I would imagine (untested) that eqn <-
data.frame(dep
If you would post a subset of your data so that we can see what you
are talking about, we could probably help you come up with a solution.
On Sat, Mar 3, 2012 at 7:50 PM, Mikhail Titov wrote:
> Hello!
>
> I’m having stacked data in a data.frame with 2 factors, ordered POSIXct, and
> actual value
Seconded
John Kane
Kingston ON Canada
> -Original Message-
> From: rolf.tur...@xtra.co.nz
> Sent: Sat, 03 Mar 2012 13:46:42 +1300
> To: 538...@gmail.com
> Subject: Re: [R] Cleaning up messy Excel data
>
> On 03/03/12 12:41, Greg Snow wrote:
>
>
>> It is possible to do the right thing
I do agree with the recommendation to consider car::dataEllipse() for
plotting a best fit for the theoretical bivariate Normal situation.
However, the bagplot function from Hans Peter Wolf: http://www.wiwi.uni-bielefeld.de/~wolf/#software
is more likely to "line up" and be suitable for drawi
Hello!
I have stacked data in a data.frame with 2 factors, ordered POSIXct, and
actual value as numeric (as if for lattice::xyplot).
I would like to calculate first difference using “diff” function within
corresponding subsets/partitions. Since data.frame is organized by factors and
has so
Sorry for the second post. I caught a few glitches in the code from my previous
post. I've also made a little bit of progress in trying to figure out why I
can't get error bars to line up properly. It appears that the subscripts are
only passing two values, the center of each group. There should
On 3/3/2012 12:34 PM, drflxms wrote:
Thanks a lot Greg for the hint and for not leaving me alone with this!
Tried ellipse and it works well. But I am looking for something more
precise. The ellipse fits the real border to the nearest possible
ellipse. I want the "real" contour, if possible.
The 'co
Dear All,
a new version of MultiPhen (0.3) is available on CRAN, and will be available to
a mirror near you soon.
*Please upgrade*: this is a bug-fix release. A new version, with
improvements in the output and more useful error messages, is basically ready,
but I will wait to re
On 12-03-03 11:51 AM, Özgür Asar wrote:
Dear all,
Actually I could not decide who to contact, then decided to post here. If
not appropriate sorry for that.
I have written some functions in R which I think should be available in
the base package but are not. I am wondering how I can
# devtools
The aim of `devtools` is to make your life as a package developer
easier by providing R functions that simplify many common tasks.
Devtools is opinionated about how to do package development, and
requires that you use `roxygen2` for documentation and `testthat` for
testing. Future versi
Note that while those other routines are quick, your original code
can easily be made much faster for large n (number of rows) by
preallocating the output vectors to their ultimate length.
E.g., replace
out <- numeric()
elements <- numeric()
for (i in 1:(length(data[, 1]) - windowSize + 1
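The difference in a small runnable form (a sketch with illustrative names, not the OP's data): the same sliding mean computed with a growing vector and with a preallocated one.

```r
set.seed(1)
x <- rnorm(5000)
windowSize <- 50
n.out <- length(x) - windowSize + 1

grow <- function() {
  out <- numeric()                 # grows (and may be copied) every iteration
  for (i in 1:n.out) out[i] <- mean(x[i:(i + windowSize - 1)])
  out
}
prealloc <- function() {
  out <- numeric(n.out)            # allocated once, to the final length
  for (i in 1:n.out) out[i] <- mean(x[i:(i + windowSize - 1)])
  out
}
stopifnot(identical(grow(), prealloc()))   # same result, less copying
```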
Dear list,
Sorry, but I cannot get my head around how I could pass arguments along
to high-level functions. What I have is a function that would benefit from
me using ddply from the plyr package.
However, I cannot get the arguments passing part right.
So, this is my function:
> suspicious.vo
Please allow one (hopefully ;) last question:
Do you think the code I adapted from Hänggi is valid for selecting a
contour which encloses e.g. 68% of the scatter-plot data? - I am still
not completely sure... still looking for the reason for the different
results of Hänggi's and Forester's code. Should
Dear all,
you can find below my solution for sliding a window. Please find below the code
for the two alternatives and the benchmarks.
install.packages('caTools')
require(caTools)
do_sliding_for_a_window_duty_cycle <- function(DataToAnalyse, windowSize) {
data<-DataToAnalyse
out <- nume
On Fri, Mar 2, 2012 at 11:29 AM, sajjad R wrote:
>
> Dear All,
>
> I hope to run some simple survival analysis using the cox-proportional hazard
> models in R, my command will look like below:
>
> cox <- summary( coxph( Surv( mortality , TIME ) ~ Independent variables ) )
>
> My query is about sp
Puhh, I am really happy to read that the idea was not completely
senseless. This would have heavily damaged my already shaky view of the
world of statistics ;)
In any case I need to get to know more about bivariate normal
distributions! Any literature recommendations?
Felix
On 03.03.12 22:13,
I am still/again having trouble getting PLSR to recognize the input data
frames. Here is what I have done:
I read in a 1 x 8 table of data to 'pls'
assign the first four columns to matrix 'dep' and the second four to matrix
'ind' with the following commands:
dep <- pls[,1:4]
ind <- pl
On Sat, Mar 3, 2012 at 2:36 PM, Peter Langfelder
wrote:
> 3. Instead of calculating the correlations one-by-one, calculate them
> in small blocks (if you have enough memory and you run a 64-bit R).
> With 900M rows, you will only be able to put a 900Mx2 into an R
> object, but if you have two suc
On Sun, Mar 4, 2012 at 8:04 AM, Hed Bar-Nissan wrote:
> Following David's example, if I just wanted to do means,
> would multiplying the cases according to the weight do the work?
If the weights are scaled to have mean 1, and you have no missing
data, then yes. If you have missing data, the scaling
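The equivalence described above, checked on a toy example (assuming integer weights and no missing data, as the caveat requires): replicating each case `w` times gives the same mean as weighted.mean().

```r
x <- c(10, 20, 30)
w <- c(1, 2, 3)                        # integer weights for illustration

rep_mean <- mean(rep(x, times = w))    # mean over the replicated cases
stopifnot(isTRUE(all.equal(rep_mean, weighted.mean(x, w))))
```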
Greg, extremely cool thoughts! Thank you for delving into it this deep.
As I mentioned, I am a neurologist with unfortunately poor statistical
training. You are professional statisticians. So I'd like to apologize
for any unprofessional nomenclature and strange thoughts beforehand.
As my previous
Hi,
I received an interesting answer, given by Elias T. Krainski, on another
list. I repeat it here just for the record; it may be useful to someone
else.
library(vegan)
data(dune)
dis <- vegdist(dune)
cluc <- hclust(dis, "complete")
plot(cluc)
r <- rect.hclust(cluc, 3)
text(cumsum(sapply(r,lengt
Hello,
Thank you for your help/advice!
The issue here is speed/efficiency. I can do what I want, but it's really
slow.
The goal is to have the ability to do calculations on my data and have it
adjusted for look-ahead. I see two ways to do this:
(I'm open to more ideas. My terminology: Unadjusted
You might want to make use of the "environment" concept. Personally
I find it rather head-twisting, but very powerful and useful. The local()
function could be useful to you.
Be prepared to do a bit of study, read the documentation *very* carefully,
and do quite a bit of experimentation if you
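One small illustration of the idea (a sketch, not tied to the OP's code): local() creates an environment, and a function defined inside it keeps private state there across calls.

```r
counter <- local({
  count <- 0
  function() {
    count <<- count + 1   # updates 'count' in the enclosing environment
    count
  }
})
counter()   # 1
counter()   # 2 -- the state persists between calls, invisible from outside
```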
To further explain. If you want contours of a bivariate normal, then
you want ellipses. The density for a bivariate normal (with 0
correlation to keep things simple, but the theory will extend to
correlated cases) is proportional to exp( -1/2 ( x1^2/v1 + x2^2/v2 ) ),
so a contour of the distributi
On Mar 3, 2012, at 20:25 , drflxms wrote:
> "Once you go into two dimensions, SD loses all meaning, and adding
> nonparametric density estimation into the mix doesn't help, so just stop
> thinking in those terms!"
>
> This makes me really think a lot! Is plotting the 0,68 confidence
> interval i
The key part of the ellipse function is:
matrix(c(t * scale[1] * cos(a + d/2) + centre[1], t * scale[2] *
cos(a - d/2) + centre[2]), npoints, 2, dimnames = list(NULL,
names))
Where (if I did not miss anything) the variable 't' is derived from a
chisquare distribution and the c
thanks a lot, Berend, I misread the n and shape as the shape and scale
parameters!
On Sat, Mar 3, 2012 at 3:34 PM, Berend Hasselman wrote:
>
> On 03-03-2012, at 21:27, C W wrote:
>
> > I want to get random numbers from Gamma distribution, when I do
> >
> > rgamma(1.95, 2)
> > [1] 2.313977
> >
> > but
On 03-03-2012, at 21:27, C W wrote:
> I want to get random numbers from Gamma distribution, when I do
>
> rgamma(1.95, 2)
> [1] 2.313977
>
> but,
> rgamma(2, 2)
> [1] 2.7985347 0.9415515
>
> Why do I get two random numbers? It's the same density function, I don't
> see why the output should b
Hi Garrett,
That works great, thank you!
gsee wrote
>
> Try this:
>
> x <- structure(list(day = 19, C1 = structure(1L, .Label = c("", "C1"
> ), class = "factor"), C2 = structure(2L, .Label = c("", "C2"), class =
> "factor"),
>C3 = structure(1L, .Label = c("", "C3"), class = "factor"),
>
I want to get random numbers from Gamma distribution, when I do
rgamma(1.95, 2)
[1] 2.313977
but,
rgamma(2, 2)
[1] 2.7985347 0.9415515
Why do I get two random numbers? It's the same density function, I don't
see why the output should be a pair of numbers, is this explained in the
documentation?
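What is happening: the first argument of rgamma() is n, the number of draws, not a parameter. rgamma(1.95, 2) truncates n to 1 and returns a single draw from Gamma(shape = 2); rgamma(2, 2) returns two of them.

```r
set.seed(42)
a <- rgamma(1.95, shape = 2)   # n is truncated to 1: one draw
set.seed(42)
b <- rgamma(1, shape = 2)
stopifnot(length(a) == 1, identical(a, b))

# probably what was intended: one draw with shape 1.95 and rate 2
rgamma(1, shape = 1.95, rate = 2)
```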
Wow, David,
thank you for these sources, which I just screened. bagplot looks most
promising to me. I found it in the package ‘aplpack’ as well as in the R
Graph Gallery
http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=112
Ellipses are not exactly what I am heading for. I am looking
A general solution if you always want 2 columns and the pattern is
always every other column (but the number of total columns could
change) would be:
cbind( c(Dat[,c(TRUE,FALSE)]), c(Dat[,c(FALSE,TRUE)]) )
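Checking the recycling on the matrix from the question: c(TRUE, FALSE) repeats across the 6 columns, picking columns 1, 3, 5 (and c(FALSE, TRUE) picks 2, 4, 6), however many column pairs there are.

```r
Dat <- matrix(1:30, 5, 6)
stacked <- cbind(c(Dat[, c(TRUE, FALSE)]), c(Dat[, c(FALSE, TRUE)]))
# column 1 holds the stacked odd columns, column 2 the even ones
stopifnot(identical(stacked[, 1], c(Dat[, c(1, 3, 5)])),
          identical(stacked[, 2], c(Dat[, c(2, 4, 6)])))
```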
On Sat, Mar 3, 2012 at 11:40 AM, David Winsemius wrote:
>
> On Mar 3, 2012, at 11:02 AM
Sometimes we adapt to our environment, sometimes we adapt our
environment to us. I like fortune(108).
I actually was suggesting that you add a tool to your toolbox, not limit it.
In my experience (and I don't expect everyone else's to match) data
manipulation that seems easier in Excel than R is
Thank you very much for your thoughts!
Exactly what you mention is what I have been thinking about for the last
few hours: what is the relation between the den$z distribution and the z
distribution?
That's why I asked for ecdf(distribution)(value) -> percentile earlier
today (thank you again for your qu
You might want to look at the various wtd.* functions in the Hmisc
package:
require(Hmisc)
?wtd.stats
'wtd.mean' is just one of the functions supplied. You might want to
contemplate the simplicity of Harrell's function code, since it is not
hidden. Just type:
wtd.mean
--
David.
On Mar
Thanks David for your reply. Regarding the plain-text sending, I thought
it was enabled in my Outlook; however, I recently formatted and
reinstalled everything, so that option was not set, which I have now
corrected.
Thanks,
On Sun, Mar 4, 2012 at 12:25 AM, David Winsemius wrote:
>
Following David's example, if I just wanted to do means, would
multiplying the cases according to the weight do the work?
Something like this on a data.frame
(there must be a simpler way to do it in R - the sapply scope confused me)
weightBy <- function(origDataFrame,weightVector)
{
case_Number_Aft
On Mar 3, 2012, at 12:34 PM, drflxms wrote:
Thanks a lot Greg for the hint and for not leaving me alone with this!
Tried ellipse and it works well. But I am looking for something more
precise. The ellipse fits the real border to the nearest possible
ellipse. I want the "real" contour, if possible.
On Mar 3, 2012, at 11:02 AM, Bogaso Christofer wrote:
Hi all, let say I have following matrix:
Dat <- matrix(1:30, 5, 6); colnames(Dat) <- rep(c("Name1",
"Names2"), 3)
Dat
     Name1 Names2 Name1 Names2 Name1 Names2
[1,]     1      6    11     16    21     26
[2,]     2      7    12
On Mar 3, 2012, at 17:01 , drflxms wrote:
> # this is the critical block, which I still do not comprehend in detail
> z <- array()
> for (i in 1:n){
>z.x <- max(which(den$x < x[i]))
>z.y <- max(which(den$y < y[i]))
>z[i] <- den$z[z.x, z.y]
> }
As far as I can tell, the po
It is a bit unclear from your posting exactly what your data
is. I'm assuming that you have an R dataset, a single character
string called Dat, made with a command like
Dat = "1 2 3 4 5
6 7 8 9 10
11 12 13 14 15"
(If you have a file, say "Dat.R", containing those three lines of text then
run sour
Özgür Asar metu.edu.tr> writes:
> Actually I could not decide who to contact, then decided to post here. If
> not appropriate sorry for that.
>
> I have written some functions in R which I think should be available in
> the base package but are not. I am wondering how I can submit
Hi
spec<-ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1,
1), submodel = NULL, external.regressors = NULL, variance.targeting =
FALSE), mean.model = list(armaOrder=c(0,0),include.mean = FALSE, archm =
FALSE, archpow = 1, arfima = FALSE, external.regressors = NULL, archex =
FALS
Dear all,
Actually I could not decide who to contact, then decided to post here. If
not appropriate sorry for that.
I have written some functions in R which I think should be available in
the base package but are not. I am wondering how I can submit those
functions to that package?
Hello, I fixed a small bug in case someone wants to source it.
B.R
Alex
# A) My for loop version as a function
do_sliding_for_a_window_duty_cycle <- function(DataToAnalyse, threshold,
windowSize) {
data <- matrix(data = NA, nrow = nrow(ThresholdData), ncol = ncol(ThresholdData))
ThresholdData <- (
Thanks a lot Greg for the hint and for not leaving me alone with this!
Tried ellipse and it works well. But I am looking for something more
precise. The ellipse fits the real border to the nearest possible
ellipse. I want the "real" contour, if possible.
Meanwhile I found an interesting function named
Look at the ellipse package (and the ellipse function in the package)
for a simple way of showing a confidence region for bivariate data on
a plot (a 68% confidence interval is about 1 SD if you just want to
show 1 SD).
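A sketch of that suggestion (assumes the 'ellipse' package is installed; argument names are as in ellipse::ellipse.default):

```r
library(ellipse)
set.seed(138813)
n <- 100
x <- rnorm(n); y <- rnorm(n)

plot(x, y)
# 68% normal-theory region, centred and scaled to the sample
lines(ellipse(cor(x, y),
              scale  = c(sd(x), sd(y)),
              centre = c(mean(x), mean(y)),
              level  = 0.68),
      col = "red")
```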
On Sat, Mar 3, 2012 at 7:54 AM, drflxms wrote:
> Dear all,
>
> I created a bi
Using the readLines function on your Dat string gives the error
because it is looking for a file named "2 3 ..." which it is not
finding. More likely, what you want is to create a text connection
(see ?textConnection) to your string, then use scan or read.table on
that connection.
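That suggestion in runnable form, on a small stand-in string:

```r
Dat <- "1 2 3 4 5
6 7 8 9 10
11 12 13 14 15"

tc <- textConnection(Dat)   # treat the string as if it were a file
m  <- read.table(tc)
close(tc)
dim(m)   # 3 rows, 5 columns
```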
On Sat, Mar 3, 2
When I was still teaching undergraduate intro biz-stat (in that community
it is always abbreviated), we needed to control the spreadsheet behaviour
of TAs who entered
marks into
a spreadsheet. We came up with TellTable (the Sourceforge site is still around
with refs
at http://telltable-s.sour
Hi all, let say I have following matrix:
> Dat <- matrix(1:30, 5, 6); colnames(Dat) <- rep(c("Name1", "Names2"), 3)
> Dat
     Name1 Names2 Name1 Names2 Name1 Names2
[1,]     1      6    11     16    21     26
[2,]     2      7    12     17    22     27
[3,]     3      8    13     18    23
OK, the following seems to work, though I still do not understand exactly
why...
library(MASS)
# parameters:
n<-100
# generate samples:
set.seed(138813)
#seed <- .Random.seed
x<-rnorm(n); y<-rnorm(n)
# estimate non-parameteric density surface via kernel smoothing
den<-kde2d(x, y, n=n)
# store z values of
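The truncated snippet can be finished along these lines. This is only how I read the Hänggi approach, not a validated procedure, and I have swapped the max(which(...)) grid lookup for a nearest-grid-node lookup, which does not fail at the sample minimum:

```r
library(MASS)
set.seed(138813)
n <- 100
x <- rnorm(n); y <- rnorm(n)
den <- kde2d(x, y, n = 50)

# density height at (approximately) each sample point: nearest grid node
z <- numeric(n)
for (i in 1:n) {
  z.x  <- which.min(abs(den$x - x[i]))
  z.y  <- which.min(abs(den$y - y[i]))
  z[i] <- den$z[z.x, z.y]
}

# contour level below which 32% of the sampled density values fall,
# so the contour encloses roughly 68% of the points
lev <- quantile(z, probs = 0.32)
plot(x, y)
contour(den, levels = lev, add = TRUE, col = "red", drawlabels = FALSE)
mean(z >= lev)   # fraction of points inside, roughly 0.68
```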
On 2012-3-3 23:15, Bogaso Christofer wrote:
Dear all, I have been given a data something like below:
Dat = "2 3 28.3 3.05 8 3 3 22.5 1.55 0 1 1 26.0 2.30 9 3 3 24.8 2.10 0
3 3 26.0 2.60 4 2 3 23.8 2.10 0 3 2 24.7 1.90 0 2 1 23.7 1.95 0
3 3 25.6 2.15 0 3 3 24.3 2.15 0 2 3 25.8 2.65 0 2 3 28.2
The code you requested is below :) (you can source it)
# A) My for loop version as a function
do_sliding_for_a_window_duty_cycle <- function(DataToAnalyse, threshold,
windowSize) {
data <- matrix(data = NA, nrow = nrow(ThresholdData), ncol = ncol(ThresholdData))
ThresholdData <- (DataToAnalyse > th
Dear all, I have been given a data something like below:
Dat = "2 3 28.3 3.05 8 3 3 22.5 1.55 0 1 1 26.0 2.30 9 3 3 24.8 2.10 0
3 3 26.0 2.60 4 2 3 23.8 2.10 0 3 2 24.7 1.90 0 2 1 23.7 1.95 0
3 3 25.6 2.15 0 3 3 24.3 2.15 0 2 3 25.8 2.65 0 2 3 28.2 3.05 11
4 2 21.0 1.85 0 2 1 26.0 2.30 14 1
deltamethod function in package msm may help (but bear in mind the
warnings/admonitions/recommendations of other helpers)
HTH
Rubén
--
Rubén H. Roa-Ureta, Ph. D.
AZTI Tecnalia, Txatxarramendi Ugartea z/g,
Sukarrieta, Bizkaia, SPAIN
-Original Message-
From: r-help-boun...@r-project.or
Dear all,
I created a bivariate normal distribution:
set.seed(138813)
n<-100
x<-rnorm(n); y<-rnorm(n)
and plotted a scatterplot of it:
plot(x,y)
Now I'd like to add the 2D-standard deviation.
I found a thread regarding plotting arbitrary confidence boundaries from
Pascal Hänggi
http://www.mai
Hi
I need a function that automatically fits a regression to data, using
stepAIC. I've run the code manually and it works fine. However, when I run
the function on the same data, the following error occurs:
Problem in regimp(fullsim = simt, fullsim1 = simt1,..: Length of (weights)
(variabl
Context added:
On Mar 2, 2012, at 3:51 PM, knavero wrote:
aggregate(z, identity, mean)
1 2 3 4 5
1.0 3.0 5.0 6.0 7.5
aggregate(z, mean)
Error: length(time(x)) == length(by[[1]]) is not TRUE
As generally happens when you call a function and fail to provide
enough arguments to fill
On 03-03-2012, at 14:55, Berend Hasselman wrote:
>
> On 03-03-2012, at 14:31, Alaios wrote:
>
>> Dear all,
>> I am having a vector of around 300.000 elements and I Want to slide fast a
>> window from the first element until the last-Windowsize
>>
>> what I have so far is the following for sta
On 03-03-2012, at 14:31, Alaios wrote:
> Dear all,
> I am having a vector of around 300.000 elements and I Want to slide fast a
> window from the first element until the last-Windowsize
>
> what I have so far is the following for statement:
>
> for (i in 1:(length(data[,1]) - windowSize)) {
>
On Sat, Mar 3, 2012 at 8:31 AM, Alaios wrote:
> Dear all,
> I am having a vector of around 300.000 elements and I Want to slide fast a
> window from the first element until the last-Windowsize
>
> what I have so far is the following for statement:
>
> for (i in 1:(length(data[,1]) - windowSize))
Thank you a lot Peter, Stefan and Pascal,
for your quick and inspiring answers.
ecdf(distribution)(value) -> percentile was exactly what I was looking
for, as it is in my eyes somehow the equivalent of
quantile(distribution, percentile) -> value, isn't it?
Greetings from sunny Munich, Felix
Am 03.
Just move it to C and you'll probably be ok. I believe runmean in
library(caTools) provides a very fast implementation.
Michael
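For comparison, caTools::runmean on a tiny vector (assumes caTools is installed; by default windows are centred and the ends are padded):

```r
library(caTools)
x   <- as.numeric(1:10)
rm3 <- runmean(x, k = 3)   # fast C implementation of a running mean
rm3[2:9]                   # interior values: the usual 3-point means
```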
On Mar 3, 2012, at 8:31 AM, Alaios wrote:
> Dear all,
> I am having a vector of around 300.000 elements and I Want to slide fast a
> window from the first element u
Hi Felix,
Have a look at ?pnorm and ?qnorm.
And at ?Distributions
Regards,
Pascal
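For a parametric (normal) model the two directions are qnorm() and pnorm(): qnorm() maps a probability to a value, and pnorm() maps a value back to its probability.

```r
p <- pnorm(1.96)   # ~0.975: probability of a value <= 1.96
qnorm(p)           # 1.96 again: the inverse direction
```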
- Original message -
From: drflxms
To: r-help@r-project.org
Cc:
Sent: Saturday, 3 March 2012, 21:37
Subject: [R] percentile of a given value: is there a "reverse" quantile function?
Dear all,
I am famil
On Mar 3, 2012, at 13:37 , drflxms wrote:
> Dear all,
>
> I am familiar with obtaining the value corresponding to a chosen
> probability via the quantile function.
> Now I am facing the opposite problem: I have a value and want to know its
> corresponding percentile in the distribution. So is the
Dear all,
I have a vector of around 300,000 elements and I want to quickly slide a
window from the first element until the last - windowSize
what I have so far is the following for statement:
for (i in 1:(length(data[,1]) - windowSize)) {
out[i] <- mean(data[i:(i + windowSize - 1), ])
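A base-R alternative without the explicit loop, using cumulative sums (a sketch: it assumes a plain numeric vector rather than the OP's matrix column, and gives one output per complete window):

```r
sliding_mean <- function(v, k) {
  cs <- cumsum(c(0, v))
  # difference of cumulative sums = sum of each length-k window
  (cs[(k + 1):length(cs)] - cs[1:(length(cs) - k)]) / k
}

v <- as.numeric(1:10)
sliding_mean(v, 3)   # 2 3 4 5 6 7 8 9
```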
?ecdf
Best,
Stephan
On 03.03.2012 13:37, drflxms wrote:
Dear all,
I am familiar with obtaining the value corresponding to a chosen
probability via the quantile function.
Now I am facing the opposite problem: I have a value and want to know its
corresponding percentile in the distribution. So i
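Spelled out (with made-up data): ecdf() builds the empirical CDF once, and evaluating it at a value gives that value's percentile, the reverse of quantile().

```r
set.seed(1)
distribution <- rnorm(1000)
Fn <- ecdf(distribution)                    # empirical CDF as a function

value <- quantile(distribution, 0.75)       # percentile -> value
Fn(value)                                   # value -> percentile, ~0.75
```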
Dear all,
I am familiar with obtaining the value corresponding to a chosen
probability via the quantile function.
Now I am facing the opposite problem: I have a value and want to know its
corresponding percentile in the distribution. So is there a function for
this as well?
Thank you for your supp
Thanks a lot!
Great reference
From: Patrick Burns
Cc: R help
Sent: Saturday, March 3, 2012 10:28 AM
Subject: Re: [R] "Global" Variable in R
Possible, but not necessarily easier
in the long run. See Circle 6 of
'The R Inferno'.
http://www.burns-stat.com
Thank you, Duncan. Building an installer package from source might work.
On Friday, March 2, 2012, Duncan Murdoch wrote:
> On 12-03-02 5:32 AM, Tom Hopper wrote:
>
> See the Installation and Administration manual, in particular the section
(3.1.8, I think) on building the Inno Setup installer.
>
Possible, but not necessarily easier
in the long run. See Circle 6 of
'The R Inferno'.
http://www.burns-stat.com/pages/Tutor/R_inferno.pdf
On 03/03/2012 08:56, Alaios wrote:
Dear all, I would like to ask you if there is a way
to define a global variable in R?
I am having a bunch of function a
Dear all, I would like to ask you if there is a way
to define a global variable in R?
I have a bunch of functions, and I think the easiest would be to have a
global variable defined at some point...
Would that be possible?
Regards
Alex
On Mar 3, 2012, at 02:10 , kmuller wrote:
>
> Greetings.
>
> I'm a Master's student working on an analysis of herbivore damage on plants.
> I have tried running a glm with one categorical predictor (aphid
> abundance) and a binomial response (presence/absence of herbivore damage).
> My predic
On Mar 3, 2012, at 02:30 , knavero wrote:
> I've also searched "?identity" in the R shell and it doesn't seem to be the
> definition I'm looking for for this particular usage of "identity" as an
> argument in the aggregate function. I simply would appreciate a conceptual
> explanation of what it