Re: [R] [Fwd: ***HTML***R help]

2008-09-03 Thread tyler
Dieter Menne <[EMAIL PROTECTED]> writes:
> Brian Lunergan  ncf.ca> writes:
>> 
>> I am stuck with a problem and I need help
>> ---
>> R has a defined function sum()
>> 
>> I by chance defined a function with the same name and now I'm not able to
>> get rid of my sum() function. Every time I use sum() my sum function
>> gets called and not R's built-in one...
>
> Almost for sure your .Rdata is read in. Either try another startup 
> directory, or delete .Rdata, or start R with the --vanilla option.
>

To correct this without deleting .Rdata:

mysum <- sum  ## if you still want to use your function
rm(sum)

This will remove your sum, leaving the 'built-in' available. You can also
access the 'built-in' as:

base::sum()

if there was any reason to keep both functions with the same name (unlikely).
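
To make the whole sequence concrete (the masking function and the name `mysum` below are only examples):

sum <- function(x) x[1]   # an accidental mask of base::sum
sum(1:10)                 # calls the mask, returns 1
mysum <- sum              # keep a copy under a new name if you still want it
rm(sum)                   # remove the mask from the workspace
sum(1:10)                 # base::sum again, returns 55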

Cheers,

Tyler
--

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] latticeExtra: useOuterStrips and axis.line$lwd

2009-04-28 Thread tyler
Hi,

I'm working on some lattice wireframe figures that have two conditioning
factors, and I want the strips labelled on the top and left of the
entire plot, rather than above each individual panel. useOuterStrips()
does this, but it draws internal axis lines, even after I explicitly set
axis.line to 0. Is there a way to use useOuterStrips but without axis
boxes?

I've included a short example. I know the example looks odd without axis
lines, but in my more complicated wireframe plots I think the axis
lines are just extra clutter, so I'd like them to disappear.

Thanks,

Tyler


library(lattice)
my.trellis.pars <- trellis.par.get("axis.line")
my.trellis.pars$lwd = 0
mtcars$HP <- equal.count(mtcars$hp)

trellis.par.set("axis.line", my.trellis.pars)
xyplot(mpg ~ disp | HP + factor(cyl), mtcars)

useOuterStrips(xyplot(mpg ~ disp | HP + factor(cyl), mtcars))

-- 
The purpose of models is not to fit the data but to sharpen the
questions. --Samuel Karlin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] latticeExtra: useOuterStrips and axis.line$lwd

2009-04-28 Thread tyler
Deepayan Sarkar  writes:

> On Tue, Apr 28, 2009 at 7:40 AM, tyler  wrote:
>> Hi,
>>
>> I'm working on some lattice wireframe figures that have two conditioning
>> factors, and I want the strips labelled on the top and left of the
>> entire plot, rather than above each individual panel. useOuterStrips()
>> does this, but it draws internal axis lines, even after I explicitly set
>> axis.line to 0. Is there a way to use useOuterStrips but without axis
>> boxes?
>
> Those are actually not axis lines, but the borders of the
> 0-width/height strips that still get drawn. Here's a modified
> useOuterStrips() that doesn't draw the strips, which I'll include in
> the next release (also, see below regarding lwd=0).

Wonderful, thanks! I discovered this afternoon that lwd=0 wasn't doing
what I wanted - it looked fine on screen, but the resulting eps/pdf
still had the lines. I resolved that problem by setting alpha = 0, which
I guess is the same as col = "transparent"?
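
For later readers, the same effect can be had in a single call (a sketch using the standard lattice settings; the col component is the one that matters for eps/pdf output):

library(lattice)
trellis.par.set(axis.line = list(col = "transparent"))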

Thanks again!

Tyler

>
> useOuterStrips <-
> function(x,
>  strip = strip.default,
>  strip.left = strip.custom(horizontal = FALSE),
>  strip.lines = 1,
>  strip.left.lines = strip.lines)
> {
> dimx <- dim(x)
> stopifnot(inherits(x, "trellis"))
> stopifnot(length(dimx) == 2)
> opar <- if (is.null(x$par.settings)) list() else x$par.settings
> par.settings <-
>     modifyList(opar,
>                list(layout.heights =
>                     if (x$as.table) list(strip = c(strip.lines,
>                                                    rep(0, dimx[2]-1)))
>                     else list(strip = c(rep(0, dimx[2]-1), 1)),
>                     layout.widths =
>                     list(strip.left = c(strip.left.lines,
>                                         rep(0, dimx[1]-1)))))
> if (is.character(strip))
> strip <- get(strip)
> if (is.logical(strip) && strip)
> strip <- strip.default
> new.strip <-
> if (is.function(strip))
> {
> top.row <- if (x$as.table) 1 else nrow(trellis.currentLayout())
> function(which.given, which.panel, var.name, ...) {
> if (which.given == 1 && current.row() == top.row)
> strip(which.given = 1,
>   which.panel = which.panel[1],
>   var.name = var.name[1],
>   ...)
> }
> }
> else strip
> if (is.character(strip.left))
> strip.left <- get(strip.left)
> if (is.logical(strip.left) && strip.left)
> strip.left <- strip.custom(horizontal = FALSE)
> new.strip.left <-
> if (is.function(strip.left))
> {
> function(which.given, which.panel, var.name, ...) {
> if (which.given == 2 && current.column() == 1)
> strip.left(which.given = 1,
>which.panel = which.panel[2],
>var.name = var.name[2],
>...)
> }
> }
> else strip.left
> update(x,
>par.settings = par.settings,
>strip = new.strip,
>    strip.left = new.strip.left,
>par.strip.text = list(lines = 0.5),
>layout = dimx)
> }
>
>
>> I've included a short example. I know the example looks odd without axis
>> lines, but in my more complicated wireframe plots I think the axis
>> lines are just extra clutter, so I'd like them to disappear.
>>
>> Thanks,
>>
>> Tyler
>>
>>
>> library(lattice)
>> my.trellis.pars <- trellis.par.get("axis.line")
>> my.trellis.pars$lwd = 0
>
> You should use
>
> my.trellis.pars$col = "transparent"
>
> (lwd=0 is not what you think it is).
>
> -Deepayan
>
>> mtcars$HP <- equal.count(mtcars$hp)
>>
>> trellis.par.set("axis.line", my.trellis.pars)
>> xyplot(mpg ~ disp | HP + factor(cyl), mtcars)
>>
>> useOuterStrips(xyplot(mpg ~ disp | HP + factor(cyl), mtcars))
>>
>> --
>> The purpose of models is not to fit the data but to sharpen the
>> questions.                             --Samuel Karlin
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> ___

[R] ellipsis problem

2009-05-04 Thread tyler
Hi,

I'm confused about the use of ellipsis in function arguments. I'm trying
to write a wrapper for plot to automate the combination of plot() and
points() calls for a data.frame. Some arguments seem to get passed
through to the inner plot, while others cause an error:

 Error in eval(expr, envir, enclos) : 
  ..1 used in an incorrect context, no ... to look in

As a minimal example:

tmp <- data.frame(Y = sample(1:10, 40, replace = TRUE),
  X = sample(1:10, 40, replace = TRUE))

myplot <- function(x, ...) {
  plot(Y ~ X, data = x, ...)
}

myplot(tmp) ## works fine
myplot(tmp, tcl = 1) ## works fine

myplot(tmp, tcl = -0.1)
Error in eval(expr, envir, enclos) : 
  ..1 used in an incorrect context, no ... to look in

myplot(tmp, mgp = c(3, 0.5, 0))
Error in eval(expr, envir, enclos) : 
  ..1 used in an incorrect context, no ... to look in

plot(Y ~ X, data = tmp, mgp = c(3, 0.5, 0)) ## works
plot(Y ~ X, data = tmp, tcl = -0.1) ## works

What am I doing wrong?

Thanks,

Tyler

R version 2.8.1 (2008-12-22)
Debian Testing

-- 
What is wanted is not the will to believe, but the will to find out,
which is the exact opposite.   --Bertrand Russell

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ellipsis problem

2009-05-04 Thread tyler
Duncan Murdoch  writes:

> On 5/4/2009 2:55 PM, tyler wrote:
>>
>
> This looks like another manifestation of the following bug:
>
[...]
>
> which has recently been fixed in R-patched. I haven't traced through
> it, but I do see the same error as you in 2.9.0, but not in 2.9.0
> patched.
>

Ok, thanks. I've updated to 2.9.0, but we don't have the patched version
packaged for Debian yet. I can work around the problem until we do.
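
One possible interim workaround is to force evaluation of the dots before they reach plot.formula (a sketch only; I haven't verified it against this particular bug):

myplot <- function(x, ...) {
  dots <- list(...)   # evaluate the ... arguments here
  do.call(plot, c(list(Y ~ X, data = x), dots))
}
myplot(tmp, tcl = -0.1)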

Cheers,

Tyler

-- 
Philosophy of science is about as useful to scientists as ornithology is
to birds.  --Richard Feynman

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using contour() with x,y,z data?

2008-06-26 Thread tyler
"Bob and Deb" <[EMAIL PROTECTED]> writes:

> I'm new to R and I have a problem :-)  Below is what my data file that looks
> like.  I tried to import and contour this data by doing this:
>
>cv_data <- read.table("cv_data.csv",sep=",",header=TRUE)
>attach(cv_data)
>contour(x,y,z)
>
> I get the error "Error in contour.default(x,y,z) : increasing 'x', and 'y'
> values expected"  I can see that y is not increasing, but I don't know how
> to get this to work.

All of your arguments to contour are wrong. x and y are the locations of
the gridlines for the figure (and are not required), and z is a matrix
of values, not a single vector. See ?contour for the details and
examples.

I'm sure there is a better way to do this, but when I have data in the
format you do:

x   y z
10, 0.1,  3
10, 0.2,  7
[snip]
10, 1.0,  5
20, 0.1, 12
20, 0.2,  4

I use the following function to convert it to the proper format:

image.maker <- function(coords, value){
  ## one row per unique x value and one column per unique y value
  nx <- length(unique(coords[,1]))
  ny <- length(unique(coords[,2]))
  image.out <- matrix(NA, nrow = nx, ncol = ny)
  ## convert the coordinates to row/column indices
  coords[,1] <- as.numeric(factor(coords[,1]))
  coords[,2] <- as.numeric(factor(coords[,2]))
  for (i in 1:nrow(coords))
    image.out[coords[i,1], coords[i,2]] <- value[i]
  return(image.out)
}


my.image <- image.maker(cv_data[,c("x", "y")], cv_data$z)
contour(my.image)
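
A shorter equivalent, assuming every x/y combination occurs exactly once so the data form a complete regular grid:

z.mat <- with(cv_data, tapply(z, list(x, y), mean))
contour(x = sort(unique(cv_data$x)),
        y = sort(unique(cv_data$y)),
        z = z.mat)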

HTH,

Tyler

-- 
Power corrupts. PowerPoint corrupts absolutely.
   --Edward Tufte

http://www.wired.com/wired/archive/11.09/ppt2.html

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] adonis (vegan package) and subsetted factors

2008-04-10 Thread tyler
Hi,

I'm trying to use adonis on a subset of data from a dataframe. The
actual data is in columns 5:118, and the first four columns are various
factors. There are 3 levels of the factor Habitat, and I want to examine
differences among only two of them. So I started with:

> CoastNear = subset(gel_data, Habitat != "I")

The resulting data.frame has three levels for Habitat, but only two of
those levels have any records. Then I run:

> adonis(CoastNear[,5:118]~Habitat, data = CoastNear,permutations=1000,
+ method='jaccard')

Call:
adonis(formula = CoastNear[, 5:118] ~ Habitat, data = CoastNear,
permutations = 1000, method = "jaccard") 

              Df  SumsOfSqs    MeanSqs    F.Model     R2 Pr(>F)
Habitat    2.000  0.0092966  0.0046483  2.0549327 0.0707  0.005
Residuals 54.000  0.1221491  0.0022620            0.9293
Total     56.000  0.1314457                       1.0000

This appears to be wrong - with only two Habitat levels I should only
have 1 Df, shouldn't I? I checked by forcibly excising the third factor
level:

> CoastNear$Habitat <- as.factor(as.character(CoastNear$Habitat))
> adonis(CoastNear[,5:118]~Habitat, data = CoastNear,permutations=1000,
+ method='jaccard')

Call:
adonis(formula = CoastNear[, 5:118] ~ Habitat, data = CoastNear,
permutations = 1000, method = "jaccard") 

              Df  SumsOfSqs    MeanSqs    F.Model     R2 Pr(>F)
Habitat    1.000  0.0092966  0.0092966  4.1859740 0.0707  0.003
Residuals 55.000  0.1221491  0.0022209            0.9293
Total     56.000  0.1314457                       1.0000

This appears to be correct. Subsetting factors is something I always
struggle with in R, but, based on previous experience with lda(), it
seems that R generally does the right thing. Am I doing something wrong
here, or is there a problem in adonis?

Thanks,

Tyler

ps. Sorry for not supplying a reproducible bit of code. The data.frame
is quite large. The general layout is:

> head(gel_data) # additional data columns trimmed for email
  Site Habitat Plot Concate A01 A02 A03 A04 A05 A06 A07 A08 A09 A10 
1   PE       C    1    PEC1   1   1   1   1   1   1   1   1   1   1 
2   PE       C    3    PEC3   1   1   1   1   0   1   1   1   1   1 
3   PE       C    4    PEC4   1   1   1   1   0   1   1   1   1   1 
4   PE       C    6    PEC6   1   1   1   1   0   1   1   1   1   1 
5   PE       C   12   PEC12   1   1   1   1   0   1   1   1   1   1 
6   PE       C   13   PEC13   1   1   1   1   0   1   1   1   1   1

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] adonis (vegan package) and subsetted factors

2008-04-10 Thread tyler
On Thu, Apr 10, 2008 at 04:47:44PM +0100, Gavin Simpson wrote:
> 
> This behaviour arises from the following, using the in-built dune data:
> 
> > newdune.env <- subset(dune.env, Management != "NM")
> > newdune.env$Management
>  [1] BF SF SF SF HF SF HF HF BF BF HF SF SF HF
> Levels: BF HF NM SF
> 
> Notice this hasn't dropped the empty level "NM", and this is what is
> catching out adonis --- it is not checking for empty levels in the
> grouping factor, as this shows:
> 
> > newdune <- dune[which(dune.env$Management != "NM"), ]
> > adonis(newdune ~ Management*A1, data=newdune.env, permutations=100)
> 
> Call:
> adonis(formula = newdune ~ Management * A1, data = newdune.env,
> permutations = 100) 
> 
> Df SumsOfSqs  MeanSqs  F.Model R2 Pr(>F)
> Management 3.0   0.57288  0.19096  1.27735 0.2694  <0.01 ***
> 
> For now, forcibly remove empty factor levels as per your second example,
> but I'll take a look at fixing adonis() 

Great, thanks!
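
(For later readers: more recent versions of R can drop the empty level directly, e.g.

CoastNear$Habitat <- droplevels(CoastNear$Habitat)

which does the same thing as the as.factor(as.character()) trick in my earlier message.)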

Tyler

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] inconsistent lm results with fixed response variable

2009-01-20 Thread tyler
Hi,

I'm analyzing a large number of simulations using lm(), a sample of the
resulting data is pasted below. In some simulations, the response
variable doesn't vary, ie:

> tmp[[2]]$richness
 [1] 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40

When I analyze this using R version 2.8.0 (2008-10-20) on a linux
cluster, I get an appropriate result:


## begin R ##

summary(lm(richness ~ het, data = tmp[[2]]))

Call:
lm(formula = richness ~ het, data = tmp[[2]])

Residuals:
     Min      1Q  Median      3Q     Max 
       0       0       0       0       0 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)       40          0     Inf   <2e-16 ***
het                0          0      NA       NA    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0 on 23 degrees of freedom
Multiple R-squared:   NaN,  Adjusted R-squared:   NaN 
F-statistic:   NaN on 1 and 23 DF,  p-value: NA

## end R ##

This is good, as when I extract the Adjusted R-squared and slope I get
NaN and 0, which are easily identified in my aggregate analysis, so I
can deal with them appropriately. 

However, this isn't always the case:

## begin R ##

 tmp[[1]]$richness
 [1] 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40
[26] 40 40 40 40 40 40 40 40 40 40 40

 summary(lm(richness ~ het, data = tmp[[1]]))

Call:
lm(formula = richness ~ het, data = tmp[[1]])

Residuals:
       Min         1Q     Median         3Q        Max 
-8.265e-14  1.689e-15  2.384e-15  2.946e-15  4.022e-15 

Coefficients:
             Estimate Std. Error   t value Pr(>|t|)    
(Intercept) 4.000e+01  8.418e-15 4.752e+15   <2e-16 ***
het         1.495e-14  4.723e-14 3.160e-01    0.754    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.44e-14 on 34 degrees of freedom
Multiple R-squared: 0.5112, Adjusted R-squared: 0.4968
F-statistic: 35.56 on 1 and 34 DF,  p-value: 9.609e-07

## end R ##

This is a problem, as when I plot the adj. R sq as part of an aggregate
analysis of a large number of simulations, it appears to be a very
strong regression. I wouldn't have caught this except it was
exceptionally high for the simulation parameters. It also differs by
more than rounding error from the results with R 2.8.1 running on my
laptop (Debian GNU/Linux), i.e., adj. R sq 0.5042 vs 0.4968.
Furthermore, on my laptop, none of the analyses produce a NaN adj. R sq,
even for data that do produce that result on the cluster.

Both my laptop and the linux cluster have na.action set to na.omit. Is
there something else I can do to ensure that lm() returns slope == 0
and adj.R.sq == NaN when the response variable is fixed? 

Thanks for any suggestions,

Tyler

Data follows:

`tmp` <-
list(structure(list(richness = c(40, 40, 
40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 
40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 
40, 40), range = c(0.655084651733024, 0.579667533660137, 0.433092220907644, 
0.62937198839679, 0.787891987978164, 0.623511540624239, 0.542744487102066, 
0.905937570175433, 0.806802881350753, 0.680413208666325, 0.873426339019084, 
0.699982832956593, 0.697716600618959, 0.952729864926405, 0.782938474636578, 
1.03899695305995, 0.715075858219333, 0.579749205792549, 1.20648999819246, 
0.648677938600964, 0.651883559714785, 0.997318331273967, 0.926368116052012, 
0.91001274146868, 1.20737951037620, 1.12006560586723, 1.09806272133903, 
0.9750792390176, 0.356496202035743, 0.612018080768747, 0.701905693862144, 
0.735857916053381, 0.991787489781244, 1.07247435214078, 0.60061903319766, 
0.699733090379818), het = c(0.154538307084452, 0.143186508136608, 
0.0690948358402777, 0.132337152911839, 0.169037344105692, 0.117783183361602, 
0.117524251767612, 0.221161206774407, 0.204574928003633, 0.170571000779693, 
0.204489357007294, 0.131749663515638, 0.154127894997213, 0.232672587431942, 
0.198610891796736, 0.260497696582693, 0.129028191256682, 0.128717975847452, 
0.254300896783617, 0.113546727236817, 0.142220347446853, 0.24828642688332, 
0.194340945175726, 0.190782985783610, 0.214676796387244, 0.252940213066992, 
0.22362832797347, 0.182423482989676, 0.0602332226418674, 0.145400861749859, 
0.141297315445974, 0.139798699247632, 0.222815139716421, 0.211971297234962, 
0.120813579628747, 0.150590744533818), n.rich = c(40, 40, 40, 
40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 
40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 
40)), .Names = c("richness", "range", "het", "n.rich")), 
 structure(list(richness = c(40, 40, 
40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 
40, 40, 40, 40, 40, 40, 40), range = c(0.753203162648624, 0.599708526308711, 
0.714477274087683, 0.892359682406808, 0.868440625159371, 0.753239521511417, 
1.20164969658467, 1.20462111558583, 1.13142122690491, 0.95241921975703, 
1.132144816535

Re: [R] inconsistent lm results with fixed response variable

2009-01-20 Thread tyler
Rolf Turner  writes:

> Oh for Pete's sake!

No, just for me.

> Computers use floating point arithmetic.  Your residual standard error in
> case 2 (i.e. 1.44e-14) *is* 0, but floating point arithmetic can't quite see
> that this is so. 

Yes, and that's fine. When I put together a lattice plot to display
several hundred slope coefficients, I don't need to distinguish between
1.44e-14 and 0. Both are visually 'zero', and accurately reflect the
lack of relationship.

My problem came when viewing a lattice plot of several hundred adj. R sq
values, and viewing a handful of very high values in cases where there
is no actual relationship. In some cases R did what I expected, and gave
me a NaN which didn't plot. In other cases, it gave me a very large
number, which did plot, and was quite confusing in context.

Anyways, it will be easy to add a check as you suggest.
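
Something along these lines, using the components returned by summary.lm (a sketch):

fit <- summary(lm(richness ~ het, data = tmp[[1]]))
adj.r.sq <- if (isTRUE(all.equal(fit$sigma, 0))) NaN else fit$adj.r.squared
slope    <- if (isTRUE(all.equal(fit$sigma, 0))) 0   else coef(fit)["het", "Estimate"]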

Thanks for your time,

Tyler

> Put in a check for the RSE being 0, and ``over- ride'' the adjusted R
> squared to be NA (or NaN, or whatever floats your boat) in such
> instances. The all.equal() function might be useful to you:
>
>> x <- 1.44e-14
>> all.equal(x,0)
> [1] TRUE
>
> (Caution:  Trap for Young Players:  If x and y are ``really'' different,
> then all.equal(x,y) doesn't return FALSE as you might expect, but rather
> a description of the difference between x and y --- which may be complicated
> if x and y are complicated objects.  The function isTRUE() is useful here.)
>
>   cheers,
>
>   Rolf Turner
>
>
> On 21/01/2009, at 9:21 AM, tyler wrote:
>
>> Hi,
>>
>> I'm analyzing a large number of simulations using lm(), a sample of the
>> resulting data is pasted below. In some simulations, the response
>> variable doesn't vary, ie:
>>
>>> tmp[[2]]$richness
>>  [1] 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 
>> 40
>>
>> When I analyze this using R version 2.8.0 (2008-10-20) on a linux
>> cluster, I get an appropriate result:
>>
>>
>> ## begin R ##
>>
>> summary(lm(richness ~ het, data = tmp[[2]]))
>>
>> Call:
>> lm(formula = richness ~ het, data = tmp[[2]])
>>
>> Residuals:
>>      Min       1Q   Median       3Q      Max
>>        0        0        0        0        0
>>
>> Coefficients:
>>             Estimate Std. Error t value Pr(>|t|)
>> (Intercept)       40          0     Inf   <2e-16 ***
>> het                0          0      NA       NA
>> ---
>> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>
>> Residual standard error: 0 on 23 degrees of freedom
>> Multiple R-squared:   NaN,  Adjusted R-squared:   NaN
>> F-statistic:   NaN on 1 and 23 DF,  p-value: NA
>>
>> ## end R ##
>>
>> This is good, as when I extract the Adjusted R-squared and slope I get
>> NaN and 0, which are easily identified in my aggregate analysis, so I
>> can deal with them appropriately.
>>
>> However, this isn't always the case:
>>
>> ## begin R ##
>>
>>  tmp[[1]]$richness
>>  [1] 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 
>> 40
>> [26] 40 40 40 40 40 40 40 40 40 40 40
>>
>>  summary(lm(richness ~ het, data = tmp[[1]]))
>>
>> Call:
>> lm(formula = richness ~ het, data = tmp[[1]])
>>
>> Residuals:
>>        Min         1Q     Median         3Q        Max
>> -8.265e-14  1.689e-15  2.384e-15  2.946e-15  4.022e-15
>>
>> Coefficients:
>>              Estimate Std. Error   t value Pr(>|t|)
>> (Intercept) 4.000e+01  8.418e-15 4.752e+15   <2e-16 ***
>> het         1.495e-14  4.723e-14 3.160e-01    0.754
>> ---
>> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>
>> Residual standard error: 1.44e-14 on 34 degrees of freedom
>> Multiple R-squared: 0.5112, Adjusted R-squared: 0.4968
>> F-statistic: 35.56 on 1 and 34 DF,  p-value: 9.609e-07
>>
>> ## end R ##
>>
>> This is a problem, as when I plot the adj. R sq as part of an aggregate
>> analysis of a large number of simulations, it appears to be a very
>> strong regression. I wouldn't have caught this except it was
>> exceptionally high for the simulation parameters. It also differs by
>> more than rounding error from the results with R 2.8.1 running on my
>> laptop (Debian GNU/Linux), i.e., adj. R sq 0.5042 vs 0.4968.
>> Furthermore, on my laptop, none of the analyses produce a NaN adj. R sq,
>> even for data that do produce that result on the cluster.
>>
>> Bot

Re: [R] Discriminant function analysis

2008-02-07 Thread tyler
On Thu, Feb 07, 2008 at 02:36:58PM +, Gavin Simpson wrote:
> 
> But I'm not sure this matters much. If you use the formula interface to
> lda(), factors get expanded to the dummy variables Tyler is talking
> about. But of course, a factor with two levels 0/1 doesn't need much
> manipulation as you only need a single dummy variable to represent its
> two states:
> 

Thanks, Gavin!

R's formula interface is very powerful, and I'm just starting to
understand how to take full advantage of it.

> You might want to standardise your exp variables to zero mean and unit
> variance prior to doing the lda so that all variables carry the same
> weight, if you have mixtures of numeric (continuous) variables and
> binary ones.

This is the part I was unsure of. If you have a categorical
explanatory variable with five levels, you can turn it into four dummy
variables, which you then standardize. Does the original variable end
up getting four times the weight of a single numerical variable?

Cheers,

Tyler

-- 
There is something fascinating about science. One gets such wholesale 
returns of conjecture out of such a trifling investment of fact.
   --Mark Twain

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Need help optimizing/vectorizing nested loops

2008-12-09 Thread tyler
Hi,

I'm analyzing a large number of large simulation datasets, and I've
isolated one of the bottlenecks. Any help in speeding it up would be
appreciated.

`dat` is a dataframe of samples from a regular grid. The first two
columns are the spatial coordinates of the samples, the remaining 20
columns are the abundances of species in each cell. I need to calculate
the species richness in adjacent cells for each cell in the sample. 
For example, if I have nine cells in my dataframe (X = 1:3, Y = 1:3):

  a b c
  d e f
  g h i

I need to calculate the neighbour-richness for each cell; for a, this is
the richness of cells b, d and e combined. The neighbour richness of
cell e would be the combined richness of all the other eight cells.

The following code does what I what, but it's slow. The sample dataset
'dat', below, represents a 5x5 grid, 25 samples. It takes about 1.5
seconds on my computer. The largest samples I am working with have a 51
x 51 grid (2601 cells) and take 4.5 minutes. This is manageable, but
since I have potentially hundreds of these analyses to run, trimming
that down would be very helpful.

After loading the function and the data, the call

  system.time(tmp <- time.test(dat))

Will run the code. Note that I've excised this from a larger, more
general function, after determining that for large datasets this section
is responsible for a slowdown from 10-12 seconds to ca. 250 seconds.

Thanks for your patience,

Tyler


time.test <- function(dat) {

  cen <- dat
  grps <- 5
  n.rich <- numeric(grps^2)
  n.ind <- 1
  
  for(i in 1:grps)
for (j in 1:grps) {
  n.cen <- numeric(ncol(cen) - 2)
  neighbours <- expand.grid((j-1):(j+1), (i-1):(i+1)) 
  neighbours <- neighbours[-5,] 
  neighbours <- neighbours[which(neighbours[,1] %in% 1:grps &
 neighbours[,2] %in% 1:grps),]
  
  for (k in 1:nrow(neighbours))
n.cen <- n.cen + cen[cen$X == neighbours[k,1] &
 cen$Y == neighbours[k,2], -c(1:2)]
  
  n.rich[n.ind] <- sum(as.logical(n.cen))
  n.ind <- n.ind + 1
}

return(n.rich)
}

`dat` <- structure(list(
  X = c(1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1,
  2, 3, 4, 5), Y = c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4,
  4, 4, 4, 4, 5, 5, 5, 5, 5), V1 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
  0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 45L, 131L, 0L, 0L, 34L,
  481L, 1744L), V2 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
  0L, 1L, 88L, 0L, 70L, 101L, 13L, 634L, 0L, 0L, 71L, 640L, 1636L), V3
  = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 49L, 3L, 113L,
  1L, 44L, 167L, 336L, 933L, 0L, 14L, 388L, 1180L, 1709L), V4 = c(0L,
  0L, 0L, 0L, 0L, 0L, 3L, 12L, 0L, 0L, 2L, 1L, 36L, 45L, 208L, 7L,
  221L, 213L, 371L, 1440L, 26L, 211L, 389L, 1382L, 1614L), V5 = c(96L,
  7L, 0L, 0L, 0L, 10L, 17L, 0L, 5L, 0L, 0L, 11L, 151L, 127L, 160L,
  27L, 388L, 439L, 1117L, 1571L, 81L, 598L, 1107L, 1402L, 891L), V6 =
  c(16L, 30L, 13L, 0L, 0L, 10L, 195L, 60L, 29L, 29L, 1L, 107L, 698L,
  596L, 655L, 227L, 287L, 677L, 1477L, 1336L, 425L, 873L, 961L, 1360L,
  1175L), V7 = c(249L, 101L, 69L, 0L, 18L, 186L, 331L, 291L, 259L,
  248L, 336L, 404L, 642L, 632L, 775L, 455L, 801L, 697L, 1063L, 978L,
  626L, 686L, 1204L, 1138L, 627L), V8 = c(300L, 163L, 65L, 145L, 377L,
  257L, 690L, 655L, 420L, 288L, 346L, 461L, 1276L, 897L, 633L, 812L,
  1018L, 1337L, 1295L, 1163L, 550L, 1104L, 768L, 933L, 433L), V9 =
  c(555L, 478L, 374L, 349L, 357L, 360L, 905L, 954L, 552L, 438L, 703L,
  984L, 1616L, 1732L, 1234L, 1213L, 1518L, 1746L, 1191L, 967L, 1394L,
  1722L, 1706L, 610L, 169L), V10 = c(1527L, 1019L, 926L, 401L, 830L,
  833L, 931L, 816L, 1126L, 1232L, 1067L, 1169L, 1270L, 1277L, 1145L,
  1159L, 1072L, 1534L, 997L, 391L, 1328L, 1414L, 1037L, 444L, 1L), V11
  = c(1468L, 1329L, 1013L, 603L, 1096L, 1237L, 1488L, 1189L, 1064L,
  1303L, 1258L, 1479L, 1421L, 1365L, 1101L, 1415L, 1145L, 1329L,
  1325L, 236L, 1379L, 1199L, 729L, 328L, 0L), V12 = c(983L, 1459L,
  791L, 898L, 911L, 1215L, 1528L, 960L, 1172L, 1286L, 1358L, 722L,
  857L, 1478L, 1452L, 1502L, 1013L, 745L, 455L, 149L, 1686L, 917L,
  1013L, 84L, 0L), V13 = c(1326L, 1336L, 1110L, 1737L, 1062L, 1578L,
  1382L, 1537L, 1366L, 1308L, 1301L, 1357L, 746L, 622L, 934L, 1132L,
  954L, 460L, 270L, 65L, 957L, 699L, 521L, 18L, 1L), V14 = c(1047L,
  1315L, 1506L, 1562L, 1254L, 1336L, 1106L, 1213L, 1220L, 1457L, 858L,
  1606L, 590L, 726L, 598L, 945L, 732L, 258L, 45L, 6L, 937L, 436L, 43L,
  0L, 0L), V15 = c(845L, 935L, 1295L, 1077L, 1400L, 1049L, 802L,
  1247L, 1449L, 1046L, 1134L, 877L, 327L, 352L, 470L, 564L, 461L,
  166L, 0L, 0L, 230L, 110L, 29L, 0L, 0L), V16 = c(784L, 675L, 1157L,
  1488L, 1511L, 1004L, 420L, 523L, 733L, 724L, 833L, 542L, 171L, 116L,
  384L, 357L, 197L, 0L, 0L, 0L, 246L, 0L, 0L, 0L, 0L), V17 = c(444L,
  873L, 530L, 596L, 448L, 431L, 109L, 446L, 378L, 243L, 284L, 148L,
  6

Re: [R] Need help optimizing/vectorizing nested loops

2008-12-09 Thread tyler
Charles C. Berry writes:
 > On Tue, 9 Dec 2008, tyler wrote:
 > 
 > > I'm analyzing a large number of large simulation datasets, and I've
 > > isolated one of the bottlenecks. Any help in speeding it up would be
 > > appreciated.
 > 
 > Cast the neighborhoods as an indicator matrix, then use matrix 
 > multiplications:
 > 
 > > system.time({ mn <- with(dat,outer(1:25,1:25, function(i,j)
 > abs(X[i]-X[j])<2 & abs(Y[i]-Y[j])<2 & i!=j ))


Wow, that's fantastic! On my biggest matrix, your code cost me 3.854
seconds, compared to 207.769 seconds with my original function. I
might be able to use `outer' to solve some other slowdowns. 
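
For the archives, the complete richness calculation along those lines looks roughly like this (a sketch assuming `dat` as posted, with the coordinates in the first two columns):

mn <- with(dat, outer(1:nrow(dat), 1:nrow(dat),
                      function(i, j) abs(X[i] - X[j]) < 2 &
                                     abs(Y[i] - Y[j]) < 2 & i != j))
## sum the abundances over each cell's neighbours, then count the
## species with a non-zero total
n.rich <- rowSums((mn %*% as.matrix(dat[, -(1:2)])) > 0)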

Thanks,

Tyler

 > HTH,
 > 
 > Chuck
 > 
 > 
 > 
 > >
 > > `dat` is a dataframe of samples from a regular grid. The first two
 > > columns are the spatial coordinates of the samples, the remaining 20
 > > columns are the abundances of species in each cell. I need to calculate
 > > the species richness in adjacent cells for each cell in the sample.
 > > For example, if I have nine cells in my dataframe (X = 1:3, Y = 1:3):
 > >
 > >  a b c
 > >  d e f
 > >  g h i
 > >
 > > I need to calculate the neighbour-richness for each cell; for a, this is
 > > the richness of cells b, d and e combined. The neighbour richness of
 > > cell e would be the combined richness of all the other eight cells.
 > >
 > > The following code does what I what, but it's slow. The sample dataset
 > > 'dat', below, represents a 5x5 grid, 25 samples. It takes about 1.5
 > > seconds on my computer. The largest samples I am working with have a 51
 > > x 51 grid (2601 cells) and take 4.5 minutes. This is manageable, but
 > > since I have potentially hundreds of these analyses to run, trimming
 > > that down would be very helpful.
 > >
 > > After loading the function and the data, the call
 > >
 > >  system.time(tmp <- time.test(dat))
 > >
 > > Will run the code. Note that I've excised this from a larger, more
 > > general function, after determining that for large datasets this section
 > > is responsible for a slowdown from 10-12 seconds to ca. 250 seconds.
 > >
 > > Thanks for your patience,
 > >
 > > Tyler
 > >
 > >
 > > time.test <- function(dat) {
 > >
 > >  cen <- dat
 > >  grps <- 5
 > >  n.rich <- numeric(grps^2)
 > >  n.ind <- 1
 > >
 > >  for(i in 1:grps)
 > >for (j in 1:grps) {
 > >  n.cen <- numeric(ncol(cen) - 2)
 > >  neighbours <- expand.grid((j-1):(j+1), (i-1):(i+1))
 > >  neighbours <- neighbours[-5,]
 > >  neighbours <- neighbours[which(neighbours[,1] %in% 1:grps &
 > > neighbours[,2] %in% 1:grps),]
 > >
 > >  for (k in 1:nrow(neighbours))
 > >n.cen <- n.cen + cen[cen$X == neighbours[k,1] &
 > > cen$Y == neighbours[k,2], -c(1:2)]
 > >
 > >  n.rich[n.ind] <- sum(as.logical(n.cen))
 > >  n.ind <- n.ind + 1
 > >}
 > >
 > >return(n.rich)
 > > }
 > >
 > > `dat` <- structure(list(
 > >  X = c(1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1,
 > >  2, 3, 4, 5), Y = c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4,
 > >  4, 4, 4, 4, 5, 5, 5, 5, 5), V1 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
 > >  0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 45L, 131L, 0L, 0L, 34L,
 > >  481L, 1744L), V2 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
 > >  0L, 1L, 88L, 0L, 70L, 101L, 13L, 634L, 0L, 0L, 71L, 640L, 1636L), V3
 > >  = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 49L, 3L, 113L,
 > >  1L, 44L, 167L, 336L, 933L, 0L, 14L, 388L, 1180L, 1709L), V4 = c(0L,
 > >  0L, 0L, 0L, 0L, 0L, 3L, 12L, 0L, 0L, 2L, 1L, 36L, 45L, 208L, 7L,
 > >  221L, 213L, 371L, 1440L, 26L, 211L, 389L, 1382L, 1614L), V5 = c(96L,
 > >  7L, 0L, 0L, 0L, 10L, 17L, 0L, 5L, 0L, 0L, 11L, 151L, 127L, 160L,
 > >  27L, 388L, 439L, 1117L, 1571L, 81L, 598L, 1107L, 1402L, 891L), V6 =
 > >  c(16L, 30L, 13L, 0L, 0L, 10L, 195L, 60L, 29L, 29L, 1L, 107L, 698L,
 > >  596L, 655L, 227L, 287L, 677L, 1477L, 1336L, 425L, 873L, 961L, 1360L,
 > >  1175L), V7 = c(249L, 101L, 69L, 0L, 18L, 186L, 331L, 291L, 259L,
 > >  248L, 336L, 404L, 642L, 632L, 775L, 455L, 801L, 697L, 1063L, 978L,
 > >  626L, 686L, 1204L, 1138L, 627L), V8 = c(300L, 163L, 65L, 145L, 377L,
 > >  257L, 690L, 655L, 420L, 288L, 346L, 461L, 1276L, 897L, 633L, 812L,
 > >  1018L, 1337L, 1295L, 1163L, 550L, 1104L, 768L, 933L, 433L),

[R] Windows R 2.15.1 on Citrix

2015-12-11 Thread Tyler Auerbeck
We're currently having an odd issue with an installation of R 2.15.1 on Windows,
running over Citrix. Occasionally we see the application disappear: sometimes
this happens immediately, sometimes after a few minutes, and it's never after
the exact same action or the same period of time. I've looked at the event logs,
but I don't see anything that lines up with those times. Has anyone run across
this issue before? Or can anyone point me to where the R application may write
some of its own logs?

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R package built using newer version of R

2016-01-04 Thread Tyler Auerbeck
We're currently looking at using the R eclipse plugin StatET as our
development environment. Due to certain requirements, we're still using
2.15.1. However a required package of StatET was built using 2.15.3, which
results in the following warning:

Warning message:
package 'rj' was built under R version 2.15.3

I'm still fairly new to R, but is there any way for us to rebuild this
package using 2.15.1? It doesn't appear to cause us any issues, but it's
still not desirable for users to see that warning.

Any help would be appreciated.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R package built using newer version of R

2016-01-05 Thread Tyler Auerbeck
When I run the install.packages("rj",type="source") I get the following:

> install.packages("rj",type="source")
Warning message:
package ‘rj’ is not available (for R version 2.15.1)

I believe this is because this is a package available directly from the
creators of StatET. I tried pulling the zip down directly from their
website and ran the following:

>
install.packages("C:\\users\\admin\\Downloads\\rj_2.0.3-1.zip",type="source",repos=NULL)
package 'rj' successfully unpacked and MD5 sums checked

This installs it directly, but it still installs it as compiled for 2.15.3,
which we see the same warning I originally mentioned.

Here is the sessionInfo() you asked for:

> sessionInfo()
R version 2.15.1 (2012-06-22)
Platform: x86_64-pc-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
States.1252LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C   LC_TIME=English_United
States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_2.15.1


If there isn't a good way to compile this for 2.15.1, is there any way to
just ignore the warning? I've seen that you can do something like

options( warn = -1 )

I know this isn't recommended over an extended timeframe, but the
message only occurs during the first command that you run. We could even
set up some sort of profile that sets the suppression, runs a dummy
command, and then unsets the suppression. I know this is a workaround,
but I just wasn't sure what the simpler solution would be.
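
For example, something like this in a user or site .Rprofile might do it (a sketch; I haven't tested it with StatET):

## load rj quietly so the 'built under R version 2.15.3' warning is not shown
suppressWarnings(library(rj))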

Let me know what you think or what I may be missing.

On Tue, Jan 5, 2016 at 3:41 AM, Harrie Robins  wrote:

> If that fails (sometimes R gives a version error, package not available for
> R version X.X.X), you could try downloading the source package
> (package.tar.gz) and compile it with running from console (or prompt):
>
> R CMD INSTALL packagename.tar.gz library-location
>
> Regards,
>
> Harrie
>
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Duncan
> Murdoch
> Sent: maandag 4 januari 2016 20:16
> To: Tyler Auerbeck ; r-help@r-project.org
> Subject: Re: [R] R package built using newer version of R
>
> On 04/01/2016 2:02 PM, Tyler Auerbeck wrote:
> > We're currently looking at using the R eclipse plugin StatET as our
> > development environment. Due to certain requirements, we're still
> > using 2.15.1. However a required package of StatET was built using
> > 2.15.3, which results in the following warning:
> >
> > Warning message:
> > package 'rj' was built under R version 2.15.3
> >
> > I'm still fairly new to R, but is there any way for us to rebuild this
> > package using 2.15.1? It doesn't appear to cause us any issues, but
> > it's still not desirable for users to see that warning.
> >
> > Any help would be appreciated.
>
> Yes, it's quite easy to do so.  StatET probably gives menu options to do
> it,
> but I don't know them:  you might want to ask them.  From the R console,
> try
>
> install.packages("pkgname", type="source")
>
> and if you have the necessary prerequisites (e.g. compilers), you'll get
> it installed from source.   If it fails, post the errors and the results
> of sessionInfo() here, and we'll probably be able to tell you what to do
> next.
>
> Duncan Murdoch
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] R package built using newer version of R

2016-01-05 Thread Tyler Auerbeck
Alright, I believe I'm making some progress. I'm now running into the
following error. As I mentioned, I'm new to this whole process, so this may
be something simple. When I run the following:

R.exe CMD INSTALL rj_2.0.3-1.tar.gz

I get the following error:

C:\users\admin\> R.exe CMD INSTALL rj_2.0.3-1.tar.gz
* installing to library 'C:/Program Files/R/R-2.15.1/library'
* installing *source* package 'rj' ...
/bin/sh: h.exe: No such file or directory
ERROR: configuration failed for package 'rj'

In case I hadn't mentioned, I am attempting to do all of this on windows. I
had looked around a little more and noticed that there is a lot of mention
of installing Rtools on the machine as well and making sure certain bin
directories are added to the path. I've ensured the following directories
were added to my path:

C:\Program Files\R\R-2.15.1\bin\x64;
C:\Program Files\R\Rtools\bin;
C:\Program Files\R\Rtools\gcc-4.6.3\bin;
C:\Program Files\R\Rtools\gcc-4.6.3\bin64;
C:\Program Files\R\Rtools\gcc-4.6.3\i686-w64-mingw32\bin

Is there anything required that I may be missing? I see the main
problem is that it can't seem to locate h.exe. Is there something I need to
pull down that would provide this? Do I need to add something to my path in
order for the install to find it?

As always, any help would be greatly appreciated.

On Tue, Jan 5, 2016 at 2:34 PM, David Winsemius 
wrote:

>
> > On Jan 5, 2016, at 11:15 AM, Tyler Auerbeck 
> wrote:
> >
> > When I run the install.packages("rj",type="source") I get the following:
> >
> >> install.packages("rj",type="source")
> > Warning message:
> > package ‘rj’ is not available (for R version 2.15.1)
> >
> > I believe this is because this is a package available directly from the
> > creators of StatET. I tried pulling the zip down directly from their
> > website and ran the following:
> >
> >>
> >
> install.packages("C:\\users\\admin\\Downloads\\rj_2.0.3-1.zip",type="source",repos=NULL)
> > package 'rj' successfully unpacked and MD5 sums checked
>
> Despite your "source" for the type parameter, you still gave it a
> Windows binary file. This is the place to get a source version of
> rj-2.0.3-1:
>
> http://download.walware.de/rj-2.0/src/contrib/rj_2.0.3-2.tar.gz
>
> --
> David.
>
>
> >
> > This installs it directly, but it still installs it as compiled for
> 2.15.3,
> > which we see the same warning I originally mentioned.
> >
> > Here is the sessionInfo() you asked for:
> >
> >> sessionInfo()
> > R version 2.15.1 (2012-06-22)
> > Platform: x86_64-pc-mingw32/x64 (64-bit)
> >
> > locale:
> > [1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
> > States.1252LC_MONETARY=English_United States.1252
> > [4] LC_NUMERIC=C   LC_TIME=English_United
> > States.1252
> >
> > attached base packages:
> > [1] stats graphics  grDevices utils datasets  methods   base
> >
> > loaded via a namespace (and not attached):
> > [1] tools_2.15.1
> >
> >
> > If there isn't a good way to compile this for 2.15.1, is there any way to
> > just ignore the warning? I've seen that you can do something like
> >
> > options( warn = -1 )
> >
> > I know this isn't recommended to do on an extended timeframe, but this
> > message only occurs during the first command that you run. Even if we
> > could set up some sort of profile that would set this suppression, run
> > a dummy command and then unset the suppression. I know this is a
> > workaround, but I just wasn't sure what would be the simpler solution.
> >
> > Let me know what you think or what I may be missing.
> >
> > On Tue, Jan 5, 2016 at 3:41 AM, Harrie Robins 
> wrote:
> >
> >> If that fails (sometimes R gives a version error, package not available
> for
> >> R version X.X.X), you could try downloading the source package
> >> (package.tar.gz) and compile it with running from console (or prompt):
> >>
> >> R CMD INSTALL packagename.tar.gz library-location
> >>
> >> Regards,
> >>
> >> Harrie
> >>
> >> -Original Message-
> >> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Duncan
> >> Murdoch
> >> Sent: maandag 4 januari 2016 20:16
> >> To: Tyler Auerbeck ; r-help@r-project.org
> >> Subject: Re: [R] R package built using newer version of R
> >>
> >> On 04/01/2016 2:02 PM, Tyler Auerb

Re: [R] R package built using newer version of R

2016-01-05 Thread Tyler Auerbeck
We can go ahead and ignore that last email. It looks like I had just
configured Rtools incorrectly. Once I resolved that issue I was able to get
this compiled appropriately. Thanks to everyone for the help!

On Wed, Jan 6, 2016 at 1:34 AM, Tyler Auerbeck  wrote:

> Alright, I believe I'm making some progress. I'm now running into the
> following error. As I mentioned, I'm new to this whole process, so this may
> be something simple. When I run the following:
>
> R.exe CMD INSTALL rj_2.0.3-1.tar.gz
>
> I get the following error:
>
> C:\users\admin\> R.exe CMD INSTALL rj_2.0.3-1.tar.gz
> * installing to library 'C:/Program Files/R/R-2.15.1/library'
> * installing *source* package 'rj' ...
> /bin/sh: h.exe: No such file or directory
> ERROR: configuration failed for package 'rj'
>
> In case I hadn't mentioned, I am attempting to do all of this on windows.
> I had looked around a little more and noticed that there is a lot of
> mention of installing Rtools on the machine as well and making sure certain
> bin directories are added to the path. I've ensured the following
> directories were added to my path:
>
> C:\Program Files\R\R-2.15.1\bin\x64;
> C:\Program Files\R\Rtools\bin
> ;C:\Program Files\R\Rtools\gcc-4.6.3\bin;
> C:\Program Files\R\Rtools\gcc-4.6.3\bin64;
> C:\Program Files\R\Rtools\gcc-4.6.3\i686-w64-mingw32\bin
>
> Is there anything that I may be missing that is required. I see the main
> problem is that it can't seem to locate h.exe. Is there something I need to
> pull down that would provide this? Do I need to add something to my path in
> order for the install to find this?
>
> As always, any help would be greatly appreciated.
>
> On Tue, Jan 5, 2016 at 2:34 PM, David Winsemius 
> wrote:
>
>>
>> > On Jan 5, 2016, at 11:15 AM, Tyler Auerbeck 
>> wrote:
>> >
>> > When I run the install.packages("rj",type="source") I get the following:
>> >
>> >> install.packages("rj",type="source")
>> > Warning message:
>> > package ‘rj’ is not available (for R version 2.15.1)
>> >
>> > I believe this is because this is a package available directly from the
>> > creators of StatET. I tried pulling the zip down directly from their
>> > website and ran the following:
>> >
>> >>
>> >
>> install.packages("C:\\users\\admin\\Downloads\\rj_2.0.3-1.zip",type="source",repos=NULL)
>> > package 'rj' successfully unpacked and MD5 sums checked
>>
>> Despite your "source" for the type parameter, you still gave it a
>> Windows binary file. This is the place to get a source version of
>> rj-2.0.3-1:
>>
>> http://download.walware.de/rj-2.0/src/contrib/rj_2.0.3-2.tar.gz
>>
>> --
>> David.
>>
>>
>> >
>> > This installs it directly, but it still installs it as compiled for
>> 2.15.3,
>> > which we see the same warning I originally mentioned.
>> >
>> > Here is the sessionInfo() you asked for:
>> >
>> >> sessionInfo()
>> > R version 2.15.1 (2012-06-22)
>> > Platform: x86_64-pc-mingw32/x64 (64-bit)
>> >
>> > locale:
>> > [1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
>> > States.1252LC_MONETARY=English_United States.1252
>> > [4] LC_NUMERIC=C   LC_TIME=English_United
>> > States.1252
>> >
>> > attached base packages:
>> > [1] stats graphics  grDevices utils datasets  methods   base
>> >
>> > loaded via a namespace (and not attached):
>> > [1] tools_2.15.1
>> >
>> >
>> > If there isn't a good way to compile this for 2.15.1, is there any way
>> to
>> > just ignore the warning? I've seen that you can do something like
>> >
>> > options( warn = -1 )
>> >
>> > I know this isn't recommended to do on an extended timeframe, but this
>> > message only occurs during the first command that you run. Even if we
>> > could set up some sort of profile that would set this suppression, run
>> > a dummy command and then unset the suppression. I know this is a
>> > workaround, but I just wasn't sure what would be the simpler solution.
>> >
>> > Let me know what you think or what I may be missing.
>> >
>> > On Tue, Jan 5, 2016 at 3:41 AM, Harrie Robins 
>> wrote:
>> >
>> >> If that fails (sometimes R gives a version error, package not
&

Re: [R] Parts of Speech Tagging

2013-08-24 Thread Tyler Rinker
Have a look at ?Maxent_POS_Tag_Annotator.  The examples show you how to get the
tagPOS behavior.
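
For example, roughly following the help-page example (a sketch; it needs the NLP and openNLP packages plus their installed models):

library(NLP)
library(openNLP)
s <- as.String("The quick brown fox jumps over the lazy dog.")
a <- annotate(s, list(Maxent_Sent_Token_Annotator(),
                      Maxent_Word_Token_Annotator()))
a <- annotate(s, Maxent_POS_Tag_Annotator(), a)
w <- subset(a, type == "word")
## paste each word together with its POS tag, e.g. "fox/NN"
sprintf("%s/%s", s[w], sapply(w$features, `[[`, "POS"))
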
Cheers,
Tyler

> Date: Sun, 25 Aug 2013 04:11:33 +0530
> From: sid.aru...@gmail.com
> To: r-help@r-project.org; r-h...@stat.math.ethz.ch
> Subject: [R] Parts of Speech Tagging
> 
> I was using tagPOS function from openNLP package for parts-of-speach. Now
> the package is updated and the function is not present. Any suggestions how
> to do it now ?
> 
> Thanks for your help.
> 
> -- 
> Regards,
> 
> Siddharth Arun,
> Contact No. - +91 8880065278
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] (no subject)

2013-08-25 Thread Tyler Rinker
Greetings R Community,

I am attempting to make sure a parent package passes all CRAN checks for the 
dev version (R Under development (unstable) (2013-08-24 r63687) -- "Unsuffered 
Consequences").  The parent package I'm compiling relies on another package 
that is purely a dataset package as recommended by  CRAN Repository Policy for 
larger data sets:


"Where a large amount of data is required (even after compression), 
consideration should be given to a separate data-only package which can be 
updated only rarely (since older versions of packages are archived in 
perpetuity)."


The dataset only package is used heavily by the parent in that the data sets 
are frequently used by the parent package's functions and, additionally, it is 
sensible (in my opinion; particularly considering the intended user for my 
package) that the user would not need to manually load the dataset package when 
loading the parent package and still have access to all the data. 


In the past I would have loaded the data sets from the data set only package by 
adding the dataset package to the `Depends:` field in the `DESCRIPTION file` of 
the parent package.  In the R dev version (Windows; R Under development 
(unstable) (2013-08-24 r63687) -- "Unsuffered Consequences") a Note occurs when 
including this dataset only package in the Depends field and nothing is 
actually imported (currently I don't export data sets but maybe I should be 
adding @exports to the roxygen2 documentation):


* checking dependencies in R code ... NOTE
Package in Depends field not imported from: 'qdapDictionaries'
  These packages needs to imported from for the case when
  this namespace is loaded but not attached.
See the information on DESCRIPTION files in the chapter 'Creating R
packages' of the 'Writing R Extensions' manual.


If the datasets were actually functions I could use @importFrom (roxygen2 
documentation) to import the functions, and the Note would go away, but in this 
case the package contains datasets which are not exported (I don't think you're 
supposed to export data sets but would love to be wrong).


What is the best approach to having a data only package be accessible to the 
parent functions and user without explicitly loading the data set package, yet, 
pass the R dev version (R Under development (unstable) (2013-08-24 r63687) -- 
"Unsuffered Consequences") check?



Cheers

Tyler
===


Plain text version of this email: 
https://dl.dropboxusercontent.com/u/61803503/Errors/depends_question.txt

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Incorporating a dataset only package

2013-08-25 Thread Tyler Rinker
Greetings R Community,

I am attempting to make sure a parent package passes all CRAN checks for the 
dev version (R Under development (unstable) (2013-08-24 r63687) -- "Unsuffered 
Consequences").  The parent package I'm compiling relies on another package 
that is purely a dataset package as recommended by  CRAN Repository Policy for 
larger data sets:


"Where a large amount of data is required (even after compression), 
consideration should be given to a separate data-only package which can be 
updated only rarely (since older versions of packages are archived in 
perpetuity)."


The dataset only package is used heavily by the parent in that the data sets 
are frequently used by the parent package's functions and, additionally, it is 
sensible (in my opinion; particularly considering the intended user for my 
package) that the user would not need to manually load the dataset package when 
loading the parent package and still have access to all the data. 


In the past I would have loaded the data sets from the data set only package by 
adding the dataset package to the `Depends:` field in the `DESCRIPTION file` of 
the parent package.  In the R dev version (Windows; R Under development 
(unstable) (2013-08-24 r63687) -- "Unsuffered Consequences") a Note occurs when 
including this dataset only package in the Depends field and nothing is 
actually imported (currently I don't export data sets but maybe I should be 
adding @exports to the roxygen2 documentation):


* checking dependencies in R code ... NOTE
Package in Depends field not imported from: 'qdapDictionaries'
  These packages needs to imported from for the case when
  this namespace is loaded but not attached.
See the information on DESCRIPTION files in the chapter 'Creating R
packages' of the 'Writing R Extensions' manual.


If the datasets were actually functions I could use @importFrom (roxygen2 
documentation) to import the functions, and the Note would go away, but in this 
case the package contains datasets which are not exported (I don't think you're 
supposed to export data sets but would love to be wrong).


What is the best approach to having a data only package be accessible to the 
parent functions and user without explicitly loading the data set package, yet, 
pass the R dev version (R Under development (unstable) (2013-08-24 r63687) -- 
"Unsuffered Consequences") check?  This desired outcome is very much the way 
the `datasets` package is loaded by default when R starts.
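
One option that seems to be what the NOTE is asking for (an untested sketch, not something I've confirmed with CRAN): keep the data package in Depends so its datasets sit on the user's search path, and also import its namespace so the parent package's own functions can find it when the namespace is loaded but not attached:

## DESCRIPTION (relevant field only)
Depends: qdapDictionaries

## NAMESPACE
import(qdapDictionaries)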



Cheers

Tyler (apologies on the double send as the first one had no subject line)
===


Plain text version of this email: 
https://dl.dropboxusercontent.com/u/61803503/Errors/depends_question.txt

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Trouble with Slidify and Latex

2013-09-01 Thread Tyler Rinker
Is there a reason not to contact the package author directly and ask him?
slidify isn't on CRAN, so you likely got it from GitHub, which is where package
problems (particularly a beta package's problems) are discussed.  Here is the
link to the issues page for slidify: https://github.com/ramnathv/slidify/issues?state=open

Tyler Rinker 



> From: noahsilver...@ucla.edu
> Date: Sun, 1 Sep 2013 13:40:52 -0700
> To: r-help@r-project.org
> Subject: [R] Trouble with Slidify and Latex
>
> Hi,
>
> (Re-submitting as the original doesn't look like it made it to the list.)
>
> Just starting to play around with the awesome Slidify package.
>
> For some reason, I can't get it to render any Latex in the presentation. Have 
> reviews all the docs and think I'm doing things correctly. Is there something 
> possibly broken with my installation, or am I misunderstanding the markdown 
> syntax?
>
> I have, on a single slide:
>
> Test $A = 1+2$ and some text after
>
> What I see in the Presentation:
>
>
> Test \(A = 1+2\) and some text after
>
>
> Note: This is literally a "slash" followed by a "parenthesis" in the final 
> HTML slide.
>
>
> Any ideas on what's wrong here?
>
> Thanks.
>
> --
> Noah Silverman, C.Phil
> UCLA Department of Statistics
> 8117 Math Sciences Building
> Los Angeles, CA 90095
>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>   
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How do I parse text?

2013-09-06 Thread Tyler Rinker
Henry,

Have a look at the qdap package's termco, wfm, adjacency_matrix, and (possibly)
word_associate functions.  I'm not sure whether they'll do what you want, as you
don't give much detail about what the data look like or the desired output (an
example of the output would help).

Cheers,
Tyler Rinker
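
An editorial base-R sketch (not part of the original reply), in case qdap turns 
out not to fit: word counts plus a word-by-word co-occurrence matrix for short 
strings like the ones described below. 'dx' is made-up example data.

dx <- c("ACUTE URI NOS", "OPEN WOUND OF FOREHEAD", "CROUP", "STREP SORE THROAT")
words  <- strsplit(tolower(dx), "\\s+")                  # tokenize each record
counts <- sort(table(unlist(words)), decreasing = TRUE)  # overall word counts
vocab  <- names(counts)
dtm <- t(sapply(words, function(w) as.integer(vocab %in% w)))  # record-by-word indicator matrix
colnames(dtm) <- vocab
cooc <- crossprod(dtm)   # word-by-word co-occurrence counts (diagonal = record frequency)
counts
cooc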

 
> From: htrobert...@seton.org
> To: r-help@r-project.org
> Date: Fri, 6 Sep 2013 21:14:42 +
> Subject: [R] How do I parse text?
>
> I have a data frame with a character field of the form "ACUTE URI NOS", "OPEN 
> WOUND OF FOREHEAD", "CROUP", "STREP SORE THROAT", 
>
> How can I get counts of all the words and their co-occurences? I've spent a 
> long time searching on google, but it just takes me on a wild goose chase of 
> dozens of modules involving advanced natural language processing theory. All 
> I want is word counts and co-occurences.
>
> Thanks
>
>
>
>
> CONFIDENTIALITY NOTICE:\ This email message and any acco...{{dropped:13}}
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>   
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] qdap 0.2.0 released

2013-02-04 Thread Tyler Rinker

qdap (Quantitative Discourse Analysis Package) is an R package designed to 
assist in quantitative discourse analysis. The package stands as a bridge 
between qualitative transcripts of dialogue and statistical analysis and 
visualization.


This is the first CRAN release of qdap: 
http://cran.r-project.org/web/packages/qdap/index.html


The qdap package automates many of the tasks associated with quantitative 
discourse analysis of transcripts containing discourse including frequency 
counts of sentence types, words, sentences, turns of talk, syllable counts and 
other assorted analysis tasks. The package provides parsing tools for preparing 
transcript data. Many functions enable the user to aggregate data by any number 
of grouping variables providing analysis and seamless integration with other R 
packages that undertake higher level analysis and visualization of text. This 
provides the user with a more efficient and targeted analysis.


Github development version: https://github.com/trinker/qdap


As qdap is further developed the following are planned: (a) a github hosted 
website via statics (b) a help video section and (c) a vignette detailing 
workflow and use of qdap.


Tyler Rinker  
___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Issues with installing RBGL package

2013-02-22 Thread jason tyler
Hi all,

I was installing a package *RBGL* of bioconductor. However, I had some
issues while installing it. I asked the devel group of bioconductor and
they told me to consult this group. Here is my conversation with the
bioconductor group related to the problem


*Me->*
I was trying to install the RBGL package using the following command

biocLite("RBGL")

However, I got the following error
*
installing *source* package ‘RBGL’ ...
untarring boost include tree...
** libs
/usr/bin/clang++ -I/usr/local/Cellar/r/2.15.1/R.framework/Resources/include
-DNDEBUG  -I/usr/local/Cellar/readline/6.2.4/include -isystem
/usr/local/include -I/usr/X11/include   -Irbgl_trimmed_boost_1_49_0 -fPIC
-Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.7  -c
bbc.cpp -o bbc.o
/usr/bin/clang++ -I/usr/local/Cellar/r/2.15.1/R.framework/Resources/include
-DNDEBUG  -I/usr/local/Cellar/readline/6.2.4/include -isystem
/usr/local/include -I/usr/X11/include   -Irbgl_trimmed_boost_1_49_0 -fPIC
-Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.7  -c
cliques.cpp -o cliques.o
cliques.cpp:26:31: error: redefinition of 'p' as different kind of symbol
std::pair p;
  ^
rbgl_trimmed_boost_1_49_0/boost/mpl/assert.hpp:149:42: note: previous
definition is here
BOOST_MPL_AUX_ASSERT_CONSTANT( bool, p = !p_type::value );
 ^
rbgl_trimmed_boost_1_49_0/boost/mpl/assert.hpp:56:58: note: expanded from
macro 'BOOST_MPL_AUX_ASSERT_CONSTANT'
#   define BOOST_MPL_AUX_ASSERT_CONSTANT(T, expr) enum { expr }
 ^
cliques.cpp:53:19: error: expression is not assignable
p = edge(*va1, *va2, g);
~ ^
cliques.cpp:54:25: error: member reference base type
'mpl_::assert_arg_pred_not>::' is not a
structure or union
if ( !p.second ) return FALSE;*


Any suggestions how to overcome this or what is causing this issue?

*Vincent Gray(of bioconductor group) -> *
please provide sessionInfo() and result of gcc -v

*Me->*
gcc -v
Using built-in specs.
Target: i686-apple-darwin11
Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.11~28/src/configure
--disable-checking --enable-werror --prefix=/Applications/Xcode.
app/Contents/Developer/usr/llvm-gcc-4.2 --mandir=/share/man
--enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm-
--program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib
--build=i686-apple-darwin11 --enable-llvm=/private/var/
tmp/llvmgcc42/llvmgcc42-2336.11~28/dst-llvmCore/Developer/usr/local
--program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11
--target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)


*sessionInfo() Results*

R version 2.15.1 (2012-06-22)
Platform: x86_64-apple-darwin11.4.0 (64-bit)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] BiocInstaller_1.8.3

loaded via a namespace (and not attached):
[1] tools_2.15.1

Thanks

Regards,
Jason

*Vincent Gray -> *
OK, I didn't read your log too well, so I missed that you are using
clang++.  Theoretically it will work

http://blog.llvm.org/2010/05/clang-builds-boost.html

I am using the gcc supplied in Xcode 4.0.2, with

bash-3.2$ gcc -v
Using built-in specs.
Target: i686-apple-darwin10
Configured with: /var/tmp/gcc/gcc-5666.3~123/src/configure
--disable-checking --enable-werror --prefix=/usr --mandir=/share/man
--enable-languages=c,objc,c++,obj-c++
--program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib
--build=i686-apple-darwin10 --program-prefix=i686-apple-darwin10-
--host=x86_64-apple-darwin10 --target=i686-apple-darwin10
--with-gxx-include-dir=/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Apple Inc. build 5666) (dot 3)

this has no problem installing RBGL from source.  your R is a little out of
date but I don't think that's an issue.

I have minimal experience with clang++ and you may have to take this to
R-help or R-devel if you are not going to use Xcode- based compilation, as
that's what we are basing our builds/tests on.  I did try

bash-3.2$ clang++ -v
clang version 3.2 (tags/RELEASE_32/final)
Target: x86_64-apple-darwin10.8.0
Thread model: posix
bash-3.2$ clang++ -arch x86_64 -dynamiclib -Wl,-headerpad_max_install_names
-undefined dynamic_lookup -single_module -multiply_defined suppress
-L/Users/stvjc/ExternalSoft/READLINE-62-DIST/lib -lreadline
-L//Users/stvjc/ExternalSoft/LIBICONV-64/lib -liconv
-L/Users/stvjc/ExternalSoft/jpeg-6b -ljpeg -o cliques.so cliques.o
-F/Users/stvjc/ExternalSoft/R-devel-dist/R.framework/.. -framework R
-Wl,-framework -Wl,CoreFoundation

and this aping of the R CMD SHLIB command with clang++ quietly produced a
.so.  Upon linking this 

Re: [R] Word Frequency for each row

2013-03-09 Thread Tyler Rinker

I think the qdap package's termco (term count) function will do what you want. 
Read the specifics, as spacing around the word matters.
    


library(qdap);       
termco(DATA$state, 1:nrow(DATA), c("it"))
    




> Date: Fri, 8 Mar 2013 21:34:31 +0530
> From: sudipanal...@gmail.com
> To: r-help@r-project.org
> Subject: [R] Word Frequency for each row
>
> Hi All,
>
> I am wondering if there is any examples where you can count your
> interested "word" in each row. For an example if you have data with *'ID*'
> and '*write-up*' for 100 rows, how would I calculate the word frequency for
> each row ?
>
> Thank you for all your time.
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Word Frequency for each row

2013-03-09 Thread Tyler Rinker


I see you provided sample data.  Here it is with that:


library(qdap)
termco(dat$Data, dat$ID, c(" oranges "))



> From: tyler_rin...@hotmail.com
> To: sudipanal...@gmail.com; r-help@r-project.org
> Date: Sat, 9 Mar 2013 17:20:24 -0500
> Subject: Re: [R] Word Frequency for each row
>
>
> I think the qdap package's termco (termo count) function will do what you 
> want. Read the specifics as spacing around the word matters.
>
>
>
> library(qdap);
> termco(DATA$state, 1:nrow(DATA), c("it"))
>
>
>
>
> 
> > Date: Fri, 8 Mar 2013 21:34:31 +0530
> > From: sudipanal...@gmail.com
> > To: r-help@r-project.org
> > Subject: [R] Word Frequency for each row
> >
> > Hi All,
> >
> > I am wondering if there is any examples where you can count your
> > interested "word" in each row. For an example if you have data with *'ID*'
> > and '*write-up*' for 100 rows, how would I calculate the word frequency for
> > each row ?
> >
> > Thank you for all your time.
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] reports 0.1.2 released

2013-03-12 Thread Tyler Rinker

I'm very pleased to announce the release of reports: An R package to assist in 
the workflow of writing academic articles and other reports.


This is a bug fix release of reports: 
http://cran.r-project.org/web/packages/reports/index.html


The reports package assists in writing reports and presentations by providing a 
frame work that brings together existing R, LaTeX/.docx and Pandoc tools. The 
package is designed to be used with RStudio, MiKTex/Tex Live/LibreOffice, 
knitr, knitcitations, Pandoc and pander. The user will want to download these 
free programs/packages to maximize the effectiveness of the reports package. 
Functions with two letter names are general text formatting functions for 
copying text from articles for inclusion as a citation.


Github development version: https://github.com/trinker/reports


As reports is further developed the following are planned: (a) a help video 
section and (b) a vignette detailing workflow and use of reports.


Tyler Rinker

  
___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] custom startup/welcome message

2013-04-04 Thread Tyler Rinker
What is your OS?


> From: michael.weyla...@gmail.com
> Date: Thu, 4 Apr 2013 15:31:31 -0500
> To: pelj...@yahoo.co.uk
> CC: r-help@r-project.org
> Subject: Re: [R] custom startup/welcome message
>
> On Thu, Apr 4, 2013 at 9:49 AM, lejeczek  wrote:
> > yeap, I've done it,
> > I was hoping for a complete customization,
> > and even Rprofile.site adds only to what is being printed by default anyway,
> > I mean that header is always there. R version. untill "Type 'q()'..
> > and R -q silences everything :(
> >
>
> I'm not sure what the question is now (and please "context post"
> instead of top-posting) -- I suppose you could patch the C code that
> prints that message if you want it to say something else. You'd have
> to re-build R but it's easy code to change.
>
> It's at the top of $R_HOME/src/main/version.c:
> http://svn.r-project.org/R/trunk/src/main/version.c
>
> Michael
>
> >
> >
> > On 04/04/13 15:01, Michael Weylandt wrote:
> >>
> >> On Apr 4, 2013, at 6:20, lejeczek  wrote:
> >>
> >>> hi everybody
> >>>
> >>> I wonder if there is a simple way, but not simple would be
> >>> ok too,
> >>> to customize info/welcome page at session start time?
> >>
> >> Probably easiest to do it by way of some cat() calls in your .Rprofile.
> >> See ?Startup for details.
> >>
> >> MW
> >>
> >>> what I'd like to do is to put together simple short howto /
> >>> dos & don'ts page for users,
> >>> I'm thinking it would be great if it was possible
> >>>
> >>> many thanks
> >>>
> >>> [[alternative HTML version deleted]]
> >>>
> >>> __
> >>> R-help@r-project.org mailing list
> >>> https://stat.ethz.ch/mailman/listinfo/r-help
> >>> PLEASE do read the posting guide
> >>> http://www.R-project.org/posting-guide.html
> >>> and provide commented, minimal, self-contained, reproducible code.
> >
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>   
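
An editorial example (not from the thread): with R started as `R -q` to drop the 
default banner, a .First() function in Rprofile.site or ~/.Rprofile gives full 
control over the startup message; see ?Startup.

.First <- function() {
  cat("Welcome to the group R install\n",
      "  * Do:    keep project data under ~/projects\n",
      "  * Don't: install packages system-wide\n",
      sep = "")
}
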
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Writing contrast statements to test difference of slope in linear regressions

2013-04-23 Thread Hallman, Tyler
Hi Everyone,

I am uncertain that I am writing the contrast statements correctly. Basically, 
I'm unsure when to use a -1 and a 1 when writing the contrasts. Specifically I 
am interested in comparing the slopes between different temperature regimes. 
Temperature is therefore a factor. Time and percent are numerical. Using the 
gmodels package I made the following model:

m2<-lm(Percent~Time+Temperature, data=Hchrys.Temp);summary(m2)

# results from m2
Call:
lm(formula = Percent ~ Time + Temperature, data = Hchrys.Temp)

Residuals:
     Min       1Q   Median       3Q      Max
-0.098333 -0.031667 -0.00  0.026667  0.101667

Coefficients:
   Estimate Std. Error t value Pr(>|t|)
(Intercept)  -0.023  0.0147504  -1.582 0.119413
Time  0.0007639  0.0001774   4.306 6.91e-05 ***
Temperature22:18  0.013  0.0170324   0.783 0.437088
Temperature22:20  0.013  0.0170324   0.783 0.437088
Temperature22:22  0.067  0.0170324   3.914 0.000252 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04665 on 55 degrees of freedom
Multiple R-squared: 0.3997, Adjusted R-squared: 0.356
F-statistic: 9.154 on 4 and 55 DF,  p-value: 9.653e-06

# I then built the contrasts based on the order of the coefficients above.

cm1<-rbind("1:Temperature 20:20 v.22:22 " = c(0,0,0,0,1),
 "2:Temperature 22:18 v.22:22 " = c(0,0,-1,0,1),
 "3:Temperature 22:20 v.22:22 " = c(0,0,0,-1,1),
 "4:Temperature 20:20 v.22:20 " = c(0,0,0,1,0),
 "5:Temperature 22:18 v.22:20 " = c(0,0,-1,1,0),
 "6:Temperature 20:20 v.22:18 " = c(0,0,1,0,0))

# To compare between them I used:
estimable(m2,cm1)

# The results of which are below.
                               Estimate Std. Error       t value DF     Pr(>|t|)
1:Temperature 20:20 v.22:22    6.67e-02 0.01703235  3.914120e+00 55 0.0002522632
2:Temperature 22:18 v.22:22    5.33e-02 0.01703235  3.131296e+00 55 0.0027865530
3:Temperature 22:20 v.22:22    5.33e-02 0.01703235  3.131296e+00 55 0.0027865530
4:Temperature 20:20 v.22:20    1.33e-02 0.01703235  7.828240e-01 55 0.4370882317
5:Temperature 22:18 v.22:20  -3.469447e-17 0.01703235 -2.036975e-15 55 1.00
6:Temperature 20:20 v.22:18    1.33e-02 0.01703235  7.828240e-01 55 0.4370882317

Did I write the contrasts correctly? And does this then indicate that the slope 
of 22:22 was significantly different from all others but none of the others 
were different?

Help with comparing the slopes between these regressions would be wonderful.
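
An editorial sketch (not part of the original post): with Percent ~ Time + Temperature 
the Temperature contrasts above compare intercept shifts at a common slope; to compare 
slopes the model needs a Time:Temperature interaction, and the contrasts are then 
written on the interaction coefficients. The data here are simulated because the 
original Hchrys.Temp data are not available.

library(gmodels)
set.seed(1)
d <- data.frame(Time        = rep(seq(0, 120, by = 24), times = 4),
                Temperature = factor(rep(c("20:20", "22:18", "22:20", "22:22"), each = 6)))
d$Percent <- 0.0008 * d$Time + 0.0002 * (d$Temperature == "22:22") * d$Time +
             rnorm(nrow(d), sd = 0.03)
m3 <- lm(Percent ~ Time * Temperature, data = d)
names(coef(m3))
## (Intercept), Time, Temperature22:18, Temperature22:20, Temperature22:22,
## Time:Temperature22:18, Time:Temperature22:20, Time:Temperature22:22
cm <- rbind("slope 22:22 vs 20:20" = c(0, 0, 0, 0, 0,  0, 0, 1),
            "slope 22:22 vs 22:18" = c(0, 0, 0, 0, 0, -1, 0, 1))
estimable(m3, cm)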

Cheers,

-
Tyler Hallman, M.S.
Ph.D. Student
The Robinson Lab
Department of Fisheries and Wildlife
Oregon State University Corvallis

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Creating multiple maps so points don't overlap

2012-09-22 Thread Tyler Petroelje
Hello,

I am working within package 'maptools' to plot a number of collared animal
locations by reading in shapefiles of locations, roads, hydrology, and
landownership as imported layers.

The trouble I have is that some individual locations are overlapping and I
would like to "zoom" into or create new plots for overlapping points/points
that are too close together. I will be making many of these maps, so I
would like to not have to manually select the limits for each area where
points are overlapping.

I would be grateful for any information on how to code for a new plot to be
created or zoomed in upon when points are "x" close together.

Thank you,

- MIPP

Please see image for visual explanation

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Question about use of sort.list(sort.list(x)) in rank.r

2012-10-16 Thread Tyler Ritchie
I was looking at rank() and I came across:

...
"first" = sort.list(sort.list(xx)), ...

line 32 of rank.r [1]

sort.list(x) returns the indices of the values of x in ascending (by
default) order. So sort.list(sort.list(x)) returns the same list.

So, what am I missing here?

-Tyler

[1] view-source:http://svn.r-project.org/R/trunk/src/library/base/R/rank.R

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Major discrepancy between R and Stata for ARIMA

2014-04-09 Thread Benster, Tyler
Hi all,

I've been looking through documentation to try to understand why Stata and
R occasionally come up with very different parameter estimates for ARIMA,
and am stumped. Existing discussion on this question, including code, can
be found here:
https://stackoverflow.com/questions/22443395/major-discrepancies-between-r-and-stata-for-arima.
I will summarize below for convenience.

Using historical Lynx Pelt data (
https://www.dropbox.com/s/v0h9oywa4pdjblu/Lynxpelt.csv), here are two
tables of AIC values from R and Stata for ARIMA(p,q) models for 0<=p<=5 and
0<=q<=5. Note that while most estimates match to seven significant digits,
several estimates diverge wildly, like the (1,3), (4,2) and the (3,2).

AIC calculations from STATA with technique(bfgs) for ARIMA(p,q):
   q0 q1 q2 q3 q4
p0  145.25614  100.20123   87.45929  77.570744  85.863777
p1  101.54848  84.916921   82.11809  86.444131  74.263937
p2  63.411671  49.424167  44.149023  40.966325  42.760294
p3  52.260723  49.196628  40.442078  43.498413  43.622292
p4  46.196192  48.195322  42.396986  42.289595  0

R results from above for easy comparison:

AIC calculations from R for ARIMA(p,q)
  q0q1   q2   q3   q4
p0 145.25613 100.20123 87.45927 77.57073 85.86376
p1 101.54847  84.91691 82.11806 77.15318 74.26392
p2  63.41165  49.42414 44.14899 40.96787 44.33848
p3  52.26069  49.19660 52.00560 43.50156 45.17175
p4  46.19617  48.19530 49.50422 42.43198 45.71375


Note that I manually forced Stata to us BFGS as the optimization method to
match R, as the default usually alternates 5 steps BHHH and 10 steps BFGS.
In R, I turned off transformation of parameters & forced use of maximum
likelihood.

Do these differences result from starting values? The parameter estimates
(and acf/pacf) are sufficiently different in Stata & R that one could
logically arrive at different model specifications based solely on the
statistical program used.

Thank you for your help!

Twitter: @tbenst 
LinkedIn: tylerbenster 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Long equation in documentation

2012-12-09 Thread Tyler Rinker

I have a long equation that I need to break in the R documentation of a package 
or it trails off the right hand side of the page. Here's the formula:


\deqn{Cov(r_{ist}, r_{iuv})= [.5\rho_{ist}\rho_{iuv}(\rho_{isu}^2 + 
\rho_{isv}^2 + \rho_{itu}^2 + \rho_{itv}^2) + \rho_{isu}\rho_{itv}+ 
\rho_{isv}\rho_{itu}-(\rho_{ist}\rho_{isu}\rho_{isv} + 
\rho_{its}\rho_{itu}\rho_{itv}) + \rho_{ius}\rho_{iut}\rho_{iuv} + 
\rho_{ivs}\rho_{ivt}\rho_{ivu}]/n_i}



How can I break the formula and optionally indent the second lower piece; 
though I'd settle for break it right now? 

Tyler Rinker

Note:  Cross posted here after no viable answer on stackoverflow: 
http://stackoverflow.com/questions/13780190/break-long-formula-r-documentation

Plain txt file attached in case message is garbled. 
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Creating an R package in windows- where to put images?

2012-12-09 Thread Tyler Rinker

I recently included a .bib file in a package in the directory: 
package_name/inst/extdata


I then recall this file using: system.file("extdata/bibTest.bib", package = 
"metaDAT")


I assume something similar could be helpful here.
> From: jdnew...@dcn.davis.ca.us
> Date: Sun, 9 Dec 2012 22:11:47 -0800
> To: su...@sssu.edu.in; r-help@r-project.org
> Subject: Re: [R] Creating an R package in windows- where to put images?
> 
> http://cran.r-project.org/doc/manuals/r-release/R-exts.html#Data-in-packages
> 
> Don't get stuck on your idea that the file is a gif... once you load it into 
> memory it is an R object.
> ---
> Jeff NewmillerThe .   .  Go Live...
> DCN:Basics: ##.#.   ##.#.  Live Go...
>   Live:   OO#.. Dead: OO#..  Playing
> Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
> /Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
> --- 
> Sent from my phone. Please excuse my brevity.
> 
> Subramanian S  wrote:
> 
> >My R version: 2.15.1- windows
> >
> >While creating an R package in windows, i want to include an image (a
> >.gif
> >image) that will finally be in the package. A function in my package
> >will
> >call the gif image. Where should i put the images before running "Rcmd
> >build mypkg" or "Rcmd INSTALL --build mypkg", so that after creating
> >the
> >package, the .zip file / .tar.gz/ .tgz would have the gif image in the
> >correct place in each OS.  Basically two questions (1) which directory
> >should i put the gif image before building the pkg and (2)how do i
> >specify
> >the path in the function that will call the gif image?
> >
> > [[alternative HTML version deleted]]
> >
> >__
> >R-help@r-project.org mailing list
> >https://stat.ethz.ch/mailman/listinfo/r-help
> >PLEASE do read the posting guide
> >http://www.R-project.org/posting-guide.html
> >and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] clusterboot function in the fpc package in R

2013-09-17 Thread Hallman, Tyler
Hi Everyone,

So I am trying to use the clusterboot function in the fpc package in R. Really 
I want some measures of cluster stability. I would love to get something like 
p-values on the different branches of the dendrogram.  I've tried to use this 
function now but I haven't yet been able to. I'm not quite sure what I'm doing 
wrong. I create a dissimilarity matrix of my data and then run the function 
with the code below:

dMOFF.2007<-dist(MOFF.2007)
cf1 <- 
clusterboot(MOFF.2007,B=3,bootmethod=boot,bscompare=TRUE,multipleboot=TRUE,clustermethod=hclust)

I get the error:
Error in if (is.na(n) || n > 65536L) stop("size cannot be NA nor exceed 65536") 
:
  missing value where TRUE/FALSE needed

I have no idea what this means. Does anyone have any suggestions? Also 
suggestions on how to use this function in general would be great as well. Any 
other way to get metrics on the validity of clusters would be great too.
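
An editorial guess at the fix (argument names from memory; please check 
?clusterboot and ?hclustCBI): clusterboot() expects one of fpc's interface 
functions (e.g. hclustCBI) as clustermethod and the bootmethod as a character 
string, which is likely why the call above fails.

library(fpc)
set.seed(1)
x  <- matrix(rnorm(100), ncol = 2)   # stand-in data in place of MOFF.2007
cb <- clusterboot(x, B = 20, bootmethod = "boot",
                  clustermethod = hclustCBI, method = "average", k = 3)
print(cb)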

Cheers,


-
Tyler Hallman, M.S.
Ph.D. Student
The Robinson Lab
Department of Fisheries and Wildlife
Oregon State University Corvallis

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help, bifurcation diagram efficiency

2010-06-24 Thread Tyler Massaro
Hello all -

This code will run, but it bogs down my computer when I run it for finer and
finer time increments and more generations.  I was wondering if there is a
better way to write my loops so that this wouldn't happen.  Thanks!

-Tyler

#

# Bifurcation diagram

# Using Braaksma system of equations

# We have however used a Fourier analysis

# to get a forcing function similar to

# cardiac action potential...

#

require(odesolve)



# We get this s_of_t function from Maple ws

s_of_t = function(t)
{
  (1/10) * (((1/2) + (1/2) * (sin((1/4)*pi*t))/(abs(sin((1/4)*pi*t)))) * (
    6.588315815*sin((1/4)*pi*t) - 1.697435362*sin((1/2)*pi*t) - 1.570296922*sin((3/4)*pi*t) +
    0.3247901958*sin(pi*t) + 0.7962749105*sin((5/4)*pi*t) + 0.07812230515*sin((3/2)*pi*t) -
    0.3424877143*sin((7/4)*pi*t) - 0.1148306748*sin(2*pi*t) + 0.1063696962*sin((9/4)*pi*t) +
    0.02812403009*sin((5/2)*pi*t)))
}


ModBraaksma = function(t, n, p)

{


 dx.dt = (1/0.01)*(n[2]-((1/2)*n[1]^2+(1/3)*n[1]^3))

 dy.dt = -(n[1]+p["alpha"]) + 0.032 * s_of_t(t)

list(c(dx.dt, dy.dt))

}


initial.values = c(0.1, -0.02)


alphamin = 0.01

alphamax = 0.02


alphas = seq(alphamin, alphamax, by = 0.1)


TimeInterval = 100


times = seq(0.001, TimeInterval, by = 0.001)


plot(1, xlim = c(alphamin, alphamax), ylim = c(0, 0.3), type = "n",
     xlab = "Values of alpha",
     ylab = "Approximate loop size for a limit cycle",
     main = "Bifurcation Diagram")


for (i in 1:length(alphas)){

 params = c(alpha=alphas[i])

out = lsoda(initial.values, times, ModBraaksma, params)

X = out[,2]

Y = out[,3]

 for(j in 200:length(times)){

 if (abs(X[j]) < 0.001) {

points(alphas[i], Y[j], pch = ".")

 }

}

}
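
An editorial sketch (not part of the original post): besides lsoda() itself, much of 
the time goes into calling points() once per matching time step inside the nested 
loop. Reusing the objects defined above (alphas, times, initial.values, ModBraaksma), 
the inner j loop can be replaced by one vectorized subset and a single points() call 
per value of alpha.

for (i in seq_along(alphas)) {
  out <- lsoda(initial.values, times, ModBraaksma, c(alpha = alphas[i]))
  X <- out[, 2]
  Y <- out[, 3]
  keep <- seq_along(times) >= 200 & abs(X) < 0.001  # same condition as the j loop
  if (any(keep)) points(rep(alphas[i], sum(keep)), Y[keep], pch = ".")
}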

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Quantmod Error Message

2010-07-12 Thread Tyler Campbell
I am trying to create a model using the Quantmod package in R. I am
using the following string of commands:
> ema<-read.csv(file="ESU0 Jul 7 1 sec data.csv")
> Bid=(ema$Bid)
> twentysell=EMA(Bid,n=1200)
> fortysell=EMA(Bid,n=2400)
> sigup<-ifelse(twentysell>fortysell,1,0)
> sigdn<-ifelse(twentysell<fortysell,1,0)
> specifyModel(Next(sigup)~lag(sigup,1) + Next(sigdn)~lag(sigdn,1), 1:31624)

After this last command, I get this error message:
Error in as.Date.default(x, origin = "1970-01-01") :
  do not know how to convert 'x' to class "Date"

I've thought it was a time series issue, but I have tried converting
the "sigup" and "sigdn" to a time series using
>sigup_ts=ts(sigup)
>sigdn_ts=ts(sigdn)
But the error still comes up. Any help on this issue would be greatly
appreciated.

Thanks,
Tyler Campbell
tyler.campb...@tradeforecaster.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] GLM Starting Values

2010-07-22 Thread Tyler Williamson
Hello,

Suppose one is interested in fitting a GLM with a log link to binomial data.  
How does R choose starting values for the estimation procedure?  Assuming I 
don't supply them.
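
An editorial pointer rather than an authoritative answer: when no start, etastart, 
or mustart is supplied, glm.fit() evaluates the family's 'initialize' expression to 
get starting fitted values, and that expression can be inspected directly.

binomial(link = "log")$initialize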

Thanks,
Tyler
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Dual colour ramps based on pos/neg values

2011-05-10 Thread Tyler Hayes
My apologies for the late reply but I was out of town for a while. The
solution I wound up using is below. Sorry about the text if it didn't
wrap well. You should be able to pick out the code from the comments
though.

Thanks for all the help!

Cheers,

t.


## > Start hack
##
##*** OLD COLOUR STYLES ***
## r2b <- c("#0571B0", "#92C5DE", "#F7F7F7", "#F4A582", "#CA0020") #red to blue
## r2g <- c("Red", "DarkRed", "Green", "Chartreuse")
## w2b <- c("#045A8D", "#2B8CBE", "#74A9CF", "#BDC9E1", "#F1EEF6")
#white to blue
## assign("col.sty", get(color))
## calendar.pal <- colorRampPalette((col.sty), space = "Lab")
##
## *** NEW METHOD FOR ZERO CROSSING RAMP SCALES ***
##
## First low colour is the MIN value; second is closest to 0.0
## First high is the closest to 0.0; second is the MAX value.
## If names are not known to the graphics driver, use HEX values
## I tend to put the extremes as DARK and the low values as MUTED/LIGHT colours
lowColFun  <- colorRampPalette(c("#80","#FF","#FF82AB","#FFE4E1"),
space = "Lab")
highColFun <- colorRampPalette(c("#BDFCC9","#7FFF00","#00EE00","#008000"),
space = "Lab")
## These are hard coded, but should be made available to tune
scaleFac   <- 1.001
ncolors<- 99  # Should be odd for now b/c of tmpXEnd calculations below
## Define middle cutoff values, this could also be arbitrary for
## highlighting specific regions
tmpMid <- c(-0.0001,0.0001)
## Cuts where to put the colors
tmpLowEnd  <- 
seq(from=min(values)*scaleFac,to=tmpMid[1]*scaleFac,length=((ncolors-1)/2))
tmpHighEnd <- 
seq(from=tmpMid[2]*scaleFac,to=max(values)*scaleFac,length=((ncolors-1)/2))
tmpAtVals  <- c(tmpLowEnd,tmpMid,tmpHighEnd)
## Create final values for levelplot
my.at  <- c(tmpLowEnd,tmpMid,tmpHighEnd)
my.col.reg <- c(lowColFun(length(tmpLowEnd)),
rep("black",length(tmpMid)), highColFun(length(tmpHighEnd)) )
my.cuts<- length(my.col.reg)-1
##
## > End hack
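
An editorial, self-contained illustration (not part of the original post) of handing 
the pieces above to lattice::levelplot(); 'values' is made-up data and the colour 
endpoints are named colours rather than the hex codes used above.

library(lattice)
set.seed(1)
values     <- matrix(rnorm(100), 10, 10)
scaleFac   <- 1.001
tmpMid     <- c(-0.0001, 0.0001)
nhalf      <- 49
lowColFun  <- colorRampPalette(c("darkred", "mistyrose"),  space = "Lab")
highColFun <- colorRampPalette(c("honeydew", "darkgreen"), space = "Lab")
tmpLowEnd  <- seq(min(values) * scaleFac, tmpMid[1] * scaleFac, length = nhalf)
tmpHighEnd <- seq(tmpMid[2] * scaleFac,  max(values) * scaleFac, length = nhalf)
my.at      <- c(tmpLowEnd, tmpMid, tmpHighEnd)
my.col.reg <- c(lowColFun(nhalf), "black", highColFun(nhalf))
levelplot(values, at = my.at, col.regions = my.col.reg)
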

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] draw text outside plot boundaries

2011-06-06 Thread Tyler Rinker

Erik,
 
To add to what Peter said...
I created this little function for clicking text anywhere on the plot (I probably 
stole the idea from a listserv or Dalgaard's book or someplace like that).  Anyway, 
it is helpful to me and may be of use to you too.  It's very basic, but I use it a 
ton.  You can modify it to suit your needs.
 
#TEXT CLICK FUNCTION
textClick<-function(express,col="black",cex=NULL){
par(mar = rep(0, 4),xpd=NA)
text(locator(1),express,col=col,cex=cex)
}
 
#EXAMPLE
frame()
par(mfrow=c(2,2))
with(mtcars,plot(mpg~cyl));with(mtcars,plot(mpg~cyl))
with(mtcars,plot(mpg~cyl));with(mtcars,plot(mpg~cyl))
textClick(expression(sum((bar(X)-X^2))),"pink",.5)
 
Cheers
Tyler
 
> Date: Mon, 6 Jun 2011 15:09:29 -0700
> From: ehl...@ucalgary.ca
> To: e...@q32.com
> CC: r-help@r-project.org
> Subject: Re: [R] draw text outside plot boundaries
> 
> On 2011-06-06 08:44, Erik Aronesty wrote:
> > i'd like to use the text() function to annotate some points, but the
> > labels get cropped, if the point is on the right
> >
> > is there a way to prevent this, and tell the text() function to allow
> > writing outside the boundaries of the current plot?
> 
> Go to ?par and check out the 'xpd' parameter.
> 
> Peter Ehlers
> 
> >
> > i don't mind if it looks "messy" and steps on the margin a bit.
> >
> > - erik
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Score Test Function

2011-06-11 Thread Tyler Rinker

Greeting R Community,
 
I'm trying to learn Logistic Regression on my own and am using An Introduction 
to Logistic Regression Analysis and Reporting (Peng, C., Lee, K., & Ingersoll, 
G., 2002). This article uses a score test statistic as a measure of overall fit for 
a logistic regression model.  The author calculates this statistic using SAS.  
I am looking for an [R] function that can compute this statistic and a p-value 
(please don't critique the statistic itself).  As far as I can tell, glm.scoretest() 
from library(statmod) is the only function for this, but it does not give a 
p-value and does not appear to correspond with the author's example.  Some chat on 
the discussion board a while back indicated that this is a well-known test (there 
is an earlier thread on it) but difficult to calculate in [R].  I'm wondering if 
there is a way to do this fairly easily in [R] at this time?
 
I am working with the data set from the study and am looking to get close to 
the 9.5177 score test stat and .0086 p-value with 2 degrees of freedom.
Below is the code for the data set and some descriptives so you can see the 
data set is the same as the one from the study.  I highlighted my attempt to get a 
score test statistic and am curious if this is it (minus the p-value).
 
#BEGINNING OF CODE
id<-factor(1:189)
gender<-factor(c("Boy","Boy","Girl","Girl","Girl","Boy","Girl","Boy","Girl","Girl","Boy","Boy","Boy","Boy","Girl","Girl","Boy","Girl","Boy","Girl","Boy","Girl","Girl",
"Boy","Boy","Girl","Girl","Girl","Boy","Boy","Boy","Girl","Boy","Girl","Boy","Girl","Girl","Girl","Girl","Girl","Boy","Girl","Boy","Girl","Girl","Girl",
"Girl","Boy","Girl","Boy","Girl","Boy","Girl","Girl","Boy","Boy","Boy","Boy","Boy","Boy","Boy","Boy","Boy","Girl","Boy","Boy","Boy","Boy","Girl","Boy",
"Girl","Boy","Boy","Boy","Girl","Boy","Girl","Girl","Boy","Girl","Girl","Girl","Boy","Boy","Boy","Boy","Boy","Girl","Girl","Girl","Girl","Boy","Girl",
"Girl","Girl","Girl","Girl","Girl","Girl","Girl","Girl","Girl","Boy","Girl","Boy","Boy","Girl","Girl","Girl","Boy","Girl","Boy","Girl","Girl","Girl","Boy",
"Girl","Boy","Girl","Boy","Girl","Boy","Girl","Girl","Girl","Girl","Girl","Girl","Girl","Girl","Boy","Girl","Boy","Boy","Boy","Boy","Boy","Boy","Boy","Girl",
"Girl","Girl","Boy","Boy","Girl","Girl","Boy","Girl","Boy","Boy","Boy","Girl","Girl","Girl","Girl","Boy","Boy","Girl","Boy","Boy","Girl","Boy","Boy","Boy",
"Boy","Girl","Boy","Boy","Girl","Girl","Boy","Boy","Boy","Boy","Boy","Girl","Girl","Girl","Girl","Boy","Boy","Boy","Girl","Boy","Girl","Boy","Boy","Boy","Girl"))
reading.score<-c(91.0,77.5,52.5,54.0,53.5,62.0,59.0,51.5,61.5,56.5,47.5,75.0,47.5,53.5,50.0,50.0,49.0,59.0,60.0,60.0,
60.5,50.0,101.0,60.0,60.0,83.5,61.0,75.0,84.0,56.5,56.5,45.0,60.5,77.5,62.5,70.0,69.0,62.0,107.5,54.5,92.5,94.5,65.0,
80.0,45.0,45.0,66.0,66.0,57.5,42.5,60.0,64.0,65.0,47.5,57.5,55.0,55.0,76.5,51.5,59.5,59.5,59.5,55.0,70.0,66.5,84.5,
57.5,125.0,70.5,79.0,56.0,75.0,57.5,56.0,67.5,114.5,70.0,67.0,60.5,95.0,65.5,85.0,55.0,63.5,61.5,60.0,52.5,65.0,87.5,
62.5,66.5,67.0,117.5,47.5,67.5,67.5,77.0,73.5,73.5,68.5,55.0,92.0,55.0,55.0,60.0,120.5,56.0,84.5,60.0,85.0,93.0,60.0,
65.0,58.5,85.0,67.0,67.5,65.0,60.0,47.5,79.0,80.0,57.5,64.5,65.0,60.0,85.0,60.0,58.0,61.5,60.0,65.0,93.5,52.5,42.5,
75.0,48.5,64.0,66.0,82.5,52.5,45.5,57.5,65.0,46.0,75.0,100.0,77.5,51.5,62.5,44.5,51.0,56.0,58.5,69.0,65.0,60.0,65.0,
65.0,40.0,55.0,52.5,54.5,74.0,55.0,60.5,50.0,48.0,51.0,55.0,93.5,61.0,52.5,57.5,60.0,71.0,65.0,60.0,55.0,60.0,77.0,
52.5,95.0,50.0,47.5,50.0,47.0,71.0,65.0)
reading.recommendation<-as.factor(c(rep("no",130),rep("yes",59)))
DF<-data.frame(id,gender,reading.score,reading.recommendation)
head(DF)
#=
#  DESCRIPTIVES
#=
table(DF[2:4])  #BREAKDOWN OF SCORES BY GENDER AND REMEDIAL READING RECOMMENDATIONS
table(DF[c(2)])  #TABLE OF GENDER
print(table(DF[c(2)])/sum(table(DF[c(4)]))*100,digits=4)  #PERCENT GENDER BREAKDOWN
table(DF[c(4)])  #TABLE RECOMMENDED FOR REMEDIAL READING
print(table(DF[c(4)])/sum(table(DF[c(4)]))*100,digits=4)  #Probability of Recommended
table(DF[c(2,4)])  #TABLE OF GIRLS AND BOYS RECOMMENDED FOR REMEDIAL READING
print(prop.table(table(DF[c(2,4)]),1)*100,digits=4)  #Probability of Recommended
#=
#ANALYSIS
#=
(mod1<-glm(reading.recommendation~reading.score+gender,family=binomial,data=DF))
 
library(statmod)
with(DF,glm.scoretest(mod1, c(0,2,3), dispersion=NULL))^2
#If I move the decimal over 2 to the right I get close to the 9.5177 from the 
study
(with(DF,glm.scoretest(mod1, c(0,2,3), dispersion=NULL))^2)*100 #is this it?
#END OF CODE
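
An editorial addition (not part of the original post): newer versions of R 
(2.14.0 or later, if memory serves) let anova.glm() compute Rao's score test 
directly, which is the overall "score test" SAS reports. This reuses DF and 
mod1 from the code above.

mod0 <- glm(reading.recommendation ~ 1, family = binomial, data = DF)
anova(mod0, mod1, test = "Rao")  # Rao score statistic with df and p-value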
 
I am running R 2.13.0 in a windows 7 machine.
 
Peng, C., Lee, K., & Ingersoll, G. (2002). An Introduction to Logistic 
Regression Analy

[R] Somers Dyx

2011-06-12 Thread Tyler Rinker

Hello R Community,
 
I'm continuing to work through logistic regression (thanks for all the help on 
score test) and have come up against a new opposition.
 
I'm trying to compute Somers Dyx as some suggest this is preferable to 
Somers Dxy (Demaris, 1992).  I have searched the [R] archives to no avail 
for a function or code to compute Dyx (not Dxy).  The overview of Hmisc has 
mention of Dyx for the rcorr.cens function but this appears to be a misprint 
because the manual states the function finds Dxy.  Peng and So (1998) state 
that the Dyx is easily calculated in SAS (which tells me the same is possible 
for [R]).  
 
Yang, K., Miller, G. J., & Miller, G. state that:
 
(Tau-b)^2=Somers Dxy * Somers Dyx 
 
…so maybe an approach would be to write a function that is:
 
Somers Dyx<-(Tau-b)^2/Somers Dxy  
 
I just don't want to waste time if this is incorrect logic and/or there's an 
easier way to calculate this thing; perhaps there’s a ‘golden’ function already 
created in an [R] package that I'm overlooking.
 
Thanks in advance,
Tyler 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Help writing a Scheffe Contrast function for R

2011-03-09 Thread Tyler Rinker








Hello,
 
As a new user of R (less than a month) I have got my hands on several books 
and am poring through the net looking for help in gaining understanding of 
this powerful tool.  I am becoming more proficient with using basic functions 
to conduct basic statistics.  I am now looking to learn how to write code.  
After a few small worthless functions I decided to try to create a function 
that was of real use, a Scheffe test for doing post hoc comparisons.  I am not 
aware of any functions written for this though there may be some.  I wrote the 
function and have run each line of the function through R.  It seems to work 
(though can use some improvements later on by splicing code from aov or anova 
functions to eliminate the need for inputting the MSw).  The problem is when i 
try to run the code as a whole I get several error messages that I don't know 
how to fix (despite searching).  The output section seems to be error riddled.  
I wanted the output to display a message of not sig or sig as well as the 
means and n for both groups and the total N, as well as the 
t-value and critical t.  I am using a formula supplied by a professor for 
Scheffe (when comparing two groups): 
 
critical value--> k = sqrt((J-1) * F(J-1, N-J, alpha))
test stat--> (mean1 - mean2) / sqrt(MSw * ((1^2)/n1 + (1^2)/n2))
 
I am a windows 7 user with the latest version of R.
 
My Question:  What do I need to do to correct the three error codes R gives me 
and make the function run correctly?
 
This is the session, code and R's error message when supplied with data: 
> rm(list=ls())
> dat1<-read.table("dat1.csv", header=TRUE, sep=",",na.strings="999")
> attach(dat1)
> dat1
   student  rti score
1        1   ns     2
2        2   ns     5
3        3   ns     2
4        4   ns    11
5        5 wk2o    10
6        6 wk2o    11
7        7 wk2o     7
8        8 wk2o    12
9        9 wk5o    12
10      10 wk5o    12
11      11 wk2w     5
12      12 wk2w     6
13      13 wk2w    12
14      14 wk5w     5
15      15 wk5w     6
16      16 wk5w    13
> anova(lm(score~rti))
Analysis of Variance Table
Response: score
          Df  Sum Sq Mean Sq F value Pr(>F)
rti        4  83.771  20.943  1.7107 0.2174
Residuals 11 134.667  12.242
> #so MSw is 12.242
> 
> #my code for scheffe's post hoc comparison
> 
> scheffe<- function(IV,DV,data,group1,group2,MSw,alpha) {
+  result<-0
+  J<-length(levels(IV))
+  d1<-subset(data, IV == "group1") 
+  d2<-subset(data, IV == "group2")
+  g1<-d1$DV
+  g2<-d2$DV
+  y.1<-mean(g1)
+  y.2<-mean(g2) 
+  n1<- length(g1)
+  n2<- length(g2)
+  N<-length(IV)
+  psi.hat<-y.1-y.2
+  se.psi.hat<- sqrt(MSw*((1/n1)+(1/n2)))
+  t<- psi.hat/se.psi.hat
+  t.compare<-abs(t)
+  k<-sqrt((J-1)*( qf((1-alpha),(J-1),(N-J))))
+   if(t.compare > k) return<-c("reject H0") else result<- c("accept H0")
+   return(list(cbind(result)), Mean.Group.1 = y.1,n.for.1=n1,   
+   Mean.Group.2=y.2,n.for.2=n2,N.Data=N,tValue=t,critical.value=k)
+  }
> #
> #doesn't seem to be a problem thus far
> #let's enter some data
> 
> #scheffe(IV,DV,data,group1,group2,MSw,alpha)
> 
> scheffe(rti,score,dat1,ns,wk5o,12.242,.05)
Error in if (t.compare > k) return <- c("reject H0") else result <- c("accept 
H0") : 
  missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In mean.default(g1) : argument is not numeric or logical: returning NA
2: In mean.default(g2) : argument is not numeric or logical: returning NA
> 
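
An editorial sketch (not a reply from the thread): two of the errors come from 
subsetting with the literal strings "group1"/"group2" (IV == "group1") and from 
d1$DV -- the $ operator does not substitute a variable's value. Working on plain 
vectors sidesteps both; this reuses dat1 from above.

scheffe.means <- function(DV, IV, group1, group2) {
  g1 <- DV[IV == group1]  # uses the value passed in, not the string "group1"
  g2 <- DV[IV == group2]
  c(mean1 = mean(g1), n1 = length(g1), mean2 = mean(g2), n2 = length(g2))
}
scheffe.means(dat1$score, dat1$rti, "ns", "wk5o")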

  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] is gzcon w/ urls not implemented or used differently on linux?

2011-03-11 Thread Tyler Backman
I wrote some code which reads a gzipped text file directly from the web with 
gzcon(url()) and it works perfectly on OSX, but I cannot get it to work on 
linux at all, trying several different R versions and linux distributions. Any 
ideas?

Here's an example of my code:
z <- 
gzcon(url("ftp://ftp-private.ncbi.nlm.nih.gov/pubchem/.fetch/8897497837079742771.sdf.gz"))
sdf <- readLines(z)
close(z)

On linux it produces the following error:
Error in readLines(z) : cannot open the connection

The non-gzipped version works flawlessly on linux:
con <- url("http://chemmine.ucr.edu/ChemMineToolsV2/static/example_db.sdf")
sdf <- readLines(con)
close(con)

As an analog, gzcon does work with non-url files on linux:
system("wget 
ftp://ftp-private.ncbi.nlm.nih.gov/pubchem/.fetch/8897497837079742771.sdf.gz";)
z <- gzcon(file("8897497837079742771.sdf.gz", "rb"))
sdf <- readLines(z)
close(z)

But this doesn't help me, because I need my code to be cross platform!

> sessionInfo()
R version 2.12.2 (2011-02-25)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
[1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C  
[3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=C  LC_MESSAGES=en_US.UTF-8   
[7] LC_PAPER=en_US.UTF-8   LC_NAME=C 
[9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C   

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base 

> system("uname -a")
Linux biocluster 2.6.26-2-openvz-amd64 #1 SMP Tue Jan 25 06:04:33 UTC 2011 
x86_64 GNU/Linux

Thank you,
Tyler William H Backman
Cheminformatics Programmer
Department of Botany and Plant Sciences
E-mail: tyler.back...@ucr.edu
1207E Genomics Building
University of California
Riverside, CA 92521

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Passing a character argument onto a function

2011-03-11 Thread Tyler Rinker

I am a new R user and am beginning to employ function creation in my 
statistical work.  I am running into a problem when I want to pass on a 
character (text) to the function as an argument.  I have a simple example below 
to demonstrate this problem.  I cannot seem to find a fix in my R book or in 
the blog posts.  I'm sure this has been covered before but my newbie status 
means I lack the R vocabulary to even search for this problem (I've tried for a 
few days to no avail).  Someone has already attempted to explain this to me.  I 
learn best by seeing.  Could someone rewrite my code so that the function 
works.  The function is very simple as is the data set so it should be pretty 
easy for an experienced R user to correct.  The problem is that R doesn't 
transfer the "blue" subgroup from the argument to the function.  I am excited 
with the potential of R and look forward to your help.
 
I am a Windows user running R 2.12.2

 
CODE for TEST FUNCTION
> TEST<-function(DV,IV,group1) {
+  g1<-DV[IV=="group1"]
+  p<-mean(g1)
+ list(g1,p)
+ }

R's OUTPUT
> TEST(frequency,color,blue)
[[1]]
integer(0)
[[2]]
[1] NaN

 
The DATA FRAME
TEST<-read.table("TEST.csv", header=TRUE, sep=",",na.strings="999")
> attach(TEST)

  color frequency
1  blue 3
2  blue 4
3  blue 3
4 green 5
5 green 2
6 green 4
7 green 5
8 green 1
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] AOV() may misslabel random effects.

2011-03-15 Thread Tyler Rinker

Mr. Giles Crane,
 
I am new to R (only a month in).  My response is as best as I understand the 
workings of R (so if I'm wrong, more experienced people please help me out).
 
AOV is not really appropriate for an unbalanced model.  This is where you can 
rely on the lm() function using these steps:
(I find it easiest to show you with a real data set)
  student gender male female computer
1       1   male  7.0    9.5      6.0
2       2   male  7.5    7.0      4.0
3       3   male  6.0    7.0      2.5
4       4   male  4.0    8.0      3.0
5       5   male  7.5   10.0      3.5
.
.
n  etc...
 
1)  Create a vector of levels for the measurement  points (1 for each 
measurement point):
 
meals <- c(1, 2, 3)
 
Where meals is the new vector name (a factor), and the numbers represent each 
measurement point.
 
2)  Create a within groups measurement point factor to house the levels you 
just created (this will be used later in our data frame(matrix style)) and in 
our Anova anaylsis):
 
meal.factor<- as.factor(meals)
 
Where meal.factor is the new factor with n levels to house our levels that 
describe our n numeric columns(measurement points).
 
3)  Create a matrix style data frame from the factor and levels that will 
be used to describe our numeric columns(measurement points):
 
meal.frame <- data.frame(meal.factor)
 
4)  Now create a bound vector containing the n numeric columns for later 
use in the linear model:
 
meal.bind<-cbind(breakfast , lunch, dinner)
 
5)  Create a linear model with the bound vector you just created.
 
meal.model<-lm(meal.bind~1)
 
6)  Use the Anova function from the car package to analyze our data (notice 
we are using the measurement point matrix style data frame and corresponding 
within groups factors as well as the linear model we just created):

 
analysis3 <- Anova(meal.model, idata = meal.frame, idesign = ~meal.factor) 
 
Note: we could have added the argument ,type=”III” but the default of Anova is 
to switch from type II to type III SS when there is only one intercept
 
7)  Now create a summary of the anova tables and information:
   summary(analysis3)
 
NOTE: Anova (from car) will give you type II SS, and you can also specify type 
III SS using the type="III" argument at the end of step 6.  The base anova() 
function gives you type I SS.  Research each one of these SS and determine 
what works best for you.
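
An editorial, self-contained run of steps 1-7 (the breakfast/lunch/dinner scores 
are simulated, since the original data are not included in the post):

library(car)
set.seed(1)
breakfast <- rnorm(20, mean = 6)
lunch     <- rnorm(20, mean = 7)
dinner    <- rnorm(20, mean = 4)
meal.factor <- as.factor(c(1, 2, 3))           # one level per measurement point
meal.frame  <- data.frame(meal.factor)
meal.bind   <- cbind(breakfast, lunch, dinner) # wide: one column per measurement point
meal.model  <- lm(meal.bind ~ 1)
analysis3   <- Anova(meal.model, idata = meal.frame, idesign = ~meal.factor)
summary(analysis3)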

 
 
 
 
> From: pda...@gmail.com
> Date: Mon, 14 Mar 2011 20:58:17 +0100
> To: gilescr...@verizon.net
> CC: r-help@r-project.org
> Subject: Re: [R] AOV() may misslabel random effects.
> 
> 
> On Mar 14, 2011, at 17:57 , Giles Crane wrote:
> 
> > Greetings,
> > 
> > The aov() function may mislabel
> > the random effects as in the example below:
> > Has anybody else noticed this?
> 
> What's "mislabeled" about it??? Looks like you have an unbalanced design (in 
> which case, aov() may be the wrong tool.)
> 
> -pd
> 
> > 
> > Cordially,
> > Giles Crane, MPH, ASA, NJPHA
> > gilescr...@verizon.net
> > 
> > > m2
> > 
> > Call:
> > aov(formula = y ~ ap + pe + Error(ju), data = d)
> > 
> > Grand Mean: 77.50667
> > 
> > Stratum 1: ju
> > 
> > Terms:
> > ap
> > Sum of Squares 4322.538
> > Deg. of Freedom 12
> > 
> > 13 out of 25 effects not estimable
> > Estimated effects may be unbalanced
> > 
> > Stratum 2: Within
> > 
> > Terms:
> > ap pe Residuals
> > Sum of Squares 7047.885 255.034 2981.290
> > Deg. of Freedom 25 2 35
> > 
> > Residual standard error: 9.229285
> > Estimated effects may be unbalanced
> > 
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> -- 
> Peter Dalgaard
> Center for Statistics, Copenhagen Business School
> Solbjerg Plads 3, 2000 Frederiksberg, Denmark
> Phone: (+45)38153501
> Email: pd@cbs.dk Priv: pda...@gmail.com
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How do I delete multiple blank variables from a data frame?

2011-03-19 Thread Tyler Rinker

I actually prefer to do this portion of the work (data prep) inside of Excel.  
When you export the data as a csv doc the NA's will be in the Excel 
spreadsheet.  Now the search and/or the search-and-replace options become very 
handy.  There is probably a better way in [R] though.
 
Tyler

 
> Date: Fri, 18 Mar 2011 18:35:20 -0700
> From: jwiley.ps...@gmail.com
> To: ritacarre...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] How do I delete multiple blank variables from a data frame?
> 
> Hi Rita,
> 
> This is far from the most efficient or elegant way, but:
> 
> ## two column data frame, one all NAs
> d <- data.frame(1:10, NA)
> ## use apply to create logical vector and subset d
> d[, apply(d, 2, function(x) !all(is.na(x)))]
> 
> I am just apply()ing to each column (the 2) of d, the function
> !all(is.na(x)) which will return FALSE if all of x is missing and TRUE
> otherwise. The result is a logical vector the same length as the
> number of columns in d that is used to subset only the d columns with
> at least some non-missing values. For documentation see:
> 
> ?apply
> ?is.na
> ?all
> ?"["
> ?Logic
> 
> HTH,
> 
> Josh
> 
> On Fri, Mar 18, 2011 at 3:35 PM, Rita Carreira  
> wrote:
> >
> > Dear List Members,I have 55 data frames, each of which with 272 variables 
> > and 267 observations. Some of these variables are blanks but the blanks are 
> > not the same for every data frame. I would like to write a procedure in 
> > which I import a data frame, see which variables are blank, and delete 
> > those variables. My data frames have variables named P1 to P136 and Q1 to 
> > Q136.
> > I have a couple of questions regarding this issue:
> > 1) Is a loop an efficient way to address this problem? If not, what are my 
> > alternatives and how do I implement them?2) I have been playing with a 
> > single data frame to try to figure out a way of having R go through the 
> > columns and see which ones it should delete. I have figured out how to 
> > delete rows with missing data (newdata <- na.omit(olddata)) but how do I do 
> > it for columns???
> > Thank you very much for your help and have a great weekend!
> > Rita  "If you think education is 
> > expensive, try ignorance"--Derek Bok
> >
> >
> >
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
> 
> 
> -- 
> Joshua Wiley
> Ph.D. Student, Health Psychology
> University of California, Los Angeles
> http://www.joshuawiley.com/
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using the Mahalanobis Function

2011-03-19 Thread Tyler Rinker




Hello all,
 
I am a 2-month newbie to R and am stumped.  I have a data set that I've run 
multivariate stats on using the manova function (I included the data set).  Now 
it comes time for a table of effect sizes with significance.  The univariate 
tests are easy.  Where I run into trouble filling in the table of effect sizes 
is the Mahalanobis D as an effect size.  I've included the table so you can see 
which groups I'm comparing.  I know there's a great function for filling in D1 
and D2: mahalanobis(x, center, cov, inverted=FALSE, ...)

The problem is I lack the knowledge, around cluster analysis, of what goes into 
the function for [x, center, & cov].  I have only a basic understanding of this 
topic (a picture of a measured distance between two clusters on a graph).  
Could someone please use my data set or a similar one (a multivariate with at 
least 3 outcome variables) and actually run this function (mahalanobis).  Then 
please send me your output from [R] starting from the data set all the way to 
the statistic.  PS: I know the Mahalanobis D should be D1=3.93 & D2=3.04.
 
The Y, M, O in the data set stand for young, middle, old.
 
I am running the latest version of R on a windows 7 machine.  
 





Effect Sizes

Contrasts      |  Dependent Variables
                  Friends        Parents        Strangers      All
Young-Middle      -1.8768797*    -3.2842941*    -1.1094004*    D1
Middle-Old         1.34900725*    1.54919532*   -2.0107882*    D2
 
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
1       f   y                8                7                  8
2       f   y                5                6                  8
3       f   y                6                3                  7
4       f   y                5                5                  7
5       f   m               15               13                 10
6       f   m               13               11                  9
7       f   m               12               12                  9
8       f   m               18               10                  7
9       f   o               11               11                 10
10      f   o               10                4                 12
11      f   o               12                9                 12
12      f   o                9                8                 14
13      m   y               13                7                  7
14      m   y                9                5                 10
15      m   y               11                4                  4
16      m   y               15                3                  4
17      m   m               14               12                  8
18      m   m               10               15                 11
19      m   m               12               11                  8
20      m   m               10                9                  9
21      m   o               10                8                 11
22      m   o               13               11                 13
23      m   o                9                8                 12
24      m   o                7                9                 16
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using the Mahalanobis Function

2011-03-20 Thread Tyler Rinker

My apologies:
 
The table of effects did not come through as I had intended it to.  Here it is 
reformatted.
Again, I would like to see someone actually run mahalanobis() for this data set 
to arrive at D1 and D2.  I do not know what exactly (after reading the manual) 
goes in for x, center, or cov.  Seeing an actual [R] printout of it being done 
(from start to finish) would go a long way towards my learning.
 
 
 
Effect Sizes 
Contrasts      |  Dependent Variables
                  Friends        Parents        Strangers      All

Young-Middle      -1.8768797*    -3.2842941*    -1.1094004*    D1
Middle-Old         1.34900725*    1.54919532*   -2.0107882*    D2
 
 
 
 
Gender, Age, Friend.Agression, Parent.Agression, Stranger.Agression
1 f y 8 7 8
2 f y 5 6 8
3 f y 6 3 7
4 f y 5 5 7
5 f m 15 13 10
6 f m 13 11 9
7 f m 12 12 9
8 f m 18 10 7
9 f o 11 11 10
10 f o 10 4 12
11 f o 12 9 12
12 f o 9 8 14
13 m y 13 7 7
14 m y 9 5 10
15 m y 11 4 4
16 m y 15 3 4
17 m m 14 12 8
18 m m 10 15 11
19 m m 12 11 8
20 m m 10 9 9
21 m o 10 8 11
22 m o 13 11 13
23 m o 9 8 12
24 m o 7 9 16

 
> From: tyler_rin...@hotmail.com
> To: r-help@r-project.org
> Date: Sun, 20 Mar 2011 01:53:47 -0400
> Subject: [R] Using the Mahalanobis Function
> 
> 
> 
> 
> 
> Hello all,
> 
> I am a 2 month newbie to R and am stumped. I have a data set that I've run 
> multivariate stats on using the manova function (I included the data set). 
> Now it comes time for a table of effect sizes with significance. The 
> univariate tests are easy. Where I run into trouble filling in the table of 
> effect sizes is the Mahalanobis D as an effect size. I've included the table 
> so you can see what group's I'm comparing. I know there's a great function 
> for filling in ?1 and ?2 : mahalanobis(x, center, cov, inverted=FALSE, ...)
> 
> The problem is I lack the knowledge around cluster analysis of what goes into 
> the function for [x, center, & cov.] I have only a basic understanding of 
> this topic (a picture of a measured distance between two clusters on a 
> graph). Could someone please use my data set or a similar one (a multivariate 
> with at least 3 outcome variables) and actually run this function 
> (mahalanobis). Then please send me your output from [R] starting from the 
> data set all the way to the statistic. PS I know the Mahalanobis D should be 
> ?1=3.93 & ?2=3.04.
> 
> The Y,M,O in the data set stands for young, middle, old.
> 
> I am running the latest version of R on a windows 7 machine. 
> 
> 
> 
> 
> 
> 
> Effect Sizes
> 
> Contrasts      |  Dependent Variables
>                   Friends        Parents        Strangers      All
> Young-Middle      -1.8768797*    -3.2842941*    -1.1094004*    D1
> Middle-Old         1.34900725*    1.54919532*   -2.0107882*    D2
> 
> Gender Age Friend.Agression Parent.Agression Stranger.Agression
> 1 f y 8 7 8
> 2 f y 5 6 8
> 3 f y 6 3 7
> 4 f y 5 5 7
> 5 f m 15 13 10
> 6 f m 13 11 9
> 7 f m 12 12 9
> 8 f m 18 10 7
> 9 f o 11 11 10
> 10 f o 10 4 12
> 11 f o 12 9 12
> 12 f o 9 8 14
> 13 m y 13 7 7
> 14 m y 9 5 10
> 15 m y 11 4 4
> 16 m y 15 3 4
> 17 m m 14 12 8
> 18 m m 10 15 11
> 19 m m 12 11 8
> 20 m m 10 9 9
> 21 m o 10 8 11
> 22 m o 13 11 13
> 23 m o 9 8 12
> 24 m o 7 9 16
> 
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using the mahalanobis( ) function

2011-03-21 Thread Tyler Rinker

Hello all,
 
I am a 2 month newbie to R and am stumped.  I have a data set that I've run 
multivariate stats on using the manova function (I included the data set).  Now 
it comes time for a table of effect sizes with significance.  The univariate 
tests are easy.  Where I run into trouble filling in the table of effect sizes 
is the Mahalanobis D as an effect size.  I've included the table so you can see 
which groups I'm comparing.  I know there's a great function for filling in D1 
and D2: mahalanobis(x, center, cov, inverted=FALSE, ...)  I need to turn the 
subgroups' scores for y (young), m (middle) and o (old) into clusters.

The problem is I lack the knowledge around cluster analysis of what goes into 
the function for [x, center, & cov.]  I have only a basic understanding of this 
topic (a picture of a measured distance between two clusters on a graph).  I 
think I have to turn the data into a matrix but lack direction.  Could someone 
please use my data set or a similar one (a multivariate with at least 3 outcome 
variables) and actually run this function (mahalanobis).  Then please send me 
your output from [R] starting from the data set all the way to the statistic.  
PS: I know the Mahalanobis D should be D1=3.93 & D2=3.04.
 
I’ve read and reread the manual around mahalanobis() and have searched through 
the list serve for information.  The info is for people who already have a 
grasp of how to implement this concept.
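For what it's worth, the kind of call I think I am after looks something like 
the sketch below, where aym and amm stand for the matrices of the three outcome 
columns for the young and middle groups and Sp is a pooled within-group 
covariance (this is only a guess at the setup; I have not verified it against 
the D1=3.93 value):

ym <- colMeans(aym)                            # centroid of the young group
mm <- colMeans(amm)                            # centroid of the middle group
Sp <- ((nrow(aym) - 1) * cov(aym) + (nrow(amm) - 1) * cov(amm)) /
      (nrow(aym) + nrow(amm) - 2)              # pooled within-group covariance
sqrt(mahalanobis(ym, center = mm, cov = Sp))   # mahalanobis() returns D^2, so take the square root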
 
I am running the latest version of R on a windows 7 machine.  
 
Effect Sizes 
Contrasts      |  Dependent Variables
                  Friends        Parents        Strangers      All

Young-Middle      -1.8768797*    -3.2842941*    -1.1094004*    D1

Middle-Old         1.34900725*    1.54919532*   -2.0107882*    D2

 
(sorry the column names and values don’t line up)
Age  Friend Agression Parent Agression Stranger Agression
y                   8                7                  8
y                   5                6                  8
y                   6                3                  7
y                   5                5                  7
m                  15               13                 10
m                  13               11                  9
m                  12               12                  9
m                  18               10                  7
o                  11               11                 10
o                  10                4                 12
o                  12                9                 12
o                   9                8                 14
y                  13                7                  7
y                   9                5                 10
y                  11                4                  4
y                  15                3                  4
m                  14               12                  8
m                  10               15                 11
m                  12               11                  8
m                  10                9                  9
o                  10                8                 11
o                  13               11                 13
o                   9                8                 12
o                   7                9                 16
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using the mahalanobis( ) function

2011-03-21 Thread Tyler Rinker

This is what I've tried so far and I just can't get it.  I know I want a value of 
3.93 (for Age = y vs. m) using Mahalanobis D as an effect size for a follow-up 
to a MANOVA:
 
age.frame<-data.frame(Age, Friend.Agression, Parent.Agression, 
Stranger.Agression)
> age.frame
   Age Friend.Agression Parent.Agression Stranger.Agression
1    y                8                7                  8
2    y                5                6                  8
3    y                6                3                  7
4    y                5                5                  7
5    m               15               13                 10
6    m               13               11                  9
7    m               12               12                  9
8    m               18               10                  7
9    o               11               11                 10
10   o               10                4                 12
11   o               12                9                 12
12   o                9                8                 14
13   y               13                7                  7
14   y                9                5                 10
15   y               11                4                  4
16   y               15                3                  4
17   m               14               12                  8
18   m               10               15                 11
19   m               12               11                  8
20   m               10                9                  9
21   o               10                8                 11
22   o               13               11                 13
23   o                9                8                 12
24   o                7                9                 16
 
ay<-subset(nd,Age=="y")
> ay
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
1       f   y                8                7                  8
2       f   y                5                6                  8
3       f   y                6                3                  7
4       f   y                5                5                  7
13      m   y               13                7                  7
14      m   y                9                5                 10
15      m   y               11                4                  4
16      m   y               15                3                  4
> am<-subset(nd,Age=="m")
> am
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
5       f   m               15               13                 10
6       f   m               13               11                  9
7       f   m               12               12                  9
8       f   m               18               10                  7
17      m   m               14               12                  8
18      m   m               10               15                 11
19      m   m               12               11                  8
20      m   m               10                9                  9
> ao<-subset(nd,Age=="o")
> ao
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
9       f   o               11               11                 10
10      f   o               10                4                 12
11      f   o               12                9                 12
12      f   o                9                8                 14
21      m   o               10                8                 11
22      m   o               13               11                 13
23      m   o                9                8                 12
24      m   o                7                9                 16
> amm<-cbind(am$Friend.Agression, am$Parent.Agression,am$Stranger.Agression)
> amm
     [,1] [,2] [,3]
[1,]   15   13   10
[2,]   13   11    9
[3,]   12   12    9
[4,]   18   10    7
[5,]   14   12    8
[6,]   10   15   11
[7,]   12   11    8
[8,]   10    9    9
> aym<-cbind(ay$Friend.Agression, ay$Parent.Agression,ay$Stranger.Agression)
> aym
     [,1] [,2] [,3]
[1,]    8    7    8
[2,]    5    6    8
[3,]    6    3    7
[4,]    5    5    7
[5,]   13    7    7
[6,]    9    5   10
[7,]   11    4    4
[8,]   15    3    4
> aom<-cbind(ao$Friend.Agression, ao$Parent.Agression,ao$Stranger.Agression)
> aom
     [,1] [,2] [,3]
[1,]   11   11   10
[2,]   10    4   12
[3,]   12    9   12
[4,]    9    8   14
[5,]   10    8   11
[6,]   13   11   13
[7,]    9    8   12
[8,]    7    9   16
 
 
> mean(aym)
[1] 6.958333
> mean(amm)
[1] 11.16667
> mean(aom)
[1] 10.375
 
> ascores<-cbind(Friend.Agression, Parent.Agression, Stranger.Agression)
> ascores
      Friend.Agression Parent.Agression Stranger.Agression
 [1,]                8                7                  8
 [2,]                5                6                  8
 [3,]                6                3                  7
 [4,]                5                5                  7
 [5,]               15               13                 10
 [6,]  

Re: [R] Looking for a repeated measure two groups comparison and a two factor ANOVA in Circular distribution

2011-03-22 Thread Tyler Rinker

I don't know if this is what you're looking for, but it describes how to do a 
two-way repeated measures analysis with R.  Your problem may be different and I 
lack the stats knowledge to know that.  If that's the case I apologize:

http://rtutorialseries.blogspot.com/2011/02/r-tutorial-series-two-way-repeated.html
 
Tyler
 
From: tintin...@hotmail.com
To: r-help@r-project.org
Date: Tue, 22 Mar 2011 15:56:27 +0100
Subject: [R] Looking for a repeated measure two groups comparison and a two 
factor ANOVA in Circular distribution

 
Hi,
I am looking for a way to study some phase data with a circular distribution 
measured in rad.  I would like to do a two-way ANOVA (if possible mixed, with 
inter- and intra-subject factors).  I haven't found a package that does that in R.
Does somebody know if there is one, or how to do the analysis?  Thanks in advance
J Toledo
CNDR UPenn
[[alternative HTML version deleted]]
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using the mahalanobis( ) function

2011-03-22 Thread Tyler Rinker

I want to calculate the Mahalanobis D as an effect size for a follow-up to a 
MANOVA.  I think I'm getting further but still not there.  No one has weighed 
in yet to lend help and I would much appreciate it, particularly those who are 
familiar with cluster analysis or MANOVA follow-up/effect sizes.  
 
According to my stats professor I know the Mahalanobis D should be 3.93 & 3.04 
for the distance between the centers of the y and m clusters and between the 
centers of the m and o clusters, respectively (groups under the Age variable).  
I get 4.233462 & 3.857911 respectively.  I'm still messing it up.

So far here is what I've done:
 
> am
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
5       f   m               15               13                 10
6       f   m               13               11                  9
7       f   m               12               12                  9
8       f   m               18               10                  7
17      m   m               14               12                  8
18      m   m               10               15                 11
19      m   m               12               11                  8
20      m   m               10                9                  9
> ao<-subset(nd,Age=="o")
> ao
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
9       f   o               11               11                 10
10      f   o               10                4                 12
11      f   o               12                9                 12
12      f   o                9                8                 14
21      m   o               10                8                 11
22      m   o               13               11                 13
23      m   o                9                8                 12
24      m   o                7                9                 16
> amm<-cbind(am$Friend.Agression, am$Parent.Agression,am$Stranger.Agression)
> amm
     [,1] [,2] [,3]
[1,]   15   13   10
[2,]   13   11    9
[3,]   12   12    9
[4,]   18   10    7
[5,]   14   12    8
[6,]   10   15   11
[7,]   12   11    8
[8,]   10    9    9
> aym<-cbind(ay$Friend.Agression, ay$Parent.Agression,ay$Stranger.Agression)
> aym
     [,1] [,2] [,3]
[1,]    8    7    8
[2,]    5    6    8
[3,]    6    3    7
[4,]    5    5    7
[5,]   13    7    7
[6,]    9    5   10
[7,]   11    4    4
[8,]   15    3    4
> aom<-cbind(ao$Friend.Agression, ao$Parent.Agression,ao$Stranger.Agression)
> aom
     [,1] [,2] [,3]
[1,]   11   11   10
[2,]   10    4   12
[3,]   12    9   12
[4,]    9    8   14
[5,]   10    8   11
[6,]   13   11   13
[7,]    9    8   12
[8,]    7    9   16

> mean(aym)
[1] 6.958333
> mean(amm)
[1] 11.16667
> mean(aom)
[1] 10.375
> ascores<-cbind(Friend.Agression, Parent.Agression, Stranger.Agression)
> ascores
      Friend.Agression Parent.Agression Stranger.Agression
 [1,]                8                7                  8
 [2,]                5                6                  8
 [3,]                6                3                  7
 [4,]                5                5                  7
 [5,]               15               13                 10
 [6,]               13               11                  9
 [7,]               12               12                  9
 [8,]               18               10                  7
 [9,]               11               11                 10
[10,]               10                4                 12
[11,]               12                9                 12
[12,]                9                8                 14
[13,]               13                7                  7
[14,]                9                5                 10
[15,]               11                4                  4
[16,]               15                3                  4
[17,]               14               12                  8
[18,]               10               15                 11
[19,]               12               11                  8
[20,]               10                9                  9
[21,]               10                8                 11
[22,]               13               11                 13
[23,]                9                8                 12
[24,]                7                9                 16
> mean(ascores)
[1] 9.5
meany<-colMeans(aym, na.rm = FALSE, dims = 1)
meany
meanm<-colMeans(amm, na.rm = FALSE, dims = 1)
meanm
meano<-colMeans(aom, na.rm = FALSE, dims = 1)
meano
> S<-cov(ascores)
> S
   Friend.Agression Parent.Agression Stranger.Agression
Friend.Agression  10.476449 4.461957  -2.003623
Parent.Agression   4.46195710.940217   3.489130
Stranger.Agression-2.003623 3.489130   8.427536
> meany<-colMeans(x, na.rm = FALSE, dims = 1)
Error in inherits(x, "data.frame") : object 'x' not found
> meany<-colMeans(aym, na.rm = FALSE, dims = 1)
> meany
[1] 9.000 5.000 6.875
> meanm<-colMeans(amm, na.rm = 

Re: [R] Using the mahalanobis( ) function

2011-03-22 Thread Tyler Rinker

In my haste I did not include the full printout of my R session.  My apologies.
 
 nd<-read.table("ex20.csv", header=TRUE, sep=",",na.strings="NA")
attach(nd)
age.frame<-data.frame(Age, Friend.Agression, Parent.Agression, 
Stranger.Agression)
> age.frame
   Age Friend.Agression Parent.Agression Stranger.Agression
1    y                8                7                  8
2    y                5                6                  8
3    y                6                3                  7
4    y                5                5                  7
5    m               15               13                 10
6    m               13               11                  9
7    m               12               12                  9
8    m               18               10                  7
9    o               11               11                 10
10   o               10                4                 12
11   o               12                9                 12
12   o                9                8                 14
13   y               13                7                  7
14   y                9                5                 10
15   y               11                4                  4
16   y               15                3                  4
17   m               14               12                  8
18   m               10               15                 11
19   m               12               11                  8
20   m               10                9                  9
21   o               10                8                 11
22   o               13               11                 13
23   o                9                8                 12
24   o                7                9                 16
ay<-subset(nd,Age=="y")
> ay
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
1       f   y                8                7                  8
2       f   y                5                6                  8
3       f   y                6                3                  7
4       f   y                5                5                  7
13      m   y               13                7                  7
14      m   y                9                5                 10
15      m   y               11                4                  4
16      m   y               15                3                  4
> am<-subset(nd,Age=="m")
> am
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
5       f   m               15               13                 10
6       f   m               13               11                  9
7       f   m               12               12                  9
8       f   m               18               10                  7
17      m   m               14               12                  8
18      m   m               10               15                 11
19      m   m               12               11                  8
20      m   m               10                9                  9
> ao<-subset(nd,Age=="o")
> ao
   Gender Age Friend.Agression Parent.Agression Stranger.Agression
9       f   o               11               11                 10
10      f   o               10                4                 12
11      f   o               12                9                 12
12      f   o                9                8                 14
21      m   o               10                8                 11
22      m   o               13               11                 13
23      m   o                9                8                 12
24      m   o                7                9                 16
> amm<-cbind(am$Friend.Agression, am$Parent.Agression,am$Stranger.Agression)
> amm
     [,1] [,2] [,3]
[1,]   15   13   10
[2,]   13   11    9
[3,]   12   12    9
[4,]   18   10    7
[5,]   14   12    8
[6,]   10   15   11
[7,]   12   11    8
[8,]   10    9    9
> aym<-cbind(ay$Friend.Agression, ay$Parent.Agression,ay$Stranger.Agression)
> aym
     [,1] [,2] [,3]
[1,]    8    7    8
[2,]    5    6    8
[3,]    6    3    7
[4,]    5    5    7
[5,]   13    7    7
[6,]    9    5   10
[7,]   11    4    4
[8,]   15    3    4
> aom<-cbind(ao$Friend.Agression, ao$Parent.Agression,ao$Stranger.Agression)
> aom
     [,1] [,2] [,3]
[1,]   11   11   10
[2,]   10    4   12
[3,]   12    9   12
[4,]    9    8   14
[5,]   10    8   11
[6,]   13   11   13
[7,]    9    8   12
[8,]    7    9   16

> mean(aym)
[1] 6.958333
> mean(amm)
[1] 11.16667
> mean(aom)
[1] 10.375
> ascores<-cbind(Friend.Agression, Parent.Agression, Stranger.Agression)
> ascores
      Friend.Agression Parent.Agression Stranger.Agression
 [1,]                8                7                  8
 [2,]                5                6                  8
 [3,]                6                3                  7
 [4,]                5                5                  7
 [5,]               15               13                 10
 [6,]   13   11

[R] Sequential multiple regression

2011-03-31 Thread Tyler Rinker

Hello,
 
In the past I have tended to reside more in the ANOVA camp but am trying to 
become more familiar with regression techniques in R.  I would like to get the 
F change from a model as I take away factors:
 
SO...
 
mod1<-lm(y~x1+x2+x3)...mod2<-lm(y~x1+x2)...mod3<-lm(y~x1)
 
 
I can do this by hand by running several models in R and taking the MSr1/MSe1, 
MSr2/MSe2...  This is slow and I know there's a better way.
 
In SPSS (which I no longer use) I could easily obtain these results (F-change) 
as documented by Professor Andy Fields:
http://www.statisticshell.com/multireg.pdf
 
You can see the F changes for his two IV model yielding 2 F changes.  Maybe 
it's the language I'm using (sequential multiple regression) that yields me 
poor results in searching the archives and Rseek.  The results tend to be 
around hierarchical regression (I'm not familiar with this terminology, being in 
the ANOVA camp).  When I look at the hier.part package and run the examples it 
doesn't seem to give me the F change I'm looking for.  The step function in the 
base program reduces the model but takes away the non sig. IV's (which is a 
great approach but I'm really after those F changes).  As is usually the 
case, I'm sure R does this simply and beautifully; I'm just not experienced with 
the statistical vocabulary and techniques around regression to find what I'm 
looking for.
 
F-change values with R:  Any help would be appreciated.
 
Thank you in advance,
Tyler 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sequential multiple regression

2011-03-31 Thread Tyler Rinker

Bert and anyone else with info,
 
First, Bert thank you for your quick reply.  drop1 gives the results as a type 
II anova.  Is there a way to make drop1 give you type I anova (the args don't 
appear to have a way to do so)?  Another package/function perhaps?
 
Tyler 
 
> Date: Thu, 31 Mar 2011 09:32:02 -0700
> Subject: Re: [R] Sequential multiple regression
> From: gunter.ber...@gene.com
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> 
> ?drop1
> 
> -- Bert
> 
> On Thu, Mar 31, 2011 at 9:24 AM, Tyler Rinker  
> wrote:
> >
> > Hello,
> >
> > In the past I have tended to reside more in the ANOVA camp but am trying to 
> > become more familiar with regression techniques in R.  I would like to get 
> > the F change from a model as I take away factors:
> >
> > SO...
> >
> > mod1<-lm(y~x1+x2+x3)...mod2<-lm(y~x1,x2)...mod3<-lm(y~x1)
> >
> >
> > I can do this by hand by running several models in R and taking the 
> > MSr1/MSe1, MSr2/MSe2...  This is slow and I know there's a better way.
> >
> > In SPSS (which I no longer use) I could easily obtain these results 
> > (F-change) as documented by Professor Andy Fields:
> > http://www.statisticshell.com/multireg.pdf
> >
> > You can see the F changes for his two IV model yielding 2 F changes.  Maybe 
> > it's the language I'm using (sequential multiple regression) that yields me 
> > poor results in searching the archives and Rseek.  The results tend to be 
> > around hierarchal regression (I'm not familiar with this terminology being 
> > in the ANOVA camp).  When I look at the hier.part package and run the 
> > examples it doesn't seem to give me the F change I'm looking for.  The step 
> > function in the base program reduces the model but takes away the non sig. 
> > IV's (which is a great approach but I'm really after those F changes).  As 
> > is the usually the case I'm sure R does this simply and beautifully, I'm 
> > just not experienced with the statistical vocabulary and techniques around 
> > regression to find what I'm looking for.
> >
> > F-change values with R:  Any help would be appreciated.
> >
> > Thank you in advance,
> > Tyler
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
> 
> 
> -- 
> "Men by nature long to get on to the ultimate truths, and will often
> be impatient with elementary studies or fight shy of them. If it were
> possible to reach the ultimate truths without the elementary studies
> usually prefixed to them, these would not be preparatory studies but
> superfluous diversions."
> 
> -- Maimonides (1135-1204)
> 
> Bert Gunter
> Genentech Nonclinical Biostatistics
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Function for finding NA's

2011-04-03 Thread Tyler Rinker

Quick question,
 
I tried to find a function in available packages to find NA's for an entire 
data set (or single variables) and report the rows with missing values (NA's for 
each column).  I searched the typical routes through the blogs and the help 
manuals for 15 minutes.  Rather than spend any more time searching I created my 
own function to do this (probably in less time than it would have taken me to 
find the function).  
 
Now I still have the same question:  Is this function (NAhunter I call it) 
already in existence?  If so please direct me (because I'm sure they've written 
better code more efficiently).  I highly doubt I'm the first person to want to 
find all the missing values in a data set, so I assume there is a function for 
it but I just didn't spend enough time looking.  If there is no existing 
function (big if here), is this something people feel is worthwhile for me to 
put into a package of some sort?  
 
Tyler
 
Here's the code:
 
NAhunter<-function(dataset)
{
find.NA<-function(variable)
{
if(is.numeric(variable)){
n<-length(variable)
mean<-mean(variable, na.rm=T)
median<-median(variable, na.rm=T)
sd<-sd(variable, na.rm=T)
NAs<-is.na(variable)
total.NA<-sum(NAs)
percent.missing<-total.NA/n
descriptives<-data.frame(n,mean,median,sd,total.NA,percent.missing)
rownames(descriptives)<-c(" ")
Case.Number<-1:n
Missing.Values<-ifelse(NAs>0,"Missing Value"," ")
missing.value<-data.frame(Case.Number,Missing.Values)
missing.values<-missing.value[ which(Missing.Values=='Missing Value'),]
list("NUMERIC DATA","DESCRIPTIVES"=t(descriptives),"CASE # OF MISSING 
VALUES"=missing.values[,1])
}
else{
n<-length(variable)
NAs<-is.na(variable)
total.NA<-sum(NAs)
percent.missing<-total.NA/n
descriptives<-data.frame(n,total.NA,percent.missing)
rownames(descriptives)<-c(" ")
Case.Number<-1:n
Missing.Values<-ifelse(NAs>0,"Missing Value"," ")
missing.value<-data.frame(Case.Number,Missing.Values)
missing.values<-missing.value[ which(Missing.Values=='Missing Value'),]
list("CATEGORICAL DATA","DESCRIPTIVES"=t(descriptives),"CASE # OF MISSING 
VALUES"=missing.values[,1])
}
}
dataset<-data.frame(dataset)
options(scipen=100)
options(digits=2)
lapply(dataset,find.NA)
} 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Function for finding NA's

2011-04-03 Thread Tyler Rinker

Thanks David,
 
After seeing the simplicity of your function versus the convoluted mess I 
worked up, I now understand why it's not necessary to have a package to find 
NA's (and, from what you said, this is already part of other packages such as 
Hmisc).  
 
I am at the 2 1/2 month mark as an R user and have loads to learn.  Simpler is 
better.  Thanks David for your time and I will take the information you gave 
and put it to use in new situations.
 
Tyler
 
> CC: r-help@r-project.org
> From: dwinsem...@comcast.net
> To: tyler_rin...@hotmail.com
> Subject: Re: [R] Function for finding NA's
> Date: Sun, 3 Apr 2011 14:19:40 -0400
> 
> 
> On Apr 3, 2011, at 1:44 PM, Tyler Rinker wrote:
> 
> >
> > Quick question,
> >
> > I tried to find a function in available packages to find NA's for an 
> > entire data set (or single variables) and report the row of missing 
> > values (NA's for each column). I searched the typical routes 
> > through the blogs and the help manuals for 15 minutes. Rather than 
> > spend any more time searching I created my own function to do this 
> > (probably in less time than it would have taken me to find the 
> > function).
> >
> > Now I still have the same question: Is this function (NAhunter I 
> > call it) already in existence? If so please direct me (because I'm 
> > sure they've written better code more efficiently). I highly doubt 
> > I'm this first person to want to find all the missing values in a 
> > data set so I assume there is a function for it but I just didn't 
> > spend enough time looking. If there is no existing function (big if 
> > here), is this something people feel is worthwhile for me to put 
> > into a package of some sort?
> 
> I'm not sure that it would have occurred to people to include it in a 
> package. Consider:
> 
> getNa <- function(dfrm) lapply(dfrm, function(x) which(is.na(x) ) )
> 
> > cities
> long lat city pop
> 1 -58.38194 -34.59972 Buenos Aires NA
> 2 14.25000 40.8  NA
> > getNa(cities)
> $long
> integer(0)
> 
> $lat
> integer(0)
> 
> $city
> [1] 2
> 
> $pop
> [1] 1 2
> 
> There are several packages with functions by the name `describe` that 
> do most or all of rest of what you have proposed. I happen to use 
> Harrell's Hmisc but the other versions should also be reviewed if you 
> want to avoid re-inventing the wheel.
> -- 
> David.
> 
> >
> > Tyler
> >
> > Here's the code:
> >
> > NAhunter<-function(dataset)
> > {
> > find.NA<-function(variable)
> > {
> > if(is.numeric(variable)){
> > n<-length(variable)
> > mean<-mean(variable, na.rm=T)
> > median<-median(variable, na.rm=T)
> > sd<-sd(variable, na.rm=T)
> > NAs<-is.na(variable)
> > total.NA<-sum(NAs)
> > percent.missing<-total.NA/n
> > descriptives<-data.frame(n,mean,median,sd,total.NA,percent.missing)
> > rownames(descriptives)<-c(" ")
> > Case.Number<-1:n
> > Missing.Values<-ifelse(NAs>0,"Missing Value"," ")
> > missing.value<-data.frame(Case.Number,Missing.Values)
> > missing.values<-missing.value[ which(Missing.Values=='Missing 
> > Value'),]
> > list("NUMERIC DATA","DESCRIPTIVES"=t(descriptives),"CASE # OF 
> > MISSING VALUES"=missing.values[,1])
> > }
> > else{
> > n<-length(variable)
> > NAs<-is.na(variable)
> > total.NA<-sum(NAs)
> > percent.missing<-total.NA/n
> > descriptives<-data.frame(n,total.NA,percent.missing)
> > rownames(descriptives)<-c(" ")
> > Case.Number<-1:n
> > Missing.Values<-ifelse(NAs>0,"Missing Value"," ")
> > missing.value<-data.frame(Case.Number,Missing.Values)
> > missing.values<-missing.value[ which(Missing.Values=='Missing 
> > Value'),]
> > list("CATEGORICAL DATA","DESCRIPTIVES"=t(descriptives),"CASE # OF 
> > MISSING VALUES"=missing.values[,1])
> > }
> > }
> > dataset<-data.frame(dataset)
> > options(scipen=100)
> > options(digits=2)
> > lapply(dataset,find.NA)
> > } 
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> David Winsemius, MD
> West Hartford, CT
> 
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Automated Fixed Order Stepwise Regression Function

2011-04-07 Thread Tyler Rinker

Greetings,
 
I am interested in creating a stepwise fixed order regression function.  
There's a function for this already called add1( ).  The F statistics are 
calculated using type 2 anova (the SS and the F changes don't match SPSS's).  
You can see my use of this at the very end of the email.
 
What I want: a function to make an anova table with F changes and delta R^2.  
 
I ran into 10 snags in making this a fully automated function using the full 
linear model (order matters here).  Each snag is marked with a Comment #.  Some 
snags are repeated because I couldn't do it the first time and certainly couldn't 
do it after that.  Help with any/all snags would be much appreciated. 
 
I'm a 2 1/2 month [R] user who's reading everything online (incl. manuals) & 
ordering every book I can (looking at Dalgaard's, Crawley's and Teetor's very 
helpful books right now).  Loops and their usage are a foreign thing to me, 
despite studying them, and unfortunately I think that my function may call for 
them.  I also realize that going beyond 10 predictors may make this function way 
too bulky.
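(For Comments 1, 2, 7 and 8 in the code below, the sort of loop I imagine looks 
roughly like this; a sketch only, using the model frame m as in the function, 
and not checked against the SPSS output:)

resp   <- names(m)[1]
preds  <- names(m)[-1]
mods   <- lapply(seq_along(preds), function(i)
    lm(reformulate(preds[seq_len(i)], response = resp), data = m))
anovas <- lapply(mods, anova)                            # one anova table per model
Rsq    <- sapply(mods, function(f) summary(f)$r.squared)
deltaR <- c(Rsq[1], diff(Rsq))                           # R^2 change at each step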
 
I'm running the latest version of R (2.12.2)on a windows 7 machine.

DATASET
 
mtcars
full.model<-lm(mpg~cyl+disp+hp+drat, data=mtcars)
 
CODE

stepFO<-function(model)
{
m<-data.frame(model.frame(model))
num.of.var<-length(colnames(m))
mod1<-lm(m[,1]~m[,2])
mod2<-lm(m[,1]~m[,2]+m[,3])
mod3<-lm(m[,1]~m[,2]+m[,3]+m[,4])
mod4<-lm(m[,1]~m[,2]+m[,3]+m[,4]+m[,5])
#Comment 1--I don't know how to automated this process(above) of adding 
#...additional variables.  Probably a loop is needed but I don't understand 
#...how to apply it here.  Maybe update.model [1:num.ofvar]?
a1<-anova(mod1)
a2<-anova(mod2)
a3<-anova(mod3)
a4<-anova(mod4)
#Comment 2--SAME AS COMMENT 1 except applied to the anova tables.  How do I make
#...[R] add a5, a6...an   as necessary?

rb<-rbind(a1,a2,a3,a4)
#Comment 3--again I can't automate this to make the  addition of a's automated

anova1<-rbind(rb[1,],rb[4,],rb[8,],rb[13,],rb[14,])
#Comment 4--the rb's follow a pattern of 1+3+4+5...+n variables
#then I row bind these starting with 1 and rowbind one more after the last 
#...rb to include the bottom piece of the anova table that tells 
#...about residuals (how do I aoutomate this?)

anova<-anova1[,1:num.of.var]
anova.table<-data.frame(anova)
#Comment 5--Something that bugs me here is that I have to turn it into a 
dataframe to 
#...add the totals row and the delta R^2 (tried playing w/ tkinsert to no avail)
#...I miss the stuff that's at the bottom of the anova table (sig values)
#Comment 6--I'd love to turn the place value to round to 2 after the decimal.
#...I've worked this many ways including changing my options but this does 
#...not seem to apply to a data frame

Total<-c(sum(anova[,1]),sum(anova[,2])," ", " ", " ")
anova.table<-rbind(anova.table,Total)
R1<-summary(mod1)[[8]][[1]]
R2<-summary(mod2)[[8]][[1]]
R3<-summary(mod3)[[8]][[1]]
R4<-summary(mod4)[[8]][[1]]
#Comment 7--SAME AS COMMENT 2.  How do I make
#...[R] add R5, R6...Rn   as necessary?

deltaR.1<-R1
deltaR.2<-R2-R1
deltaR.3<-R3-R2
deltaR.4<-R4-R3
#Comment 8--SAME AS COMMENT 7.  How would I aoutomate this process?

Delta.R.Squared<-c(deltaR.1,deltaR.2,deltaR.3,deltaR.4," ","")
#Comment 9--I need a way to add as many deltaR's as 
#...necessary(n of R = n of predictors)

anova.table<-cbind(anova.table, Delta.R.Squared)
colnames(anova.table)<-c("df","Sum Sq","Mean Sq","F-change",
"P-value","Delta.R.Squared")
rownames(anova.table)<-c("X1", "X2 elminating for X1", 
"X3 eliminating for X2 & X3", "X4 eliminating for X1,X2, & X3","Residuals",
 "Total")
anova.table
}
#Comment 10--Again I would need [R] to automate the list for row names as we 
#...add more predictors.
#See the final product below I'm after (with two places after the decimal)
> anova.table
                                 df           Sum Sq          Mean Sq         F-change             P-value     Delta.R.Squared
X1                                1 817.712952354614 817.712952354614 79.5610275293349  6.112687142581e-10   0.726180005093805
X2 elminating for X1              1 37.5939529324851 37.5939529324851  4.0268283172755  0.0541857201845285  0.0333857704630917
X3 eliminating for X2 & X3        1 9.37092926438942 9.37092926438942 1.00388976918267   0.324951851250774 0.00832196853596723
X4 eliminating for X1,X2, & X3    1 16.4674349222492 16.4674349222492 1.81550535203668   0.189048514740917  0.0146241073243205
Residuals                        27 244.901918026262 9.07044140838007

Total                            31     1126.0471875
 
 
USING THE ADD1() FUNCTION>  NOT WHAT I WANT> 
 
mod1<-lm(mpg~cyl, data=mtcars)
add1(mod1,~cyl+disp+hp+drat, data=mtcars, test="F")

Model:
mpg ~ cyl
       Df Sum of Sq    RSS    AIC F value   Pr(F)  
<none>              308.33 76.494  
disp    1    37.594 270.74 74.334  4.0268 0.05419 .
hp      1    16.360 291.98 76.750  1.6249 0.21253  
drat    1    15.841 292.4

[R] 2-parameter MLE problems

2011-04-12 Thread Tyler Schartel
Hi all,

Sorry for the re-post, I sent my previous e-mail before it was complete. I
am trying to model seroprevalence using the differential equation: dP/dt =
beta*seronegative*.001*(seropositive)-0.35*(0.999)*(seropositive)-r*seropositive.
I would like to estimate my two parameters, beta and r, using maximum
likelihood methods. I have included my code below:

summary=read.delim('summary.txt',header=T)
summary
  Year   N SeroPos SeroNeg
1    1  75       1      74
2    2  12       3       9
3    3 139      11     128
4    4 178      22     156
5    5 203      18     185
6    6 244      37     207
attach(summary)
poisNLL=function(P){
lambda=P[1]*SeroNeg*0.001*SeroPos-0.35*0.999*SeroPos-P[2]*SeroPos
v=-sum(dpois(SeroNeg,lambda=lambda,log=TRUE))
if (!is.finite(v)) v<- -200
v
}
opt1=optim(poisNLL,start=c(10,.1),method='BFGS')

I receive the following error from this code: "Error in optim(poisNLL, start
= c(10, 0.1), method = "BFGS") :
  cannot coerce type 'closure' to vector of type 'double'"
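(For reference, ?optim documents the call as optim(par, fn, ...), with the 
starting values first, the function second, and no `start` argument, so 
presumably a call of this shape is what it expects:)

opt1 <- optim(par = c(10, 0.1), fn = poisNLL, method = "BFGS")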

Any assistance provided would be greatly appreciated!

Best,
Tyler

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] My added packages won't load in r.2.13.0

2011-04-16 Thread Tyler Rinker

Problem:
 
I updated from r.2.12 to r.2.13 and when I use library(car) for example it says:
 
> library(car)
Error in library(car) : there is no package called 'car'
 
So I found that the packages I had before are located in:
C:\Users\Documents\R\win-library\2.12
 
Now they're in:
C:\Users\Documents\R-2.13.0\src\library
 
I tried to use control+a and drag them over to the folder 
C:\Users\Documents\R-2.13.0\src\library
 
and then  load car.
 
I still get the same error message.
 
How do I move my packages so the new version of R can find them?
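Is something along these lines the right idea?  (A sketch only, with a 
placeholder path:)

.libPaths("C:/Users/<me>/Documents/R/win-library/2.13")   # point R 2.13 at a personal library
update.packages(checkBuilt = TRUE, ask = FALSE)           # rebuild packages installed under 2.12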
 
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Simple Missing cases Function

2011-04-19 Thread Tyler Rinker

I use the following code/function, which gives me some quick descriptives about 
each variable (i.e. n of missing values, % missing, case #'s missing, etc.).
It's fairly quick, maybe not pretty, but effective on either single variables or 
entire data sets.
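If all that is needed is the per-column count of NA's, a one-liner like this 
may also do (a sketch, using the data.m from the example below):

colSums(is.na(data.m))   # number of missing values in each column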
 
NAhunter<-function(dataset)
{
find.NA<-function(variable)
{
if(is.numeric(variable)){
n<-length(variable)
mean<-mean(variable, na.rm=T)
median<-median(variable, na.rm=T)
sd<-sd(variable, na.rm=T)
NAs<-is.na(variable)
total.NA<-sum(NAs)
percent.missing<-total.NA/n
descriptives<-data.frame(n,mean,median,sd,total.NA,percent.missing)
rownames(descriptives)<-c(" ")
Case.Number<-1:n
Missing.Values<-ifelse(NAs>0,"Missing Value"," ")
missing.value<-data.frame(Case.Number,Missing.Values)
missing.values<-missing.value[ which(Missing.Values=='Missing Value'),]
list("NUMERIC DATA","DESCRIPTIVES"=t(descriptives),"CASE # OF MISSING 
VALUES"=missing.values[,1])
}
else{
n<-length(variable)
NAs<-is.na(variable)
total.NA<-sum(NAs)
percent.missing<-total.NA/n
descriptives<-data.frame(n,total.NA,percent.missing)
rownames(descriptives)<-c(" ")
Case.Number<-1:n
Missing.Values<-ifelse(NAs>0,"Missing Value"," ")
missing.value<-data.frame(Case.Number,Missing.Values)
missing.values<-missing.value[ which(Missing.Values=='Missing Value'),]
list("CATEGORICAL DATA","DESCRIPTIVES"=t(descriptives),"CASE # OF MISSING 
VALUES"=missing.values[,1])
}
}
dataset<-data.frame(dataset)
options(scipen=100)
options(digits=2)
lapply(dataset,find.NA)
}

 
> From: tesut...@hku.hk
> To: r-help@r-project.org
> Date: Tue, 19 Apr 2011 15:29:08 +0800
> Subject: [R] Simple Missing cases Function
> 
> Dear all
> 
> 
> 
> I have written a function to perform a very simple but useful task which I
> do regularly. It is designed to show how many values are missing from each
> variable in a data.frame. In its current form it works but is slow because I
> have used several loops to achieve this simple task. 
> 
> 
> 
> Can anyone see a more efficient way to get the same results? Or is there
> existing function which does this?
> 
> 
> 
> Thanks for your help
> 
> Tim
> 
> 
> 
> Function:
> 
> miss <- function (data) 
> 
> {
> 
> miss.list <- list(NA)
> 
> for (i in 1:length(data)) {
> 
> miss.list[[i]] <- table(is.na(data[i]))
> 
> }
> 
> for (i in 1:length(miss.list)) {
> 
> if (length(miss.list[[i]]) == 2) {
> 
> miss.list[[i]] <- miss.list[[i]][2]
> 
> }
> 
> }
> 
> for (i in 1:length(miss.list)) {
> 
> if (names(miss.list[[i]]) == "FALSE") {
> 
> miss.list[[i]] <- 0
> 
> }
> 
> }
> 
> data.frame(names(data), as.numeric(miss.list))
> 
> }
> 
> 
> 
> Example:
> 
> data(ToothGrowth)
> 
> data.m <- ToothGrowth
> 
> data.m$supp[sample(1:nrow(data.m), size=25)] <- NA
> 
> miss(data.m)
> 
> 
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Dual colour ramps based on pos/neg values

2011-04-21 Thread Tyler Hayes
Hi Everyone:

I'm going a little nuts here and am hoping someone might have some
ideas to help out. Here is my problem:

I am using the calendarHeatMap function
(http://blog.revolutionanalytics.com/2009/11/charting-time-series-as-calendar-heat-maps-in-r.html)
to plot some values of percentages above or below a watermark. In
other words, I have a time series whose data can range arbitrarily
from -0.34 to +1.9, for example.

However, for the visualization to be effective, I need to be able to
distinguish conclusively where the division between positive and
negative takes place. My original thought was to just modify the
colorRampPalette function inputs to achieve the effect. Unfortunately,
because of the smooth blending, it washes out the middle. Not to
mention the middle of the colour range is not always zero.

What I would like to do is concatenate two colour ranges such that:

bright red (max negative) -> dark red (min negative)
dark green (min positive) -> chartreuse (max positive)

I know, chartreuse. Not to mention the fact that these ranges will
change with each dataset I apply. Now, believe me, I have tried
searches for colorramp range, positive, and so on, but can't seem to
find a smoking gun that will work with the function above. I came
across the ggplot package as well, which looks promising (book ordered
and en route), but I believe this function uses a different graphic
methodology.

Any help in the right direction is greatly appreciated.

Cheers,

t.

GPG Key ID: 0xF38F6BEE

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] excel dates and times in R

2011-08-05 Thread Tyler Rinker

You can also make the change in the excel file first. 
 
In Excel highlight the date column -> right click -> Format Cells -> under the Number 
tab click Custom -> in the Type field type the following: "yyyy-mm-dd"
 
Now save and import.
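Alternatively, the conversion can be done in R after import; a sketch, assuming 
the two columns are called date ("25-Jun-1961") and time ("04:00:00") in a data 
frame x, and an English locale for the month abbreviations:

x$datetime <- as.POSIXct(paste(x$date, x$time),
                         format = "%d-%b-%Y %H:%M:%S", tz = "GMT")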
 

> Date: Fri, 5 Aug 2011 13:59:16 +0100
> From: ted.hard...@wlandres.net
> To: r-help@r-project.org
> CC: emma.ra...@jbaconsulting.co.uk
> Subject: Re: [R] excel dates and times in R
> 
> On 05-Aug-11 10:27:28, bevare wrote:
> > Hello,
> > I am having some fun dealing with dates and times. My input
> > is a excel csv file with two columns with data in the following
> > format: 
> > 
> > date time
> > 25-Jun-1961 04:00:00
> > 
> > i.e. day - month - year hour:min:sec
> > 
> > I would like to have a single object in R that combines these
> > and converts them into a sensible R format (e.g.
> > ISOdatetime(1961,06,25,04,00,00,tz="GMT").
> > 
> > I have played with the function chron and also strptime but
> > can't seem to get them to work. 
> > 
> > Can anybody help me out please? 
> > 
> > Thanks
> > Bevare
> 
> I know almost nothing about using the "time" functions in R,
> but since I see that I get:
> 
> ISOdatetime(1961,06,25,04,00,00,tz="GMT")
> # [1] "1961-06-25 04:00:00 GMT"
> 
> it would seem that you already have it almost made in the
> original Excel fields, namely:
> 
> paste("1961-06-25","04:00:00","GMT",sep=" ")
> # [1] "1961-06-25 04:00:00 GMT"
> 
> So you could write this as a function
> 
> getISOdt <- function(date,time,zone){
> paste(date,time,zone,sep=" ")
> }
> 
> where the parameters date, time and zone are supplied as
> character strings:
> 
> getISOdt("1961-06-25","04:00:00","GMT")
> [1] "1961-06-25 04:00:00 GMT"
> 
> Then you can apply() this to the two comlumns in the dataframe
> you get when reading in the Excel CSV file, with zone being
> supplied independently.
> 
> Hoping this helps,
> Ted.
> 
> 
> E-Mail: (Ted Harding) 
> Fax-to-email: +44 (0)870 094 0861
> Date: 05-Aug-11 Time: 13:59:14
> -- XFMail --
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] plotting many unique colors with categorical data

2011-08-05 Thread Tyler Rinker

col=sample(colors()[-1], ncol(dataframe), replace = FALSE)
 
This may help, but since it's randomized it's a bit of a crap shoot; the colors 
are likely to be more distinct, though.
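Two related sketches (assumptions on my part, not tested on your data): set a 
seed so the draw is repeatable, or space the hues evenly around the colour wheel 
for the 75 levels:

set.seed(42)
cols  <- sample(colors()[-1], 75)                                       # repeatable random draw
cols2 <- hcl(h = seq(15, 375, length.out = 76)[-76], c = 100, l = 65)   # 75 evenly spaced hues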
  

> Date: Fri, 5 Aug 2011 15:38:57 -0400
> From: sarah.gos...@gmail.com
> To: rloise...@usgs.gov
> CC: r-help@r-project.org
> Subject: Re: [R] plotting many unique colors with categorical data
> 
> That's far too many to easily distinguish by color, especially if they
> need to be
> distinct, and not levels within a larger class. For the latter, you could get
> by with say 10 shades of red, 10 shades of blue, etc for related factors.
> But it doesn't sound like that's what you have. I don't think there's any
> way to create a palette of 75 distinguishable colors.
> 
> If you really want to try, colors() gives you all the named colors. You
> can also use c() to combine several brewer palettes.
> 
> One thing that I've done in similar cases (for viewing and data snooping,
> not for presentation), is to set up a loop through all the factor levels.
> Set par(ask=TRUE), and for each iteration plot all the points in black,
> except make that level a brighter color, and maybe larger symbol.
> 
> It gives you a quick way to start to see the differences between groups,
> though obviously isn't suitable for publication.
> 
> Sarah
> 
> On Fri, Aug 5, 2011 at 1:46 PM, SavageMaDaMe  wrote:
> > Hi- I am trying to plot a matrix of categorical values across time using
> > color to represent each individual factor. For example:
> >
> >   1982 1983 1984 1985 1986 1987
> > 119   19   68   68   19   19   68
> > 268   68   19   19   68   68   19
> > 326   26   34   34   26   26   26
> > 457   34   57   57   34   57   34
> > 534   57   26   26   57   34   57
> > 628   28   28   28   28   58   58
> > 760   10   58   58   58   28   28
> > 858   58   42   27   10   39   39
> > 922   39   22   42   42   27   42
> > 10   39   22   10   39   39   20   10
> >
> >
> >  I have 75 factors which could be in different positions through out time
> > (26 years).  I've successfully created a plot using both ggplot() and
> > color2D.matplot(), but can not select enough distinct colors from the
> > default color palettes available to be able to view differences in the data.
> > I've tried messing with RGB values, Brewer palettes, etc.
> >
> > How can I select colors from a list of available colors without choosing
> > ones which are too close in similarity to each other. For instance, I could
> > have several very similar blues, but if the Hue or saturation was different
> > on each, it would be fairly easy to tell the difference?
> >
> > Maybe there are too many factors to make this visual representation
> > effective?
> >
> > Thanks in advance for your help!
> >
> -- 
> Sarah Goslee
> http://www.functionaldiversity.org
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Question

2011-08-08 Thread Tyler Gruhn
I just signed up for R Help, so please let me know if I'm doing this
incorrectly.

 

I've been working with R for a couple of weeks, attempting to process
microarray data.  This means 20,000+ rows of data to work with x 24
columns.  I am trying to produce heatmaps and found that the best
computer I have available to me can process the data without any
clustering (Rowv=NA, Colv=NA), but my supervisor has made it clear that
I need to have the clusters present.  I have been messing around with
the memory allocation commands, but I can't seem to get it to process
even still.  Are there any methods of getting large amounts of data to
process into heatmaps that I should look into?
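One idea I have come across but not yet tried at this scale (a rough sketch, 
with mat standing in for the expression matrix) is to cluster only the most 
variable rows:

v   <- apply(mat, 1, var, na.rm = TRUE)              # per-row variance
top <- mat[order(v, decreasing = TRUE)[1:2000], ]    # keep e.g. the 2,000 most variable rows
heatmap(top)                                         # row/column clustering left on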

Thank you very much,

Tyler


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Renaming levels of a factor in a dataframe

2011-08-14 Thread Tyler Rinker

Here's an example of using levels()<- to relevel and combine groups:
 
InsectSprays2<-InsectSprays
levels(InsectSprays2$spray)
levels(InsectSprays2$spray)<-list(new1=c("A","C"),YEPS=c("B","D","E"),LASTLY="F")
levels(InsectSprays2$spray)
InsectSprays2

So for you try...
levels (Data1$Site) <- list(Fw =c( "AB"), Est = c("DE"))
 

> From: deel...@hotmail.com
> To: r-help@r-project.org
> Date: Sun, 14 Aug 2011 12:56:25 -0300
> Subject: [R] Renaming levels of a factor in a dataframe
> 
> 
> 
> Dear Helplist:
> 
> 
> 
> I am trying, unsuccessfully, to rename levels of a factor in a dataframe. The 
> dataframe consists of two factor variables and one numeric variable as 
> follows:
> 
> Factor Site has 2 levels AB and DE, factor Fish has 30 levels, 15 associated 
> with each Site e.g. 1-1, 1-2,.2-1, 2-2 I am trying to rename the 
> levels of factor Site from AB to Fw and DE to Est while keeping them as 
> factors. The following 2 approaches do not work, each giving a NULL response 
> and creating a character string. 
> 
> 
> 
> levels (Data1$Site <- c("Fw", "Est")) This simply gives an alternating list 
> of Fw, Est, Fw, Est... not the desired 15 concurrent rows of Fw followed by 
> 15 of Est.
> 
> 
> 
> #levels (Data1$Site <- list(Fw = "AB", Est = "DE")) This gives the same 
> result. I have tried other approaches to no avail. It seems a simple problem 
> but has not been so. 
> 
> 
> 
> Any suggestions for solving this problem would be much appreciated.
> 
> 
> 
> Regards,
> 
> BJ 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Opening package manual from within R

2011-08-23 Thread Tyler Rinker

Simple question but searching rseek did not yield the results I wanted.
 
Question:  Is there a way to open a help manual for a package from within R.
 
For instance I would like to type a function in r for the tm package and R 
would open that PDF as seen here:
http://cran.r-project.org/web/packages/tm/tm.pdf
 
-The vignette function exists for vignettes [vignette("package.name")] so I 
assume the same exists for manuals.
 
-I do not want library(help="package.name") as this is not detailed enough.
 
I am running R 2.14.0 beta on a windows 7 machine
Reproducible code does not seem appropriate in this case.   
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Opening package manual from within R

2011-08-23 Thread Tyler Rinker

I don't think help.start is what I'm looking for but I may be doing it wrong.  
I tried:
 
library(tm)
help.start("tm")
 
This may be inappropriate as it returns:
 
> library(tm)
> help.start("tm")
Error in if (update) make.packages.html(temp = TRUE) : 
  argument is not interpretable as logical
 
Just typing help.start() takes me to a web page but I still have to search.
 
I wrote a function to do what I want that I could place in my  .First() or a 
premade package but why bother if there's a way to already do this?
#=
#   FUNCTION
#=
manual <- function(library){

LIB <- substitute(library)
LIB <- as.character(LIB)
browseURL(paste("http://cran.r-project.org/web/packages/",LIB,"/",LIB,".pdf";, 
sep = ""))
}
 
#=
#EXAMPLES
#=
manual(plyr)
manual(tm)
 
 

 

> Date: Tue, 23 Aug 2011 15:10:51 -0700
> Subject: Re: [R] Opening package manual from within R
> From: gunter.ber...@gene.com
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> 
> After loading the package, does help.start() do what you want?
> 
> -- Bert
> 
> On Tue, Aug 23, 2011 at 2:32 PM, Tyler Rinker  
> wrote:
> >
> > Simple question but searching rseek did not yield the results I wanted.
> >
> > Question:  Is there a way to open a help manual for a package from within R.
> >
> > For instance I would like to type a function in r for the tm package and R 
> > would open that PDF as seen here:
> > http://cran.r-project.org/web/packages/tm/tm.pdf
> >
> > -The vignette function exists for vignettes [vignette("package.name")] so I 
> > assume the same exists for manuals.
> >
> > -I do not want library(help="package.name") as this is not detailed enough.
> >
> > I am running R 2.14.0 beta on a windows 7 machine
> > Reproducible code does not seem appropriate in this case.
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Opening package manual from within R

2011-08-23 Thread Tyler Rinker

David,
 
For me, on a windows machine, help(package=) results in a summary window 
opening in R rather than the detailed help manual that is available through 
CRAN.  Others suggested help.start() which takes me to a CRAN library site but 
requires me to still click with the mouse to locate the manual for the specific 
package.  My function:
 
manual <- function(library){

LIB <- substitute(library)
LIB <- as.character(LIB)
browseURL(paste("http://cran.r-project.org/web/packages/",LIB,"/",LIB,".pdf";, 
sep = ""))
}
 
...takes you right to the package's manual online.  Does R have a similar function 
already that opens the detailed pdf, as the vignette function does?  I don't 
know if the help manuals are downloaded automatically when I download a package, 
but this would be even better as I would not require internet access to retrieve 
the help manual.  If R doesn't have a function to do this but does download the 
help manuals automatically when I download a package, I could use shell.exec() 
with the library file path to do the same thing as the function above.  Again 
this is all provided that R does not already have a function to do this; 
however, I'm guessing there is because of the existence of the vignette() 
function.
 
Cheers
Tyler

 

> CC: gunter.ber...@gene.com; r-help@r-project.org
> From: dwinsem...@comcast.net
> To: tyler_rin...@hotmail.com
> Subject: Re: [R] Opening package manual from within R
> Date: Tue, 23 Aug 2011 18:22:38 -0400
> 
> 
> Try:
> 
> help(package=tm)
> 
> (You do not need library(). )
> -- 
> David.
> 
> On Aug 23, 2011, at 6:17 PM, Tyler Rinker wrote:
> 
> >
> > I don't think help.start is what I'm looking for but I may be doing 
> > it wrong. I tried:
> >
> > library(tm)
> > help.start("tm")
> >
> > This may be inappropriate as it returns:
> >
> >> library(tm)
> >> help.start("tm")
> > Error in if (update) make.packages.html(temp = TRUE) :
> > argument is not interpretable as logical
> >
> > Just typing help.start() takes me to a web page but I still have to 
> > search.
> >
> > I wrote a function to do what I want that I could place in 
> > my .First() or a premade package but why bother if there's a way to 
> > already do this?
> > #=
> > # FUNCTION
> > #=
> > manual <- function(library){
> >
> > LIB <- substitute(library)
> > LIB <- as.character(LIB)
> > browseURL(paste("http://cran.r-project.org/web/ 
> > packages/",LIB,"/",LIB,".pdf", sep = ""))
> > }
> >
> > #=
> > # EXAMPLES
> > #=====
> > manual(plyr)
> > manual(tm)
> >
> >
> >
> >
> >
> >> Date: Tue, 23 Aug 2011 15:10:51 -0700
> >> Subject: Re: [R] Opening package manual from within R
> >> From: gunter.ber...@gene.com
> >> To: tyler_rin...@hotmail.com
> >> CC: r-help@r-project.org
> >>
> >> After loading the package, does help.start() do what you want?
> >>
> >> -- Bert
> >>
> >> On Tue, Aug 23, 2011 at 2:32 PM, Tyler Rinker  >> > wrote:
> >>>
> >>> Simple question but searching rseek did not yield the results I 
> >>> wanted.
> >>>
> >>> Question: Is there a way to open a help manual for a package from 
> >>> within R.
> >>>
> >>> For instance I would like to type a function in r for the tm 
> >>> package and R would open that PDF as seen here:
> >>> http://cran.r-project.org/web/packages/tm/tm.pdf
> >>>
> >>> -The vignette function exists for vignettes 
> >>> [vignette("package.name")] so I assume the same exists for manuals.
> >>>
> >>> -I do not want library(help="package.name") as this is not 
> >>> detailed enough.
> >>>
> >>> I am running R 2.14.0 beta on a windows 7 machine
> >>> Reproducible code does not seem appropriate in this case.
> >>> [[alternative HTML version deleted]]
> >>>
> >>> __
> >>> R-help@r-project.org mailing list
> >>> https://stat.ethz.ch/mailman/listinfo/r-help
> >>> PLEASE do read the posting guide 
> >>> http://www.R-project.org/posting-guide.html
> >>> and provide commented, minimal, self-contained, reproducible code.
> >>>
> > 
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> David Winsemius, MD
> West Hartford, CT
> 
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Opening package manual from within R

2011-08-24 Thread Tyler Rinker

Apparently my request to view the help pages is not a popular method among R 
users for gaining information.  For me these pages are very helpful, so I will 
follow up to complete this thread for future searchers.
 
First, thanks to Prof. Brian Ripley.  Your idea was spot on: exactly what I was 
looking for to generate a PDF from the package library.
 
I worked out the following code, which I added to my .First() function:
#=
manual <- function(library, method = web){

    LIB <- substitute(library)
    LIB <- as.character(LIB)

    METH <- substitute(method)
    METH <- as.character(METH)

    switch(METH,
        web = browseURL(paste("http://cran.r-project.org/web/packages/",
            LIB, "/", LIB, ".pdf", sep = "")),
        system = {unlink(paste(getwd(), "/", LIB, ".pdf", sep = ""))
            path <- find.package(LIB)
            system(paste(shQuote(file.path(R.home("bin"), "R")), "CMD",
                "Rd2pdf", shQuote(path)))})
}
#=
 
library is the package name.
method is either web or system (web is Internet-based and faster, whereas 
system builds the PDF locally from the installed package's LaTeX code and is slower).
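
For example (a quick usage sketch; plotrix is just an illustrative package name):

manual(plotrix)          # opens the CRAN reference manual PDF in a browser
manual(plotrix, system)  # builds the PDF locally with R CMD Rd2pdf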
 
#=
Thanks for your responses!

Tyler
 

> Date: Wed, 24 Aug 2011 07:12:24 +0100
> From: rip...@stats.ox.ac.uk
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] Opening package manual from within R
> 
> On Tue, 23 Aug 2011, Tyler Rinker wrote:
> 
> >
> > Simple question but searching rseek did not yield the results I wanted.
> >
> > Question: Is there a way to open a help manual for a package from within R.
> >
> > For instance I would like to type a function in r for the tm package 
> > and R would open that PDF as seen here: 
> > http://cran.r-project.org/web/packages/tm/tm.pdf
> >
> > -The vignette function exists for vignettes 
> > [vignette("package.name")] so I assume the same exists for manuals.
> 
> You assume wrong. Vignettes PDFs are installed as part of the package 
> (and often take minutes to regenerate): the PDF version of the help 
> pages (what you seem to call 'the package manual') is not (in 
> general). In many cases what other people (including the author, e.g. 
> me for RODBC) call the 'package manual' is a PDF in the doc directory 
> (which may or may not be a vignette).
> 
> The assumption is that people will use search facilities or the hints 
> given by the help titles in help(package="tm") or browse the HTML 
> version of the same information (e.g. via help.start).
> 
> But you can (provided you have pdflatex etc in your path) generate the 
> PDF version of the help pages by
> 
> R CMD Rd2pdf /path/to/installed/package
> 
> It will even open it in a browser for you (unless you use 
> --no-preview). You could easily encapsulate this in a function by 
> e.g.
> 
> showPDFmanual <- function(package, lib.loc=NULL)
> {
> path <- find.package(package, lib.loc)
> system(paste(shQuote(file.path(R.home("bin"), "R")),
> "CMD", "Rd2pdf",
> shQuote(path)))
> }
> 
> Alternatively *for packages on CRAN only* you can access the version 
> on CRAN by browseURL.
> 
> > -I do not want library(help="package.name") as this is not detailed enough.
> >
> > I am running R 2.14.0 beta on a windows 7 machine
> > Reproducible code does not seem appropriate in this case.
> 
> But accurate 'at a minimum' information (and no HTML) does. There is 
> no such version as '2.14.0 beta', and will not be for a couple of 
> months. If you are running a beta version of R it is old, so please 
> update to a released or patched version. (Also, any version 
> calling itself '2.14.0 Under development' is old and needs updating: 
> the current R-devel displays no version number.)
> 
> 
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
> -- 
> Brian D. Ripley, rip...@stats.ox.ac.uk
> Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel: +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK Fax: +44 1865 272595
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Construct a File Path: File Path Unknown

2011-08-25 Thread Tyler Rinker






I am not a programmer and am self-taught so I may lack the
language to ask this appropriately (perhaps why an rseek search was unfruitful).

 

Let's say I saved a file to my desktop called foo.pdf.  Then I want R to return 
the file path of
foo.pdf (pretend I don't know the location(path) of foo.pdf).

 

Question: How would I get R to return the unknown file path for
foo.pdf.

 

I hypothesize that the find.package() function code contains
the secret for doing this but am unable to parse out the snippet to do so.

 

I attempted file.path("foo.pdf")

which R returns [1] "foo.pdf"  #not what I want

 

===

R version 2.14 beta

Windows 7

Reproducible code is not appropriate for this query

  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Construct a File Path: File Path Unknown

2011-08-25 Thread Tyler Rinker

Jean, Thank you.  It's slow but it works:

dir("C:/", pattern="plotrix.pdf", full.names=T, ignore.case=T, recursive=T)

Does anyone have a faster way?  If it helps, using

shell.exec("search-ms://query=plotrix.pdf")

utilizes Windows's search bar to find the file (this is quick); however, it opens a 
screen that finds the file rather than providing me with the search path.  For 
a look at what that looks like on a Windows machine click here: 
http://windows.microsoft.com/en-US/windows7/products/features/windows-search 
I'm thinking there's a way to use this method to extract the path even faster 
than dir().

Tyler

To: tyler_rin...@hotmail.com
CC: r-help@r-project.org
Subject: Re: [R] Construct a File Path: File Path Unknown
From: jvad...@usgs.gov
Date: Thu, 25 Aug 2011 13:19:37 -0500



Try the dir() function.



?dir



# for example

dir("c:/", pattern="foo.pdf",
full.names=T, ignore.case=T, recursive=T)



Jean





Tyler Rinker wrote on 08/25/2011 11:54:28 AM:

> 

> I am not a programmer and am self-taught so I may lack the

> language to ask this appropriately (perhaps why an rseek search was


> unfruitful).

> 

> Let's say I saved a file to my desktop called foo.pdf.  Then
I want 

> R to return the file path of

> foo.pdf (pretend I don't know the location(path) of foo.pdf).

> 

> Question: How would I get R to return the unknown file path for

> foo.pdf.

> 

> I hypothesize that the find find.package() function code contains

> the secret for doing this but am unable to parse out the snippet to
do so.

> 

> I attempted file.path("foo.pdf")

> 

> which R returns [1] "foo.pdf"  #not what I want

> 

> ===

> 

> R version 2.14 beta

> 

> Windows 7

> 

> Reproducible code is not appropriate for this query

> 

> __

> R-help@r-project.org mailing list

> https://stat.ethz.ch/mailman/listinfo/r-help

> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html

> and provide commented, minimal, self-contained, reproducible code.

  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Construct a File Path: File Path Unknown

2011-08-25 Thread Tyler Rinker


Bill, Thank you very much!  That's very fast.  Exactly what I was looking for. 
Jean, thank you for your response as well.

Tyler

> From: wdun...@tibco.com
> To: tyler_rin...@hotmail.com; jvad...@usgs.gov
> CC: r-help@r-project.org
> Subject: RE: [R] Construct a File Path: File Path Unknown
> Date: Thu, 25 Aug 2011 20:25:15 +
> 
> Try using normalizePath("foo.pdf") after creating
> the file.  It should return an absolute path to
> an existing file.
> 
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com 
> 
> > -Original Message-
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
> > Behalf Of Tyler Rinker
> > Sent: Thursday, August 25, 2011 12:43 PM
> > To: jvad...@usgs.gov
> > Cc: r-help@r-project.org
> > Subject: Re: [R] Construct a File Path: File Path Unknown
> > 
> > 
> > Jean, Thank you.  It's slow but it works. dir("C:/", pattern="plotrix.pdf", 
> > full.names=T,
> > ignore.case=T, recursive=T)   Does anyone have a faster way? If it helps 
> > using:
> > shell.exec("search-ms://query=plotrix.pdf")utilizes windows's search 
> > bar to find the file (this is
> > quick), however it opens a screen that finds the file rather than providing 
> > me with the search path.
> > For a look at what that looks like on a windows machine click here: 
> > http://windows.microsoft.com/en-
> > US/windows7/products/features/windows-search I'm thinking there's a way to 
> > use this method to extract
> > the path even faster than dir(). TylerTo: tyler_rin...@hotmail.com
> > CC: r-help@r-project.org
> > Subject: Re: [R] Construct a File Path: File Path Unknown
> > From: jvad...@usgs.gov
> > Date: Thu, 25 Aug 2011 13:19:37 -0500
> > 
> > 
> > 
> > Try the dir() function.
> > 
> > 
> > 
> > ?dir
> > 
> > 
> > 
> > # for example
> > 
> > dir("c:/", pattern="foo.pdf",
> > full.names=T, ignore.case=T, recursive=T)
> > 
> > 
> > 
> > Jean
> > 
> > 
> > 
> > 
> > 
> > Tyler Rinker wrote on 08/25/2011 11:54:28 AM:
> > 
> > >
> > 
> > > I am not a programmer and am self-taught so I may lack the
> > 
> > > language to ask this appropriately (perhaps why an rseek search was
> > 
> > 
> > > unfruitful).
> > 
> > >
> > 
> > > Let's say I saved a file to my desktop called foo.pdf.  Then
> > I want
> > 
> > > R to return the file path of
> > 
> > > foo.pdf (pretend I don't know the location(path) of foo.pdf).
> > 
> > >
> > 
> > > Question: How would I get R to return the unknown file path for
> > 
> > > foo.pdf.
> > 
> > >
> > 
> > > I hypothesize that the find find.package() function code contains
> > 
> > > the secret for doing this but am unable to parse out the snippet to
> > do so.
> > 
> > >
> > 
> > > I attempted file.path("foo.pdf")
> > 
> > >
> > 
> > > which R returns [1] "foo.pdf"  #not what I want
> > 
> > >
> > 
> > > ===
> > 
> > >
> > 
> > > R version 2.14 beta
> > 
> > >
> > 
> > > Windows 7
> > 
> > >
> > 
> > > Reproducible code is not appropriate for this query
> > 
> > >
> > 
> > > __
> > 
> > > R-help@r-project.org mailing list
> > 
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > 
> > > PLEASE do read the posting guide 
> > > http://www.R-project.org/posting-guide.html
> > 
> > > and provide commented, minimal, self-contained, reproducible code.
> > 
> > 
> > [[alternative HTML version deleted]]
> > 
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Make a function work on an environment

2011-08-26 Thread Tyler Rinker

In my R learning I've come across a situation in which a piece of code that 
works on the workspace outside a function does not work inside the function.

WARNING: THIS EMAIL CONTAINS THE CODE rm(list=ls()), WHICH WILL CLEAR ALL 
OBJECTS FROM YOUR WORKSPACE!

When I use rm(list=ls()) and then ls() it shows character(0).  So I tried to 
make a quick function to speed this up as follows:

#
#ATTEMPT 1
#
clear <- function() rm(list=ls())
clear()
ls()  #all objects are still attached
#
#ATTEMPT 2
#
clear <- function(){
    {CLEAR <- function() rm(list=ls())}
    eapply(globalenv(), CLEAR)
}
clear()
ls()
#
#ERROR MESSAGE FROM ATTEMPT 2
#
> clear()
Error in FUN(list(function (x)  : unused argument(s) (list(function (x) 

QUESTIONS: Why does this code not work inside the function?  Please critique 
both my attempts.  What would I need to do to make the pieces of code work 
inside the function?

Windows 7
R 2.14 beta

Thanks in advance,
Tyler Rinker
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Make a function work on an environment

2011-08-27 Thread Tyler Rinker

A previous attempt at this question resulted in the message running together, 
making it difficult to read and the code lines hard to distinguish.

In my R learning I've come across a situation in which a piece of code that 
works on the workspace outside a function does not work inside the function.

WARNING: THIS EMAIL CONTAINS THE CODE rm(list=ls()), WHICH WILL CLEAR ALL 
OBJECTS FROM YOUR WORKSPACE!

When I use rm(list=ls()) and then ls() it shows character(0).  So I tried to 
make a quick function to speed this up as follows:
 
#
#ATTEMPT 1
#
clear <- function() rm(list=ls())
clear()
ls()  #all objects are still attached
#
#ATTEMPT 2
#
clear <- function(){
    {CLEAR <- function() rm(list=ls())}
    eapply(globalenv(), CLEAR)
}
clear()
ls()
#
#ERROR MESSAGE FROM ATTEMPT 2
#
 clear()
Error in FUN(list(function (x)  : unused argument(s) (list(function (x) 
 
QUESTIONS: Why does this code not work inside the function?  Please critique 
both my attempts.
What would I need to do to make the pieces of code work inside the function? 
Windows 7
R version 2.14 beta 
 
Thanks in advance,
Tyler Rinker

  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Make a function work on an environment

2011-08-27 Thread Tyler Rinker


Michael, Thank you for that information.  It was very insightful.  Anyone else 
know why my second attempt does not work (using eapply)?

Thanks
Tyler

From: michael.weyla...@gmail.com
Date: Sat, 27 Aug 2011 12:01:02 -0400
Subject: Re: [R] Make a function work on an environment
To: tyler_rin...@hotmail.com
CC: r-help@r-project.org

Well, here's one way you could do it: 

# Don't run this unless you really mean it
clear <- function(){rm(list=ls(.GlobalEnv), envir = .GlobalEnv)}

Both calls to .GlobalEnv seem necessary so that both rm() and ls() go 
everywhere with it. However, this certainly isn't the most useful code because 
it clears itself...



I'm not the best with environments so I'll let someone else work out the 
problems with your other attempts, but I believe the problem with the first is 
that it only executes inside the function environment and not the global 
environment. Not sure about the second...



Michael Weylandt


On Sat, Aug 27, 2011 at 9:25 AM, Tyler Rinker  wrote:




A previous attempt at this question resulted in the message running together, 
making the message difficult to read and the code lines hard to distinquinsh. 
In my R learning I've come across a situation in which a piece of code that 
works on the work space outside a function does not work inside the function. 
WARNING THIS EMAIL CONTAINES THE CODE:#rm(list=ls()) THIS WILL CLEAR ALL 
OBJECTS FROM YOUR WORKSPACE! When I use rm(list=ls()) and then ls() it shows 
character(0) So I tried to make a quick function to speed this up as follows:





#

#ATTEMPT 1

#

clear <- function()rm(list=ls())clear()

ls()  #all objects are still attached

#

#ATTEMPT 2

#

clear <- function(){

{CLEAR <- function()rm(list=ls())}

eapply(globalenv(),CLEAR)

}clear()ls()

#

#ERROR MESSAGE FRPM ATTEMPT 2

#

 clear()

Error in FUN(list(function (x)  : unused argument(s) (list(function (x)



QUESTIONS:Why does this code not work inside the function?  Please critique 
both my attempts.

What would I need to do to make the pieces of code work inside the function?

Windows 7

R version 2.14 beta



Thanks in advance,

Tyler Rinker





[[alternative HTML version deleted]]



__

R-help@r-project.org mailing list

https://stat.ethz.ch/mailman/listinfo/r-help

PLEASE do read the posting guide http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Make a function work on an environment

2011-08-27 Thread Tyler Rinker

Michael,

Michael wrote: "However, this certainly isn't the most useful code because it 
clears itself..."

If you were to put this code in a package, .Rdata file, or .First() script it 
could be recalled in that way, so it could serve a purpose.  The exercise was 
more about me learning how to apply the function to the global environment, 
though.  You were certainly helpful there.
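
(A short note for readers of the archive, since the eapply() question was left 
open: eapply(env, FUN) calls FUN once per object in the environment, passing 
each object as an argument, but CLEAR is defined to take no arguments, hence 
the "unused argument(s)" error.  Attempt 1 fails for a different reason: inside 
a function, ls() and rm() default to the function's own local environment 
rather than the workspace.  A minimal sketch of that second point:)

clear_local <- function() {
    x_local <- 1
    rm(list = ls())    # removes only objects local to this call
}
x <- 1
clear_local()
exists("x")            # TRUE: the global 'x' is untouched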


 From: michael.weyla...@gmail.com
Date: Sat, 27 Aug 2011 12:01:02 -0400
Subject: Re: [R] Make a function work on an environment
To: tyler_rin...@hotmail.com
CC: r-help@r-project.org

Well, here's one way you could do it: 

# Don't run this unless you really mean it
clear <- function(){rm(list=ls(.GlobalEnv), envir = .GlobalEnv)}

Both calls to .GlobalEnv seem necessary so that both rm() and ls() go 
everywhere with it. However, this certainly isn't the most useful code because 
it clears itself...



I'm not the best with environments so I'll let someone else work out the 
problems with your other attempts, but I believe the problem with the first is 
that it only executes inside the function environment and not the global 
environment. Not sure about the second...



Michael Weylandt


On Sat, Aug 27, 2011 at 9:25 AM, Tyler Rinker  wrote:




A previous attempt at this question resulted in the message running together, 
making the message difficult to read and the code lines hard to distinquinsh. 
In my R learning I've come across a situation in which a piece of code that 
works on the work space outside a function does not work inside the function. 
WARNING THIS EMAIL CONTAINES THE CODE:#rm(list=ls()) THIS WILL CLEAR ALL 
OBJECTS FROM YOUR WORKSPACE! When I use rm(list=ls()) and then ls() it shows 
character(0) So I tried to make a quick function to speed this up as follows:





#

#ATTEMPT 1

#

clear <- function()rm(list=ls())clear()

ls()  #all objects are still attached

#

#ATTEMPT 2

#

clear <- function(){

{CLEAR <- function()rm(list=ls())}

eapply(globalenv(),CLEAR)

}clear()ls()

#

#ERROR MESSAGE FRPM ATTEMPT 2

#

 clear()

Error in FUN(list(function (x)  : unused argument(s) (list(function (x)



QUESTIONS:Why does this code not work inside the function?  Please critique 
both my attempts.

What would I need to do to make the pieces of code work inside the function?

Windows 7

R version 2.14 beta



Thanks in advance,

Tyler Rinker





[[alternative HTML version deleted]]



__

R-help@r-project.org mailing list

https://stat.ethz.ch/mailman/listinfo/r-help

PLEASE do read the posting guide http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] url prep function (backslash issue)

2011-08-30 Thread Tyler Rinker



Greetings R Community,

I am a Windows user so this problem may be specific to Windows. I often want to 
source files from within R, such as: 
C:\Users\Rinker\Desktop\Research & Law\Data\School Data 09-10. To source this 
file I need to go through the path and replace all the backslashes (\) with 
forward slashes (/). I usually do this in MS Word using the replace option; 
however, I'd like to do this in R. Attempting to write a function to do this 
runs into problems.

When I enter the following:

readyPath <- function(path){
    z <- gsub("\", "/", path)
    return(z)
}

I get:

> readyPath <- function(path){
+ z <- gsub("\", "/", path)
+ return(z)
+ }
+

...meaning R can't close the sequence (presumably because of the backslash, 
which has special meaning).

So I tried (\\):

readyPath <- function(path){
    z <- gsub("\\", "/", path)
    return(z)
}

This allows the function to be stored as an object, but I'm not sure if this is 
correct.

When I try the function the backslash gets me again:

> readyPath("C:\Users\Rinker\Desktop\Research & Law\Data\School Data 09-10")
Error: '\U' used without hex digits in character string starting "C:\U"

This is what I'd like the function to return:

[1] "C:/Users/Rinker/Desktop/Research & Law/Data/School Data 09-10"

I want a function in which I enter a path and it returns the path with 
backslashes replaced with forward slashes. Is there a way to make a function to 
do this?

Windows 7 user

R version 2.14 beta

Thank you,

Tyler Rinker
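
(Editorial note for archive readers: as the replies below explain, the 
backslash is both an escape character to the R parser and a regular-expression 
metacharacter, so gsub("\\", "/", path) does not do what it appears to.  A small 
sketch of calls that do work on a properly escaped string; chartr() is the fix 
suggested later in this thread, and fixed = TRUE is an alternative not mentioned 
there:)

path <- "C:\\Users\\Rinker\\Desktop\\School Data 09-10"   # backslashes escaped
chartr("\\", "/", path)
gsub("\\", "/", path, fixed = TRUE)
# both return "C:/Users/Rinker/Desktop/School Data 09-10"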

 

  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] url prep function (backslash issue)

2011-08-30 Thread Tyler Rinker

Thank you Brian. When I wrote the email I typed url into the subject line by 
accident.  I meant path.

Thank you,
Tyler

 > Date: Tue, 30 Aug 2011 14:00:22 +0100
> From: rip...@stats.ox.ac.uk
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] url prep function (backslash issue)
> 
> You seem to be looking for chartr("\\", "/", path) (and FAQ Q7.8)
> 
> What does any of this have to do with 'url prep': URLs are never 
> written with backslashes?
> 
> On Tue, 30 Aug 2011, Tyler Rinker wrote:
> 
> >
> >
> >
> > Greeting R
> > Community,
> >
> > I am a
> > windows user so this problem may be specific to windows. I often want to 
> > source
> > files from within R
> >
> > such as:
> > C:\Users\Rinker\Desktop\Research & Law\Data\School Data 09-10. To source
> > this file I need to go
> >
> > through the
> > path and replace all the backslashes (\) with forward slashes (/). I 
> > usually do
> > this in MS Word
> >
> > using the
> > replace option, however, I'd like to do this in R. Attempting to write a
> > function to do this runs into
> >
> > problems:
> >
> > When I
> > enter the following:
> >
> > readyPath
> > <- function(path){
> >
> > z <- gsub("\", "/", path)
> >
> > return(z)
> >
> > }
> >
> > I get:
> >
> >>
> > readyPath <- function(path){
> >
> > + z <- gsub("\", "/", path)
> >
> > + return(z)
> >
> > + }
> >
> > +
> >
> > ...meaning
> > R can't close the sequence (presumably because of the backslash which has
> > special meaning).
> >
> > So I tried
> > (\\):
> >
> >
> >
> > readyPath <- function(path){
> >
> > z <- gsub("\\", "/", path)
> >
> > return(z)
> >
> > }This allows
> > the function to be stored as an object but I'm not sure if this is correct.
> 
> It isn't: please do read the help for gsub (\ is a metacharacter).
> 
> > When I try
> > the function the backslash gets me again:
> >
> >>
> > readyPath("C:\Users\Rinker\Desktop\Research & Law\Data\School Data
> > 09-10")
> >
> > Error: '\U' used without hex digits in character string starting
> > "C:\U"
> 
> You cannot do that: you have to scan a file or escape \
> 
> > This is
> > what I'd like the function to return:
> >
> > [1]
> > "C:/Users/Rinker/Desktop/Research & Law/Data/School Data 09-10"
> >
> > I want a
> > function in which I enter a path and it returns the path with backslashes
> >
> > replaced
> > with forward slashes. Is there a way to make a function to do this?
> 
> ?normalizePath
> chartr("\\", "/", path)
> 
> > Windows 7
> > user
> >
> > R version
> > 2.14 beta
> >
> > Thank you,
> >
> > Tyler
> > Rinker
> >
> >
> >
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
> -- 
> Brian D. Ripley,  rip...@stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UKFax:  +44 1865 272595
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] url prep function (backslash issue)

2011-08-30 Thread Tyler Rinker

Duncan, Thanks.  Combined with what Brian Ripley wrote it all works.  For 
future thread searchers this worked:

oldstring <- readline()
C:\Users\Rinker\Desktop\Research & Law\Data\School Data 09-10
chartr("\\", "/", oldstring)

Thank you both,
Tyler

> Date: Tue, 30 Aug 2011 09:35:58 -0400
> From: murdoch.dun...@gmail.com
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] url prep function (backslash issue)
> 
> Brian Ripley told you how to do the translation, but there's another 
> problem:
> 
> On 30/08/2011 8:14 AM, Tyler Rinker wrote:
> 
> [ much deleted ]
> > When I try
> > the function the backslash gets me again:
> >
> > >
> > readyPath("C:\Users\Rinker\Desktop\Research&  Law\Data\School Data
> > 09-10")
> 
> The problem is that you haven't entered a string containing backslashes, 
> you've tried to enter a string containing escapes.  The parser sees a 
> single backslash and attaches it to the next letter, so \U is taken to 
> be the start of a Unicode character, and you get the error
> > Error: '\U' used without hex digits in character string starting
> > "C:\U"
> >
> 
> The way around this is to avoid the parser, by something like this:
> 
> oldstring <- readline()
> 
> C:\Users\Rinker\Desktop\Research&  Law\Data\School Data 09-10
> 
> 
> and then applying chartr to oldstring.
> 
> Duncan Murdoch
> 
> 
> > This is
> > what I'd like the function to return:
> >
> > [1]
> > "C:/Users/Rinker/Desktop/Research&  Law/Data/School Data 09-10"
> >
> > I want a
> > function in which I enter a path and it returns the path with backslashes
> >
> > replaced
> > with forward slashes. Is there a way to make a function to do this?
> >
> > Windows 7
> > user
> >
> > R version
> > 2.14 beta
> >
> > Thank you,
> >
> > Tyler
> > Rinker
> >
> >
> >
> > 
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generating data when mean and 95% CI are known

2011-09-07 Thread Tyler Hicks

Is there a function in R that will generate data from a known mean and 95% CI? 
I do not know the distribution or sample size of the original data.
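
(Editorial note: no single function can do this without further assumptions, 
since the distribution and sample size are unknown.  One rough, purely 
illustrative sketch treats the interval as the central 95% of a normal 
population; every number below is made up.)

m  <- 10                              # hypothetical reported mean
ci <- c(8, 12)                        # hypothetical reported 95% interval
s  <- diff(ci) / (2 * qnorm(0.975))   # implied SD under the normal assumption
x  <- rnorm(100, mean = m, sd = s)    # 100 is an arbitrary sample size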

Cheers, 



Tyler L Hicks
PhD Student
Washington State University - Vancouver

E-mail: tyler_hi...@wsu.edu
Website: www.thingswithwings.org

"Back off man, I'm a scientist!" - Bill Murray, Ghostbusters 
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Package dependency

2011-09-20 Thread Tyler Rinker

Greetings R community,
 
I am making my first package and have run into the need to use other packages.  
I  pass all the checks in the command prompt running Rcmd check package.name.  
In the Description file I have included:
 
Depends: R (>= 2.13), plotrix
Repository: CRAN
 
Now I create the zip file for Windows 7.  I delete the plotrix package from my 
library to recreate the setup others might encounter when installing my 
package (perhaps they don't have the dependency plotrix installed).  I now 
install the zip file in my R library and try to load it in R, generating the 
following error:
 
> library(genTools)
Loading required package: plotrix
Error: package ‘plotrix’ could not be loaded
In addition: Warning message:
In library(pkg, character.only = TRUE, logical.return = TRUE, lib.loc = 
lib.loc) :
  there is no package called ‘plotrix’
 
Now the question:  How do I get my package to automatically download 
dependencies from CRAN as other CRAN packages do when I install them to my 
library for the first time?
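
(Editorial note, not from the original thread: installing a package from a 
local zip with install.packages(..., repos = NULL) does not resolve Depends, 
which is why the behaviour differs from installing directly from CRAN.  A 
hedged sketch of the usual workaround; the zip file name below is hypothetical:)

install.packages("plotrix")                          # pull the dependency from CRAN first
install.packages("genTools_1.0.zip", repos = NULL)   # then install the local zip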
 
Tyler Rinker
 
R version 2.14 (beta)   
Windows 7 
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R help on write.csv

2011-09-21 Thread Tyler Rinker

You haven't followed the rules of the posting guide.  No reproducible code.  No 
OS or R version.  I'm guessing you are a newer R user and didn't know this.  So 
please read that guide.  It'll help others to help you more quickly.
 
If you're new you may not know about using ?object.  So if you type ?write.csv 
into the R console it will take you to a help page.  There you will see info 
about the function, and in this case append is the argument you will most 
likely want to look at.
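
(One caveat worth adding for archive readers: write.csv() is a deliberately 
inflexible wrapper and, at least in the documentation of this era, attempts to 
change append are ignored with a warning, so appending is usually done with 
write.table() directly.  A minimal sketch with made-up data and file name:)

new_rows <- data.frame(x = 1:2, y = c("a", "b"))
write.table(new_rows, "existing.csv", sep = ",", append = TRUE,
            row.names = FALSE, col.names = FALSE)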
 
Tyler

 

> From: ashish.ku...@esteeadvisors.com
> To: r-help@r-project.org
> Date: Wed, 21 Sep 2011 17:31:01 +0530
> Subject: [R] R help on write.csv
> 
> Hi,
> 
> 
> 
> I wanted to write the data created using R on existing csv file. However
> everytime I use write.csv, it overwrites the values already there in the
> existing csv file. Any workaround on this. 
> 
> 
> 
> Thanks for your help
> 
> 
> 
> Ashish Kumar
> 
> 
> 
> Estee Advisors Pvt. Ltd.
> 
> Email: ashish.ku...@esteeadvisors.com
> 
> Cell: +91-9654072144
> 
> Direct: +91-124-4637-713
> 
> 
> 
> 
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to simulate Likert-type data using R

2011-06-26 Thread Tyler Rinker

?sample
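
(For archive readers, a minimal sketch of what the ?sample hint points at; the 
category probabilities below are arbitrary:)

random.data <- matrix(sample(1:4, 200 * 15, replace = TRUE,
                             prob = c(0.1, 0.2, 0.4, 0.3)),
                      nrow = 200, ncol = 15)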
 

Date: Sun, 26 Jun 2011 02:26:10 -0700
From: wjca...@hotmail.com
To: r-help@r-project.org
Subject: [R] how to simulate Likert-type data using R

Dear R members 
 
Could someone tell me how to simulate Likert-type data using the rnorm
function. 
 
Let's say, 200*15 random numbers in a variable that goes from 1 to 4 in
steps of 
1 (i.e., 1, 2, 3, 4)  belonging to a normal distribution? 
 
random.data <- matrix(rnorm(200 * 15), nrow = 200, ncol = 15) 
random.data 
 
The result cannot be reached.
 
could one help me revise the syntax?
 
Cheers 
 
cao
 
 
--
View this message in context: 
http://r.789695.n4.nabble.com/how-to-simulate-Likert-type-data-using-R-tp3625664p3625664.html
Sent from the R help mailing list archive at Nabble.com.
[[alternative HTML version deleted]]
 

__ R-help@r-project.org mailing 
list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting 
guide http://www.R-project.org/posting-guide.html and provide commented, 
minimal, self-contained, reproducible code. 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] library(doBy) will not load

2011-06-28 Thread Tyler Rinker

Greetings R Community,
 
One of my favorite packages won't load and I'm not sure why.  It loaded earlier 
today.  The problem appears to be with the snow package, which doBy requires.  
I tried reinstalling both packages again, shutting [R] down, and reinstalling 
[R] in the workspace (shortcut).  Here's the weird thing: the same exact 
library loads in another workspace using the same version of [R] with no problem.
 
OS:  Windows 7 
R version 2.14 (in development)
 
> library(doBy)
Loading required package: survival
Loading required package: splines
Loading required package: R2HTML
Loading required package: multcomp
Loading required package: mvtnorm
Loading required package: lme4
Loading required package: Matrix
Loading required package: lattice
Attaching package: ‘Matrix’
The following object(s) are masked from ‘package:base’:
det

Attaching package: ‘lme4’
The following object(s) are masked from ‘package:stats’:
AIC, BIC
Loading required package: snow
Error in as.character(t) : 't' is missing
Error: package ‘snow’ could not be loaded
> recodeVar
Error: object 'recodeVar' not found
 
Thank you in advance,
Tyler
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] library(doBy) will not load

2011-06-28 Thread Tyler Rinker

This is the error I get when I try to load snow.
 
> library(snow)
Error in as.character(t) : 't' is missing
Error in library(snow) : .First.lib failed for ‘snow’
 

From: tyler_rin...@hotmail.com
To: r-help@r-project.org
Date: Tue, 28 Jun 2011 22:47:06 -0400
Subject: [R] library(doBy) will not load

 
Greetings R Community,
 
One of my favorite packages won't load and I'm not sure why.  It loaded earlier 
today. The problem appears with the snow package, which doBy requires.  I tried 
reinstalling both packages again ,shutting [R] down, reinstalling [R] in the 
workspace (shortcut). Here's the weird thing.  The same exact library loads in 
another workspace using the same version of [R] with no problem.
 
OS:  Windows 7 
R version 2.14 (in development)
 
> library(doBy)
Loading required package: survival
Loading required package: splines
Loading required package: R2HTML
Loading required package: multcomp
Loading required package: mvtnorm
Loading required package: lme4
Loading required package: Matrix
Loading required package: lattice
Attaching package: ‘Matrix’
The following object(s) are masked from ‘package:base’:
det
 
Attaching package: ‘lme4’
The following object(s) are masked from ‘package:stats’:
AIC, BIC
Loading required package: snow
Error in as.character(t) : 't' is missing
Error: package ‘snow’ could not be loaded
> recodeVar
Error: object 'recodeVar' not found
 
Thank you in advance,
Tyler
  
[[alternative HTML version deleted]]
 

__ R-help@r-project.org mailing 
list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting 
guide http://www.R-project.org/posting-guide.html and provide commented, 
minimal, self-contained, reproducible code. 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] library(doBy) will not load

2011-06-28 Thread Tyler Rinker

Problem solved.  I lazily forgot to clean up my workspace with rm(list=ls()).  
A good reminder to do so after each exit.  Sorry for the wasted server space.
 

From: tyler_rin...@hotmail.com
To: r-help@r-project.org
Date: Tue, 28 Jun 2011 22:58:47 -0400
Subject: Re: [R] library(doBy) will not load

 
This is the error I get when I try to load snow.
 
> library(snow)
Error in as.character(t) : 't' is missing
Error in library(snow) : .First.lib failed for ‘snow’
 
 
From: tyler_rin...@hotmail.com
To: r-help@r-project.org
Date: Tue, 28 Jun 2011 22:47:06 -0400
Subject: [R] library(doBy) will not load
 
 
Greetings R Community,
 
One of my favorite packages won't load and I'm not sure why.  It loaded earlier 
today. The problem appears with the snow package, which doBy requires.  I tried 
reinstalling both packages again ,shutting [R] down, reinstalling [R] in the 
workspace (shortcut). Here's the weird thing.  The same exact library loads in 
another workspace using the same version of [R] with no problem.
 
OS:  Windows 7 
R version 2.14 (in development)
 
> library(doBy)
Loading required package: survival
Loading required package: splines
Loading required package: R2HTML
Loading required package: multcomp
Loading required package: mvtnorm
Loading required package: lme4
Loading required package: Matrix
Loading required package: lattice
Attaching package: ‘Matrix’
The following object(s) are masked from ‘package:base’:
det
 
Attaching package: ‘lme4’
The following object(s) are masked from ‘package:stats’:
AIC, BIC
Loading required package: snow
Error in as.character(t) : 't' is missing
Error: package ‘snow’ could not be loaded
> recodeVar
Error: object 'recodeVar' not found
 
Thank you in advance,
Tyler
  
[[alternative HTML version deleted]]
 
 
__ R-help@r-project.org mailing 
list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting 
guide http://www.R-project.org/posting-guide.html and provide commented, 
minimal, self-contained, reproducible code. 
[[alternative HTML version deleted]]
 

__ R-help@r-project.org mailing 
list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting 
guide http://www.R-project.org/posting-guide.html and provide commented, 
minimal, self-contained, reproducible code. 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Storing and managing custom R functions for re-use

2011-07-09 Thread Tyler Rinker

I personally place functions like this in my .First() function in my .Rprofile, 
making them instantly accessible.  I also keep a function called my.fun() which 
returns a data frame containing a column of all the function names, one for 
their arguments, and a brief description.  This also goes in .First().  Again, 
this makes it easy to quickly reference what you have and makes all your 
functions quickly accessible.
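
For illustration, the arrangement might look roughly like this in a .Rprofile 
(the function names here are invented):

.First <- function() {
    ## a personal utility, made available in every session
    assign("pct", function(x) round(100 * x / sum(x), 1), envir = globalenv())
    ## a small catalogue of the custom functions
    assign("my.fun", function() data.frame(
        name        = "pct",
        args        = "x",
        description = "percentages of a numeric vector",
        stringsAsFactors = FALSE), envir = globalenv())
}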
 
Tyler 
 

> Date: Sat, 9 Jul 2011 13:30:51 +0200
> From: s.chamai...@yahoo.fr
> To: r-help@r-project.org
> Subject: [R] Storing and managing custom R functions for re-use
> 
> Dear all,
> 
> sorry if this is a bit on the sidetrack for R-help.
> 
> As a regular R user I have developed quite a lot of custom R functions, 
> to the point of not always remembering what I have already programmed, 
> where the file is and so on.
> I was wondering what other people do in this regards. A basic file with 
> all your functions, or a custom R package, or directly integrated into a 
> profile file ??? I'm considering that a blog with tagged posts may be a 
> good solution (and really good ones could join R-bloggers maybe).
> 
> If someone is happy to share what (s)he considers good practice, thanks.
> 
> simon
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] (no subject)

2011-07-14 Thread Tyler Rinker


Good Afternoon R Community,
 
I often work with very large databases and want to search for select cases by 
a particular word or numeric value.  I created the following simple function to 
do just that.  It searches a particular column for the phrase and returns a data 
frame with the rows that contain that phrase (for that column).  
 
Search <- function(term, dataframe, column.name, variation = .02, ...){
    te <- substitute(term)
    te <- as.character(te)
    cn <- substitute(column.name)
    cn <- as.character(cn)
    HUNT <- agrep(te, dataframe[, cn], ignore.case = TRUE,
        max.distance = variation, ...)
    dataframe[c(HUNT), ]
}
 
I would like to modify this to search all columns for the phrase, keep only the 
unique rows, and return a data frame of the rows (minus repeats) that contain 
the phrase in any column.
 
I assumed this would be an easy task for me using sapply() and unique() or 
union().  Because the function takes more than one argument (the vector/column 
is not the only argument) I don't know how to set it up.  Could someone tell me 
how to apply this function to multiple columns and return one data frame with 
all the agrep matches (I'll figure out how to deal with duplicates after that; 
that's the easy part)?
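
(One rough, untested sketch of an all-columns version, in case it helps future 
readers; it assumes every column can be coerced to character for agrep() and 
already drops duplicate rows via unique():)

SearchAll <- function(term, dataframe, variation = .02, ...){
    te <- as.character(substitute(term))
    hits <- lapply(dataframe, function(col)
        agrep(te, as.character(col), ignore.case = TRUE,
              max.distance = variation, ...))
    dataframe[sort(unique(unlist(hits))), ]
}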
 
Thank you in advance for your help,
Tyler Rinker
 
PS if your idea is a for loop please explain it well or provide the code 
because I do not have a programming background and for loops are very difficult 
to wrap my head around.
 
Running windows 7
R version 2.14.0 (beta)   

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] color of math annotation in legend

2011-07-28 Thread Tyler Rinker

Use the text.col argument as below
?legend
 
x=y=1:100
z=seq(0.5,50,by=0.5)
plot(x,y,type='l',col='black')
lines(x,z,col='red')
legend('topleft',c(expression(paste(alpha," = ", 1)),
expression(paste(alpha," = ", 2))),text.col=c("black","red"))
 
 

 

> Date: Thu, 28 Jul 2011 02:32:04 -0500
> From: zhongyi-y...@uiowa.edu
> To: r-help@r-project.org
> Subject: [R] color of math annotation in legend
> 
> Dear useRs,
> 
> Can someone help me to adjust the color of math annotation in a legend? The
> following code gives me a black "alpha = 2".
> 
> x=y=1:100
> z=seq(0.5,50,by=0.5)
> plot(x,y,type='l',col='black')
> lines(x,z,col='red')
> legend('topleft',c(expression(paste(alpha," = ", 1)),
> expression(paste(alpha," = ", 2))),col=c('black','red'))
> 
> Is it possible to make it red? Thanks in advance for your help.
> 
> Best,
> Zhongyi Yuan
> 
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ifelse returns

2011-07-29 Thread Tyler Rinker

Greetings R Community,
 
I am working with the ifelse function and it is returning something unexpected. 
In the code below, the line with the MODE1 assignment outputs a vector, 
[1] 4 5 6, but when I put the MODE1 object into the ifelse function, R's output 
for MODE1 is only the first number of that vector (4).  Why is this?  Given the 
supplied vector x I would assume both the MODE1 and ifelse() lines would return 
the same result.  I would like the ifelse to return the entire vector [1] 4 5 6, 
as in the previous line.
 
OS: Win7
R version 2.14 beta


#===
#Beginning of code
#===
x <- c(2,3,4,4,5,5,6,6,8,10)

df <- as.data.frame(table(x))
df <- df[order(df$Freq),]
m <- max(df$Freq)
(MODE1 <- as.vector(as.numeric(as.character(subset(df, Freq==m)[,1]))))
ifelse(sum(df$Freq)/length(df$Freq)==1, warning("No Mode: Frequency of all values is 1", call. = FALSE), MODE1)
#===
# End of code
#===
 
R Console Output
> (MODE1 <- as.vector(as.numeric(as.character(subset(df, Freq==m)[,1]))))
[1] 4 5 6
> ifelse(sum(df$Freq)/length(df$Freq)==1, warning("No Mode: Frequency of all values is 1", call. = FALSE), MODE1)
[1] 4
 
Thank you in advance,
Tyler 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ifelse returns

2011-07-29 Thread Tyler Rinker

Thanks to Jim and Michael for their responses.  The suggestion to use regular 
if/else was spot on.
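
(A short note for archive readers on the why: ifelse() is vectorised over its 
test and returns a result the same length as the test, so a length-one test 
yields only the first element of MODE1.  Plain if/else returns whichever 
complete object is chosen, e.g.:)

if (sum(df$Freq)/length(df$Freq) == 1) {
    warning("No Mode: Frequency of all values is 1", call. = FALSE)
} else {
    MODE1
}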
 

> From: tyler_rin...@hotmail.com
> To: r-help@r-project.org
> Date: Sat, 30 Jul 2011 01:48:21 -0400
> Subject: [R] ifelse returns
> 
> 
> Greetings R Community,
> 
> I am working with the ifelse function and it is returning something 
> unexpected. In the code the line with the MODE1 assignment the output is a 
> vector [1] 4 5 6 but when I put the MODE1 object into the ifelse function 
> [R}'s output for MODE1 is the first number from the string (4). Why is this? 
> Given the supplied vector of x I would assume both the MODE1 and ifelse() 
> lines to return the same result. I would like the ifelse to return the entire 
> vector [1] 4 5 6 as in the previous line.
> 
> OS: Win7
> R version 2.14 beta
> 
> 
> #===
> # Beginning of code
> #===
> x<-c(2,3,4,4,5,5,6,6,8,10)
> 
> df<-as.data.frame(table(x)) 
> df<-df[order(df$Freq),] 
> m<-max(df$Freq) 
> (MODE1<-as.vector(as.numeric(as.character(subset(df,Freq==m)[,1]
> ifelse(sum(df$Freq)/length(df$Freq)==1,warning("No Mode: Frequency of all 
> values is 1", call. = FALSE),MODE1)
> #===
> # End of code
> #===
> 
> R Console Output
> > (MODE1<-as.vector(as.numeric(as.character(subset(df,Freq==m)[,1]
> [1] 4 5 6
> > ifelse(sum(df$Freq)/length(df$Freq)==1,warning("No Mode: Frequency of all 
> > values is 1", call. = FALSE),MODE1)
> [1] 4
> 
> Thank you in advance,
> Tyler 
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] First value in a row

2012-07-24 Thread Tyler Rinker

This would work:
X <- lapply(1:nrow(dat1), function(i) rev(dat1[i, -c(1:2)]))
sapply(X, function(x) x[!is.na(x)][1])


> Date: Tue, 24 Jul 2012 23:56:17 -0300
> From: cm...@dal.ca
> To: smartpink...@yahoo.com
> CC: r-help@r-project.org; henrik.singm...@psychologie.uni-freiburg.de
> Subject: Re: [R] First value in a row
> 
> Hi Henrik and Arun,
> 
> I now understand the script you provided. Very smart solution I think.  
> I wonder, however, if there is an alternative way as to count the last  
> number in a row?.
> For instance, considering the following dataframe
> 
> dat1<-read.table(text="
> Lat  Lon  x1  x2  x3
> 0112  .4  .5  .6
> 0112  .2  .3  NA
> 0111  .1  NA  NA
> 0110  NA  NA  NA
> ",sep="",header=TRUE)
> 
> the last value (from left to right) should be:
> .6
> .3
> .1
> NA
> 
> NAs are always consecutive once they appear.
> 
> Thanks again,
> 
> Camilo
> 
> 
> Camilo Mora, Ph.D.
> Department of Geography, University of Hawaii
> Currently available in Colombia
> Phone:   Country code: 57
>   Provider code: 313
>   Phone 776 2282
>   From the USA or Canada you have to dial 011 57 313 776 2282
> http://www.soc.hawaii.edu/mora/
> 
> 
> 
> Quoting arun :
> 
> > Hi Henrik,
> >
> > Thanks for testing it to a different dataset.  I didn't test it at  
> > that time to multiple conditions.  Probably, apply is a better method.
> >
> >
> > Anyway, you can still get the same result by doing this:
> >
> > dat1<-read.table(text="
> > Lat  Lon  x1  x2  x3
> > 0110  NA  NA  .1
> > 0111  .4  NA  .3
> > 0112  NA  .5  .6
> > ",sep="",header=TRUE)
> > dat2<-data.frame(t(dat1[,3:5]))
> > dat3<-data.frame(dat1,NewColumn=unlist(lapply(dat2,function(x)  
> > x[!is.na(x)][1])))
> >  row.names(dat3)<-1:nrow(dat3)
> >  dat3
> > #  Lat Lon  x1  x2  x3 NewColumn
> > #1   1  10  NA  NA 0.1   0.1
> > #2   1  11 0.4  NA 0.3   0.4
> > #3   1  12  NA 0.5 0.6   0.5
> >
> > #Now, to a slightly different dataset
> > dat1<-read.table(text="
> > Lat  Lon  x1  x2  x3
> > 0110  NA  NA  NA
> > 0111  NA  NA  .3
> > 0112  NA  .6   NA
> > ",sep="",header=TRUE)
> >  dat2<-data.frame(t(dat1[,3:5]))
> >  dat3<-data.frame(dat1,NewColumn=unlist(lapply(dat2,function(x)  
> > x[!is.na(x)][1])))
> >   row.names(dat3)<-1:nrow(dat3)
> >   dat3
> >   #Lat Lon x1  x2  x3 NewColumn
> > #1   1  10 NA  NA  NANA
> > #2   1  11 NA  NA 0.3   0.3
> > #3   1  12 NA 0.6  NA   0.6
> >
> >
> > I hope this works well.
> >
> >
> > A.K.
> >
> >
> >
> >
> > - Original Message -
> > From: Henrik Singmann 
> > To: arun 
> > Cc: Camilo Mora ; R help 
> > Sent: Tuesday, July 24, 2012 10:18 AM
> > Subject: Re: First value in a row
> >
> > Hi,
> >
> > As Arun's idea was also my first idea let me pinpoint the problem of  
> > this solution.
> > It only works if the data in question (i.e., columns x1 to x3)  
> > follow the pattern of the example data insofar that the NAs form a  
> > triangle like structure. This is so because it loops over columns  
> > instead of rows and takes advantage of the triangle NA structure.
> >
> > For example, slightly changing the data leads to a result that does  
> > not follow the description of Camilo seem to want:
> >
> > dat1<-read.table(text="
> > Lat  Lon  x1  x2  x3
> > 0110  NA  NA  .1
> > 0111  .4  NA  .3
> > 0112  NA  .5  .6
> > ",sep="",header=TRUE)
> >
> > # correct answer from description would be .1, .4, .5
> >
> > # arun's solution:
> > data.frame(dat1,NewColumn=rev(unlist(lapply(dat1[,3:5],function(x)  
> > x[!is.na(x)][1]
> >
> > #  x3  x2  x1
> > # 0.1 0.5 0.4
> >
> > # my solution:
> > apply(dat1[,-(1:2)], 1, function(x) x[!is.na(x)][1])
> >
> > # [1] 0.1 0.4 0.5
> >
> > So the question is, what you want and how the data looks.
> >
> > Cheers,
> > Henrik
> >
> >
> > Am 24.07.2012 14:27, schrieb arun:
> >> Hi,
> >>
> >> Try this:
> >>
> >> dat1<-read.table(text="
> >> Lat  Lon  x1  x2  x3
> >> 0110  NA  NA  .1
> >> 0111  NA  .2  .3
> >> 0112  .4  .5  .6
> >> ",sep="",header=TRUE)
> >>
> >> dat2<-dat1[,3:5]
> >> 
> >> dat3<-data.frame(dat1,NewColumn=rev(unlist(lapply(dat2,function(x)  
> >> x[!is.na(x)][1]
> >> row.names(dat3)<-1:nrow(dat3)
> >>dat3
> >> Lat Lon  x1  x2  x3 NewColumn
> >> 1   1  10  NA  NA 0.1   0.1
> >> 2   1  11  NA 0.2 0.3   0.2
> >> 3   1  12 0.4 0.5 0.6   0.4
> >>
> >> A.K.
> >>
> >>
> >>
> >>
> >> - Original Message -
> >> From: Camilo Mora 
> >> To: r-help@r-project.org
> >> Cc:
> >> Sent: Tuesday, July 24, 2012 2:48 AM
> >> Subject: [R] First value in a row
> >>
> >> Hi.
> >>
> >> This is likely a trivial problem but have not found a solution.  
> >> Imagine the following dataframe:
> >>
> >> Lat   Lon  x1   x2  x3
> >> 0110   NA   NA  .1
> >> 0111   NA   .2  .3
> >> 0112   .4   .5  .6
> >>
> >> I want to generate another column that consist of the first value  
> >> in each row from columns x1 to x3. That is
> >>
> >> NewColu

[R] Problem creating reference manuals from latex

2011-11-14 Thread Tyler Rinker

R Community,
 
I often am in need of viewing the reference manuals of packages and do not have 
Internet access.  I have used the code:
 
path <- find.package('tm')
system(paste(shQuote(file.path(R.home("bin"), "R")),"CMD", 
"Rd2pdf",shQuote(path)))
 
someone kindly provided from this help list to generate the manuals from the 
LaTeX files.  This worked well with R version 2.13.  After the upgrade to R 
2.14 I use this code (see below) and get an error message I don't understand.  
I'm pretty sure "! LaTeX Error: File `inconsolata.sty' not found." is important, 
but I don't get its significance.  There's a post about it here: 
http://r.789695.n4.nabble.com/inconsolata-font-for-building-vignettes-with-R-devel-td3838176.html
but I am a Windows user, making this a moot point.  I know this file is an R 
fonts file that MiKTeX needs to build the manual.
 
I'd like to be able to generate the reference manuals again without the Internet.  
While the code above worked in the past, I'm open to alternative methods.
 
Version: R 2.14.0 2011-10-31
OS: Windows 7
Latex: MikTex 2.9
 
Thank you
Tyler Rinker

 
>  path <- find.package('tm')
>  system(paste(shQuote(file.path(R.home("bin"), "R")),"CMD", 
> "Rd2pdf",shQuote(path)))
Hmm ... looks like a package
Converting parsed Rd's to LaTeX ...
Creating pdf output from LaTeX ...
Warning: running command '"C:\PROGRA~2\MIKTEX~1.9\miktex\bin\texi2dvi.exe"  
--pdf "Rd2.tex"  -I "C:/PROGRA~1/R/R-214~1.0/share/texmf/tex/latex" -I 
"C:/PROGRA~1/R/R-214~1.0/share/texmf/bibtex/bst"' had status 1
Error : running 'texi2dvi' on 'Rd2.tex' failed
LaTeX errors:
! LaTeX Error: File `inconsolata.sty' not found.
Type X to quit or  to proceed,
or enter new name. (Default extension: sty)
! Emergency stop.
 
 
l.267 
   
!  ==> Fatal error occurred, no output PDF file produced!
Error in running tools::texi2dvi
Warning message:
running command '"C:/PROGRA~1/R/R-214~1.0/bin/i386/R" CMD Rd2pdf 
"C:/Users/Rinker/R/win-library/2.14/tm"' had status 1 
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem creating reference manuals from latex

2011-11-14 Thread Tyler Rinker

Duncan,
 
Thank you for your reply.  I was not clear about the Internet access.  I do 
have access, it's just that at times I don't, hence the need to produce the manuals 
from LaTeX rather than simply using the Internet.
 
Please pardon my lack of knowledge around your response.  You said I'd have to 
install inconsolata.sty from CTAN.  How?  Does it go in an R directory or a TeX 
directory?  Do I use R to install it, or LaTeX, or do I save the file and drop it 
into a particular folder (directory)?
 
I've used rseek and a simple Google search, which reveal a great deal about 
inconsolata; unfortunately, I am not grasping what I need to do.
 
Tyler

 

> Date: Mon, 14 Nov 2011 21:59:10 -0500
> From: murdoch.dun...@gmail.com
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] Problem creating reference manuals from latex
> 
> On 11-11-14 9:44 PM, Tyler Rinker wrote:
> >
> > R Community,
> >
> > I often am in need of viewing the reference manuals of packages and do not 
> > have Internet access. I have used the code:
> >
> > path<- find.package('tm')
> > system(paste(shQuote(file.path(R.home("bin"), "R")),"CMD", 
> > "Rd2pdf",shQuote(path)))
> >
> > someone kindly provided from this help list to generate the manuals from 
> > the latex files. This worked well with version R 2.13. After the upgrade to 
> > R 2.14 I use this code (see below and get an error message I don't 
> > understand). I'm pretty sure "! LaTeX Error: File `inconsolata.sty'" not 
> > found. is important but don't get it's significance. There's a post about 
> > it here: 
> > http://r.789695.n4.nabble.com/inconsolata-font-for-building-vignettes-with-R-devel-td3838176.html
> >  but I am a windows user making this a moot point. I know this file is n R 
> > font's file that Miktext needs to build the manual.
> >
> > I'd like to be able generate the reference manuals again without the 
> > Internet. While the code above worked in the past I'm open to alternative 
> > methods.
> 
> You need to install the inconsolata.sty file. It is available on CTAN 
> (the TeX network, not the R one). You say you don't have Internet 
> access, so I don't know how you'll do this, but presumably there's a 
> way: you got MikTex installed somehow.
> 
> Duncan Murdoch
> 
> >
> > Version: R 2.14.0 2011-10-31
> > OS: Windows 7
> > Latex: MikTex 2.9
> >
> > Thank you
> > Tyler Rinker
> >
> >
> >> path<- find.package('tm')
> >> system(paste(shQuote(file.path(R.home("bin"), "R")),"CMD", 
> >> "Rd2pdf",shQuote(path)))
> > Hmm ... looks like a package
> > Converting parsed Rd's to LaTeX ...
> > Creating pdf output from LaTeX ...
> > Warning: running command '"C:\PROGRA~2\MIKTEX~1.9\miktex\bin\texi2dvi.exe" 
> > --pdf "Rd2.tex" -I "C:/PROGRA~1/R/R-214~1.0/share/texmf/tex/latex" -I 
> > "C:/PROGRA~1/R/R-214~1.0/share/texmf/bibtex/bst"' had status 1
> > Error : running 'texi2dvi' on 'Rd2.tex' failed
> > LaTeX errors:
> > ! LaTeX Error: File `inconsolata.sty' not found.
> > Type X to quit or to proceed,
> > or enter new name. (Default extension: sty)
> > ! Emergency stop.
> > 
> >
> > l.267
> >
> > ! ==> Fatal error occurred, no output PDF file produced!
> > Error in running tools::texi2dvi
> > Warning message:
> > running command '"C:/PROGRA~1/R/R-214~1.0/bin/i386/R" CMD Rd2pdf 
> > "C:/Users/Rinker/R/win-library/2.14/tm"' had status 1
> > 
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem creating reference manuals from latex

2011-11-15 Thread Tyler Rinker


 Duncan,

Thank you for your patience, time and expertise.  You were 100% correct and the 
problem has been resolved.  I'm adding what I did as a Windows user to complete the 
list record for future searchers.
 
To download the inconsolata package (you can approach this several ways; this one 
seemed easiest to me), go to the Windows command prompt and type:
 
mpm --verbose --install inconsolata
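 
For future searchers, here is the whole recipe end to end as a minimal sketch -- 
assuming MiKTeX's mpm.exe is on the PATH, and using 'tm' only as an example package 
(substitute whichever package you need):
 
## at the Windows command prompt (not in R), install the missing LaTeX package:
##   mpm --verbose --install inconsolata
## then, back in R, regenerate the reference manual exactly as before:
path <- find.package('tm')
system(paste(shQuote(file.path(R.home("bin"), "R")), "CMD",
             "Rd2pdf", shQuote(path)))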
 
Thanks again Duncan!  I appreciate it.

Tyler
 

> Date: Tue, 15 Nov 2011 06:15:05 -0500
> From: murdoch.dun...@gmail.com
> To: tyler_rin...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] Problem creating reference manuals from latex
> 
> On 11-11-14 10:25 PM, Tyler Rinker wrote:
> >
> > Duncan,
> >
> > Thank you for your reply. I was not clear about the Internet access. I do 
> > have access, just at times I don't, hence the need to produce the manuals 
> > from latex rather than simply using the Internet.
> >
> > Please pardon my lack of knowledge around your response. You said I'd have 
> > to install the inconsolata.sty from CTAN. How? Is this installed in an R 
> > directory or a Tex directory? Do I use R to install it or latex or save the 
> > file and drop into a particular folder (directory)?
> 
> It is a TeX package. You need to use the MikTeX package installer to 
> install it.
> 
> I usually set up MikTeX to do this automatically when it needs a new 
> package, but that requires an available Internet connection; you'll need 
> to do something manually. Start in the Start Menu item for MikTeX 2.9, 
> and find the "package manager" item. Run it, and choose to install the 
> "inconsolata" package.
> 
> Duncan Murdoch
> 
> 
> > I've used rseek and a simple google search which reveals a great deal about 
> > inconsolata, unfortunately I am not grasping what I need to do.
> >
> > Tyler
> >
> >
> >
> >> Date: Mon, 14 Nov 2011 21:59:10 -0500
> >> From: murdoch.dun...@gmail.com
> >> To: tyler_rin...@hotmail.com
> >> CC: r-help@r-project.org
> >> Subject: Re: [R] Problem creating reference manuals from latex
> >>
> >> On 11-11-14 9:44 PM, Tyler Rinker wrote:
> >>>
> >>> R Community,
> >>>
> >>> I often am in need of viewing the reference manuals of packages and do 
> >>> not have Internet access. I have used the code:
> >>>
> >>> path<- find.package('tm')
> >>> system(paste(shQuote(file.path(R.home("bin"), "R")),"CMD", 
> >>> "Rd2pdf",shQuote(path)))
> >>>
> >>> someone kindly provided from this help list to generate the manuals from 
> >>> the latex files. This worked well with version R 2.13. After the upgrade 
> >>> to R 2.14 I use this code (see below and get an error message I don't 
> >>> understand). I'm pretty sure "! LaTeX Error: File `inconsolata.sty'" not 
> >>> found. is important but don't get it's significance. There's a post about 
> >>> it here: 
> >>> http://r.789695.n4.nabble.com/inconsolata-font-for-building-vignettes-with-R-devel-td3838176.html
> >>>  but I am a windows user making this a moot point. I know this file is n 
> >>> R font's file that Miktext needs to build the manual.
> >>>
> >>> I'd like to be able generate the reference manuals again without the 
> >>> Internet. While the code above worked in the past I'm open to alternative 
> >>> methods.
> >>
> >> You need to install the inconsolata.sty file. It is available on CTAN
> >> (the TeX network, not the R one). You say you don't have Internet
> >> access, so I don't know how you'll do this, but presumably there's a
> >> way: you got MikTex installed somehow.
> >>
> >> Duncan Murdoch
> >>
> >>>
> >>> Version: R 2.14.0 2011-10-31
> >>> OS: Windows 7
> >>> Latex: MikTex 2.9
> >>>
> >>> Thank you
> >>> Tyler Rinker
> >>>
> >>>
> >>>> path<- find.package('tm')
> >>>> system(paste(shQuote(file.path(R.home("bin"), "R")),"CMD", 
> >>>> "Rd2pdf",shQuote(path)))
> >>> Hmm ... looks like a package
> >>> Converting parsed Rd's to LaTeX ...
> >>> Creating pdf output from LaTeX ...
> >>> Warning: running command 
> >>> '

Re: [R] References for book "R In Action" by Kabacoff

2011-12-01 Thread Tyler Rinker

In the ebook version there is a list of references (pp. 434-437).

> Date: Thu, 1 Dec 2011 10:48:45 +0100
> From: lig...@statistik.tu-dortmund.de
> To: ravi.k...@gmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] References for book "R In Action" by Kabacoff
> 
> On 01.12.2011 10:10, Ravi Kulkarni wrote:
> > I know this is not really an R question - it is a query about a recent book
> > on R ("R In Action") by Robert Kabacoff, (Manning Publications 2011).
> > There are many references to interesting topics in R in the book, BUT, I do
> > not find a bibliography/list of references in the book!
> > Does anybody know if there are errata for the book available some place?
> 
> I'd ask the author!
> 
> Uwe Ligges
> 
> 
> 
> > Thanks,
> >Ravi
> >
> > --
> > View this message in context: 
> > http://r.789695.n4.nabble.com/References-for-book-R-In-Action-by-Kabacoff-tp4127625p4127625.html
> > Sent from the R help mailing list archive at Nabble.com.
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Import in R with White Spaces

2011-10-03 Thread Tyler Rinker

I use the following function, which I stole from somewhere.  There are probably better ways.
 
white <- function(x){
    x <- as.data.frame(x)
    W <- function(x) gsub(" +", "", x)  # drop every run of one or more spaces
    sapply(x, W)                        # note: returns a character matrix
}

# EXAMPLE
dat <- paste(letters, " ", " ", LETTERS)
(DAT <- data.frame(dat, dat))   # nasty white spaces
white(DAT)                      # white spaces gone
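 
If you'd rather get a proper data.frame back (the sapply() above returns a character 
matrix), here is a small variant along the same lines -- just a sketch, and the 
helper name white_df is made up:
 
white_df <- function(x) {
    x <- as.data.frame(x, stringsAsFactors = FALSE)
    # same substitution, applied column by column so the data.frame shape is kept
    x[] <- lapply(x, function(col) gsub(" +", "", as.character(col)))
    x
}
white_df(DAT)
 
(If the stray blanks are only leading/trailing, read.csv()'s strip.white = TRUE 
argument may already be enough.)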
 
 
Tyler
 

> Date: Mon, 3 Oct 2011 08:14:27 -0700
> From: francy.casal...@gmail.com
> To: r-help@r-project.org
> Subject: [R] Import in R with White Spaces
> 
> Hi,
> 
> I have a simple question about importing data, I would be very grateful if
> you could help me out.
> 
> I have used read.csv(file name, header=T, sep=",") to bring in a csv file I
> saved in MS Excel.The problem is I have white spaces in the middle of values
> (not in the column names), and this messes up the column entries. Since I
> have many many files that I am importing and I have spaces in all of them, I
> was looking for a way to avoid going into all of them and changing the white
> space to, for example, an underscore.
> Can you suggest whether there is a way to tell R that each element delimited
> by "," is actually a different entry, regardless of whether there are white
> spaces in between?
> 
> Thank you so much for the help!
> -f
> 
> 
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Import-in-R-with-White-Spaces-tp3867799p3867799.html
> Sent from the R help mailing list archive at Nabble.com.
> [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

