[R] Shading area under density curves

2007-10-03 Thread rominger
Hello,

I have a question regarding shading regions under curves to display  
95% confidence intervals.  I generated bootstrap results for the slope  
and intercept of a simple linear regression model using the following  
code (borrowed from JJ Faraway 2005):

> attach(allposs.nine.d)
> x<-model.matrix(~log(d.dist,10))[,-1]
> bcoef<-matrix(0,1000,2)
> for(i in 1:1000){
+ newy<-predict(all.d.nine.lm)+residuals(all.d.nine.lm)[sample(1002,rep=TRUE)]
+ brg<-lm(newy~x)
+ bcoef[i,]<-brg$coef
+ }

Where "allposs.nine.d" is a data file composed of two columns: (1)  
geographical distances between sample points ("d.dist") and (2) their  
respective pairwise percent similarity in species composition  
("d.sim").  The expression "all.d.nine.lm" equals lm(d.sim~d.dist).

I saved the bootstrap results for each coefficient as:

> dist.density.b1<-density(bcoef[,2])
> dist.density.b0<-density(bcoef[,1])

Along with their 95% confidence intervals:

> dist.quant.b1<-quantile(bcoef[,2],c(.025,.975))
> dist.quant.b0<-quantile(bcoef[,1],c(.025,.975))

I then could plot smooth density curves along with their 95% CI's:

> plot(dist.density.b1)
> abline(v=dist.quant.b1)

Now finally for my question:  Instead of drawing vertical lines to  
represent the 95% CI's, I'd much prefer to somehow shade in the region  
under the curve corresponding to the 95% CI.  I tried using the
polygon() function for this but did not get very far, as I couldn't
figure out how to define the x and y coordinates for it.
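
(For reference, a sketch of the polygon() approach using the density and
quantile objects above -- the curve's coordinates come straight out of the
density object; d, q and in.ci are just illustrative names:)

d <- dist.density.b1
q <- dist.quant.b1
in.ci <- d$x >= q[1] & d$x <= q[2]     # part of the curve inside the CI
polygon(x=c(q[1], d$x[in.ci], q[2]),   # close the shape along the x-axis
        y=c(0, d$y[in.ci], 0),
        col="gray", border=NA)
lines(d)                               # redraw the curve on top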

Any suggestions would be great.  Thanks very much--
Andy Rominger



[R] Diffusion entropy analysis

2010-03-23 Thread Andrew Rominger
Hello,

Does anyone know of an R implementation of diffusion entropy analysis (DEA)
as proposed by Scafetta and Grigolini (2002)?  I was unable to find any
existing functions, so I attempted to write up my own, but cannot reproduce
known results, so obviously I'm doing something wrong.

If there does not appear to be any existing script, I'll send along my
attempts for your critique.  I'd also be happy for any leads on scripts
written in languages other than R.
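
(For reference, the core of the method as described in the paper -- a hedged
sketch, not a vetted implementation; the function name, t.max and the bin
count are arbitrary choices:)

dea.delta <- function(x, t.max = floor(length(x)/10)) {
  d <- c(0, cumsum(x))
  ent <- sapply(1:t.max, function(t) {
    # diffusion variable: sums over all overlapping windows of length t
    y <- d[(t+1):length(d)] - d[1:(length(d)-t)]
    # histogram estimate of the PDF; Shannon entropy corrected for bin width
    h <- hist(y, breaks=50, plot=FALSE)
    p <- h$counts/sum(h$counts)
    p <- p[p > 0]
    -sum(p*log(p)) + log(diff(h$breaks)[1])
  })
  # scaling exponent delta from the fit S(t) = A + delta*log(t)
  unname(coef(lm(ent ~ log(1:t.max)))[2])
}

dea.delta(rnorm(5000))   # uncorrelated noise should give delta near 0.5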

Thanks very much in advance--
Andy Rominger

Scafetta N, and Grigolini P (2002) Scaling detection in time series:
Diffusion entropy analysis.  Phys. Rev. E 66:1:10.



Re: [R] shading an area of a graphic

2010-03-24 Thread Andrew Rominger
Dennis,

If I understand correctly, you should be able to do something like this
(example with exponential decay):

these.x <- (1:10)/10

plot(these.x,exp(-these.x),type="l")   # create a basic plot as a starting point
this.usr <- par("usr")                 # get the plotting region limits:
                                       # they are xleft, xright, ybottom, ytop

# for shading the lower region
these.x2 <- c(this.usr[1],these.x,this.usr[2])   # x vector extended to the plot edges
polygon(x=c(this.usr[1],these.x2,this.usr[2]),
        y=c(this.usr[3],exp(-these.x2),this.usr[3]),
        col="gray",border=NA)

# for shading the upper region (both top corners are needed so the
# polygon closes along the top edge rather than cutting across)
these.x2 <- c(this.usr[1],these.x,this.usr[2])
polygon(x=c(these.x2,this.usr[2],this.usr[1]),
        y=c(exp(-these.x2),this.usr[4],this.usr[4]),
        col="gray",border=NA)

# to make the plot frame more clear you may want to add
box()

Basically polygon() can draw a shape that has your desired curve for one
side; you just have to give it enough points to make the curve look smooth
(all it took in this case was 10).  In reality it is drawing little line
segments between all the points, similar in a way to your proposal, but much
faster and simpler.

Exactly what you assign to x and y in polygon() will depend on whether your
curve increases or decreases with x, or is some kind of paraboloid, but the
same basic idea applies.  You don't need a bunch of polygons; one will do.

Hope that helps,
Andy


On Wed, Mar 24, 2010 at 3:23 PM, Dennis Fisher  wrote:

> Colleagues
>
> OS 10.5
> R: 2.10.1
>
> I have a simple x-y plot for which I would like to shade the lower (or
> upper) part of the interior region (i.e., the area bounded by the axes).  If
> the delineation between top and bottom were linear, it would be easy to use
> the polygon function.  However, the delineation is a curve (which I can
> describe by an equation).  In theory, I could divide the x-axis into a large
> number of regions, then draw a series of polygons side by side, the top /
> bottom borders of which are lines.  Is there a more elegant solution?
>
> Dennis
>
>
> Dennis Fisher MD
> P < (The "P Less Than" Company)
> Phone: 1-866-PLessThan (1-866-753-7784)
> Fax: 1-866-PLessThan (1-866-753-7784)
> www.PLessThan.com
>



[R] All sub-summands of a vector

2010-04-02 Thread Andy Rominger
Hello,

I'd like to take all possible sub-summands of a vector in the quickest and
most efficient way possible.  By "sub-summands" I mean for each sub-vector,
take its sum.  Which is to say: if I had the vector

x<-1:4

I'd want the "sum" of x[1], x[2], etc.  And then the sum of x[1:2], x[2:3],
etc.  And then...so on.

The result would be:
1 2 3 4
2 5 7
6 9
10

I can do this with for loops (code below) but for long vectors (10^6
elements) looping takes more time than I'd like.  Any suggestions?

Thanks very much in advance--
Andy


# calculate sums of all sub-vectors...
x <- 1:4

sub.vect <- vector("list",4)

for(t in 1:4) {
  maxi <- 4 - t + 1
  this.sub <- numeric(maxi)
  for(i in 1:maxi) {
    this.sub[i] <- sum(x[i:(i+t-1)])
  }
  sub.vect[[t]] <- this.sub
}

sub.vect
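
(For what it's worth, a vectorized sketch: each rolling sum is a difference
of cumulative sums, which removes the inner loop entirely; cs and sub.vect2
are illustrative names:)

cs <- c(0, cumsum(x))
n <- length(x)
sub.vect2 <- lapply(1:n, function(t) cs[(t+1):(n+1)] - cs[1:(n+1-t)])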



Re: [R] All sub-summands of a vector

2010-04-02 Thread Andy Rominger
Great, thanks for your help.  I tried:
x <- 1:1
y <- lapply(1:1,function(t){t*runmean(x,t,alg="fast",endrule="trim")})

and it worked in about 90 sec.

Thanks again,
Andy


On Fri, Apr 2, 2010 at 3:43 PM, Gabor Grothendieck
wrote:

> There is also rollmean in the zoo package which might be slightly
> faster since it's optimized for that operation.
> k * rollmean(x, k)
> e.g.
>
> > 2 * rollmean(1:4, 2)
> [1] 3 5 7
>
> will give a rolling sum. runmean in the caTools package is even faster.
>
> On Fri, Apr 2, 2010 at 2:31 PM, Jorge Ivan Velez
>  wrote:
> > Hi Andy,
> >
> > Take a look at the rollapply function in the zoo package.
> >
> >> require(zoo)
> > Loading required package: zoo
> >> x <- 1:4
> >> rollapply(zoo(x), 1, sum)
> > 1 2 3 4
> > 1 2 3 4
> >> rollapply(zoo(x), 2, sum)
> > 1 2 3
> > 3 5 7
> >> rollapply(zoo(x), 3, sum)
> > 2 3
> > 6 9
> >> rollapply(zoo(x), 4, sum)
> >  2
> > 10
> >
> > # all at once
> > sapply(1:4, function(r) rollapply(zoo(x), r, sum))
> >
> >
> > HTH,
> > Jorge
> >
> >
> > On Fri, Apr 2, 2010 at 2:24 PM, Andy Rominger <> wrote:
> >
> >> Hello,
> >>
> >> I'd like to take all possible sub-summands of a vector in the quickest
> and
> >> most efficient way possible.  By "sub-summands" I mean for each
> sub-vector,
> >> take its sum.  Which is to say: if I had the vector
> >>
> >> x<-1:4
> >>
> >> I'd want the "sum" of x[1], x[2], etc.  And then the sum of x[1:2],
> x[2:3],
> >> etc.  And then...so on.
> >>
> >> The result would be:
> >> 1 2 3 4
> >> 2 5 7
> >> 6 9
> >> 10
> >>
> >> I can do this with for loops (code below) but for long vectors (10^6
> >> elements) looping takes more time than I'd like.  Any suggestions?
> >>
> >> Thanks very much in advance--
> >> Andy
> >>
> >>
> >> # calculate sums of all sub-vectors...
> >> x <- 1:4
> >>
> >> sub.vect <- vector("list",4)
> >>
> >> for(t in 1:4) {
> >>maxi <- 4 - t + 1
> >>this.sub <- numeric(maxi)
> >>for(i in 1:maxi) {
> >>this.sub[i] <- sum(x[i:(i+t-1)])
> >>}
> >>sub.vect[[t]] <- this.sub
> >> }
> >>
> >> sub.vect
> >>



[R] Constrained vector permutation

2010-01-27 Thread Andrew Rominger
Hello,

I'm trying to permute a vector of positive integers with the constraint
that each element must be at most twice the element before it (i.e. for some
vector x, x[i] <= 2*x[i-1]), taking the "0th" element to be 1.  Hence the
first element of the vector must always be 1 or 2 (by assuming the "0th"
element is 1).  Similarly, the 2nd can never exceed 4, the 3rd can never
exceed 8, and so on.

Here's an example, suppose I have the vector x
x <- 2:6

This vector already fits the constraint, and a permutation such as

x.good <- c(2,4,6,5,3)

would also fit the constraint, but the vector

x.bad1 <- c(4,6,5,3,2)

does not work because of the 4 in the first position.  The vector

x.bad2 <- c(2,6,5,3,4)

does not work because of the 6 in the second position.

Does anyone know of a pre-made function to permute a vector under such a
constraint?  Or perhaps a clever way to somehow use a function (e.g. ---)
that permutes a contingency table constrained by the marginals?

If such possibilities are not out there, what about this idea:

Assume the given vector already complies with the constraint (e.g. x <-
2:6).

First "cut" the vector (like cutting a deck of cards) at some random (given
condition*) position p:

x1 <- x[1:(p-1)]
x2 <- x[p:length(x)]
x <- c(x2,x1)

* the condition is that p must be chosen such that x2[1] <= 2.

Then "shuffle" the updated vector x by swapping two of its elements, e.g.

temp.x <- x[i]
x[i] <- x[j]
x[j] <- temp.x

Here i and j must be chosen such that the resulting swap does not violate
the condition x[i] <= 2*x[i-1].  Ideally, all the possible swaps could be
represented in a matrix of 0/1 or FALSE/TRUE, such as
z <- c(2,4,6,5,3)

Z <- matrix(c(1,0,0,0,0,
0,1,0,0,1,
0,0,1,1,1,
0,0,1,1,1,
0,1,1,1,1),5,5)

Then one would repeat these two processes many times, each time randomly
choosing to either cut or shuffle.

My issue is I don't really know how to get the matrix of allowable swaps--I
don't know how to automate distinguishing which swaps are allowable and
which aren't.

So my questions here are: most importantly does this method even seem sound
(i.e. are all possible solutions likely to have equal probability)? and
secondly, how would I find all possible swaps?
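
(For the second question, a brute-force sketch: test every pairwise swap
against the constraint directly -- choose(n,2) checks, cheap for short
vectors.  is.valid() encodes the rule above; both function names are made up:)

is.valid <- function(x) all(x <= 2 * c(1, x[-length(x)]))

swap.matrix <- function(z) {
  n <- length(z)
  Z <- diag(n) == 1                  # "swapping" an element with itself is allowable
  for(i in 1:(n-1)) for(j in (i+1):n) {
    zz <- z
    zz[c(i,j)] <- z[c(j,i)]          # try swapping elements i and j
    Z[i,j] <- Z[j,i] <- is.valid(zz)
  }
  Z
}

swap.matrix(c(2,4,6,5,3))   # reproduces the matrix Z given above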

Thanks in advance for any insight.

-Andy



Re: [R] Constrained vector permutation

2010-01-28 Thread Andrew Rominger
Hi Jason,

Thanks for your suggestions; I think that's pretty close to what I'd need.
The only glitch is that I'd be working with a vector of ~30 elements, so
permutations(...) would take quite a long time.  I only need one permutation
per vector (the whole routine will be within a loop that generates
pseudo-random vectors that could potentially conform to the constraints).

In light of that, do you think I'd be better off doing something like:
# instead of permutations():
v.permutations <- replicate(1,sample(v,length(v),rep=FALSE))
# where f(...) would be like your f:
results <- apply(v.permutations,2,function(x){all(x <= f(x[1],length(x)-1))})

It wouldn't be guaranteed to produce any usable permutation, but it seems
like it would be much faster and so could be repeated until an acceptable
vector is found.  What do you think?

Thanks--
Andy


On Thu, Jan 28, 2010 at 6:15 AM, Jason Smith  wrote:

> I just realized I read through your email too quickly and my script does
> not actually address the constraint on each permutation, sorry about that.
>
> You should be able to use the permutations function to generate the vector
> permutations however.
>
> Jason
>



Re: [R] Constrained vector permutation

2010-01-29 Thread Andrew Rominger
Being reasonably sure that all valid permutations are equally probable is
important to me.  I've played around with search algorithms for permuting
contingency tables and found that the number of possible solutions decreases
rapidly once one starts assigning values, particularly if small values are
assigned first, so it would seem all solutions are not equally probable (not
only that, but one frequently encounters "dead ends" where there are values
left to assign and no allowable place to put them).  As such I think I'd opt
to use sample()... several times if needed.

To clarify: yes, I only need one valid permutation.  The idea is I'll
generate 1000s of ordered vectors, and then for each one generate one valid
permutation.

Thanks very much for the help and insights--
Andy


On Thu, Jan 28, 2010 at 3:04 PM, Thomas Lumley wrote:

> On Thu, 28 Jan 2010, Jason Smith wrote:
>
>>> It wouldn't be guaranteed to produce any usable permutation, but it seems
>>> like it would be much faster and so could be repeated until an acceptable
>>> vector is found.  What do you think?
>>>
>>> Thanks--
>>> Andy
>>>
>>>
>> I think I am not understanding what your ultimate goal is so I'm not
>> sure I can give you appropriate advice.  Are you looking for a single
>> valid permutation or all of them?
>>
>> Since that constraint sets a ceiling on each subsequent value, it
>> seems like you could solve this problem more easily and quickly by
>> using a search strategy instead of random sampling or generating all
>> permutations then testing.  The constraint will help prune the search
>> space so you only generate valid permutations.  Once you are examining
>> a particular element you can determine which of the additional
>> elements would be valid, so only consider those.
>>
>
> It's easy to generate valid permutations this way.  It does not appear
> straightforward to ensure that all valid permutations are sampled with equal
> probability, which I thought was part of the specification of the problem.
>
>  -thomas
>
>
> Thomas Lumley   Assoc. Professor, Biostatistics
> tlum...@u.washington.eduUniversity of Washington, Seattle
>



Re: [R] Constrained vector permutation

2010-02-01 Thread Andrew Rominger
Chuck,

Thanks for the reference.  MCMC with "swapping" was my original goal and I
think I'll go with that in the long run--although using sample() has worked
out for now.  I was initially concerned that checking all choose(n,2)
possible swaps would slow down the process, but in my case for choose(30,2)
= 435 this seems not unreasonable.  I wonder whether an alternative would be
needed for larger vectors.

Thanks to everyone for your help--
Andy
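
(For concreteness, a rough sketch of the neighborhood-MCMC idea Chuck
describes below, under the x[i] <= 2*x[i-1] rule from the original post; the
Metropolis-Hastings ratio |N(current)|/|N(proposal)| corrects for unequal
neighborhood sizes, and all names here are illustrative:)

nbhd <- function(z) {                # all allowable pairwise swaps of z
  n <- length(z)
  ok <- function(x) all(x <= 2 * c(1, x[-n]))
  sw <- t(combn(n, 2))
  keep <- apply(sw, 1, function(ij) {
    zz <- z; zz[ij] <- z[rev(ij)]; ok(zz)
  })
  sw[keep, , drop=FALSE]
}

mcmc.perm <- function(z, n.iter = 5000) {
  for(k in 1:n.iter) {
    N <- nbhd(z)
    if(nrow(N) == 0) break           # no allowable moves from here
    ij <- N[sample(nrow(N), 1), ]
    prop <- z
    prop[ij] <- z[rev(ij)]
    # accept with prob min(1, |N(z)|/|N(prop)|) so the stationary
    # distribution is uniform over valid permutations
    if(runif(1) < nrow(N)/nrow(nbhd(prop))) z <- prop
  }
  z
}

mcmc.perm(2:6)   # one (approximately uniform) valid permutation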


On Fri, Jan 29, 2010 at 8:23 PM, Charles C. Berry wrote:

> On Fri, 29 Jan 2010, Andrew Rominger wrote:
>
>> Being reasonably sure that all valid permutations are equally probable is
>> important to me.  I've played around with search algorithms in permuting
>> contingency tables and find that possible solutions decrease rapidly once
>> one starts assigning values, particularly if small values are assigned
>> first, so it would seem all solutions are not equally probable (not only
>> that but one frequently encounters "dead ends" where there are values left
>> to assign and no allowable place to put them).  As such I think I'd opt to
>> use sample()... several times if needed.
>>
>> To clarify, yes, I only need one valid permutation, the idea is I'll
>> generate 1000s of ordered vectors, and then for each one generate one
>> valid
>> permutation.
>>
>> Thanks very much for the help and insights--
>> Andy
>>
>
> Andy,
>
> If you have some sense of importance sampling and/or MCMC you might look at
>
> Zaman and Simberloff (2002, Environmental and Ecological Statistics 9,
> 405--421).
>
> which concerns sampling a binary matrix with fixed margins - not quite your
> problem, but akin to it in being a combinatorial nightmare without an
> obvious direct solution of workable size for real problems.
>
> They define a neighborhood for each allowable matrix s.t. swapping a pair
> of 1's at ij and kl with a pair of 0's at il and kj  (which doesn't violate
> the margin constraints) leads to a member of the neighborhood. IIRC, the
> size of the neighborhood and the sizes of the neighborhoods of the members
> of its neighborhood determine the probabilities of staying put or moving to
> get the next element of the MCMC chain and which member of the neighborhood
> to choose.
>
> I suppose something like that (i.e. defining neighborhoods of allowable
> permutations, measuring their size, and using this to guide sampling or
> develop importance weights) might apply in your case. Maybe something like
> this: start with an ordering of your n-vector that conforms to the
> constraints, look at all the choose(n,2) pairs of elements and check which
> of them could be exchanged to yield another conforming ordering; the
> allowable swaps define the neighborhood, and their number is its size.
>
> So, the idea is to develop an MCMC sampler. Run it for each ordered vector
> to get past the burn in, then take one value from it.
>
> HTH,
>
> Chuck
>
>
>>
>> On Thu, Jan 28, 2010 at 3:04 PM, Thomas Lumley wrote:
>>
>>> On Thu, 28 Jan 2010, Jason Smith wrote:
>>>
>>>>> It wouldn't be guaranteed to produce any usable permutation, but it
>>>>> seems like it would be much faster and so could be repeated until an
>>>>> acceptable vector is found.  What do you think?
>>>>>
>>>>> Thanks--
>>>>> Andy
>>>>>
>>>>
>>>> I think I am not understanding what your ultimate goal is so I'm not
>>>> sure I can give you appropriate advice.  Are you looking for a single
>>>> valid permutation or all of them?
>>>>
>>>> Since that constraint sets a ceiling on each subsequent value, it
>>>> seems like you could solve this problem more easily and quickly by
>>>> using a search strategy instead of random sampling or generating all
>>>> permutations then testing.  The constraint will help prune the search
>>>> space so you only generate valid permutations.  Once you are examining
>>>> a particular element you can determine which of the additional
>>>> elements would be valid, so only consider those.
>>>>
>>>>
>>> It's easy to generate valid permutations this way.  It does not appear
>>> straightforward to ensure that all valid permutations are sampled with
>>> equal
>>> probability, which I thought was part of the specification of the
>>> problem.
>>>
>>> -thomas
>>>
>>>
>>> Thomas Lumley   

[R] simulate time series with various "colors" of noise

2010-05-27 Thread Andy Rominger
Hello,

I'm trying to simulate time series with various "colors" of noise to verify
some other code I've written.  I'm using the fractal package, specifically
FDSimulate.  I have a detailed question about this particular function, but
I'd also be happy to receive any suggestions for other packages, functions,
citations.

To the question: FDSimulate takes a delta parameter governing the
fractionally differenced process.  Using delta = 0 we get white noise, delta
= 0.5 pink, and delta = 1 red/"brown."  Everything seemed to be working
great for delta = 0 and 1, but at delta = 0.5 there were problems.  Using
the FDWhittle function (which should back-calculate delta from a series of
numbers) I investigated what's going on:

#
require(fractal)
these.delt <- rep(seq(0,1,length.out=100),rep(10,100))
est.delt <- numeric(1000)
# this takes a few seconds
for(i in 1:1000) {
  this.x <- FDSimulate(rep(these.delt[i],1000), rep(1,1000),
                       method="ce", seed=runif(1,1,1))
  est.delt[i] <- FDWhittle(this.x)
}

plot(these.delt,est.delt,xlab="delta",ylab="estimated delta")
abline(0,1,col="red")
#

This plot shows that for FDSimulate(delta=0,...) we can back-calculate the
right value, but at FDSimulate(delta=0.5,...) there is a big jump from a
back-calculated delta of around 0.1 to one of 0.7 (when it should be 0.5).
At FDSimulate(delta=1,...) the given and back-calculated values line up.  So
my question is...which function is not doing its job (or which do I not
understand!), does FDSimulate not produce accurate time series, or does
FDWhittle not accurately estimate delta?  I tried using different methods in
FDSimulate but they threw the error:
Error in as.vector(.Call("RS_fractal_bootstrap_circulant_embedding", S,  :
  error in evaluating the argument 'x' in selecting a method for function
'as.vector'

If I am missing other useful functions for producing/estimating time series
of the fractional/long-memory type I would also welcome suggestions.
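
(One possible cross-check, assuming the fracdiff package is installed:
fracdiff.sim() simulates an ARFIMA(0,d,0) process and fdGPH() estimates d,
though this only covers the stationary range d < 0.5:)

require(fracdiff)
x <- fracdiff.sim(1000, d=0.3)$series
fdGPH(x)$d   # should come back near 0.3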

Thanks for your thoughts and insights!
Andy Rominger



Re: [R] Y-axis range in histograms

2010-05-31 Thread Andy Rominger
A few ideas:

Make a log-scale y-axis.  hist() itself doesn't take a log argument, but you
can barplot the counts on a log scale:

barplot(hist(my.data, plot=FALSE)$counts, log="y")   # zero-count bins will complain

The argument yaxp can help make the ticks look pretty...see ?par.

Or use various functions from the package `plotrix': axis.break and
gap.barplot might be helpful.

For those functions, you'll probably need to get your frequencies from the
histogram, something like:

my.freq <- hist(my.data,...,plot=FALSE)$counts

you may also need to play with the x-axis tick labels to actually denote the
correct bin for your frequencies.
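
(An illustrative gap.barplot() sketch -- the frequencies here are made up to
mimic the situation, two tall bars among many short ones:)

require(plotrix)
my.freq <- c(50, 1500, 40, 1200, 30, 20, 60, 45)
gap.barplot(my.freq, gap=c(310,890),
            ytics=c(0,100,200,300,900,1200,1500),
            xlab="bin", ylab="frequency")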

Good luck, hope that helps--
Andy



On Mon, May 31, 2010 at 10:49 AM, Aarne Hovi  wrote:

>
> Hi,
>
> I'm trying to create a histogram with R. The problem is that the frequency
> is high for a couple of x-axis categories (e.g. 1500) and low for most of
> the x-axis categories (e.g. 50)
> http://r.789695.n4.nabble.com/file/n2237476/LK3_hist.jpg . When I create
> the
> histogram, it is not very informative, because only the high frequencies
> can
> be seen clearly. Is there any way I could cut the y-axis from the middle so
> that the y-axis values ranged for example from 0 to 300, and then again
> from
> 900 to 1500?
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Y-axis-range-in-histograms-tp2237476p2237476.html
> Sent from the R help mailing list archive at Nabble.com.
>



[R] Help with error in "if then" statement

2008-07-11 Thread Andrew Rominger

Dear list,

I'm afraid this is a mundane question.  Here's the background: I've
produced a function which allows me to sequentially measure angles and
distances from a specified reference point to a curve defined by
empirical data points, while (i.e. using a while loop) the angle being
measured is within certain bounds.  Because the curve is circular I
need to parse the data into basically an "upper" curve and a "lower"
curve.  I tried to do this with an if statement, specifically:


ycrit <- subset(data, subset = data$x == min(data$x))
y.max <- length*sin(angle) + y.ref   # length, angle and y.ref are given
if(y.max < ycrit) { ... }

Now the problem: the while loop works for 4 iterations, until I get
the error message:


"Error in if (y.max < ycrit) { : missing value where TRUE/FALSE needed"

When I forced the function to print y.max and ycrit for each iteration  
of the while loop, it returns finite real numbers, including ycrit =  
153.5 and y.max = 245.16 for the step which returns the error message.


Any ideas about what's going on here--why does R "think" that
245.16 < 153.5 is "missing," or anything other than TRUE/FALSE?  Am I
using "if" incorrectly?  In that case, would it be more appropriate to
create subsets of the data points based on < or > ycrit?
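
(For reference, the error is reproducible whenever the comparison evaluates
to NA -- a single missing value is enough, e.g. if ycrit picked one up from
the subset; the numbers here are just the ones quoted above:)

y.max <- 245.16
ycrit <- NA                      # e.g. an NA introduced by subset()
if(y.max < ycrit) print("below")
# Error: missing value where TRUE/FALSE needed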


Thanks in advance for any guidance--
Andy

--
Andrew J. Rominger
Department of Biological Sciences
Stanford University
[EMAIL PROTECTED]



[R] Error loading packages at sprintf()

2009-03-02 Thread Andrew J. Rominger
Hello all,

I'm running R 2.8.1 on Mac OS 10.4.11.  While trying to install the package
gdata, I was presented with the following (the error is at the end of the report):

R > install.packages("gdata")
also installing the dependency ‘gtools’

trying URL 
'http://cran.stat.ucla.edu/bin/macosx/universal/contrib/2.8/gtools_2.5.0-1.tgz'
Content type 'application/x-tar' length 85484 bytes (83 Kb)
opened URL
==
downloaded 83 Kb

trying URL 
'http://cran.stat.ucla.edu/bin/macosx/universal/contrib/2.8/gdata_2.4.2.tgz'
Content type 'application/x-tar' length 539301 bytes (526 Kb)
opened URL
==
downloaded 526 Kb

/bin/sh: line 1: tar: command not found
2009-03-02 20:42:06.081 R[357] tossing reply message sequence 3 on thread 
0x1ce3ae0
Error in sprintf(gettext(fmt, domain = domain), ...) : 
  argument is missing, with no default


I can't figure out why this error is occurring [the error in
sprintf(gettext(fmt, domain = domain), ...)].  I've never been prompted to
supply arguments to sprintf before.  Upon trying to install various other
packages (e.g. 'vegan', 'ade4', 'ads'...) I get the same error message
regarding sprintf.  I recently removed many old data objects from my workspace;
could I have accidentally messed with sprintf?

In a previous R-help post a similar error resulted while trying to install a
package from source, and was corrected by specifying type="source", but I'm not
really sure how to properly specify the argument 'lib' and so leave it at its
default value.  Without specifying 'lib', is it appropriate to call
type="source"?
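
(For what it's worth, 'lib' has a default and can simply be omitted, so a
call like this sketch is legal on its own:)

install.packages("gdata", type="source")   # lib is left at its default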

Thanks in advance for any help--
Andy Rominger



[R] different .Rprofile files for different projects

2008-11-20 Thread Andrew J. Rominger
Dear list,

First off, let me offer my apologies: I know this is a very basic question.
After amassing a large number of objects (from multiple projects) in one
working directory, I'd like to start using different directories, if only
for the sake of organization.  But I have no idea how to do this.  I am
using a Mac, running R 2.5.

Searching the FAQ online I find:

12.1 How can I have a per session .Rprofile?
You can by writing a .Rprofile file in your favorite session directory...

So I think my specific question is: how do I write a .Rprofile?  I know it
should be obvious, but is there a command to be called in the R console during
a given session, or do I write a stub file outside of R (in a session
directory?), open it using Preferences, and then modify it in an R session?
If the latter, how do I go about writing the stub and making R recognize it
as a working directory?  And what is a session directory--how can I find
where the default session directory is located?
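
(For reference, a .Rprofile is just a plain text file named ".Rprofile"
saved in a project's directory; R sources it at startup when launched from
that directory.  A minimal sketch, with made-up file and project names:)

# contents of ~/projects/projectA/.Rprofile
message("loading settings for projectA")
# load("projectA-objects.RData")   # e.g. pull in this project's objects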

Again, my apologies for being naive, and thanks very much for any help--
Andy Rominger
