Hello,
I have a question regarding shading regions under curves to display
95% confidence intervals. I generated bootstrap results for the slope
and intercept of a simple linear regression model using the following
code (borrowed from JJ Faraway 2005):
> attach(allposs.nine.d)
> x<-model.m
there does not appear to be any existing script, I'll send along my
attempts for your critique. I'd also be happy for any leads on scripts
written in languages other than R.
Thanks very much in advance--
Andy Rominger
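For the shading question above, one common approach is to compute pointwise quantiles of the bootstrap fitted lines and fill between them with polygon(). A sketch with stand-in bootstrap draws (the allposs.nine.d data aren't shown, so the coefficients here are hypothetical):

```r
set.seed(1)
x <- seq(0, 1, length.out = 50)
# stand-in for bootstrap results: 1000 (intercept, slope) pairs
boot.coef <- cbind(rnorm(1000, 2, 0.3), rnorm(1000, 1, 0.2))
fits <- boot.coef[, 1] + outer(boot.coef[, 2], x)  # each row: one bootstrap fitted line
band <- apply(fits, 2, quantile, probs = c(0.025, 0.975))  # pointwise 95% limits
plot(x, 2 + x, type = "l", ylim = range(band), xlab = "x", ylab = "fitted")
polygon(c(x, rev(x)), c(band[1, ], rev(band[2, ])), col = "grey80", border = NA)
lines(x, 2 + x)  # redraw the fitted line on top of the shading
```

The polygon traces the lower limit left-to-right and the upper limit right-to-left, closing the band in one call.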
Scafetta N, and Grigolini P (2002) Scaling detection in time series:
Di
Dennis,
If I understand correctly, you should be able to do something like this
(example with exponential decay):
these.x <- (1:10)/10
plot(these.x, exp(-these.x), type="l")  # create a basic plot as a starting point
this.usr <- par("usr")  # get the plotting region limits
# they are xleft, xright, ybottom, ytop
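Completing that idea, a full sketch that uses those region limits to shade everything under the curve down to the bottom of the plot region:

```r
these.x <- (1:10)/10
plot(these.x, exp(-these.x), type = "l")
this.usr <- par("usr")  # c(xleft, xright, ybottom, ytop) of the plot region
# polygon: follow the curve, then drop to the region bottom and close back
polygon(c(these.x, rev(range(these.x))),
        c(exp(-these.x), this.usr[3], this.usr[3]),
        col = "grey80", border = NA)
lines(these.x, exp(-these.x))  # redraw the curve over the shading
```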
Hello,
I'd like to take all possible sub-summands of a vector in the quickest and
most efficient way possible. By "sub-summands" I mean for each sub-vector,
take its sum. Which is to say: if I had the vector
x<-1:4
I'd want the "sum" of x[1], x[2], etc. And then the sum of x[1:2], x[2:3],
etc
> >> x
> > 1 2 3 4
> >> rollapply(zoo(x), 2, sum)
> > 1 2 3
> > 3 5 7
> >> rollapply(zoo(x), 3, sum)
> > 2 3
> > 6 9
> >> rollapply(zoo(x), 4, sum)
> > 2
> > 10
> >
> > # all at once
> > sapply(1:4, function(r) rollapply(zoo(x)
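A base-R equivalent of the all-windows idea above, needing no zoo: with cumulative sums, every window sum is a difference of two entries of cumsum, so all sub-summands come out in one pass per width.

```r
# all sub-summands of a vector via cumulative sums (base R, no zoo required)
x <- 1:4
cs <- c(0, cumsum(x))  # cs[j + 1] = sum of the first j elements
sub_sums <- lapply(seq_along(x), function(w)
  cs[(w + 1):length(cs)] - cs[1:(length(cs) - w)])  # window sums of width w
sub_sums  # [[1]] 1 2 3 4; [[2]] 3 5 7; [[3]] 6 9; [[4]] 10
```

This matches the rollapply output quoted above and is O(n^2) overall, which is the size of the answer itself.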
Hello,
I'm trying to permute a vector of positive integers > 0 with the constraint
that each element must be <= twice the element before it (i.e. for some
vector x, x[i] <= 2*x[i-1]), assuming the "0th" element is 1. Hence the
first element of the vector must always be 1 or 2 (by assuming the "0t
Hi Jason,
Thanks for your suggestions, I think that's pretty close to what I'd need.
The only glitch is that I'd be working with a vector of ~30 elements, so
permutations(...) would take quite a long time. I only need one permutation
per vector (the whole routine will be within a loop that generat
Being reasonably sure that all valid permutations are equally probable is
important to me. I've played around with search algorithms in permuting
contingency tables and find that possible solutions decrease rapidly once
one starts assigning values, particularly if small values are assigned
first,
case for choose(30,2)
= 435, this seems not unreasonable. I wonder whether an alternative would
be needed for larger vectors.
Thanks to everyone for your help--
Andy
On Fri, Jan 29, 2010 at 8:23 PM, Charles C. Berry wrote:
> On Fri, 29 Jan 2010, Andrew Rominger wrote:
>
> Being reason
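For small vectors, the equal-probability requirement discussed above can be met with plain rejection sampling: draw uniform permutations and keep the first one satisfying the constraint. This is exactly uniform over valid permutations, though (as noted in the thread) far too slow for ~30 elements; it's a sketch, not the poster's eventual method.

```r
# rejection sampling: uniform over permutations with p[i] <= 2*p[i-1],
# taking the "0th" element to be 1 (feasible only for small vectors)
sample_valid_perm <- function(x, max_tries = 1e5) {
  for (i in seq_len(max_tries)) {
    p <- sample(x)  # a uniformly random permutation of x
    if (all(p <= 2 * c(1, p[-length(p)])))  # check the doubling constraint
      return(p)
  }
  NULL  # no valid permutation found within max_tries draws
}
set.seed(42)
p_valid <- sample_valid_perm(c(1, 2, 3, 4, 6))
```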
the error:
Error in as.vector(.Call("RS_fractal_bootstrap_circulant_embedding", S, :
error in evaluating the argument 'x' in selecting a method for function
'as.vector'
If I am missing other useful functions for producing/estimating time series
of the fractional/long-
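If the fractal-package call keeps erroring out, circulant embedding itself is short enough to sketch in base R. Below is a hedged, minimal Davies–Harte-style simulator for fractional Gaussian noise with Hurst exponent H (an assumption about what's wanted; it is not the fractal package's implementation):

```r
# simulate n points of fractional Gaussian noise by circulant embedding (base R)
fgn_sim <- function(n, H = 0.7) {
  # autocovariance of fGn at lag k
  g <- function(k) 0.5 * (abs(k + 1)^(2*H) - 2*abs(k)^(2*H) + abs(k - 1)^(2*H))
  m <- 2 * n
  circ <- c(g(0:n), g((n - 1):1))  # first row of the circulant embedding
  lam <- Re(fft(circ))             # its eigenvalues; nonnegative for fGn
  lam[lam < 0] <- 0                # clip tiny negative rounding errors
  z <- complex(real = rnorm(m), imaginary = rnorm(m))
  Re(fft(sqrt(lam) * z))[1:n] / sqrt(m)
}
set.seed(1)
y <- fgn_sim(512, H = 0.8)
```

Cumulative sums of the result give fractional Brownian motion, and two independent real draws come out of each complex FFT if efficiency matters.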
A few ideas:
Make a log-scale y-axis like:
hist(my.data,...,log="y")
argument yaxp can help make the ticks look pretty...see ?par.
Or use various functions from the package `plotrix': axis.break and
gap.barplot might be helpful.
For those functions, you'll probably need to get your frequencies
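A sketch of that frequency-extraction step with hypothetical data: compute the histogram without plotting, then draw the counts on a log y-axis (zero-count bins must be dropped, since a log scale cannot show 0):

```r
set.seed(1)
my.data <- rexp(1000)               # hypothetical data
h <- hist(my.data, plot = FALSE)    # get the frequencies without drawing
keep <- h$counts > 0                # log scale cannot display zero counts
barplot(h$counts[keep], log = "y",
        names.arg = signif(h$mids[keep], 2))  # label bars by bin midpoints
```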
r than TRUE/FALSE? Am I
using "if" incorrectly? In which case would it be more appropriate to
perhaps create subsets of the data points based on < or > ycrit?
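Since if() expects a single TRUE/FALSE, a vectorized comparison (or the explicit subsets suggested above) is the usual route for many points. A sketch with hypothetical data and a ycrit cutoff:

```r
set.seed(1)
x <- runif(50); y <- rnorm(50)            # hypothetical points
ycrit <- 0
above <- y > ycrit                        # one TRUE/FALSE per point, no if() needed
plot(x, y, col = ifelse(above, "red", "blue"))
abline(h = ycrit, lty = 2)
x.above <- x[above]; y.above <- y[above]  # or work with explicit subsets
```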
Thanks in advance for any guidance--
Andy
--
Andrew J. Rominger
Department of Biological Sciences
Stanford University
install a
package from source, and was corrected by specifying type="source", but I'm not
really sure how to properly specify the argument 'lib' and so leave it at its
default value. Without specifying 'lib', is it appropriate to call
type="s
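On the 'lib' part: when 'lib' is omitted, install.packages() writes into the first element of .libPaths(), so leaving it unset alongside type = "source" is fine. A short sketch (the package name is a placeholder, and the install call is left commented out):

```r
.libPaths()             # the library trees R searches, in order
lib1 <- .libPaths()[1]  # default destination when 'lib' is not given
# install.packages("somePackage", type = "source", lib = lib1)  # placeholder name
```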
as a
working directory, and what is a session directory/how can I find where the
default session directory is located?
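On the working-directory half of the question, a base-R sketch; the "session directory" may be what tempdir() reports (an assumption), i.e. the per-session scratch folder R creates at startup:

```r
owd <- getwd()    # current working directory: where relative paths resolve
setwd(tempdir())  # tempdir() is this R session's private scratch directory
getwd()           # now inside the session temp directory
setwd(owd)        # restore the original working directory
```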
Again, my apologies for being naive, and thanks very much for any help--
Andy Rominger
R-help@r-project.org mailing list