Hi
Cross-posting from R-sig-Geo, because I really need help!
I'm using geoR for some spatial linear models and I'm getting
surprisingly optimistic values from the spatial models relative to the
non-spatial, even when the models appear to be performing about
equally (by AIC comparison)
For example
Hi,
Yes, after reading that paper I realized it contained the reference to
the actual paper. A question would be why the original paper isn't
cited instead of a survey. But that solved my problem, so thanks a lot!
On Nov 9, 2007 3:53 AM, Rolf Turner <[EMAIL PROTECTED]> wrote:
>
> On 9/11/2007, a
On 9/11/2007, at 4:28 PM, Omar Baqueiro wrote:
> Dear R experts,
>
> I am looking at the Fligner-Killeen statistic to perform a
> "homegenity of variance" test across multiple data sets. However I
> cannot find any reference to any paper or other bibliography where the
> theory behind this test
Dear R experts,
I am looking at the Fligner-Killeen statistic to perform a
"homegenity of variance" test across multiple data sets. However I
cannot find any reference to any paper or other bibliography where the
theory behind this test is explained. I have looked (google) for
information on bot
I think something like this is what you are after. This will create 7
pairs of lists with the parameters that I think you want. I don't
have the data (if you want to send it to me, I may be able to test it)
so you will have to test it yourself.
# create a list for the results
result <- vector('l
If it were me, I think I would try to use Rscript. R will still have
to pull data using ROracle, and write back to Oracle, but the
operation will be under the control of the PL/SQL script.
A standard R installation now includes Rscript. Rscript is intended
to be used, as I recall, in a manner a
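As a rough illustration (the script name, argument, and connection details below are just placeholders, not anything from the original thread), an Rscript job might look like:
#!/usr/bin/env Rscript
# pull_and_write.R -- hypothetical sketch; the ROracle details are omitted
args <- commandArgs(trailingOnly = TRUE)   # e.g. a table name passed in by the PL/SQL job
tab <- args[1]
cat("Processing table:", tab, "\n")
# ... connect with ROracle here, read the data, compute, and write results back ...
and the database side would invoke it roughly as: Rscript pull_and_write.R MY_TABLE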
>
> (2) More process and I/O facilities, specifically I'd like
> forking and
> something like a "functionconnection" which works like a
> textconnection but obtains input from / feeds output to a
> function.
> This would allow running an external process that receives inp
Thanks again for the response!
For example, I want to run the following
> contrast(fit.lme, list(Trust="U", Sex=levels(Model$Sex),
Freq=levels(Model$Freq)), list(Trust="T", Sex=levels(Model$Sex),
Freq=levels(Model$Freq)))
The 2nd and 3rd arguments are two lists that I'm trying to construct
I am still not sure what you expect as output. Can you provide an
example of what you think you need? What is it that you are
trying to construct? How do you then plan to use them? There might
be other ways of going about it if we knew what the intent was -- what
is the structure that you
On Nov 8, 2007, at 5:42 PM, Edith Hodgen wrote:
> Hi
>
>
[snip]
> What I think the problem is (I'm hoping it's not)
> ----
> I've used odfWeave to do something similar, and was then able to
> specify both the infile and the outfile (and so could go something
> like
>
Here are a couple of thoughts:
You can use 'file.rename' to rename the .tex file created by Sweave to
your pattern (within the loop if you stick with that approach).
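For example, a rough sketch of such a loop (the file and variable names here are made up):
groups <- c("A", "B", "C")                  # hypothetical grouping values
for (i in seq(along = groups)) {
  current.group <- groups[i]                # referenced inside the template
  Sweave("report-template.Rnw")             # always produces report-template.tex
  file.rename("report-template.tex",
              paste("report-", current.group, ".tex", sep = ""))
}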
I did something similar (using odfWeave) where the template file always
accessed a given dataframe (mydata1 for example) then when
Thanks.
So with kid.weights I will have (see below) - or is my example use of quantreg
wrong?
"""
#growth_chart_create.R
library(UsingR)
library(quantreg)
lbs_to_kg<-1/2.2046
inch_to_cm<-2.54
data(kid.weights)
data_kids<-kid.weights
data_kids$weight_kg<-kid.weights$weight*lbs_to_kg
data_kids$length
Hi
Apologies in advance if I've missed something obvious. I have read the
Sweave manual, the first article in R News, looked at the Help pages,
googled Sweave and words like loop, output, files, multiple, and done much
the same on R Site Search (in case I missed something on Google), and I
couldn't find
Thanks for the response!
I want to create those lists so that I could use them in a function
('contrast' in contrast package) as arguments.
Any suggestions?
Thanks,
Gang
On Nov 8, 2007, at 5:12 PM, jim holtman wrote:
> Can you tell us what you want to do, and not how you want to do it?
> Wit
Well Sadeghian, I have to give you credit for perseverance. This must be
the fifth email you have sent to R-help for this one question.
Unfortunately, with each additional message, your chances of getting any
help are becoming asymptotically closer to zero. So some advice:
> PLEASE do read the
Can you tell us what you want to do, and not how you want to do it?
Without the data it is hard to see. Some of your indexing probably
does not have the correct number of parameters when trying to do the
replacement. An explanation of what you expect the output to be would
be useful in determining
On Thu, 8 Nov 2007, Erik Iverson wrote:
> Hello -
>
> I am wanting to create some Cox PH models with coxph (in package
> survival) using different datasets.
>
> The code below illustrates my current approach and problem with
> completing this.
>
> ### BEGIN R SAMPLE CODE ##
Instead of mapply, use lapply (it is more appropriate in this case anyway as
you have only one list that you need to iterate over):
lapply(df.list, function(x) coxph(form, x))
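For instance, a self-contained sketch (the data and formula below are invented, not Erik's):
library(survival)
# two small made-up data sets with the same structure
df.list <- list(
  data.frame(time = rexp(20), status = rbinom(20, 1, 0.7), x = rnorm(20)),
  data.frame(time = rexp(20), status = rbinom(20, 1, 0.7), x = rnorm(20))
)
form <- Surv(time, status) ~ x
fits <- lapply(df.list, function(x) coxph(form, data = x))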
-Christos
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Erik Ivers
On 9/11/2007, at 9:01 AM, Julian Burgos wrote:
> Hey Christoph,
>
> It is not clear what you want to "extract".
> w[w>0.6] does give you the correlation values above 0.6. What is your
> question?
>
> Julian
>
Perhaps he wants
which(w>0.6,arr.ind=TRUE)
It is
I am having trouble creating an array of lists. For example, I want to do
something like this:
clist <- array(data=NA, dim=c(7, 2, 3));
for (n in 1:7) {
for (ii in 1:2) {
for (jj in 1:3) {
if (cc[n, ii, jj] == "0") { clist[n, ii, ][[jj]] <- list(levels
(MyModel[,colnames(M
[EMAIL PROTECTED] wrote:
> This looks a bit like it only works for Unix and maybe Linux. Is that
> right? I'm using Windows, so if it is I'll need another solution.
>
The latest version of the RSPerl code allows one to embed Perl within R
on
Hello -
I am wanting to create some Cox PH models with coxph (in package
survival) using different datasets.
The code below illustrates my current approach and problem with
completing this.
### BEGIN R SAMPLE CODE ##
library(survival)
#Define a function to make tes
This looks a bit like it only works for Unix and maybe Linux. Is that
right? I'm using Windows, so if it is I'll need another solution.
Stephen
> http://www.omegahat.org/RSPerl/
>
> --
> Andrew J Perrin - andrew_perrin (at
This will do the calculations and the plot:
> x <- scan(textConnection("255 0 255 0 255 255 255 0 255 0
+ 255 255 255 255 0 255 255 0 255 0
+ 255 255 255 255 255 255 255 0 255 0
+ 255 255 255 255 0 255 255 0 255 0
+ 255 255 0 255 255 255 255 0 255 0
+ 255 255 255 0 255 0 0 255 0 255"), what=0)
Rea
Dear Listers,
My post might be somewhat OT.
Currently, I am trying to use flexmix to build a finite mixture model.
For instance, I am getting the prior probability and coefficients for
each latent class from training data. Is there a way to get the
posterior probability and prediction for a new dataset?
Hi
I am trying to run a mixed effects model taking into account that I sampled
in the same locations four times, i.e. temporal repeated measures. From what
I gathered, I need to group my data by my repeated measure - time - and state
the structure of my random variables, so I tried this:
mixedmodel<
Hi everybody,
I'm a newbie, but I hope someone can help me with this work...
I'll try to explain what I need to do as best I can, but my English is
not good...
I've imported a big table of data; this table is something like this:
255 0 255 0 255 255 255 0 255 0
255 255 255 255 0 255 255 0 255 0
255 2
On Nov 8, 2007 3:16 PM, Jan T. Kim <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 08, 2007 at 01:35:34PM -0500, Duncan Murdoch wrote:
> > On 11/8/2007 1:26 PM, Barry Rowlingson wrote:
> > > hadley wickham wrote:
> > >
> > >> You're assuming an automatic cast from numbers into strings? What if
> > >> a +
On Thu, Nov 08, 2007 at 01:35:34PM -0500, Duncan Murdoch wrote:
> On 11/8/2007 1:26 PM, Barry Rowlingson wrote:
> > hadley wickham wrote:
> >
> >> You're assuming an automatic cast from numbers into strings? What if
> >> a + "4" threw an error?
> >
> > What's wrong with commas anyway when usin
On 11/8/2007 2:44 PM, Peter Dalgaard wrote:
...
> (We've been here before, haven't we?)
For anyone interested, last time was here:
https://mailman.stat.ethz.ch/pipermail/r-devel/2006-August/038991.html
and the very first thing Martin said in that message was that it was a
recurring theme.
Du
Hey Christoph,
It is not clear what you want to "extract".
w[w>0.6] does give you the correlation values above 0.6. What is your
question?
Julian
Christoph Scherber wrote:
> Dear R users,
>
> suppose I have a matrix of observations for which I calculate all
> pair-wise correlations:
>
>
On 11/8/2007 2:44 PM, Peter Dalgaard wrote:
> Duncan Murdoch wrote:
>> On 11/8/2007 11:51 AM, Thomas Lumley wrote:
>>
>>> On Wed, 7 Nov 2007, Duncan Murdoch wrote:
>>>
>>>
At first I thought you were complaining about the syntax, which I find
ugly. There was a proposal last year
Duncan Murdoch wrote:
> On 11/8/2007 11:51 AM, Thomas Lumley wrote:
>
>> On Wed, 7 Nov 2007, Duncan Murdoch wrote:
>>
>>
>>> At first I thought you were complaining about the syntax, which I find
>>> ugly. There was a proposal last year to overload + to do concatenation
>>> of strings, so
On 11/8/2007 2:27 PM, Alberto Monteiro wrote:
> Duncan Murdoch wrote:
>>
>>> and there's always sprintf() for those moments when you
>>> want neat formatting.
>>
>> That's good when you want good control over the formatting, but it
>> doesn't tend to be all that readable, with the variables all l
I think you have the reference for the lmreg package
wrong. It looks like it is Carey VJ. LMSqreg: An R
package for Cole-Green reference centile curves, 2002,
http://www.biostat.harvard.edu/~carey.
The package seems to be available at
http://www.biostat.harvard.edu/~carey/vcwww_4.html
--- Niels
Duncan Murdoch wrote:
>
>> and there's always sprintf() for those moments when you
>> want neat formatting.
>
> That's good when you want good control over the formatting, but it
> doesn't tend to be all that readable, with the variables all listed
> at the end, instead of in between the bits o
On 08-Nov-07 18:39:57, Gabor Grothendieck wrote:
> On Nov 8, 2007 1:26 PM, Barry Rowlingson <[EMAIL PROTECTED]>
> wrote:
>> hadley wickham wrote:
>>
>> > You're assuming an automatic cast from numbers into strings? What
>> > if
>> > a + "4" threw an error?
>>
>> What's wrong with commas anyway wh
Hello,
This is probably a naive question. Why does R store everything in
double-precision format? For many circumstances (e.g., dealing with
huge binary files) it seems like a waste of memory. Is there any
thought of allowing the user to decide the format when assigning an
object (e.g., as an opt
On Nov 8, 2007 1:26 PM, Barry Rowlingson <[EMAIL PROTECTED]> wrote:
> hadley wickham wrote:
>
> > You're assuming an automatic cast from numbers into strings? What if
> > a + "4" threw an error?
>
> What's wrong with commas anyway when using cat():
>
> > cat("x is ",x,' and y is ',y,'\n',sep='')
On 11/8/2007 1:26 PM, Barry Rowlingson wrote:
> hadley wickham wrote:
>
>> You're assuming an automatic cast from numbers into strings? What if
>> a + "4" threw an error?
>
> What's wrong with commas anyway when using cat():
>
> > cat("x is ",x,' and y is ',y,'\n',sep='')
> x is 1 and y i
Assume entries which are neither Case1 nor Case2 should be set to 0.
Then:
Case1 * (A == 1) * (D == 1) * (P == 1) + Case2 * (A == -1) * (D == -1)
* (P == -1)
# if A, D and P have their component values in the set [-1, 1] then
this works too:
Case1 * (pmin(A, D, P) == 1) + Case2 * (pmax(A, D, P) == -1)
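A quick made-up check of the first form (Case1 and Case2 here are just scalar codes, not anything from the original thread):
A <- matrix(c(1, -1, 1, -1), 2, 2)
D <- matrix(c(1, -1, -1, 1), 2, 2)
P <- matrix(c(1, -1, 1, 1), 2, 2)
Case1 <- 1; Case2 <- 2
Case1 * (A == 1) * (D == 1) * (P == 1) + Case2 * (A == -1) * (D == -1) * (P == -1)
# gives 1 where all three are 1, 2 where all three are -1, and 0 elsewhere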
You could try using levelplot from the lattice package:
library(lattice)
levelplot(mcpvalue~x+y)
failing that, interpolate them to a grid using akima or fields then
display with image
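For example (a rough sketch, assuming x, y and mcpvalue are the numeric vectors from the original post):
library(akima)
grd <- interp(x, y, mcpvalue)        # interpolate the scattered points onto a regular grid
image(grd, xlab = "x", ylab = "y")   # display the interpolated surface
contour(grd, add = TRUE)             # optional contour overlay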
L
On 8 Nov 2007, at 18:13, Duncan Murdoch wrote:
> On 11/8/2007 11:51 AM, zhijie zhang wrote:
>> Dear friend
hadley wickham wrote:
> You're assuming an automatic cast from numbers into strings? What if
> a + "4" threw an error?
What's wrong with commas anyway when using cat():
> cat("x is ",x,' and y is ',y,'\n',sep='')
x is 1 and y is 2
and there's always sprintf() for those moments when you
You are putting your results back into "A" which might change things
as you execute. This might be a faster way:
result <- matrix(NA,dim(A)[1], dim(A)[2])
# now compute the cases
result[(A ==1) & (D == 1) & (P ==1)] <- Case1
result[(A == -1) & (D == -1) & (P == -1)] <- Case2
...
On Nov 8, 2
w[w>.6] seems to work for me. I cut down the size of
the matrix for easier visual inspection.
m=matrix(sample(1:20,replace=T),4,5)
w=cor(m,use="pairwise.complete.obs")
w
w[w>.6]
perhaps preferably
w[w>0.6 & w!=1]
--- Christoph Scherber
<[EMAIL PROTECTED]> wrote:
> Dear R users,
>
> supp
On 11/8/2007 11:51 AM, zhijie zhang wrote:
> Dear friends,
> My dataset is like the following:
> x          y          mcpvalue
> 0.4603578  0.6247629  1.001
> 0.4603715  0.6247788  1.001
> 0.4603852  0.6247948  1.001
> 0.4110561  0.5664841  0.995
On 11/8/2007 12:57 PM, hadley wickham wrote:
>> My objection, at least, was that + should be *associative*. I don't think
>> anyone would expect a + b and b+a to be the same for strings, but I do
>> think the fact that (a+b)+c and a+(b+c) would be different (if some of a,
>> b,c were strings) has
On 11/8/2007 11:51 AM, Thomas Lumley wrote:
> On Wed, 7 Nov 2007, Duncan Murdoch wrote:
>
>>
>> At first I thought you were complaining about the syntax, which I find
>> ugly. There was a proposal last year to overload + to do concatenation
>> of strings, so you'd type cat("x=" + x + "y=" + y + "
> My objection, at least, was that + should be *associative*. I don't think
> anyone would expect a + b and b+a to be the same for strings, but I do
> think the fact that (a+b)+c and a+(b+c) would be different (if some of a,
> b,c were strings) has real potential for ugliness.
You're assuming an
On 11/8/07, Michael Kubovy <[EMAIL PROTECTED]> wrote:
> Dear r-helpers,
>
> I'm using ggplot2::ggplot to plot seven time series on the same graph.
>
> c <- ggplot(jobm, aes(y = value, x = year, colour = kind))
> c + stat_smooth()
>
> This gives me a legend with colors that are not very different. C
Hi Azadeh,
As the warning message is telling you, it seems that your initial
parameters for the covariance functions are not very good. Something
that you can do is to use the eyefit() function (package geoR) to fit
your variogram "by eye" and get a first approximation for your
covariance parameters.
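Something along these lines (only a sketch; 'geodat' is a placeholder for your geodata object, and I believe variofit() will accept the eyefit result as starting values):
library(geoR)
v   <- variog(geodat)                    # empirical variogram
ini <- eyefit(v)                         # interactive: adjust the curve until it looks right
fit <- variofit(v, ini.cov.pars = ini)   # use the eye-fitted values as starting parameters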
Hi all,
Thanks for the advice! Gabor, I've been putting off getting into
SQLite. I may need to bite the bullet and learn it.
Jim - thanks for the help - and yes, I'd read that old post. My
problem is that, with the other objects already in memory, I cannot
pull the whole matrix in (in reality, it
Hi
look at akima package, especially its function interp.
Petr
[EMAIL PROTECTED]
[EMAIL PROTECTED] wrote on 08.11.2007 17:51:21:
> Dear friends,
> My dataset is like the following:
> x          y          mcpvalue
> 0.4603578 0.6247629 1.001
> 0.4603715 0.6
Hi all,
I have a set of patterns which can occur in a series of (3) matrices. I
want to identify those and create a fourth one with the identifiers of
the cases.
Something like:
for (i in 1:l) {
for (j in 1:w) {
A[A[i,j]==1 & D[i,j]==1 & P[i,j]=
If I understand what you want, you can use 'which'
which(w>0.6, arr.ind=T)
On 08/11/2007, Christoph Scherber <[EMAIL PROTECTED]>
wrote:
>
> Dear R users,
>
> suppose I have a matrix of observations for which I calculate all
> pair-wise correlations:
>
> m=matrix(sample(1:100,replace=T),10,10)
>
Dear r-helpers,
I'm using ggplot2::ggplot to plot seven time series on the same graph.
c <- ggplot(jobm, aes(y = value, x = year, colour = kind))
c + stat_smooth()
This gives me a legend with colors that are not very different. Can I
label the lines instead? How?
Hi
[EMAIL PROTECTED] wrote on 08.11.2007 16:43:14:
> Dear R users,
>
> suppose I have a matrix of observations for which I calculate all
> pair-wise correlations:
>
> m=matrix(sample(1:100,replace=T),10,10)
> w=cor(m,use="pairwise.complete.obs")
>
> How do I extract only those correlations
Look at ?get and possibly ?Filter (new in 2.6.0); do they help with what
you want?
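For example, a minimal sketch of what ?get can do (the object names here are invented):
fit1 <- lm(dist ~ speed, data = cars)
fit2 <- lm(dist ~ poly(speed, 2), data = cars)
nms    <- paste("fit", 1:2, sep = "")    # "fit1" "fit2"
models <- lapply(nms, get)               # fetch the objects by their names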
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
(801) 408-8111
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
Dear friends,
My dataset is like the following:
x          y          mcpvalue
0.4603578  0.6247629  1.001
0.4603715  0.6247788  1.001
0.4603852  0.6247948  1.001
0.4110561  0.5664841  0.995
The x and y variables are unsorted.
I use the functi
Thanks for all your advice; I have spent considerable time looking at all
the options. I have good news and bad news. The bad news: I won't be implementing R
on our server. The good news: I can keep all the business logic in the database
where it belongs. What I will be doing is taking the data set and tur
On Wed, 7 Nov 2007, Duncan Murdoch wrote:
>
> At first I thought you were complaining about the syntax, which I find
> ugly. There was a proposal last year to overload + to do concatenation
> of strings, so you'd type cat("x=" + x + "y=" + y + "\n"), but there was
> substantial resistance, on the
The lmsqreg package is by Vince Carey and is available from his
website. There is an independent package with fortran routines from
Tim Cole.
Both packages implement the LMS method of Cole and Green. You might
also
want to consider alternative methods: an approach based on
nonparametric
qu
On Thu, 8 Nov 2007, Naxerova, Kamila wrote:
> Hi all,
> is the Matrix package no longer available for download via install.packages?
Not for your obsolete version of R. The current version has
Depends: R (>= 2.5.1), stats, methods, utils, lattice
as you can see at
http://cran.r-project.org/sr
On Thu, 8 Nov 2007, envisage wrote:
Prof. Brian, thanks. I had checked the on-line Errata at
http://www.stats.ox.ac.uk/pub/MASS4/Errata4.2 before, for the second
printing.
I didn't know it depended on the first-printing Errata at all.
The second printing does not use data() for the MASS datasets,
We are constructing growth charts (age/weight and age/length) for children
with a diagnosis that impacts weight/length.
But we don't know how to use R for producing growth charts.
We are collecting data on Age, Weight and Length.
The data are used to produce diagnosis-specific Growth charts (l
Hope I am not too late joining this thread. I believe the difference
between R and SPSS is because SPSS adjusts the Type III SS by the
harmonic mean of the unbalanced cell sizes. This calculation is
discussed in Maxwell and Delaney (1990, pp. 271-297).
In short, the best explanation I can offer
Dear R users,
suppose I have a matrix of observations for which I calculate all
pair-wise correlations:
m=matrix(sample(1:100,replace=T),10,10)
w=cor(m,use="pairwise.complete.obs")
How do I extract only those correlations that are >0.6?
w[w>0.6]   # obviously doesn't work,
and I can't find a way
Ooops - yes that's a bug! It'll be fixed in the next version of
ggplot, or you can run this code to fix it yourself:
GeomAbline$new <- function(., mapping=aes(), data=NULL, intercept=0,
slope=1, ...) {
if (missing(data)) {
data <- data.frame(intercept = intercept, slope=slope)
}
Hello again,
Sorry, but the code that I inserted wasn't right. It should be like this:
fit_2323v_168f<-auto.arima(regts.ts, d = NA, D = NA, max.p = 2, max.q = 2,
max.P = 1, max.Q = 1, max.order = 5,
start.p=0, start.q=0, start.P=0, start.Q=0,
stationary
Peter and Moshe, thank you both for your suggestions and hints. I'm proud to
say that it took me less than an hour to find my mistake:
> s_pooled <- (((n-1)*(s_x^2)) + ((m-1)*(s_y^2))) / (n+m-2)
> s_pooled
[1] 1.939521
> t_obs <- (xbar - ybar) / (sqrt(s_pooled) * (sqrt(1/n + 1/m)))
> t_obs
[1] 2.
On 11/8/07, ONKELINX, Thierry <[EMAIL PROTECTED]> wrote:
> You could work around it like this.
>
> n <- length(levels(TDBU$system))
> rows <- ceiling(sqrt(n))
> TDBU$rows <- ceiling(as.numeric(TDBU$system) / rows)
> TDBU$cols <- (as.numeric(TDBU$system) - 1) %% rows
>
> ggplot(TDBU,aes(x=x))+geom_
Hello,
I am using the function auto.arima() from the forecast package to choose the values
of p,d,q and P,D,Q.
My problem is the execution time of this function; for example, a time
series with 2323 values and weekly seasonality takes over 8 hours to
run through all the possibilities.
I am using a compute
?unlist
I think you're misreading ?stack.
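For example, with the list from your message:
f <- rnorm(2); s <- rnorm(3)
l <- list(f, s)
unlist(l)   # one numeric vector of length 5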
--- Frank Schmid <[EMAIL PROTECTED]> wrote:
> Dear R user
>
> Suppose I have the following list:
>
> > f <- rnorm(2)
> > s <- rnorm(3)
> > l <- list(f,s)
> > l
> [[1]]
> [1] 0.31784399 0.08575421
>
> [[2]]
> [1] -0.6191679 0.7615479 -1.0087659
>
Dear list.
I am an M.S. statistics student in the statistics department. I have this problem
working with the functions variofit and nls and don't know how to solve it.
var1<-variog(data,option="bin")
var2<-variog(data,option="cloud")
v1<-var1$v
u1<-var1$u
v2<-var2$v
u2<-var2$u
variofit(var1,ini.cov.pars
You could work around it like this.
n <- length(levels(TDBU$system))
rows <- ceiling(sqrt(n))
TDBU$rows <- ceiling(as.numeric(TDBU$system) / rows)
TDBU$cols <- (as.numeric(TDBU$system) - 1) %% rows
ggplot(TDBU,aes(x=x))+geom_histogram(aes(y=..density..))+ geom_density() +
  facet_grid(rows ~ cols)
I have this problem working with the functions variofit and nls and don't know
how to solve it.
var1<-variog(data,option="bin")
var2<-variog(data,option="cloud")
v1<-var1$v
u1<-var1$u
v2<-var2$v
u2<-var2$u
variofit(var1,ini.cov.pars=c(0.005,1.5),cov.model="power",fix.nugget=F,weight="equal")
var
Hello.
Error in optim(ini, .loss.vario, method = "L-BFGS-B", hessian = TRUE, :
non-finite value supplied by optim
I get this error when fitting the "power" model in a simulation with the function
g<-function(n=100,max=100,c0=0,ce=.005,ae=1.5,nsim=1000){
var.exp<-function(c0,ce,ae,h){
f<-c0+ce
Does anyone (Hadley??) know if there's a straightforward
way in ggplot2 to get data divided by a single factor to
plot as a rectangular grid of subplots? So far I've only
been able to get such data plotted as a single row or
single column of skinny subplots. The code below gives
an example i
Hi all,
is the Matrix package no longer available for download via install.packages?
When I try to install it (from any mirror), I get the following error message:
install.packages("Matrix",lib="/home/kn52/R-2.5.0/library")
Warning message:
package 'Matrix' is not available in
install.packages("
Have you tried "as.is=TRUE"?
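i.e. something like this (the arguments are copied from your message, with as.is added; only a sketch, untested without your file):
temp <- read.fwf("Raw data.txt", widths = c(11, 21, 10, rep(16, 6)),
                 skip = 2, n = 2, as.is = TRUE, strip.white = TRUE)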
On Nov 8, 2007 6:20 AM, <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm trying to use read.fwf
>
>temp = read.fwf ("Raw data.txt", widths = c (11, 21, 10, rep
> (16, 6)) ,skip = 2, n = 2, stringsAsFactors = FALSE, strip.white = TRUE)
>
> but no matter what I do the s
Don't know if SQLite can handle that many columns, but if it can and the file is
in an acceptable format, then sqldf simplifies the interface to reading it
into an SQLite database that it automatically creates on the fly and then
gets a subset out of it into R. (If it will fit into memory you can omit t
Dear List,
I encountered a strange problem when trying to print a
path diagram:
> path.diagram(s,
+ ingore.double=FALSE,
+ edge.labels='values')
digraph "s" {
rankdir=LR;
size="8,8";
node [fontname="Helvetica" fontsize=14 shape=box];
edge [fontname="Helvetica" fontsize=10];
Also check out:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/92525.html
On Nov 8, 2007 4:19 AM, Matthew Keller <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> Is there a way to skip non-sequential lines using the "skip" argument
> in the scan function?
>
> E.g., I have a matrix with 100 rows and 1e7
simplest thing is to read in all 100 rows and then just select the
ones you want:
x <- scan()
x <- lapply(x, '[', seq(5,99,2))
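Another rough sketch, if whole rows are what you are after (the file name below is made up; untested on data this wide):
lines  <- readLines("mymatrix.txt")            # read all rows as text
wanted <- lines[seq(5, 99, 2)]                 # keep rows 5, 7, ..., 99 only
dat    <- read.table(textConnection(wanted))   # parse just those rows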
On Nov 8, 2007 4:19 AM, Matthew Keller <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> Is there a way to skip non-sequential lines using the "skip" argument
> in the sca
I am learning ggplot2, and need your help.
When I try
> p <- ggplot(mtcars, aes(x = wt, y=mpg)) + geom_point()
> p + geom_abline(slope=5)
(from http://had.co.nz/ggplot2/geom_abline.html)
the slope of the abline does not change, but this works:
> p + geom_abline(intercept=20)
In order to hav
Here are replies from Ted and Jasjeet. Thank you both for your help.
Oarabile
Jasjeet Singh Sekhon wrote:
>A bootstrap Kolmogorov-Smirnov test will have the correct test level
>even if there are ties---i.e., even if non-continuous distributions
>are being compared. See Abadie, Alberto. 2002. `
On 11/6/07, Van Campenhout Bjorn <[EMAIL PROTECTED]> wrote:
>> Hi all,
>>
>> I made a dotplot() with lattice, which comes out nice on the graphics
>> device. I can save this as a eps using postscript() and include this in
>> a word document. This prints nice, but does not look good on screen.
Hi,
I'm trying to use read.fwf
temp = read.fwf ("Raw data.txt", widths = c (11, 21, 10, rep
(16, 6)) ,skip = 2, n = 2, stringsAsFactors = FALSE, strip.white = TRUE)
but no matter what I do the strings are turned into factors. I believe
it's the "n=2" parameter that causes the problem a
Hi
[EMAIL PROTECTED] wrote on 07.11.2007 18:23:55:
> hello,
>
> I am a bit of a statistical neophyte and currently trying to make some sense
> of confidence intervals for correlation coefficients. I am using the
> cor.test() function. The documentation is quite terse and I am having trouble
Hello,
I just created a graph with
myUndirectedGraph = randomEGraph(as.character(1:50), edges = 50)
The result is now an undirected graph. How can I change it into a directed one
with random directed edges?
Thanks, Corinna
Read the posting guide and follow it, i.e.
a) do not cross-post.
b) read the mailing list archives and find that this has been resolved
in R-patched
Uwe Ligges
Camila Estevam wrote:
> Hi,
>
> I was using the 2.4.1 R version and I had no problem
> saving my plots as postScript. Now that I hav
Prof. Brian, thanks. I had checked the on-line Errata at
http://www.stats.ox.ac.uk/pub/MASS4/Errata4.2 before, for the second
printing.
I didn't know it depended on the first-printing Errata at all.
And sorry again for my anonymous posting. I am not a native English
speaker, and I should learn how to
I was sure that there was such a solution.
I would have liked to find it.
Thank you for your help.
Ptit Bleu.
-
Wollkind, Steven wrote:
>
> You don't need to loop. You can just do
>
> pfit$coefficients[is.na(pfit$coefficients)] <- 0
>
>
>
> Steve Wollkind
> As
NB: you posted part of a private message (out of context: it was in reply
to another message) to R-help. Both posting a private message without
permission and not giving the necessary context are breaches of copyright
law, and rather annoying -- see the posting guide for details.
On Wed, 7 Nov
On Thu, 8 Nov 2007, simon gatehouse wrote:
> If possible I would like to add two sub-menus to the R Console under
> Windows.
>
> For example, I would like to add:
> winMenuAddItem("File", "Load CSV...", "loadCSV()")
> winMenuAddItem("File", "Save CSV...", "saveCSV()")
>
> and have them appear unde
Hi all,
Is there a way to skip non-sequential lines using the "skip" argument
in the scan function?
E.g., I have a matrix with 100 rows and 1e7 columns. I open a
connection and want to read only lines 5, 7, 9, etc [i.e.,
seq(5,99,2)]
It might seem that the syntax to do this would be something li
Dear John,
Forgive me for sticking my nose in, I hope that I'm not rude, but I am a bit
bewildered by your mail (and by statistical modelling).
I agree that if your model is:
Lawndepression ~ lawn.roller.weight + (1|lawn.id),
When, in fact, it *should be* (because you simulated the data or you're
Dear List
I am a newbie to this list and a fresh user of R. Is it possible to create
shade maps with R? If so, please point me to the relevant procedures and
materials.
with thanks
pushparaj