Oops, now with link to Ahmed's page
http://cs.anu.edu.au/people/Ahmed.ElZein/doku.php?id=research:more
On Tue, Nov 18, 2008 at 11:25 PM, Mose <[EMAIL PROTECTED]> wrote:
> GPU architecture is different enough from CPU architecture that you
> don't need 10s of GPUs to see a performance benefit ove
GPU architecture is different enough from CPU architecture that you
don't need 10s of GPUs to see a performance benefit over today's, say,
8 core CPUs. Lots of GPUs now give you a (relatively cheap)
"supercomputer" -- look up nVidia's Tesla marketing mumbo jumbo. One
GPU still gives you a 'heckuv
On Tue, 18 Nov 2008, Emmanuel Levy wrote:
Dear All,
I just read an announcement saying that Mathematica is launching a
version working with Nvidia GPUs. It is claimed that it'd make it
~10-100x faster!
http://www.physorg.com/news146247669.html
Well, lots of things are 'claimed' in marketing (
I wonder whether a package for signal (not image) features extraction has ever
been contributed to CRAN.
We have a very crude signal classifier based on the absolute difference of the
area subtended by the two signals being compared. We miss by far a lot of
possibly important features.
Thank yo
Er, ... the log transform is more like using larger units (giving
smaller numerical values).
On Nov 18, 2008, at 11:55 PM, David Winsemius wrote:
You can always inflate the SS by using smaller units, which is what
your log transformation is doing. What is important for inference
is the
You can always inflate the SS by using smaller units, which is what
your log transformation is doing. What is important for inference is
the ratios of those sums of squares. The rest of your homework is
something you will need to complete yourself.
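A quick way to convince yourself of the units point: rescaling the response inflates every sum of squares by the square of the scale factor, but the F statistic, being a ratio of mean squares, is unchanged. A minimal sketch with made-up data:

```r
set.seed(1)
x <- 1:20
y <- 2 * x + rnorm(20)         # arbitrary toy data
y2 <- 1000 * y                 # same data in "smaller units"

a1 <- anova(lm(y ~ x))
a2 <- anova(lm(y2 ~ x))

a1[["Sum Sq"]]                 # modest sums of squares
a2[["Sum Sq"]]                 # inflated by a factor of 1e6
a1[["F value"]][1]             # identical F statistic in both tables
a2[["F value"]][1]
```

Note that a log transform is not a linear rescaling, so it does change the F statistic; the demo only illustrates the pure change-of-units case discussed above.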
http://www.ugr.es/~falvarez/relaMetodos2.
My guess is that you are dimly remembering stack(.)
t <- list(A=c(4,1,4),B=c(3,7,9,2))
> stack(t)
  values ind
1      4   A
2      1   A
3      4   A
4      3   B
5      7   B
6      9   B
7      2   B
If you need the internal integer codes of ind, that is possible as well:
> as.numeric(stack(t)$ind)
Hi wizards,
I have the following model:
x<-c(20.79, 22.40, 23.15, 23.89, 24.02, 25.14, 28.49, 29.04, 29.88, 30.06)
y <- c(194.5, 197.9, 199.4, 200.9, 201.4, 203.6, 209.5, 210.7, 211.9, 212.2)
model1 <- lm( y ~ x )
anova(model1)
          Df Sum Sq Mean Sq F value Pr(>F)
x          1 368.87 36
Dear kayj,
Here is one way:
# Data
set.seed(123)
x=runif(100)
# Cuts
as.data.frame.table(table(cut(x,seq(0,1,by=0.1))))
#        Var1 Freq
# 1   (0,0.1]    7
# 2 (0.1,0.2]   12
# 3 (0.2,0.3]   11
# 4 (0.3,0.4]    9
# 5 (0.4,0.5]   14
# 6 (0.5,0.6]    7
# 7 (0.6,0.7]   11
# 8 (0.7,0.8]   11
# 9 (0.8,0.9]
Dear Marco,
Try this:
# Data
t1 <- list(A=c(4,1,4),B=c(3,7,9,2))
# Processing
res=data.frame(
  t1=do.call(c,t1),
  levels=rep(c(1,2),do.call(c,lapply(t1,function(x) length(x))))
)
rownames(res)=NULL
res
HTH,
Jorge
On Tue, Nov 18, 2008 at 10:15 PM, Blanchette, Marco <
[EMAIL PROTECTED]> wrote:
Thanks. That's very useful.
I was confused by the fact that the p-values are in fact printed in
Faraway's book and elsewhere. The package's default output must have
changed since then.
UKaraoz wrote:
>
>
> I am trying to replicate the repeated measures example from Dr.Faraway's
> boo
Hi
Perhaps Python's raw strings are what Duncan TL was referring to?
These are specified as r'Hello World' and their main
advantage is that backslashes are simply passed through. From the
Python Language Reference:
"When an 'r' or 'R' prefix is present, a character following a
backslash is inclu
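For the R side of the same annoyance, a small illustration (not from the original thread) of why raw strings are tempting: matching one literal backslash takes four backslashes in an ordinary R string, because both the string parser and the regex engine consume one level of escaping.

```r
s <- "a\\b"          # the 3-character string a\b (one real backslash)
nchar(s)             # 3
grepl("\\\\", s)     # regex \\ matches a literal backslash -> TRUE
gsub("\\\\", "/", s) # replaces the backslash, giving "a/b"
```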
I am pretty sure that I came across a function that creates a vector of levels
from a list but I just can't remember.
Basically, I have something like
> t <- list(A=c(4,1,4),B=c(3,7,9,2))
> t
$A
[1] 4 1 4
$B
[1] 3 7 9 2
And I would like to get something like the following:
t levels
4 1
1 1
4 1
> -Original Message-
> From: [EMAIL PROTECTED] on behalf of UKaraoz
> Sent: Tue 11/18/2008 2:16 PM
> To: r-help@r-project.org
> Subject: [R] lmer p-values for fixed effects missing
>
>
> I am trying to replicate the repeated measures example from Dr.Faraway's book
> (Extending the line
Uwe Ligges <[EMAIL PROTECTED]> writes:
> Hutchinson,David [PYR] wrote:
>> Hi,
>> I am trying to build an R package. My existing code makes use of
>> the
>> bitops and chron packages. So I have included statements to import
>> required functionality into the NAMESPACE file using import(). When I
>
I am trying to replicate the repeated measures example from Dr.Faraway's book
(Extending the linear model with R) as follows:
data(vision)
vision$npower <- rep(1:4,14)
mmod <-lmer(acuity~power+(1|subject)+(1|subject:eye),vision)
When I look at the fixed effects p-value, it is missing. Am I mis
Dear All,
I just read an announcement saying that Mathematica is launching a
version working with Nvidia GPUs. It is claimed that it'd make it
~10-100x faster!
http://www.physorg.com/news146247669.html
I was wondering if you are aware of any development going into this
direction with R?
Thanks f
Jared Chapman schrieb:
I am an uber newbie with R but I have a question that I hope someone
can help me with.
I have a data set with 1200+ data points that I want to put into a
cluster graph. The problem is that when the cluster graph is generated
there are too many data points to view the labels
Just for clarification: this is an extension where you can use any
switching probability. Note that the prob argument passed to rbinom
is 1-P, where P is the probability of switching the sign.
On Tue, Nov 18, 2008 at 11:21 PM, joris meys <[EMAIL PROTECTED]> wrote:
> The funct
The function rbinom might be a solution.
Try following simple program :
vec <- c(-1,1,-1,1,1,-1,-1,1,1)
inv <-rbinom(length(vec),1,0.5)
inv <-ifelse(inv==0,-1,1)
vec2 <- vec*inv #switches sign with p=0.5
In this, inv is a random binomial vector, where the probability for
being 1 is 0.5 in all
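Putting the pieces together, a self-contained sketch of the sign-switching idea for an arbitrary switch probability p (variable names here are illustrative, and the flip is drawn directly with probability p rather than 1-p):

```r
set.seed(42)
vec <- c(-1, 1, -1, 1, 1, -1, -1, 1, 1)
p <- 0.5                                # probability that an entry flips sign

flip <- rbinom(length(vec), 1, p)       # 1 = flip this entry, 0 = keep it
vec2 <- vec * ifelse(flip == 1, -1, 1)  # multiply flipped entries by -1
vec2                                    # same entries, some signs switched
```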
Good Morning,
I'm using the vars package to do a Johansen procedure for a VAR, and I'm
interested in using it with K=1, but the following error comes up
> ca.jo(Canada, type ="eigen", ecdet ="const", K = 1,
+ spec="longrun", season = NULL, dumvar = NULL)
Error in ca.jo(Canada, type = "eigen", ecdet = "const",
Nicklas Pettersson stat.su.se> writes:
>
> Hi,
>
> I wonder if anyone knows how to generate a list of objects, e.g. ten
> vectors with names: vect1, vect2, ... , vect10.
> My own idea was to use something like:
>
> for (i in 1:10)
> print(paste("vect", i,"<-NULL",sep=""))
>
I thin
Sorry, the code is incomplete.
You get a better result this way...
postscript('Circle.eps',paper='special',width=4,height=4)
par(mar=c(0,0,0,0))
plot.new()
points(0.5,0.5,pch=21,cex=50,bg='gray')
dev.off()
-Original Message-
From: Rodrigo Aluizio [mailto:[EMAIL PROTECTED]
Sent: Tue
Salas, Andria Kay <[EMAIL PROTECTED]> [Tue, Nov 18, 2008 at 04:10:06PM CET]:
> I need help with another problem I am having that deals with the generation
> of vectors that I asked about yesterday. I now need to have each value in
> the vector (all values either 1 or -1) have a probability p tha
Hi Hans,
Thanks for the reply.
On Tue, 18 Nov 2008, Hans Werner Borchers wrote:
Faheem Mitha email.unc.edu> writes:
Hi,
Does anyone know of an R ORM (Object Relational Mapper)? I'm thinking of
something similar to sqlalchemy (http://www.sqlalchemy.org/).
Alternatively or additionally, can
A fast and simple way to do that would be something like this (the example
is a gray circle)
postscript('Circle.eps')
par(mar=c(0,0,0,0))
plot.new()
points(0.5,0.5,pch=21,cex=50,bg='gray')
dev.off()
You just have to change the symbol (pch) and color (bg); the output will be
an EPS file, open it o
I am an uber newbie with R but I have a question that I hope someone
can help me with.
I have a data set with 1200+ data points that I want to put into a
cluster graph. The problem is that when the cluster graph is generated
there are too many data points to view the labels, they are just a
jumbled
Can't you just set up the corresponding nested regression equations
and get F-statistics from an ANOVA table?
--
David Winsemius, MD
Heritage Labs
On Nov 18, 2008, at 11:24 AM, ram basnet wrote:
Hi R users,
I want to calculate the partial correlation (first or second order)
and their corre
Hi everyone,
I have a PCA plot that I'm writing about in the text. There were so many
symbols in different colours on it that I didn't include a legend in the
plot as it would be useless. So what I was hoping to do was to talk about
each set of replicates in the text and when I do that, use thei
on 11/18/2008 10:31 AM Yiguo Sun wrote:
> Dear All,
>
> I have both R 2.80 and Scientific Workplace 5.5 installed on my
> computer.
>
> I copied the following commands from HTML help provided by R:
>
> testfile <- system.file("Sweave", "Sweave-test-1.Rnw", package =
> "utils") Sweave(testfile)
>
It's generally better to have a descriptive subject as you have done
here but not in your prior posting.
You might think about how you can use this sort of result:
vec <- c(1, -1, -1, 1, 1, -1)
sample(c(-1,1),length(vec),replace=T)
Perhaps:
vec <- vec*sample(c(-1,1),length(vec),replace=T)
?s
On Tue, 2008-11-18 at 16:27 +, Sharma, Manju wrote:
> Hi,
>
> I have a small simple data frame (attached) - to compare diversity of
> insects encountered in disturbed and undisturbed sites. What I have is
> the count of insects - the total number of times they were encountered
> over 30 mon
solution:
reshape package, melt function.
On Tue, 2008-11-18 at 02:07 +, Alexandre Swarowsky wrote:
> Hi,
>
> It's probably a simple issue but I'm struggling with that. I'll use the
> example shown in the help page.
>
> head(Indometh)
> wide <- reshape(Indometh, v.names="conc", idvar="Subje
To get at the effect of d in the model y~ a+b+c+d you need to look at the
regression of the residuals of y ~ a+b+c on the residuals of d ~ a+b+c, that
should give you the same results/significance as d in the full model. If you
only regress the residuals against d, then that does not adjust fo
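That residual-on-residual construction is the added-variable (Frisch-Waugh-Lovell) idea; a minimal sketch with simulated data (the names a, b, c, d, y are made up here), checking that the coefficient of d from the full model matches the residual-on-residual fit:

```r
set.seed(1)
n <- 50
a <- rnorm(n); b <- rnorm(n); c <- rnorm(n); d <- rnorm(n)
y <- a + 2*b - c + 3*d + rnorm(n)

full <- lm(y ~ a + b + c + d)

ry <- resid(lm(y ~ a + b + c))  # y with a, b, c partialled out
rd <- resid(lm(d ~ a + b + c))  # d with a, b, c partialled out
part <- lm(ry ~ rd)

coef(full)["d"]                 # same slope as ...
coef(part)["rd"]                # ... the residual-on-residual regression
```

Regressing ry against the raw d instead of rd gives a different (unadjusted) slope, which is exactly the pitfall described above.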
Dear Patrick,
Try this:
plot(y, xlab="Survival Time (Months)", ylab="Survival Probability",
mark.time=TRUE, col = c("red", "blue"),
main="Kaplan-Meier Curve of Survival Times - Stratified by Group", lwd=3)
legend('topright',c("Locally Advanced",
"Metastatic"),lty=1,text.col=c('red','blue'))
See ?
I recently put several packages for time series databases on CRAN. The
main package, TSdbi, provides a common interface to time series
databases. The objective is to define a standard interface so users can
retrieve time series data from various sources with a simple, common,
set of commands, a
Duncan Murdoch murdoch at stats.uwo.ca Sat Nov 8 15:41:34 CET 2008
wrote:
> On 08/11/2008 7:20 AM, John Wiedenhoeft wrote:
> > Hi there,
> >
> > I rejoiced when I realized that you can use Perl regex from within
R. However,
> > as the FAQ states "Some functions, particularly those involving
regul
Peter,
See this FAQ from the site that I referenced:
How can I install the packages from the EPEL software repository?
http://fedoraproject.org/wiki/EPEL/FAQ#How_can_I_install_the_packages_from_the_EPEL_software_repository.3F
and this one pertaining to the GPG key:
How do I know that a package
I am attempting to sample 10 markers from each chromosome, with a maximum
distance of 14, calculated by the location of the marker in each chromosome
as loc[i+1] - loc[i]. I presume the easiest way to do this is with a while
loop, so that the function keeps re-sampling when the max distance is
gre
To clear up a question regarding my earlier posting regarding random changes in
a vector:
Say you have a vector (1, -1, -1, 1, 1, -1). I want each value in the vector
to have a probability that it will change signs when this vector is
regenerated. For example, probability = 50%. When the vec
Hi all,
I am relatively new to R. I searched all relevant CRAN task views, in
particular finance and time series but could not find any package that
covers a function for bootstrapping skewness-adjusted t-statistics (Johnson
1978). Did I miss something? any help is highly appreciated.
thanks
fabi
Hi Marc + Gustavo,
thank you for the help.
Marc -
I tried to install the xdg-utils RPM from a different web site ( I can not
find it anymore) and yum complained about a missing public key. I never
could find the public key for that RPM package.
The command I have for updating the keys is
su
List,
Is there any way to specify the position of the legend from within the
following code? If so, how can I do it. As it stands it in the bottom left
corner and I want to move it to the top right. I'm not sure if I can use the
default "plot" or if I need to go with the lattice package. Sugg
'assign'
for (i in 1:10) assign(paste('vect', i, sep=''), NULL)
or use a 'list'
On Tue, Nov 18, 2008 at 12:58 PM, Nicklas Pettersson
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I wonder if anyone knows how to generate a list of objects, e.g. ten vectors
> with names: vect1, vect2, ... , vect10.
> My ow
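A sketch of the 'list' alternative suggested above, which keeps the ten vectors in one object instead of cluttering the workspace with vect1 ... vect10 (names are illustrative):

```r
# One list with ten named, initially empty slots.
vects <- setNames(vector("list", 10), paste("vect", 1:10, sep = ""))

vects$vect3 <- c(1, 2, 3)   # fill one slot
vects[["vect3"]]            # and retrieve it again by name
```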
I would not use 'force', as the consequences can be a corrupted RPM
database and other conflicts.
The xdg-utils RPM for RHEL 4 is available from here:
http://download.fedora.redhat.com/pub/epel/4/x86_64/
You might want to review the EPEL FAQ here:
http://fedoraproject.org/wiki/EPEL/FAQ
You
Ah, again the FAQ: "How can I turn a string into a variable?"
Uwe Ligges
Nicklas Pettersson wrote:
Hi,
I wonder if anyone knows how to generate a list of objects, e.g. ten
vectors with names: vect1, vect2, ... , vect10.
My own idea was to use something like:
for (i in 1:10)
print(pa
Hutchinson,David [PYR] wrote:
Hi,
I am trying to build an R package. My existing code makes use of the
bitops and chron packages. So I have included statements to import
required functionality into the NAMESPACE file using import(). When I
run Rcmd build, an error is generated: "Error: packa
Hi,
I wonder if anyone knows how to generate a list of objects, e.g. ten
vectors with names: vect1, vect2, ... , vect10.
My own idea was to use something like:
for (i in 1:10)
print(paste("vect", i,"<-NULL",sep=""))
but the result is:
"vect1<-NULL"
...
"vect10<-NULL"
and not
vect1<-N
From ?"[":
"When indexing arrays by [ a single argument i can be a matrix with as
many columns as there are dimensions of x"
Uwe Ligges
start wrote:
To replace particular values in a matrix we can use the following command.
matrix[x,y] = value
But what to do if I have a list of +- 1,000
If all you have is species richness at each site, then you can't
calculate diversity at all. You need to have the raw species abundance
data. The data you provided is not a "community data matrix".
The diversity function is doing exactly what it is supposed to: calculating
the Shannon diversity fo
Hi,
I have a small simple data frame (attached) - to compare diversity of
insects encountered in disturbed and undisturbed sites. What I have is
the count of insects - the total number of times they were encountered
over 30 monitoring slots.
Can someone please check for me to make sure how th
Dear All,
I have both R 2.80 and Scientific Workplace 5.5 installed on my computer.
I copied the following commands from HTML help provided by R:
testfile <- system.file("Sweave", "Sweave-test-1.Rnw", package = "utils")
Sweave(testfile)
I then compile Sweave-test-1.tex file using Scientific Wo
xdg utils is probably not being recognized because you compiled it
from source. The R rpm is looking for the xdg utils package. I'm not
familiar with yum, but I think you can try to force the installation:
rpm -ivh --force (or something like that) /data/R-2.8.0-1.rh4.x86_64.rpm
On Tue, Nov 18, 20
If you know how to merge 2 of the files together, then you can use the Reduce
function to do the merging of multiple files.
You could use lapply to read all of the files into a list, then Reduce to merge
them together, then output the result to a new file if a file is really what
you want.
Ano
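A sketch of that lapply + Reduce approach, using three small in-memory data frames in place of the 50 tab-delimited files (the shared "id" column is an assumption; substitute your real key and read.delim over list.files for actual files):

```r
# In practice: tables <- lapply(list.files("data", full.names = TRUE), read.delim)
t1 <- data.frame(id = 1:3,       a = c(10, 20, 30))
t2 <- data.frame(id = 2:4,       b = c("x", "y", "z"))
t3 <- data.frame(id = c(1, 4),   d = c(TRUE, FALSE))

# Fold merge() over the list; all = TRUE keeps subjects absent from some files.
merged <- Reduce(function(x, y) merge(x, y, by = "id", all = TRUE),
                 list(t1, t2, t3))
merged   # one row per id 1..4, NA where a table lacked that subject
```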
Try something along these lines:
> mat <- matrix(0, 10, 10)
> x <- c(1,10,10,1)
> y <- c(1,1,10,10)
> z <- 1:4
> mat[ cbind(x,y) ] <- z
> mat
Hope this helps,
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111
> -Original Messag
Hi to all,
probably the question is for Henrik Bengtsson, but maybe others can help as well.
I have an object of "Class A" with fields which are being changed (updating)
dynamically in a loop. I would like to keep all iterations of the object in
a list, but when I'm trying to do it I get a reference t
Hi R users,
I want to calculate the partial correlation (first or second order) and their
corresponding significance test (p-value).
I used
> library(corpcor)
and "cor2pcor" function to calculate partial correlation but could not
calculate "p-value".
If some one knows the R package or function
To replace particular values in a matrix we can use the following command.
matrix[x,y] = value
But what to do if I have a list of +- 1,000,000 values and associated x,y
values.
Is there an efficient way to replace my matrix with those values.
My first idea was to replace x, y and values by vect
Thanks very much for the tips. Simply removing .Last did allow me to
exit, and looking at sessionInfo showed that there was a "fame_2.3"
namespace loaded, even without loading the fame package, which I guess
had this remnant code for .Last from a previous version of fame (there
appears to be no
Hi,
I am trying to update my version of R on Centos 4.
$uname -a
Linux 2.6.9-78.0.5.ELsmp #1 SMP Wed Oct 8 07:06:30 EDT 2008 x86_64 x86_64
x86_64 GNU/Linux
I tried to update the current version of R (2.6.2) which was installed
locally as an rpm
$R --version
R version 2.6.2 (2008-02-08)
Achim Zeileis wrote:
On Mon, 17 Nov 2008, Michael Friendly wrote:
I just added a CITATION file to the heplots package--- appended below.
From the documentation for ?CITATION, there can be *one or more* calls to
citEntry() within the CITATION file, and each should produce an object
of class "cita
Hi all,
I am running a nonlinear regression and there is a problem.
There is a data frame: data
      p       s     x          t
1 875.0 12392.5 11600 0.06967213
2 615.0 12332.5 12000 0.06967213
3 595.0 12332.5 12000 0.06967213
4 592.5 12337.0 12000 0.06967213
5 650.0 12430.0 12000 0.06967213
I need help with another problem I am having that deals with the generation of
vectors that I asked about yesterday. I now need to have each value in the
vector (all values either 1 or -1) have a probability p that it will switch
signs (so, say, each value has a 50% chance of switching from -1
Hi,
I am trying to build an R package. My existing code makes use of the
bitops and chron packages. So I have included statements to import
required functionality into the NAMESPACE file using import(). When I
run Rcmd build, an error is generated: "Error: package 'bitops' does not
have a name sp
Hi Christina,
>> How can this happen? How can the p-values from the Tukey become
>> significant when the lme-model wasn't?
The link below, with an explanation by Prof. Fox is relevant to your
question:
http://www.nabble.com/Strange-results-with-anova.glm()-td13471998.html#a13475563
Another wa
The gtools package has quantcut.
On Tue, Nov 18, 2008 at 6:53 AM, Daniel Brewer <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I was just wondering whether there is a quick way to divide a vector of
> data into four groups defined by the quantiles?
> i.e.
> 0-25%
> 25-50%
> 50-75%
> 75-100%
>
> Many than
On Tue, Nov 18, 2008 at 4:09 AM, Wacek Kusnierczyk
<[EMAIL PROTECTED]> wrote:
> Wacek Kusnierczyk wrote:
>> Duncan Murdoch wrote:
>>
>>> paramValue <- 15
>>> source("myRfile.R")
>>>
>>> The quotes are necessary, because source(myRfile.R) would go looking
>>> for a variable named myRfile.R, rather t
Daniel Brewer wrote:
Hello,
I was just wondering whether there is a quick way to divide a vector of
data into four groups defined by the quantiles?
i.e.
0-25%
25-50%
50-75%
75-100%
Many thanks
Dan
library(Hmisc)
cut2(x, g=4)
--
Frank E Harrell Jr Professor and Chair School of M
Hi everyone
I'm using Tukey HSD as post-hoc test following a lme analysis. I'm
measuring hemicelluloses in different species treated with three
different CO2 concentrations (l=low, m=medium, h=high). The whole
experiment is a split-plot design and the Tukey-function from the
package multcomp
If I understand what you're seeking to do, you might also consider the
rplot() function written by Rolf Turner. It can be found at
http://tolstoy.newcastle.edu.au/R/help/02a/1174.html
Benjamin
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Plantky
Se
Hello Rodrigo,
You're almost there:
you should create the variable distances before the while loop, and it should
start out higher than 14 so that execution enters the while loop:
selectmarkers<- function(n=10){
tapply(mm$marker, mm$chr, function(m){
distances <- 15
while (max(distances) > 14) {
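To make the idea concrete, a self-contained sketch of the resample-until-valid loop; the data frame mm and every name below are simulated stand-ins for the poster's marker map, and the initial gap value just forces the loop to run at least once:

```r
set.seed(7)
# Toy marker map: 3 chromosomes with 30 marker locations each.
mm <- data.frame(chr    = rep(1:3, each = 30),
                 loc    = as.vector(replicate(3, sort(runif(30, 0, 50)))),
                 marker = paste("m", 1:90, sep = ""))

select_markers <- function(locs, n = 10, max.gap = 14) {
  gaps <- max.gap + 1               # sentinel: forces one pass of the loop
  while (max(gaps) > max.gap) {     # resample until all adjacent gaps are small
    s    <- sort(sample(locs, n))
    gaps <- diff(s)                 # loc[i+1] - loc[i]
  }
  s
}

# 10 locations per chromosome, each set with max adjacent distance <= 14.
picked <- tapply(mm$loc, mm$chr, select_markers)
```

Note the loop can spin for a long time if the gap constraint is hard to satisfy on a sparse map; a rejection cap (or sampling gaps directly) would be more robust.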
Leon Yee wrote:
> Hi,
>
> Hans W. Borchers wrote:
>>> Dear all,
>>>
>>>Which data structure in R can mimic hash in PERL? I'd like to set
>>> up a lookup table, which could be accomplished by HASH if using PERL.
>>> Which of the data structures in R is the most efficient for lookup
>>> table?
>>
check at cut() and split(), e.g.,
x <- rnorm(100)
qx <- quantile(x)
ind <- cut(x, qx, include.lowest = TRUE)
split(x, ind)
I hope it helps.
Best,
Dimitris
Daniel Brewer wrote:
Hello,
I was just wondering whether there is a quick way to divide a vector of
data into four groups defined by the
On Tue, Nov 18, 2008 at 3:41 AM, gauravbhatti <[EMAIL PROTECTED]> wrote:
>
> Hi all can any one of you write a script for the following problem
Well, yes, but that sounds remarkably like a homework problem,
and we can't and won't do your school assignments for you.
Your instructor gave you some s
Hello,
I was just wondering whether there is a quick way to divide a vector of
data into four groups defined by the quantiles?
i.e.
0-25%
25-50%
50-75%
75-100%
Many thanks
Dan
--
**
Daniel Brewer, Ph.D.
Institute of Cancer Research
M
Hi,
Hans W. Borchers wrote:
Dear all,
Which data structure in R can mimic hash in PERL? I'd like to set
up a lookup table, which could be accomplished by HASH if using PERL.
Which of the data structures in R is the most efficient for lookup
table?
Thanks for your help.
The regular answer t
It works now.
Thanks to all.
P.Branco
Gavin Simpson wrote:
>
> On Mon, 2008-11-17 at 07:20 -0800, P.Branco wrote:
>> Sorry, it does not work.
>>
>> If I do a rnorm I lose the original values of my vectors, and the
>> equation
>> result must be attained by the use of the original values.
>
> On Mon, Nov 17, 2008 at 7:33 PM, Wacek Kusnierczyk
>
>
>
>> ...what i would (r-naively) expect is that lapply for-loops over an index
>> variable, and each promise picks from the list a value at that index
>>
let me make this line more clear:
funcs = lapply(1:2, function(i) funct
-- sorry, Repost in Text only mode
Hello R-folks,
I don't get the color of the legend in a lattice-plot right. I select a palette
from RColorBrewer and use it in the barchart plot.
The resulting graph shows the new palette in the graph, but in the legend
rectangles the standard palette is
Hello R-folks,
I don't get the color of the legend in a lattice-plot right. I select a palette
from RColorBrewer and use it in the barchart plot.
The resulting graph shows the new palette in the graph, but in the legend
rectangles the standard palette is used. Adding a col argument into auto.ke
I'm working with a linear model with four factors as explicatory variables,
being all of them significally (e.g. y ~ a + b + c + d). I thought that the
residuals of a linear model keep the variance not explained by the model, so
if I use my model with just three factors (y ~ a + b + c) and keep th
Hi,
I have more questions about the fft. The application in Excel is very
limited.
In Excel I can adjust graphs and calibrate the x and y-axis. The input and
process, however, is limited compared to R.
With a Dataset table where one column is the hour difference and the second
are the values w
Hi all can any one of you write a script for the following problem
Let X be a matrix of random normal values (mean =0; sd=1) (see rnorm()
function) having 10 columns and N=100 rows. Let the first row in the matrix
be (1,1.5,1.4,3,1.9,4,4.9,2.6,3.2,2.4). Assume that the first 5 columns of
data for
Wacek Kusnierczyk wrote:
> Duncan Murdoch wrote:
>
>> paramValue <- 15
>> source("myRfile.R")
>>
>> The quotes are necessary, because source(myRfile.R) would go looking
>> for a variable named myRfile.R, rather than using "myRfile.R" as the
>> filename.
>>
>
> why?
I see this question has
A new package, *latticist*, is available now from CRAN.
Latticist is a graphical user interface for exploratory visualisation.
It is primarily an interface to the Lattice graphics system, but also
produces displays from the vcd package for categorical data.
Given a multivariate dataset (either a
I am attempting to sample 10 markers from each chr, with a maximum distance
of 14, calculated by the location of the marker in each chromosome as
loc[i+1] - loc[i]. I presume the easiest way to do this is with a while
loop, so that the function keeps resampling when the max distance is greater
tha
Ok I've figured it out already, I just needed to change the ylim=c(0,100)
to ylim=c(100,0).
Thanks for replying to me though!
Kang Min
On Nov 18, 1:54 pm, "Daniel Malter" <[EMAIL PROTECTED]> wrote:
> Hi, I don't understand the question. If your data is in the fourth quadrant
> (all positive Xs, a
jim holtman wrote:
> You can use the 'local' function to make sure you create a value of
> 'i' that is defined when the function is defined:
>
>
>> funcs = lapply(1:5, function(i)local({i; function(){ i}}))
>> funcs[[3]]()
>>
> [1] 3
>
>> funcs[[2]]()
>>
> [1] 2
>
ok, but that'
> Here I have a folder with more than 50 tab-delimited files. Each
> file has a few hundred thousand rows/subjects, and the number
> of columns/variables of each file varies.The 1st row consists of all
> the variable names.
>
> Now I would like to merge all the files into one tab-delimited
See the help for .Last:
Immediately _before_ terminating, the function '.Last()' is
executed if it exists and 'runLast' is true. If in interactive use
there are errors in the '.Last' function, control will be returned
to the command prompt, so do test the function thoroughly.