I have a bivariate plot of axis2 against axis1 (data below). I would like
to use a different size, type and colour of point in the plot for points
coming from different regions. For some reason, I cannot get it done. Below
is my code.
col <- rep(c("blue", "red", "darkgreen"), c(16, 16, 16))
## C
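A minimal sketch of the usual base-graphics approach (the data frame, its column names and the region labels below are made up, since the original data are not shown): index vectors of colours, symbols and sizes by the region factor.

set.seed(1)
dat <- data.frame(axis1  = rnorm(48),
                  axis2  = rnorm(48),
                  region = factor(rep(c("north", "south", "west"), each = 16)))
cols <- c("blue", "red", "darkgreen")   # one colour per region
pchs <- c(16, 17, 15)                   # one plotting symbol per region
cexs <- c(1.0, 1.3, 1.6)                # one point size per region
idx  <- as.integer(dat$region)          # 1, 2 or 3 for each point
plot(dat$axis1, dat$axis2,
     col = cols[idx], pch = pchs[idx], cex = cexs[idx],
     xlab = "axis1", ylab = "axis2")
legend("topright", legend = levels(dat$region),
       col = cols, pch = pchs, pt.cex = cexs)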
On Apr 6, 2012, at 00:25 , ikuzar wrote:
> Hi,
>
> I'd like to know how to get a vector of min value from many vectors without
> making a loop. For example :
>
>> v1 = c( 1, 2, 3)
>> v2 = c( 2, 3, 4)
>> v3 = c(3, 4, 5)
>> df = data.frame(v1, v2, v3)
>> df
> v1 v2 v3
> 1 1 2 3
> 2 2 3 4
On 06-04-2012, at 00:55, Navin Goyal wrote:
> Hi,
> I am using the integrate function in some simulations in R (tried ver 2.12
> and 2.15). The problem I have is that the last few rows do not integrate
> correctly. I have pasted the code I used.
> The column named "integral" shows the output from
On Apr 5, 2012, at 10:57 PM, Christopher Kelvin wrote:
Hello,
I need to simulate, 100 times, a sample of n = 40 where
the distribution is 90% from X ~ N(0,1) plus 10% from X ~ N(20,10).
Is my loop below correct?
Thank you
n=40
for(i in 1:100){
x<-rnorm(40,0,1) # 90% of n
You are overwriting x and y and at the end o
On Apr 5, 2012, at 10:38 PM, ieatnapalm wrote:
Hey, sorry if this has been addressed before, but I'm really new to R and
having trouble with the gsub function. I need a way to make this function
exclude certain values from being substituted:
ie my data looks something like (15:.0234,10:.015
Hey, sorry if this has been addressed before, but I'm really new to R and
having trouble with the gsub function. I need a way to make this function
exclude certain values from being substituted:
ie my data looks something like (15:.0234,10:.0157) and I'm trying to
replace the leading 15 with someth
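A hedged sketch of one way to do this, assuming the goal is to replace only the 15 that follows the opening parenthesis (the replacement value 99 is just a placeholder): anchor the pattern instead of matching every 15.

x <- "(15:.0234,10:.0157)"
sub("^\\(15:", "(99:", x)
# [1] "(99:.0234,10:.0157)"   # other occurrences of 15 (e.g. inside .0157) are untouched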
Hello,
I need to simulate, 100 times, a sample of n = 40 where
the distribution is 90% from X ~ N(0,1) plus 10% from X ~ N(20,10).
Is my loop below correct?
Thank you
n=40
for(i in 1:100){
x<-rnorm(40,0,1) # 90% of n
z<-rnorm(40,20,10) # 10% of n
}
x+z
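The loop above draws 40 values from each component and keeps only the last draw. A minimal sketch of one way to draw the intended mixture, assuming the 10 in N(20,10) is a standard deviation (rnorm() takes mean and sd) and that 90%/10% means each observation independently has a 10% chance of coming from the second component:

n    <- 40
nrep <- 100
set.seed(1)
sims <- replicate(nrep, {
  from.second <- runif(n) < 0.1                        # ~10% of observations
  ifelse(from.second, rnorm(n, 20, 10), rnorm(n, 0, 1))
})
dim(sims)   # 40 x 100, one column per simulated sample

If instead exactly 36 and 4 values are wanted, generate 36 from the first component and 4 from the second and sample() the order.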
On Apr 5, 2012, at 3:15 PM, Christopher R. Dolanc wrote:
I keep expecting R to have something analogous to the =count
function in Excel, but I can't find anything. I simply want to count
the data for a given category.
I've been using the ddply() function in the plyr package to
summarize
Again my thanks!
John
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please ca
Thanks!
John
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf
> Of Drew Tyre
> Sent: Thursday, April 05, 2012 8:35 AM
> To: Ramiro Barrantes
> Cc: r-help@r-project.org
> Subject: Re: [R] reclaiming lost memory in R
>
> Ramiro
>
> I think the
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf
> Of John C Nash
> Sent: Thursday, April 05, 2012 1:20 PM
> To: r-help@r-project.org
> Subject: [R] Appropriate method for sharing data across functions
>
> In trying to streamline
I think you are looking for the function called length(). I cannot recreate
your output, since I don't know what is in NZ_Conifers, but with the built-in
dataset mtcars I get:
> ddply(mtcars, .(cyl,gear,carb), summarize, MeanWt=mean(wt), N=length(wt))
cyl gear carb MeanWt N
143
Please provide a sample of the data and the code you have tried. If
"pts" is your data.frame read in from CSV and the long/lats describe
points within the mainland U.S. then this should show you sensible
points on the map:
require(maps)
map('state')
points(pts$long, pts$lat)
If not, investigate
Dear David, Duncan, and Jochen, and everyone
I am happy to report that with R version 2.15.0, rgl_0.92.861 now
loads properly. To whoever fixed this, thank you!
Mark
> Below some more observations that might help you locate the problem.
>
> Also sorry for ignoring the posting rules in my la
Hi,
I'd like to know how to get a vector of minimum values from several vectors without
writing a loop. For example:
> v1 = c( 1, 2, 3)
> v2 = c( 2, 3, 4)
> v3 = c(3, 4, 5)
> df = data.frame(v1, v2, v3)
> df
v1 v2 v3
1 1 2 3
2 2 3 4
3 3 4 5
> min_vect = min(df)
> min_vect
[1] 1
I'd like to
This is one way:
f <- function(x, y){
Z <- ifelse(x==y, 3, 4)
return(Z)
}
DS[3] <- with(DS, f(X,Y))
colnames(DS)[3] <- "Z"
But you don't really need a function to do that.
DS[3] <- with(DS, ifelse(X==Y, 3, 4)) # this works just fine
I'm glad you've decided to use R; eventually you will ne
Hi Marco,
I saw this post and was wondering if you would be able to help me.
I have a gene expression data file that i would like to build a bayesian n/w
on.
I input a file with samples as rows and columns as features into the bnlearn
package.
I read through the pdf file that talks about the bnl
I am also looking for a help in using the "deal" package.
When I try this, I get an error saying "Error in array(1, Dim) : 'dim'
specifies too large an array"
ksl.prior <- jointprior(ksl.nw)
Does anyone know what the error indicates ?
My data is gene expression values with about 58 rows (sampl
Hi,
I am using the integrate function in some simulations in R (tried ver 2.12
and 2.15). The problem I have is that the last few rows do not integrate
correctly. I have pasted the code I used.
The column named "integral" shows the output from the integrate function.
The last few rows have no integ
Hi everyone,
I'm trying to input an Excel datasheet with city names and lat/longs that
has already been converted to a .csv file, and make a map in R with my
data. My datasheet is 30 cities, with their lat/long, temp and elevation. So
far all I'm able to do is load the datasheet into R; I instal
Dear all,
Suppose I have a dataset with two variables:
X = c(0, 1, 2)
Y = c(1, 1, 1)
DS = data.frame(X, Y)
Now, I want to create a new variable Z with 3 observations, but I want its
values to be the result of a function. I want to create a function that
compares X and Y, and if X = Y, then Z va
I keep expecting R to have something analogous to the =count function in
Excel, but I can't find anything. I simply want to count the data for a
given category.
I've been using the ddply() function in the plyr package to summarize
means and st dev of my data, with this code:
ddply(NZ_Conifer
Hi,
I am running a negative binomial model using Gamlss and when I try to include
random effect, I get the following message:
Warning messages:
1: In vcov.gamlss(object, "all") :
addive terms exists in the mu formula standard errors for the linear terms
maybe are not appropriate
2: In vco
Hello,
Try
apply(df, 2, min)
(By the way, 'df' is the name of a R function, avoid it, 'DF' is better.)
Hope this helps,
Rui Barradas
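If what is wanted is instead the element-wise (parallel) minimum across the vectors, rather than one minimum per column, pmin() does that directly; a small sketch with the poster's vectors:

v1 <- c(1, 2, 3); v2 <- c(2, 3, 4); v3 <- c(3, 4, 5)
pmin(v1, v2, v3)                       # [1] 1 2 3
do.call(pmin, data.frame(v1, v2, v3))  # same thing, starting from a data frame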
Hello,
I am a new user of R and I am trying to use the data I am reading from a
spreadsheet.
I installed the xlsReadWrite package and I am able to read data from these
files, but how can I assign the columns to separate variables?
E.g:
as I read a spreadsheet like this one:
A B
1 2
4 9
I manually assign th
As a vendor-neutral standard, the Predictive Model Markup Language
(PMML) enables the interchange of data mining models among different
tools and environments – open source and commercial - avoiding
proprietary issues and incompatibilities.
Please see the Zementis newsletter below for details abou
On Thu, Apr 5, 2012 at 3:07 PM, Dirk Eddelbuettel wrote:
>
> Julio,
>
> Nobody mentioned the _set_ operations union(), intersect(), setdiff(),
> which are described under 'help(union)' (and the other names, of course)
... but which are basically wrappers for match().
-- Bert
>
> R> X <- c(10
Julio,
Nobody mentioned the _set_ operations union(), intersect(), setdiff(),
which are described under 'help(union)' (and the other names, of course)
R> X <- c(10:13, 17,18)
R> Y <- c(11,12,17,18)
R> intersect(X, Y)# gives the actual values
[1] 11 12 17 18
R> X %in% inters
Berend Hasselman xs4all.nl> writes:
> Try
>
> X %in% Y
>
> You could also have a look at match
>
> Berend
>
>
Thanks Berend!
--Sergio.
Run the examples for the "loess.demo" function in the TeachingDemos
package to get a better understanding of what goes into the loess
predictions.
On Tue, Apr 3, 2012 at 2:12 PM, Recher She wrote:
> Dear R community,
>
> I am trying to understand how the predict function, specifically, the
> pred
Probably the best way is to pull down the source of one and study it directly: if you
already know LaTeX and R, Sweave isn't much more to master. zoo does its
vignettes nicely, but any package with vignettes should be pretty
good.
Michael
On Thu, Apr 5, 2012 at 4:33 PM, Erin Hodgess wrote:
> Hi R People:
>
> What i
If you look at the code for summary.lm the line for the value of sigma is:
ans$sigma <- sqrt(resvar)
and above that we can see that resvar is defined as:
resvar <- rss/rdf
If that is not sufficient you can find how rss and rdf are computed in
the code as well.
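A small worked check (using the built-in mtcars data, which is not from the original thread):

fit <- lm(mpg ~ wt, data = mtcars)
rss <- sum(residuals(fit)^2)   # residual sum of squares
rdf <- df.residual(fit)        # residual degrees of freedom
sqrt(rss / rdf)                # equals summary(fit)$sigma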
On Tue, Apr 3, 2012 at 8:56 AM,
On Thursday, 5 April 2012 at 12:40 -0700, Peter Meilstrup wrote:
> Consider the data.frame:
>
> df <- data.frame(A = c(1,4,2,6,7,3,6), B= c(3,7,2,7,3,5,4), C =
> c(2,7,5,2,7,4,5), index = c("A","B","A","C","B","B","C"))
>
> I want to select the column specified in 'index' for every row of 'df', to
On Thu, Apr 5, 2012 at 1:40 PM, Peter Meilstrup
wrote:
> Consider the data.frame:
>
> df <- data.frame(A = c(1,4,2,6,7,3,6), B= c(3,7,2,7,3,5,4), C =
> c(2,7,5,2,7,4,5), index = c("A","B","A","C","B","B","C"))
>
> I want to select the column specified in 'index' for every row of 'df', to
> get
>
>
I tried your code; first I removed the reference to the global
variable data$Line, and then it works if I finish identifying by either
right-clicking (I am on Windows) and choosing stop, or by using the stop
menu. It does as you say if I press escape or use the stop-sign
button (both stop the whole evalu
Hi R People:
What is the best way to learn how to produce vignettes, please?
Any help much appreciated.
Thanks,
Erin
--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: erinm.hodg...@gmail.com
Why not pass around a reference class?
Hadley
On Thu, Apr 5, 2012 at 3:20 PM, John C Nash wrote:
> In trying to streamline various optimization functions, I would like to have
> a scratch pad of working data that is shared across a number of functions.
> These can be called from different levels
In trying to streamline various optimization functions, I would like to have a scratch pad
of working data that is shared across a number of functions. These can be called from
different levels within some wrapper functions for maximum likelihood and other such
computations. I'm sure there are o
You might want to look at the lattice or ggplot2 packages, both of
which can create a graph for each of the classes.
On Tue, Apr 3, 2012 at 6:20 AM, arunkumar wrote:
> Hi
> I have a data class wise. I want to create a histogram class wise without
> using for loop as it takes a long time
> my
On 05-04-2012, at 21:32, Julio Sergio wrote:
> I have an ordered "set" of numbers, represented by a vector, say
>
>> X <- c(10:13, 17,18)
>> X
> [1] 10 11 12 13 17 18
>
> then I have a "sub-set" of X, say
>
>> Y <- c(11,12,17,18)
>
> Is there a simple way in R to have a logical vector (paral
Richard M. Heiberger temple.edu> writes:
>
> At least two ways
>
> > (!is.na(match(X, Y)))
> [1] FALSE TRUE TRUE FALSE TRUE TRUE
> > X %in% Y
> [1] FALSE TRUE TRUE FALSE TRUE TRUE
Thanks Richard!
--Sergio.
Hi, I have the same question as Jason on how to estimate the standard error and
construct CI around S_1(t) - S_2(t). From summary.survfit(obj), how can I
combine the 2 survival estimates and the associated standard errors, to get an
estimate of standard error for the difference / then calculate
Consider the data.frame:
df <- data.frame(A = c(1,4,2,6,7,3,6), B= c(3,7,2,7,3,5,4), C =
c(2,7,5,2,7,4,5), index = c("A","B","A","C","B","B","C"))
I want to select the column specified in 'index' for every row of 'df', to
get
goal <- c(1, 7, 2, 2, 3, 5, 5)
This sounds a lot like the indexing-by
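The reply is cut off here; presumably it refers to indexing by a two-column (row, column) matrix. A sketch of that idea, under that assumption:

df <- data.frame(A = c(1,4,2,6,7,3,6), B = c(3,7,2,7,3,5,4),
                 C = c(2,7,5,2,7,4,5),
                 index = c("A","B","A","C","B","B","C"))
cols <- match(df$index, c("A", "B", "C"))
goal <- as.matrix(df[c("A", "B", "C")])[cbind(seq_len(nrow(df)), cols)]
goal   # 1 7 2 2 3 5 5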
At least two ways
> (!is.na(match(X, Y)))
[1] FALSE TRUE TRUE FALSE TRUE TRUE
> X %in% Y
[1] FALSE TRUE TRUE FALSE TRUE TRUE
>
On Thu, Apr 5, 2012 at 3:32 PM, Julio Sergio wrote:
> I have an ordered "set" of numbers, represented by a vector, say
>
> > X <- c(10:13, 17,18)
> > X
> [1]
I have an ordered "set" of numbers, represented by a vector, say
> X <- c(10:13, 17,18)
> X
[1] 10 11 12 13 17 18
then I have a "sub-set" of X, say
> Y <- c(11,12,17,18)
Is there a simple way in R to have a logical vector (parallel to X) indicating
what elements of X are in Y, i.e.,
Thanks for your response, Duncan.
x$eventtype is a "character" vector (because the same hashing error
occurred when I tried to read.table() in the first place specifying
colClasses = c(..., "factor", ...).
x really is that long:
dim(x)
[1] 1093574297 12
...the x$eventtype field has t
I am wondering if it is possible to normalize the slope of a linear regression to
its intercept to allow for valid between-group comparisons.
Here is the scenario:
I need to compare the slopes of biomass increase among NAFO divisions of
Northwest Atlantic cod. However, the initial division biomass
I found a rather easy solution that circumvents this problem by:
1) creating your own length function using na.omit function
2) calculating variance using tapply
3) calculating length using new length function
4) calculating square root of variance by length
*Code from LeCzar:*
object1<-as.d
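The code is cut off above; a compact sketch of those four steps with made-up column names (group, value):

len.no.na <- function(x) length(na.omit(x))             # 1) own length function
dat <- data.frame(group = rep(c("a", "b"), each = 5),
                  value = c(rnorm(9), NA))
v <- tapply(dat$value, dat$group, var, na.rm = TRUE)    # 2) variance per group
n <- tapply(dat$value, dat$group, len.no.na)            # 3) n per group
sqrt(v / n)                                             # 4) standard error per group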
The binaries for RBloomberg are hosted on findata.org. The package is
only useful in combination with a Bloomberg terminal, but users who
have access to one should not be deterred by its absence from CRAN.
John
On Thu, Apr 5, 2012 at 12:18 PM, Prof Brian Ripley
wrote:
> On 05/04/2012 08:54, arva
On 05/04/2012 2:03 PM, Adam D. I. Kramer wrote:
Hello,
I'm doing some analysis on a rather large data set. In this case,
some simple commands are failing. For example, this one:
> x$eventtype<- factor(x$eventtype)
Error in unique.default(x) : length 1093574297 is too large for hashing
Things are not that gory with knitr. You only need to use the option
cache=TRUE and it will take care of most of the things you mentioned.
For example, objects in a chunk are automatically saved and lazy
loaded; when code is modified, old cache will be automatically removed
and new cache will be bu
Hello,
I'm doing some analysis on a rather large data set. In this case,
some simple commands are failing. For example, this one:
x$eventtype <- factor(x$eventtype)
Error in unique.default(x) : length 1093574297 is too large for hashing
...I think this is a bug, because "hashing" shou
Reproducibility is important, and as I mentioned in a previous email,
there are probably ways I could avoid running the entire script over and
over again with each sweave compilation. Still, relying on saved
workspaces, temporary files or caches still has some of the issues that
working in the mai
Well, I do not think it is a good practice (in terms of reproducible
research) to keep on running Sweave in the same R session, because
your previous run and your current workspace could "pollute" your next
run. To make sure a document compiles on its own, it is better always
to start a new clean R
Hi, Sarah: You were correct: I failed to read the question with
sufficient care. Thanks for your original reply and for the
correction. Spencer
On 4/5/2012 10:11 AM, Sarah Goslee wrote:
sos is a great way to search help pages, agreed. But the question is
about functions AND mailing list a
Yep, I'm using RStudio, and have used Tinn-R in the past. RStudio does
start a new R session when processing a sweave document via the RStudio
GUI. In my case, this presented a problem for the reasons I stated
before (i.e. that I need to run sweave in the main environment, not a
new one). Hence,
Ok, I have a new, multipart problem that I need help figuring out.
Part 1. I have a three dimensional array (species, sites, repeat counts
within sites). Sampling effort per site varies so the array should be
ragged.
Maximum number of visits at any site = 22
Number of species = 161
Number of sit
In terms of editors, I think RStudio is pretty good
(http://www.rstudio.org/download/preview). Or LyX
(http://yihui.name/knitr/demo/lyx/), or TeXmaker, WinEdit
(http://yihui.name/knitr/demo/editors/)... All of them start a new R
session when weaving the document, and all support one-click
compilati
sos is a great way to search help pages, agreed. But the question is
about functions AND mailing list archives, which requires an online
solution. (See subject line.)
Sarah
On Thu, Apr 5, 2012 at 12:56 PM, Spencer Graves
wrote:
> The "sos" package is designed to search help pages only and sort t
As the error message says, to use the ggplot function you should first put your data "d" into a data.frame.
For example, with d an n x p matrix (n observations, p variables):
n = dim(d)
dd = data.frame(x=d[,2:n[2]], y=d[,1])
Then you should get a better result after passing "dd" to the ggplot function.
> p1 +
The "sos" package is designed to search help pages only and sort the
results by package. It includes a vignette describing how to get the
results as an Excel file giving an efficient summary of which packages
contain help pages of interest including the latest date updated, etc.
I designed the
On Apr 5, 2012, at 12:00 AM, Daisy Englert Duursma wrote:
random selection of cells in raster based on distance from xy
locations
Hi,
I am trying to sample a raster for random cells that occur within a
specific distance of point locations. I have successfully found
multiple
ways of doing
Hello,
>
> #Here is how I have tried to sample but it is not sampling from the right
> part of the list
>
> bg<- z_nonna[sample(1:length(z_nonna), 5000, replace=FALSE)]
>
You are sampling from the length of z_nonna, with no guarantee that they are
indices to unique list elements.
Try this.
#
Dear all,
I want to do piecewise CAPM linear regression in R:
R_RiskArb - R_f = (1 - δ)[α_MktLow + β_MktLow(R_Mkt - R_f)] + δ[α_MktHigh + β_MktHigh(R_Mkt - R_f)]
where δ is a dummy variable equal to one if the excess return on the value-weighted CRSP
index is above a threshold level and zero otherwise. and
Hi Thomas,
Thank you so much for your suggestion.
I tried your code and it is working fine. Now when I change the values of Y
in yobs I am getting so many warnings.
say,
yobs <- data.frame(
time = 0:7,
Y = c(0.00, 3.40, 4.60 ,5.80, 5.80, 6.00, 6.00 ),
Z = c(0.1, 0.11, 0.119, 0.128, 0.136, 0.
On Thu, Apr 05, 2012 at 07:46:52AM -0700, ali_protocol wrote:
> Hi all,
>
> I have a matrix (n*2), I want to compare 2 operators (2 normalization for
> array results) on these matrix.
> The 2 columns should ideally become the same after operations
> (normalization). So to compare operations,
>
I usually use http://www.rseek.org
On Thu, Apr 5, 2012 at 11:36 AM, Jonathan Greenberg wrote:
> R-helpers:
>
> It looks like http://finzi.psych.upenn.edu/search.html has stopped
> spidering the mailing lists -- this used to be my go-to site for
> searching for R solutions. Are there any good rep
Use rseek.org
On Thu, Apr 5, 2012 at 10:36 AM, Jonathan Greenberg wrote:
> R-helpers:
>
> It looks like http://finzi.psych.upenn.edu/search.html has stopped
> spidering the mailing lists -- this used to be my go-to site for
> searching for R solutions. Are there any good replacements for this?
Thanks for the nice ideas, Duncan. I think that would work nicely in
most cases. The major issue with that workflow in my case is that the
scripts to set up my workspace take around a half-hour to run (I really
wish CUDA was working with my setup!), so running R each time in that
case is time-con
On 04/04/2012 3:25 PM, Alexander Shenkin wrote:
Hello Folks,
When I run the document below through sweave, rgui.exe/rsession.exe
leaves a file handle open to the sweave-001.pdf graphic (as verified by
process explorer). Pdflatex.exe then crashes (with a Permission Denied
error) because the grap
On 05/04/2012 08:54, arvanitis.christos wrote:
Hi to all,
Do you know how I can use Baddperiods from RBloomberg
Most of us cannot even use 'RBloomberg': it has been removed at the
request of Bloomberg's lawyers.
--
Brian D. Ripley, rip...@stats.ox.ac.uk
Professor of Applie
Dear Richard and Jinsong,
Other outputs are available with the agricolae library. See its manual.
##
library(agricolae)
comp1 <- LSD.test(x.aov,"a", group=FALSE)
comp2 <- LSD.test(x.aov,"b", group=TRUE)
# interaction ab
# Tukey's test
comp3 <- HSD.test(xi.aov,"ab")
# graphics
par(mfrow=c(2,2))
bar.err(comp1,ylim=c(0,1
http://www.rseek.org/ perhaps. [Take a look at the tabs on the RHS
after you do a search]
Michael
On Thu, Apr 5, 2012 at 11:36 AM, Jonathan Greenberg wrote:
> R-helpers:
>
> It looks like http://finzi.psych.upenn.edu/search.html has stopped
> spidering the mailing lists -- this used to be my go-
Ramiro
I think the problem is the loop - R doesn't release memory allocated inside
an expression until the expression completes. A for loop is an expression,
so it duplicates fit and dataset on every iteration. An alternative
approach that I have found successful in similar circumstances is to use
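The reply is truncated here, so what follows is only a guess at the kind of alternative meant, not necessarily Drew's suggestion: move the body of the loop into a function, so that its local objects (fit, dataset, ...) can be collected as soon as each call returns, and keep only the small result.

one.iteration <- function(i) {
  dataset <- data.frame(x = rnorm(1e5))   # stand-in for the real data
  fit <- lm(x ~ 1, data = dataset)        # stand-in for memoryHogFunction()
  coef(fit)                               # return only what you need to keep
}
results <- lapply(1:10, one.iteration)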
A final, final followup. Uwe, your suggestion is spot on - disabling the
virus scanner fixes the problem. UNL recently changed virus scanning
software, so this issue arose with Windows XP and Symantec Endpoint
Protection. It can be readily disabled and reenabled from the system tray,
so not too bi
This example is from "The R Book" by Michael J. Crawley.
d=read.table("http://www.bio.ic.ac.uk/research/mjcraw/therbook/data/diminish.txt",
  header=TRUE)
p=qplot(xv,yv,data=d); p
m1=lm(yv~xv,data=d)
p1=p + geom_abline(intercept=coefficients(m1)[1],
slope=coefficients(m1)[2] ); p1
m2=lm(yv~xv + I
On 05.04.2012 17:40, Drew Tyre wrote:
A final, final followup. Uwe, your suggestion is spot on - disabling the
virus scanner fixes the problem. UNL recently changed virus scanning
software, so this issue arose with Windows XP and Symantec Endpoint
Protection. It can be readily disabled and ree
Don't know how you searched, but perhaps this might help:
https://stat.ethz.ch/pipermail/r-help/2007-March/128064.html
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Jenn Barrett
> Sent: Tuesday, April 03, 2012 1:23 AM
> To
R-helpers:
It looks like http://finzi.psych.upenn.edu/search.html has stopped
spidering the mailing lists -- this used to be my go-to site for
searching for R solutions. Are there any good replacements for this?
I want to be able to search both functions and mailing lists at the
same time. Cheer
> * [2012-04-05 09:58:24 -0500]:
>
> I'll look into whether the checking can be made to take this into
> account; it may be more trouble than it is worth though.
Just to clarify: it would be nice if R noticed "stupid mistakes" like
overriding functions in packages from the top-level and either pr
Adding plot=FALSE to the hist() call will prevent it from being plotted.
On Thu, Apr 5, 2012 at 10:52 AM, arunkumar wrote:
> hi
>
> I have a dataframe and a parameter
>
> the parameter can have any one value min max mean sum hist
>
> i'm using the function match.fun
>
> fun=match.fun(input
hi
I have a dataframe and a parameter.
The parameter can have any one of the values min, max, mean, sum or hist.
I'm using the function match.fun:
fun=match.fun(input)
fun(dataset)
but if the input is hist, the plot pops up. Is there any method to avoid it, or
should I use an if condition only for the histogram case?
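One hedged way to wire this up (the wrapper name is made up): pass plot = FALSE only when the chosen function is hist(), since min(), max(), mean() and sum() do not take that argument.

apply.input <- function(input, x) {
  fun <- match.fun(input)
  if (identical(input, "hist")) fun(x, plot = FALSE) else fun(x)
}
apply.input("mean", mtcars$mpg)   # ordinary summary value
apply.input("hist", mtcars$mpg)   # returns counts/breaks, draws nothing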
-
The compiler doesn't currently look beyond the first definition found
(the generated code does the right thing, but the compiler won't
optimize calls to functions masked by non-functions). I'll look into
whether the checking can be made to take this into account; it may be
more trouble than it is
Thanks to you both. Calling recover (an option hitherto unknown to me) helped
me identify the problem.
For the record, the error occurred in the geom_path() line, not the list
concatenation, as I had previously thought. It was a logic problem: when
typeof == NULL the function jumped, but i remaine
Hi all,
I have a matrix (n*2). I want to compare 2 operators (2 normalizations for
array results) on this matrix.
The 2 columns should ideally become the same after the operations
(normalization). So to compare the operations,
I do this for each normalization:
s = sum(apply(normalized.matrix, 2, sd))
Thank you very much for your comments Ista and David! I will
experiment and see which one serves my needs best.
To expand on Duncan's answer, you haven't replaced it. The following
should make that clear:
## starting in a fresh session
> c
function (..., recursive = FALSE) .Primitive("c")
> find('c')
[1] "package:base"
> c <- 1
> find('c')
[1] ".GlobalEnv" "package:base"
> c
[1] 1
> rm(c)
> find('c')
[1]
> * Duncan Murdoch [2012-04-04 21:46:57 -0400]:
>
> On 12-04-04 5:15 PM, Sam Steingold wrote:
>>> * Duncan Murdoch [2012-04-04 17:00:32 -0400]:
>>>
>>> There's no warning when you mask a function with a non-function at top
>>> level, and little need for one, because R does the right search based
http://r.789695.n4.nabble.com/file/n4534914/Rplot01.png
I have some dataset
ak[1:3,]
          [,1]      [,2]      [,3]      [,4]      [,5]      [,6]      [,7]      [,8]      [,9]
[1,] 0.3211745 0.4132568 0.5649930 0.6920562 0.7760113 0.8118568 0.8609301 0.9088819 0.9326736
[2,] 0.3159234 0.
Thanks Lell
It worked well.
-
Thanks in Advance
Arun
On Thu, Apr 5, 2012 at 9:18 AM, Pam wrote:
>
> Hi,
>
> The code below does exactly what I want in sequential mode. But, it is slow
> and I want to run it in parallel mode. I examined some
> windows version packages (parallel, snow, snowfall,..) but could not solve my
> specific problem. As far
On Apr 5, 2012, at 8:55 AM, arunkumar wrote:
I have a character variable:
tablename = "DressMaterials"
var1 = c("red", "blue", "green", "white")
My output should be like
select * from DressMaterials where colors in
("red","blue","green","white")
I'm not able to get the "where" part.
?match
On Apr 5, 2012, at 7:01 AM, Michael Bach wrote:
Dear R users,
how do I e.g. square each second element of a vector with an even
number of elements? Or more generally to apply a function to every
'nth' element of a vector. I looked into the apply functions, but
found no hint.
For example:
v <
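The example is cut off above; a small sketch of the usual indexing answer, assuming v is a numeric vector and every second element should be squared:

v <- c(1, 2, 3, 4, 5, 6)
idx <- seq(2, length(v), by = 2)   # positions 2, 4, 6, ...
v[idx] <- v[idx]^2
v   # 1 4 3 16 5 36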
You can get an OR from a 2x2 table (which is equivalent to doing logistic
regression with a single dummy variable that indicates the group) or from some
continuous exposure (where the logistic regression model will then include that
continuous variable). The various packages are set up to accept
Here is Dr. Leisch's advice for dealing with open handles (and it works):
> On 4/5/2012 4:22 AM, Friedrich Leisch wrote:
> ...
> You need to close the pdf device, not an open connection:
>
> R> Sweave("test.Rnw")
> Writing to file test.tex
> Processing code chunks with options ...
> 1 : keep.sour
Hello Arun,
> paste("select * from ", tablename , " where colors in
(",paste(var1,collapse=","),")")
[1] "select * from DressMaterials where colors in (
red,blue,green,white )"
Regards!
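If the values need quoting for SQL, a hedged refinement of the same idea (quoting rules differ between databases, so adjust as needed):

tablename <- "DressMaterials"
var1 <- c("red", "blue", "green", "white")
sprintf("select * from %s where colors in (%s)",
        tablename, paste0("'", var1, "'", collapse = ", "))
# [1] "select * from DressMaterials where colors in ('red', 'blue', 'green', 'white')"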
Dear list,
I am trying to reclaim what I think is lost memory in R, I have been using
gc(), rm() and also using Rprof to figure out where all the memory is going but
I might be missing something.
I have the following situation
basic loop which calls memoryHogFunction:
for (i in 1:N) {
dat
Hi,
The code below does exactly what I want in sequential mode. But, it is slow and
I want to run it in parallel mode. I examined some windows version packages
(parallel, snow, snowfall,..) but could not solve my specific problem. As far
as I understood, either I have to write a new function
For some reason I was under the false impression that these packages were
made for meta-analyses of RCT-like studies in which two groups are
compared. I am glad to see that I was wrong and that I can use one of these
packages.
All studies reported using the same units for the exposure so the