Sorry, regarding the previous post where I said aggregate(z, identity,
tail, 1): replace it with aggregate(z, identity, mean).
I've also searched "?identity" in the R shell, and it doesn't seem to give the
definition I'm looking for in this particular usage of "identity" as an
argument to the aggregate function. I would simply appreciate a conceptual
explanation of what it does here and how it relates to the error.
z is a zoo object as a result from reading in the following series
z = suppressWarnings(zoo(1:8, c(1, 2, 2, 2, 3, 4, 5, 5)))
This is what z is in the aggregate function. So then that brings us to
"aggregate(z, identity, tail, 1)". All I was trying to accomplish was trying
to reproduce an example
Greetings.
I'm a Master's student working on an analysis of herbivore damage on plants.
I have tried running a glm with one categorical predictor (aphid
abundance) and a binomial response (presence/absence of herbivore damage).
My predictor has four categories: high, medium, low, and none. I us
Dear Oscar,
I have used the following code to fit a Bayesian HMM to the exchange
rate data.
But one interesting result is that the model fits a 6-state HMM with a common
variance.
This is very hard to understand, because from the plot we can see
there are obviously differe
On 02/03/2012 23:36, steven mosher wrote:
1. How much RAM do you have (looks like 2GB ) . If you have more than 2GB
then you can allocate
more memory with memory.size()
Actually, this looks like 32-bit Windows (unstated), so you cannot. See
the rw-FAQ for things your sysadmin can do even
I would use the regular text function instead of mtext (remembering to
set par(xpd=...)), then use the grconvertX and grconvertY functions to
find the location to plot at (possibly adding in the results from
strwidth or strheight).
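A minimal sketch of that idea (made-up data and labels, not tested against
the original plot):
par(mar = c(5, 4, 4, 6), xpd = NA)   # allow drawing outside the plot region
plot(1:10)
labs <- c("1", "23", "456.7")
x.right <- grconvertX(0.98, from = "nfc", to = "user")   # near the figure edge
y.pos <- grconvertY(c(0.3, 0.5, 0.7), from = "nfc", to = "user")
text(x.right, y.pos, labels = labs, adj = 1)   # adj = 1 right-justifies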
On Thu, Mar 1, 2012 at 4:52 PM, Frank Harrell wrote:
> Rich's poin
Hi everyone,
I'm having trouble adding error bars to a grouped barchart in lattice. I know
that this topic has been addressed quite a bit, as I've been searching the
internet for a while to try to troubleshoot the issue, but I've not been able
to find any solution that I could get working on my
Hello all,
I have become somewhat confused with options available for dealing
with a highly unbalanced data set (1 in one class, 50 in the
other). As a summary I am unsure:
a) if I am performing the two class weighting methods properly,
b) if the data are too unbalanced and that this type of ana
Try this:
x <- structure(list(day = 19, C1 = structure(1L, .Label = c("", "C1"
), class = "factor"), C2 = structure(2L, .Label = c("", "C2"), class =
"factor"),
C3 = structure(1L, .Label = c("", "C3"), class = "factor"),
Q1 = structure(2L, .Label = c("", "Q1"), class = "factor"),
Q2 = str
I don't think you can speed it up by a whole lot... but you can try a
few things, especially if you don't have missing data in the matrix
(which you probably don't). The main question is what takes most of
the time: the API calls or the cor() call? If it's cor, here's what
you can try:
1. Pre-stan
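The list of suggestions is cut off above; presumably item 1 is
"pre-standardize". A sketch of that idea, assuming no missing values: with
the columns centred and scaled once up front, the correlation of any block of
columns reduces to a plain cross-product.
set.seed(1)
X <- matrix(rnorm(1000 * 50), ncol = 50)   # stand-in for one block of the data
Xs <- scale(X)                             # centre and scale each column
R <- crossprod(Xs) / (nrow(Xs) - 1)        # equals cor(X) up to rounding
all.equal(R, cor(X), check.attributes = FALSE)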
Unfortunately they only know how to use Excel and Word. They are not
folks who use a computer every day. Many of them run factories or
warehouses and asking them to use something like Access would not
happen in my lifetime (I have retired twice already).
I don't have any problems with them "mess
On 03/03/12 13:35, David Winsemius wrote:
On Mar 2, 2012, at 7:05 PM, Duncan Murdoch wrote:
On 12-03-02 4:47 PM, Jun Shen wrote:
Dear list,
If I know the standard error for k1 and k2, is there anything I can
call in
R to calculate the standard error of k1/k2? Thanks.
No, because it depen
Hi,
I have a 900,000,000*9,000 matrix where I need to calculate the correlation
between all entries along the smaller dimension, thus creating a 9k*9k
correlation matrix. This matrix is too big to be loaded into R, and is saved
as a binary file. To access the data in the file I use mmap and some
a
On Fri, Mar 2, 2012 at 5:15 PM, sluedtke wrote:
> Dear List,
>
> I am struggling with the trellis graphic. A similar problem was mentioned
> here:
>
> http://r.789695.n4.nabble.com/R-How-can-you-get-N-replicates-of-a-multi-screen-multivariate-time-series-plot-td811850.html
>
>
> I do have 2 time s
Hello,
Where is the reproducible example?
apricum wrote
>
> Hi,
> I need to find the number of occurrences of each word from one string in
> another string. So I need a function similar to pmatch, but one that returns
> not references but the number of matches. Is there any function like this? If
> not,
On 03/03/12 12:41, Greg Snow wrote:
It is possible to do the right thing in
Excel, but Excel does not encourage (let alone force) you to do the
right thing, but makes it easy to do the wrong thing.
Fortune!
cheers,
Rolf Turner
On Mar 2, 2012, at 7:05 PM, Duncan Murdoch wrote:
On 12-03-02 4:47 PM, Jun Shen wrote:
Dear list,
If I know the standard error for k1 and k2, is there anything I can
call in
R to calculate the standard error of k1/k2? Thanks.
No, because it depends on the joint distribution of k1 and k2.
Others explained why it happens, but you might want to look at the
zapsmall function for one way to deal with it.
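For reference, a tiny illustration of zapsmall(): components that are only
floating-point noise away from zero get printed as zero.
x <- c(1, 1e-17, 0.5, 2e-16)
x
zapsmall(x)   # 1.0 0.0 0.5 0.0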
On Thu, Mar 1, 2012 at 2:49 PM, Mark A. Albins wrote:
> Hi!
>
> I'm running R version 2.13.0 (2011-04-13)
> Platform: i386-pc-mingw32/i386 (32-bit)
>
> When i type in the command:
>
>
?xspline
On Thu, Mar 1, 2012 at 8:15 AM, hendersi wrote:
>
> Hello,
>
> I have a spreadsheet of pairs of coordinates and I would like to plot a line
> along which curves/arcs connect each pair of coordinates. The aim is to
> visualise the pattern of point connections.
>
> Thanks! Ian
>
> --
> Vie
If you know that your first date is a Friday then you can use seq with
by="7 day", then you don't need to post filter the vector.
On Thu, Mar 1, 2012 at 1:40 PM, Ben quant wrote:
> Great thanks!
>
> ben
>
> On Thu, Mar 1, 2012 at 1:30 PM, Marc Schwartz wrote:
>
>> On Mar 1, 2012, at 2:02 PM, Ben
Hi,
If you're going to use different text sizes and convert between units,
it might be easier to do the calculations with grid.
par(mar=c(1,1,1,5))
plot(1:10)
labels = c(1, 2, 10, 123, 3.141592653589, 1.2, 2)
sizes = c(1, 1, 2, 1, 0.4, 1, 3) # cex of individual labels
## pure base graphics
max_w
On 12-03-02 4:47 PM, Jun Shen wrote:
Dear list,
If I know the standard error for k1 and k2, is there anything I can call in
R to calculate the standard error of k1/k2? Thanks.
No, because it depends on the joint distribution of k1 and k2. Even if
you knew they were independent, that would no
Unfortunately, a lot of people who use MS Office don't have or know how
to use MS Access. Where I work now (as in the past) I have to tie
someone to their chair, give them a few pokes with the cattle prod and
then show them that a CSV file will load straight into Excel before I
can convince the
Thanks Elai. axis(2) looks like a good approach. I think the way to solve
for the pos= argument is to use:
usr <- par('usr'); plt <- par('plt')
usr[2] + (usr[2] - usr[1])/(plt[2] - plt[1]) * (1 - plt[2])
I think pos should have only one element.
Thanks for your help,
Frank
ilai-2 wrote
>
Or
lapply(LIST, cat, file='outtext.txt', append=TRUE)
On Thu, Mar 1, 2012 at 6:20 AM, R. Michael Weylandt
wrote:
> Perhaps something like
>
> sink("outtext.txt")
> lapply(LIST, print)
> sink()
>
> You could replace print with cat and friends if you wanted more
> detailed control over the look of
On 03/02/2012 11:49 PM, SMcG wrote:
Hi,
I'm wondering if anybody could possibly help me? I have a table with
5 tab-delimited columns. Each column has 'e-value' scores for 5
different proteins.
I'd like to plot a distribution curve using hist() for the 5
different proteins and show the 5 distrib
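The reply is cut off above; one way to overlay the five distributions
(made-up data and column names) is to draw density curves rather than
histograms:
set.seed(42)
evals <- data.frame(p1 = rexp(100), p2 = rexp(100, 2), p3 = rexp(100, 0.5),
                    p4 = rexp(100), p5 = rexp(100, 3))
dens <- lapply(evals, density)
plot(NA, xlim = range(sapply(dens, function(d) range(d$x))),
     ylim = c(0, max(sapply(dens, function(d) max(d$y)))),
     xlab = "e-value", ylab = "density")
for (i in seq_along(dens)) lines(dens[[i]], col = i)
legend("topright", legend = names(evals), col = seq_along(dens), lty = 1)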
Try sending your clients a data set (data frame, table, etc) as an MS
Access data table instead. They can still view the data as a table,
but will have to go to much more effort to mess up the data, more
likely they will do proper edits without messing anything up (mixing
characters in with number
1. How much RAM do you have (looks like 2GB ) . If you have more than 2GB
then you can allocate
more memory with memory.size()
2. If you have 2GB or less then you have a couple of options
a) make sure your session is clean of unnecessary objects.
b) Don't read in all the data if you don't
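For what it's worth, the usual clean-up sequence on 32-bit Windows R of that
era looked roughly like this (memory.size()/memory.limit() are Windows-only;
"d1" is the object from the original post):
memory.size()                    # MB currently in use
memory.limit()                   # current cap in MB
rm(list = setdiff(ls(), "d1"))   # drop objects you no longer need
gc()                             # give the freed memory back
# memory.limit(size = 3000)      # only helps if the OS can actually provide it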
On Mar 2, 2012, at 3:51 PM, knavero wrote:
aggregate(z, identity, mean)
1 2 3 4 5
1.0 3.0 5.0 6.0 7.5
aggregate(z, mean)
Error: length(time(x)) == length(by[[1]]) is not TRUE
As generally happens when you call a function and fail to provide
enough arguments to fill up its formal
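A minimal sketch of what is going on (same data as in the earlier post): for
a zoo object the second argument of aggregate() is `by`, either a grouping
vector or a function applied to the index. identity simply means "group by
the index itself", which collapses the duplicated times 2 and 5.
library(zoo)
z <- suppressWarnings(zoo(1:8, c(1, 2, 2, 2, 3, 4, 5, 5)))
aggregate(z, identity, mean)   # same as aggregate(z, by = time(z), FUN = mean)
# aggregate(z, mean)  # here mean is matched positionally to `by`, applied to
#                     # the index, and the resulting length-1 grouping fails
#                     # the length(time(x)) == length(by[[1]]) check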
Hi,
On Fri, Mar 2, 2012 at 3:51 PM, knavero wrote:
>>aggregate(z, identity, mean)
> 1 2 3 4 5
> 1.0 3.0 5.0 6.0 7.5
>> aggregate(z, mean)
> Error: length(time(x)) == length(by[[1]]) is not TRUE
>
> Can someone help me understand the error above and why "identity" is
> necessary to satisf
I have this type of format:
structure(list(day = 19, C1 = structure(1L, .Label = c("", "C1"
), class = "factor"), C2 = structure(2L, .Label = c("", "C2"), class =
"factor"),
C3 = structure(1L, .Label = c("", "C3"), class = "factor"),
Q1 = structure(2L, .Label = c("", "Q1"), class = "fact
Hi,
I need to find the number of occurrences of each word from one string in
another string. So I need a function similar to pmatch, but one that returns
not references but the number of matches. Is there any function like this? If
not, what is the way to calculate what I need?
Hi, I am using package deSolve to run some ordinary differential equations
(ODEs) as part of a mathematical modeling project. I have solved for the
following equilibrium states:
Seq1 <- a*(1-Neq1)/(f*Veq1+m+d)
Ceq1 <- (f*Seq1*Veq1+g*Ieq1+r*(1-Neq1)-b1*Veq1*Ieq1)/(b2+m+d+g)
Ieq1 <- (-b2*Ceq1)-r*(1-N
Dear List,
I am struggling with the trellis graphic. A similar problem was mentioned
here:
http://r.789695.n4.nabble.com/R-How-can-you-get-N-replicates-of-a-multi-screen-multivariate-time-series-plot-td811850.html
I do have 2 time series data sets. The 2 time series differ in some orders
of ma
>aggregate(z, identity, mean)
1 2 3 4 5
1.0 3.0 5.0 6.0 7.5
> aggregate(z, mean)
Error: length(time(x)) == length(by[[1]]) is not TRUE
Can someone help me understand the error above and why "identity" is
necessary to satisfy the error
On Mar 2, 2012, at 4:47 PM, Jun Shen wrote:
Dear list,
If I know the standard error for k1 and k2, is there anything I can
call in
R to calculate the standard error of k1/k2? Thanks.
This does not appear to be a well-posed question yet, and it is
arguably more a statistics question than
Dear list,
If I know the standard error for k1 and k2, is there anything I can call in
R to calculate the standard error of k1/k2? Thanks.
Jun
I'll have to give this a try this weekend. Thank you!
ben
On Fri, Mar 2, 2012 at 12:07 PM, jim holtman wrote:
> One way to speed up the merge is not to use merge. You can use 'match' to
> find matching indices and then do the lookup manually.
>
> Does this do what you want:
>
> > ua <- read.table(text = '
On Fri, Mar 2, 2012 at 1:17 PM, Frank Harrell wrote:
> Hi Rich and Peter,
>
> What I am trying to do is to right-justify a vector of numbers to the right
> of the y-axis so that the leftmost digit of all of the numbers is one
> character to the right of the axis line. axis() plots tick marks and
Hi,
I *think* this is what you want...
On Fri, Mar 2, 2012 at 12:29 PM, robgriffin247
wrote:
> Hello,
> I have a large data set which I am trying to get in to a long/narrow format.
> I have given an example below of how I want my data to look before and
> after... any ideas for an easy way to do
Hi Rich and Peter,
What I am trying to do is to right-justify a vector of numbers to the right
of the y-axis so that the leftmost digit of all of the numbers is one
character to the right of the axis line. axis() plots tick marks and
left-justifies the numbers.
Peter's idea:
-
Since you
On Mar 2, 2012, at 1:52 PM, jon waterhouse wrote:
> I have a very standard barplot. My labels are too long to be printed
> horizontally under each bar, so I am using text to put the labels on a 45
> degree slant.
>
> However, the labels are spaced more narrowly than the bars, so on an 8
> vertic
The return value of barplot contains the locations
of the bars that it just drew. Use that instead of
1:8 when you draw the text:
> barCenters <- barplot(X2sum)
> text(barCenters, par("usr")[3] - 0.5, srt = 45, adj = 1, labels =X2.labels,
> xpd = TRUE)
Look at help(barplot) for details.
Bil
On Mar 2, 2012, at 11:19 AM, labbig wrote:
Hi
I am running a glm model, family Gamma(link=log), trying to predict a
vector
of 1554 (real) values.
Using predict() I got a vector of 950 predicted values instead of
1554.
The predictions are good though.
The model doesn't take account of negativ
try using glm(...,na.action=na.exclude)
See ?na.exclude
for the explanation
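A toy sketch of the difference (made-up data): with na.exclude, predict()
pads its result back to the original length, with NA where cases were
dropped.
d <- data.frame(y = rgamma(10, 2), x = rnorm(10))
d$y[c(3, 7)] <- NA
fit  <- glm(y ~ x, family = Gamma(link = "log"), data = d)
fit2 <- glm(y ~ x, family = Gamma(link = "log"), data = d, na.action = na.exclude)
length(predict(fit))    # 8: incomplete cases silently dropped
length(predict(fit2))   # 10: NAs kept in place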
On Fri, Mar 2, 2012 at 11:19 AM, labbig wrote:
> Hi
>
> I am running a glm model, family Gamma(link=log), trying to predict a vector
> of 1554 (real) values.
>
> Using predict() I got a vector of 950 predicted values inst
I have a very standard barplot. My labels are too long to be printed
horizontally under each bar, so I am using text to put the labels on a 45
degree slant.
However, the labels are spaced more narrowly than the bars, so on an 8
vertical bar plot, the end of the eighth label is lined up with the s
One way to speed up the merge is not to use merge. You can use 'match' to
find matching indices and then do the lookup manually.
Does this do what you want:
> ua <- read.table(text = ' AName rt_date
+ 2007-03-31 "14066.580078125" "2007-04-01"
+ 2007-06-30 "14717" "2007-
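The sample data above is cut off, so here is a toy sketch of the
match-instead-of-merge idea with made-up columns:
dt <- data.frame(date = c("2007-04-01", "2007-07-01", "2007-10-01"),
                 val = 1:3, stringsAsFactors = FALSE)
ua <- data.frame(rt_date = c("2007-04-01", "2007-10-01"),
                 AName = c(14066.58, 14717), stringsAsFactors = FALSE)
idx <- match(dt$date, ua$rt_date)   # row of ua for each row of dt, NA if none
dt$AName <- ua$AName[idx]           # same effect as merge(..., all.x = TRUE)
dt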
Hi Arne,
thanks for the improvements in the package. I'm using it right now and it's
working very well.
Best,
*Felipe Nunes*
CAPES/Fulbright Fellow
PhD Student Political Science - UCLA
Web: felipenunes.bol.ucla.edu
On Fri, Mar 2, 2012 at 2:13 AM, Arne Henningsen <
arne.henning...@googlemail.c
Hi,
On Fri, Mar 2, 2012 at 11:19 AM, labbig wrote:
> Hi
>
> I am running a glm model, family Gamma(link=log), trying to predict a vector
> of 1554 (real) values.
>
> Using predict() I got a vector of 950 predicted values instead of 1554.
> The predictions are good though.
> The model doesn't take acco
I am plotting some data using the sp package.
library (sp)
library(maps)
data.aggm # data
# Define standard projection
ll <- CRS("+proj=longlat +datum=WGS84")
# convert to a SpatialPointsDataFrame
xy <- cbind(data.aggm[,1], data.aggm[,2])
ch4.spPoints <- SpatialPointsDataFrame(coor
Hello,
I have a large data set which I am trying to get in to a long/narrow format.
I have given an example below of how I want my data to look before and
after... any ideas for an easy way to do this?
### Start with this...
set.seed(1)
a=rnorm(10)
b=rnorm(10)
c=rnorm(10)
d=rnorm(10)
e=rnorm(10)
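The desired "after" layout is cut off above; assuming the usual wide-to-long
reshaping is what's wanted, stack() is the simplest route:
set.seed(1)
wide <- data.frame(a = rnorm(10), b = rnorm(10), c = rnorm(10),
                   d = rnorm(10), e = rnorm(10))
long <- stack(wide)   # columns: values, ind (the former column name)
head(long)
# reshape(wide, direction = "long", varying = names(wide), v.names = "value",
#         times = names(wide)) is an alternative that also keeps an id column.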
Hello,
> HI,
> this is my problem: I want to subset this data frame df, using only unique
> df$exon, printing each line once even if df$exon appears several times:
>
> unique(df$exon) will show me the unique exons
> If I try to print only the unique exon lines
> with df[unique(df$exon),] - this doesn't pr
Hi
I am running a glm model, family Gamma(link=log), trying to predict a vector
of 1554 (real) values.
Using predict() I got a vector of 950 predicted values instead of 1554.
The predictions are good though.
The model doesn't take account of negative values and NAs, which are only 121
values.
Any clu
Hi,
I am running the MNP package in R. The model runs well. There are actually 4
choices and the 4th is considered the base category. I got results for all 19
covariates for all 3 model choices. What I want to do with the result is to
eliminate all the covariates from one model choice except const
Thanks for the advice. I guess I should have read the acf help page
more thoroughly to appreciate the role of plot.acf(). Typical.
Thanks Duncan.
-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
Sent: March 2, 2012 9:53 AM
To: Folkes, Michael
Cc: r-help@r-proje
Please always cc the list for archival/threading reasons.
Short answer is that unique() gives the unique elements rather than something
you should subset by, like a set of logical indices or row numbers.
Note that in general unique(x) == x[!duplicated(x)] I'd imagine there are cases
where this
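A toy sketch of the distinction:
df <- data.frame(exon = c("e1", "e2", "e1", "e3"), score = 1:4)
unique(df$exon)              # the distinct values themselves, not row numbers
df[!duplicated(df$exon), ]   # the first row for each exon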
On 02/03/2012 11:40 AM, Folkes, Michael wrote:
Hello all,
I found a funny problem with y-axis labels when plotting acf(matrix) -
the labels are too close to one of the margins and cut in half.
Here's the problem:
test<-matrix(rnorm(200),ncol=4)
acf(test)
This doesn't fix the problem:
test<-matr
On Fri, Mar 2, 2012 at 1:22 AM, peter dalgaard wrote:
>
> Er, yes (scalar does not imply integer)
D'oh! Awkward... Sorry Shantanu.
I've added
cat('###\n # ',substr(fortunes::fortune(90)$quote,1,146),'\n ### \n')
To .First in my Rhelp directory.
Hope that helps (me).
>
> As a general ma
I'm not sure. I'm still looking into it. It's pretty involved, so I asked
the simplest question first (the merge question).
I'll reply back with a mock-up/sample that is testable under a more
appropriate subject line. Probably this weekend.
Regards,
Ben
On Fri, Mar 2, 2012 at 4:37 AM, Hans Ekbran
I believe you want the duplicated() function.
Michael
On Mar 2, 2012, at 10:19 AM, nathalie wrote:
> HI,
> this is my problem I want to subset this file df, using only unique df$exon
> printing the line once even if df$exon appear several times:
>
> unique(df$exon) will show me the unique e
Okay, one simply has to use label.pos=0.5 in pairs() to get the correct
behavior.
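For reference, a bare-bones illustration with the built-in iris data:
pairs(iris[1:4], label.pos = 0.5)   # centres the variable names vertically
                                    # in the diagonal panels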
On 2012-03-02, at 09:10 , Marius Hofert wrote:
> Dear Ilai,
>
> I tried to also adjust the diagonal panels. However, the variable names are
> not
> positioned correctly anymore. Do you know a solution?
>
> Chee
Hello all,
I found a funny problem with y-axis labels when plotting acf(matrix) -
the labels are too close to one of the margins and cut in half.
Here's the problem:
test<-matrix(rnorm(200),ncol=4)
acf(test)
This doesn't fix the problem:
test<-matrix(rnorm(200),ncol=4)
par(mar=c(3,3,2,0.2),oma=c(
Here is my code:
##Centering predictors###
verbal.ability_C <- verbal.ability - mean(verbal.ability)
children_C <- children - mean(children)
age_C <- age - mean(age)
education_C <- education - mean(education)
work.from.home.frequency_C <- work.from.home.frequency -
mean(work.from.home.fre
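An equivalent shortcut, sketched with made-up data: scale() with
center = TRUE and scale = FALSE mean-centres several columns in one call.
dat <- data.frame(age = c(25, 40, 33), children = c(0, 2, 1))
dat_C <- scale(dat, center = TRUE, scale = FALSE)
colMeans(dat_C)   # effectively zero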
Thank you Vito for your help.
Works very nicely.
Have a nice day,
Phil
--
View this message in context:
http://r.789695.n4.nabble.com/Help-with-segmented-package-tp4435550p4438589.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@
On 02-03-2012, at 16:12, Diogo Alagador wrote:
> Dear all,
>
> Sorry to insist on this, but I am having a really bad time trying to solve
> the problem. Just to remind you:
>
> I am trying to solve a nonlinear optimization problem using the solnp function.
> I have different datasets. For th
HI,
this is my problem I want to subset this file df, using only unique
df$exon printing the line once even if df$exon appear several times:
unique(df$exon) will show me the unique exons
If I try to print only the unique exon lines
with df[unique(df$exon),] -this doesn't print only the unique
Dear all,
Sorry to insist on this, but I am having a really bad time trying to
solve the problem. Just to remind you:
I am trying to solve a nonlinear optimization problem using the solnp function.
I have different datasets. For the smaller ones I get full solutions; for
the bigger ones I get an erro
Let's see...
You could delete objects from your R session.
You could buy more RAM.
You could see help(memory.size).
You could try googling to see how others have dealt with memory
management in R, a process which turns up useful information like
this: http://www.r-bloggers.com/memory-management-in
Hi everyone,
Any ideas on troubleshooting this memory issue:
> d1<-read.csv("arrears.csv")
Error: cannot allocate vector of size 77.3 Mb
In addition: Warning messages:
1: In class(data) <- "data.frame" :
Reached total allocation of 1535Mb: see help(memory.size)
2: In class(data) <- "data.frame"
On 02-03-2012, at 14:13, Roey Angel wrote:
> Hi Bernard, thanks for the quick reply.
> Of course, I understand that an escape is needed because parentheses are
> reserved symbols in regular expressions.
> My problem is that if I just use \( I get the error:
>
> Error: '\(' is an unrecognized es
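A tiny sketch of the two usual ways around that error (toy string):
x <- "Bacteria(phylum)"
gsub("\\(", "[", x)               # escape the parenthesis for the regex engine
gsub("(", "[", x, fixed = TRUE)   # or bypass regular expressions entirely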
Hi
>
> Hi Petr!
>
> Thank you for responding to my post.
>
> I checked out all my variables in the way you suggested and they are all in
> integer form, but there are many missing values in some of my vectors,
> denoted with NA.
>
> So, they are in the correct form, I am just wondering if there
The # is the default comment character in read.table(), but that can
easily be changed:
> tc <- textConnection(
+ "yes yes yes yes yes
+ yes yes yes yes yes
+ yes yes # yes yes"
+ )
> x <- read.table(tc, comment.char="")
> x
V1 V2 V3 V4 V5
1 yes yes yes yes yes
2 yes yes yes yes yes
3 yes y
Have a look at http://had.co.nz/ggplot2/stat_density.html. You'll find some
examples there and the code to generate them.
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniek
Roey, you imply that this is unusual in implementations of regex, yet some of
the oldest applications using regex out there are sed or awk, where extra
quoting is so common that some people don't recognize regex patterns that are
missing this extra level of quoting. Sigh.
---
Hi Waldir
I think this is easier via lapply():
lapply(1:30, function(x) mlp(...your settings here, including size=x...) )
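A fuller sketch of the same idea (assuming mlp() is RSNNS::mlp and using toy
data; $IterativeFitError is the fitted error trace): the result is a list of
trained networks, so the fitted model for every size stays available.
library(RSNNS)
x.train <- normalizeData(iris[, 1:4])
y.train <- decodeClassLabels(iris$Species)
models <- lapply(1:30, function(s) mlp(x.train, y.train, size = s, maxit = 50))
errs <- sapply(models, function(m) tail(m$IterativeFitError, 1))
best <- models[[which.min(errs)]]   # the already-trained net you keep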
Regards,
Kees
On Fri, Mar 2, 2012 at 2:36 PM, Waldir de Carvalho Junior
wrote:
> Hi help-list
> I try to better explain my problem.
> My problem is below. For each cycle
Dear R colleagues,
for a statistics tutorial I want to develop a nice 3d-graphic of the
well-known target comparison/analogy of accuracy and precision (see e.g.
http://en.wikipedia.org/wiki/Accuracy_and_precision for a simple hand-made
2d graphic).
The code for a really beautiful graphic is alrea
Hi,
I'm wondering if anybody could possibly help me?
I have a table with 5 tab-delimited columns. Each column has 'e-value'
scores for 5 different proteins.
I'd like to plot a distribution curve using hist() for the 5 different
proteins and show the 5 distribution curves on the same graph in d
Hi help-list
I try to better explain my problem.
My problem is below. For each cycle (n) I need to save the model
(model.mlp), because each neural net is unique; once I have chosen the best
architecture, I need the net (the model.mlp) that
has already been trained.
I ca
Hi Petr!
Thank you for responding to my post.
I checked out all my variables in the way you suggested and they are all in
integer form, but there are many missing values in some of my vectors,
denoted with NA.
So, they are in the correct form, I am just wondering if there is something
else I need
> On 02-Mar-2012 IOANNA wrote:
> > Hello,
> > I run a linear regression and get the summary, e.g.:
> >
> >>
> >Call:
> >lm(formula = signal ~ conc)
> >Residuals:
> >   1    2    3    4    5
> > 0.4 -1.0  1.6 -1.8  0.8
> >Coefficients:
> >
It is by no means clear what the "peaks" function does or if it has
any R equivalent, but perhaps looking at demo(rgl) will get you
started. After that, you should probably show what you've tried (at
least as far as replicating the calculation aspects).
Michael
On Fri, Mar 2, 2012 at 1:32 AM, e-m
On 02-Mar-2012 IOANNA wrote:
> Hello,
> I run a linear regression and get the summary, e.g.:
>
>>
>Call:
>lm(formula = signal ~ conc)
>Residuals:
>   1    2    3    4    5
> 0.4 -1.0  1.6 -1.8  0.8
>Coefficients:
> Estimate Std. Er
Hello,
I run a linear regression and get the summary, e.g.:
> summary(lm.r)
Call:
lm(formula = signal ~ conc)
Residuals:
  1    2    3    4    5
0.4 -1.0  1.6 -1.8  0.8
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
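The question is cut off above, but for reference (a sketch, in case the goal
is to get at the printed numbers programmatically): the pieces of the summary
can be extracted directly from the fitted object.
cf <- coef(summary(lm.r))   # matrix with Estimate, Std. Error, t value, Pr(>|t|)
cf[, "Estimate"]
summary(lm.r)$r.squared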
On 12-03-02 5:32 AM, Tom Hopper wrote:
I would like to set up identical R installations, with the same packages,
on multiple computers and with minimal interaction by users. Ideally, I
would like to have an installation script that the user can just run that
will set up everything, including R it
On Fri, Mar 02, 2012 at 03:24:20AM -0700, Ben quant wrote:
> Hello,
>
> I have a nasty loop that I have to do 11877 times.
Are you completely sure about that? I often find myself avoiding
row-by-row loops by constructing logical vectors of which rows fulfil a
condition, and then creating new vectors
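A toy sketch of the vectors-instead-of-row-loops idea:
d <- data.frame(x = rnorm(10), grp = rep(c("a", "b"), 5))
keep <- d$x > 0 & d$grp == "a"   # one logical vector for the whole data frame
d2 <- d[keep, ]                  # instead of looping over rows and testing each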
Hi Ben,
It seems you merge a matrix and a vector. As far as I understand the
first thing merge does is convert these to data.frame. Is it possible
to make the preceding steps give data frames?
Regards,
Kees
On Fri, Mar 2, 2012 at 11:24 AM, Ben quant wrote:
>
> Hello,
>
> I have a nasty loop tha
I would like to set up identical R installations, with the same packages,
on multiple computers and with minimal interaction by users. Ideally, I
would like to have an installation script that the user can just run that
will set up everything, including R itself and base packages.
Standard package
Hello,
I have a nasty loop that I have to do 11877 times. The only thing that
slows it down really is this merge:
xx1 = merge(dt,ua_rd,by.x=1,by.y= 'rt_date',all.x=T)
Any ideas on how to speed it up? The output can't change materially (it
works), but I'd like it to go faster. I'm looking at gett
Dear Felipe
On 29 September 2011 14:10, Arne Henningsen
wrote:
> Hi Felipe
>
> On 25 September 2011 00:16, Felipe Nunes wrote:
>> Hi Arne,
>> my problem persists. I am still using censReg [version - 0.5-7] to run a
>> random effects model in my data (>50,000 cases), but I always get the
>> messa
On 02-03-2012, at 09:36, Roey Angel wrote:
> Hi,
> I was recently unfortunate enough to have to use regular expressions to sort
> out some data in R.
> I'm working on a data file which contains taxonomical data of bacteria in
> hierarchical order.
> A sample of this file can be generated using
Hi
>
> Hi,
>
> I am trying to run two non-Gaussian regressions: logistic and probit. I am
> receiving two different errors when I try to run these regressions, and I am
> not sure what they mean or how to fix my syntax.
>
> Here is the logistic regression error:
>
> Error in family$linkfun(mu
Hi
my favourite would be
test$v[which(test$pattern==1)]<-NA
Regards
Petr
> Hi,
>
> On Mar 1, 2012, at 12:38 PM, Sarah Goslee wrote:
>
> > Hi,
> >
> > On Thu, Mar 1, 2012 at 11:11 AM, mails wrote:
> >> Hello,
> >>
> >>
> >> consider the following data.frame:
> >>
> >> test <- data.frame(
Dear Oscar,
Thanks for your help. It's so nice of you to explain this package to me.
Â
Best Regards,
James LAN
From: Oscar Rueda [via R]
To: monkeylan
Date: Wednesday, 29 February 2012, 9:21
Subject: Re: Bayesian Hidden Markov Models
Dear James,
The dis
Hi,
I was recently unfortunate enough to have to use regular expressions to
sort out some data in R.
I'm working on a data file which contains taxonomical data of bacteria
in hierarchical order.
A sample of this file can be generated using:
tax.data <- read.table(header=F, con <- textConnecti
use the 'comment.char' parameter of read.table
Sent from my iPad
On Mar 1, 2012, at 17:51, Rui Barradas wrote:
> Hello,
>
>>
>> The problem is that I get a the following error bacause anything after the
>> # is ignored.
>>
>> Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.
On Thu, Mar 01, 2012 at 11:18:33PM -0800, statquant2 wrote:
> Hey Petr,
> ok I was thinking that R would handle the split by itself.
> I guess using eval we can even make arg1=val1 being executed by R.
Hi.
For executing the assignments, try myRscript.R containing
args <- commandArgs(TRUE);
a
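The script body is cut off above; a common completion of that idea (a
sketch) is to evaluate each "name=value" argument passed on the command line:
# invoked as:  Rscript myRscript.R a=1 b='"some text"'
args <- commandArgs(TRUE)
for (arg in args) eval(parse(text = arg))
print(a)
print(b)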
On Mar 2, 2012, at 05:55 , ilai wrote:
> What do you make of the following from ?riwish
> "
> riwish(v, S)
>
> v: Degrees of freedom (scalar).
> "
> does a m/2 parameterization yield a scalar for, say, 3 dof ?
Er, yes (scalar does not imply integer)
As a general matter:
1. This i
Dear Ilai,
I tried to also adjust the diagonal panels. However, the variable names are not
positioned correctly anymore. Do you know a solution?
Cheers,
Marius
count <- 0
mypanel <- function(x, y, ...){
count <<- count+1
bg <- if(count %in% c(1,4,9,12)) "#FDFF65" else "transparent"
ll