> dd
[,1] [,2]
[1,] "OP" "SU"
[2,] "XA" "YQ"
sapply( lapply(
+ strsplit(dd, split=""), sort),
+ paste, collapse="")
[1] "OP" "AX" "SU" "QY"
The result is not what I intended, since it is a plain character vector rather than a matrix.
It should be:
[,1] [,2]
[1,] "OP" "SU"
[2,] "AX" "QY"
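One way to get the desired matrix is simply to restore the dimensions of the input after the sort (a sketch built on the same pipeline as above):

```r
dd <- rbind(c("OP", "SU"), c("XA", "YQ"))
# sort the letters inside each element, then restore the matrix shape
sorted <- sapply(lapply(strsplit(dd, split = ""), sort), paste, collapse = "")
dim(sorted) <- dim(dd)
sorted
#      [,1] [,2]
# [1,] "OP" "SU"
# [2,] "AX" "QY"
```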
--
View this message in conte
Hi all,
I have 2 data frames: the first contains a list with repeats of words and an
associated response time (RT) measure for each word. The second is a
tabulation of each unique word and other information, such as the number of
responses for each word. I need to determine the mean RT
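A sketch of one way to get the mean RT per word with aggregate() (the data frame and column names `rt_data`, `word`, and `RT` are illustrative, not from the post):

```r
# illustrative data: repeated words, each with an RT measurement
rt_data <- data.frame(word = c("cat", "dog", "cat", "dog", "cat"),
                      RT   = c(512, 630, 498, 610, 520))
# mean RT for each unique word
mean_rt <- aggregate(RT ~ word, data = rt_data, FUN = mean)
mean_rt
```

The result can then be merge()d onto the second data frame of unique words by the word column.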
dd=rbind(c("OP", "SU"),c("XA", "YQ"))
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-order-each-element-according-to-alphabet-tp3668997p3669194.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org
On Jul 14, 2011, at 11:56 PM, onthetopo wrote:
Thank you very much for your reply doctor.
I tried to apply your command to my table but couldn't
Would you please enlighten me on how to do this when 'lets2' is a
4X4 matrix
for example.
The message doesn't seem to be getting through. Let's s
Thank you very much for your reply doctor.
I tried to apply your command to my table but couldn't
Would you please enlighten me on how to do this when 'lets2' is a 4X4 matrix
for example.
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-order-each-element-according-to-alpha
Hello List,
The question is how to plot a bar chart in which bars are sorted in ascending
order for each level of X. I would appreciate receiving your advice and help.
Thanks,
Pradip Muhuri
**
The following code works when producing the chart in which bars are NOT sorted.
Please see
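Since the original code is cut off above, here is a generic base-graphics sketch of sorting bars in ascending order within each level of X (the data and names are made up for illustration):

```r
# illustrative data: one value per bar, grouped by a factor X
dat <- data.frame(X     = rep(c("A", "B"), each = 4),
                  item  = letters[1:8],
                  value = c(4, 1, 3, 2, 8, 5, 7, 6))
# reorder rows: ascending value within each level of X
dat <- dat[order(dat$X, dat$value), ]
barplot(dat$value, names.arg = dat$item,
        col = ifelse(dat$X == "A", "grey40", "grey70"))
```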
On Jul 14, 2011, at 11:19 PM, onthetopo wrote:
Hi,
There are many more patterns than VL to LV. In fact, too many to be
listed manually.
For example ML should be ordered as LM, QL should be ordered as LQ.
The order is according to the alphabet.
A more complete (reproducible) answer woul
Hi,
There are many more patterns than VL to LV. In fact, too many to be
listed manually.
For example ML should be ordered as LM, QL should be ordered as LQ.
The order is according to the alphabet.
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-order-each-element-ac
On Jul 14, 2011, at 9:18 PM, onthetopo wrote:
Hi there,
I have a large amino acid csv file like this:
input.txt:
P,LV,Q,Z
P,VL,Q,Z
P,ML,QL,Z
Are you also asking how to read a comma separated file?
? read.csv # and read more introductory material
There is a problem with this file, sin
I am looking for (or interested in writing) a function that calculates Az, an
alternative measure of discriminability from SDT (alternative to d', Az). I
have written my own functions for d', A', B"d, and am aware of the 'sdtalt'
package, but I have yet to find a way to calculate Az, since it requi
Dear helpers,
I am not able to export Unicode characters from R. Below is an example
where the Unicode character is correctly rendered as long as I am stay
within R. When I export it, the character appears only with its basic
code, and the same happens when I import it back into R . I'm using R
2.
Hi there,
I have a large amino acid csv file like this:
input.txt:
P,LV,Q,Z
P,VL,Q,Z
P,ML,QL,Z
There is a problem with this file, since LV and VL are in fact the same
thing.
How do I order each element according to alphabetical order so that the
desired output would look like:
output.txt:
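The expected output itself is cut off above; one way to produce the alphabetized file might be the following sketch, which reads the comma-separated file, sorts the letters of every element, and writes the result back out:

```r
# read the csv as character, keeping every element as a string
x <- read.csv("input.txt", header = FALSE, colClasses = "character")
# sort the letters within each element (so "VL" becomes "LV", etc.)
sorted <- apply(x, c(1, 2), function(s)
  paste(sort(strsplit(s, "")[[1]]), collapse = ""))
write.table(sorted, "output.txt", sep = ",", quote = FALSE,
            row.names = FALSE, col.names = FALSE)
```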
Hello,
I am attempting to solve the least squares problem Ax = b in R, where A and b
are known and x is unknown. It is simple to solve for x using one of a variety
of methods outlined here:
http://cran.r-project.org/web/packages/Matrix/vignettes/Comparisons.pdf
As far as I can tell, none of th
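For reference, base R can compute the least-squares solution directly, without the Matrix package (a sketch with made-up dimensions):

```r
set.seed(1)
A <- matrix(rnorm(50 * 4), nrow = 50)   # 50 equations, 4 unknowns
b <- rnorm(50)
x_qr <- qr.solve(A, b)                  # least-squares solution via QR
x_lm <- lm.fit(A, b)$coefficients       # same solution via lm.fit
all.equal(unname(x_qr), unname(x_lm))   # the two should agree
```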
R 2.11.1 on Mac OS X. I didn't see the Note.
--
View this message in context:
http://r.789695.n4.nabble.com/computing-functions-with-Euler-s-number-e-n-tp3655205p3668849.html
The documentation for val.surv tried to be clear about that.
Note that val.surv is only for the case where the predictions were developed
on a separate dataset, i.e., that the validation is truly 'external'.
Frank
yz wrote:
>
> Dear R users:
>
> I want to externally validate a model with val.su
Dear R users:
I want to externally validate a model with val.surv.
Can I use only calculated survival (at 1 year) and actual survival?
Or I needed the survival function and actual survival.
Thanks
*Yao Zhu*
*Department of Urology
Fudan University Shanghai Cancer Center
Shanghai, China*
To add to what David and Duncan wrote: If you want to plot something at a point
where the x coordinate is in user coordinates, but the y-coordinate is
something like the middle of the plot, or 1/5th of the way from the top then
you can use the grconvertY function along with the text function.
I
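The idea described above might look like this (a minimal sketch; "npc" is the normalized plot coordinate system, so 0.8 is 1/5th of the way down from the top):

```r
plot(1:10)
# x in user coordinates; y converted from normalized plot coordinates
text(x = 5,
     y = grconvertY(0.8, from = "npc", to = "user"),
     labels = "a label")
```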
On Jul 14, 2011, at 6:47 PM, Madana_Babu wrote:
Hi, I have the data in the following format:
rent,100,1,common,674
pipe,200,0,usual,864
car,300,1,uncommon,392:jump,700,0,common,664
car,200,1,uncommon,864:snap,900,1,usual,746
stint,600,1,uncommon,257
pull,800,0,usual,594
whereas I want the abo
On Jul 14, 2011, at 6:15 PM, Tyler Rinker wrote:
Good Afternoon R Community,
I often work with very large databases and want to search for
select cases by a particular word or numeric value. I created the
following simple function to do just that. It searches a particular
column for
Hi,
I'm running Redhat Linux (I believe it is Fedora 13 or 14) With the latest
version of R
Everything works nicely, except for the fonts on some plots. I see small empty
boxes instead of the number. It seems as if this is only the case when the
fonts are small.
I've installed all of the f
One more thing - for large data sets, the packages flashClust and
fastcluster provide much faster hierarchical clustering that (at least
for flashClust which I'm the maintainer of) give the exact same
results. Simply insert a
library(flashClust)
before you call the function and your code will run
Hi Paul,
I assume you are using the argument cutoff to specify the p-value
below which nodes are considered connected and above which they are
not connected.
I would use single linkage hierarchical clustering. If you have two
groups of nodes and any two nodes between the groups are connected
(i.e
Thanks a lot Gabor. It helped a lot. Appreciate your time and effort.
Thanks
--- On Thu, 7/14/11, Gabor Grothendieck wrote:
> From: Gabor Grothendieck
> Subject: Re: [R] SQldf with sqlite and H2
> To: "Mandans"
> Cc: r-help@r-project.org
> Date: Thursday, July 14, 2011, 2:22 PM
> On Thu, Jul
Sorry bad example. My data is undirected. It's a correlation matrix so probably
better to look at something like:
foomat<-cor(matrix(rnorm(100), ncol=10))
foomat
mine are pvalues from the correlation but same idea.
On 14 Jul 2011, at 11:23, Erich Neuwirth wrote:
> cliques only works for undir
Hi, I have the data in the following format:
rent,100,1,common,674
pipe,200,0,usual,864
car,300,1,uncommon,392:jump,700,0,common,664
car,200,1,uncommon,864:snap,900,1,usual,746
stint,600,1,uncommon,257
pull,800,0,usual,594
whereas I want the above 6 lines of data turned into 8 lines as below (Splitting row
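One way to get the 8 lines is to split each raw line on ":" before parsing it as csv (a sketch using the six lines shown above inline, instead of a file):

```r
raw <- c("rent,100,1,common,674",
         "pipe,200,0,usual,864",
         "car,300,1,uncommon,392:jump,700,0,common,664",
         "car,200,1,uncommon,864:snap,900,1,usual,746",
         "stint,600,1,uncommon,257",
         "pull,800,0,usual,594")
# split the composite rows on ":" -> 8 lines, then parse as csv
expanded <- unlist(strsplit(raw, ":", fixed = TRUE))
dat <- read.csv(text = expanded, header = FALSE)
nrow(dat)  # 8
```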
Good Afternoon R Community,
I often work with very large databases and want to search for select cases by
a particular word or numeric value. I created the following simple function to
do just that. It searches a particular column for the phrase and returns a data
frame with the rows that
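The function itself is cut off above; a minimal sketch of such a helper could look like this (the name `find_cases` is illustrative, not the poster's):

```r
# return the rows whose given column matches a word or value
find_cases <- function(dat, column, phrase) {
  dat[grepl(phrase, dat[[column]], ignore.case = TRUE), , drop = FALSE]
}
# example: rows of mtcars whose cyl column contains "4"
find_cases(mtcars, "cyl", "4")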
Hi, everybody!
I want to perform a cost-sensitive classification using rpart as a base
classifier.
Is it possible?
Nissim
--
View this message in context:
http://r.789695.n4.nabble.com/Cost-sensitive-classification-tp3668749p3668749.html
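It is possible: rpart() accepts a loss matrix through its `parms` argument (a sketch on a built-in dataset; the 5x penalty is arbitrary):

```r
library(rpart)
# loss matrix: zero diagonal, off-diagonal entries are the costs of
# the corresponding misclassifications; here one direction costs 5x
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             parms = list(loss = matrix(c(0, 5, 1, 0), nrow = 2)))
```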
Hi:
I think Bill's got the right idea for your problem, but for the fun of
it, here's how Bert's suggestion would play out:
# Kind of works, but only for the first variable in myvars...
> aggregate(get(myvars) ~ group + mydate, FUN = sum, data = example)
group mydate get(myvars)
1 group1 2
Hi,
from example(capitalize) of the Hmisc package (v 0.8.3) you get:
> capitalize(c("Hello", "bob", "daN"))
[1] "Hello" "Bob" "daN"
Is that "daN" correct?
If so, then this behavior that only *all lowercase strings*, which the
code indicates, will be capitalized is not documented.
> Hmisc::c
> Date: Thu, 14 Jul 2011 12:44:18 -0800
> From: toshihide.hamaz...@alaska.gov
> To: r-h...@stat.math.ethz.ch
> Subject: Re: [R] Very slow optim(): solved
>
> After Googling and trial and errors, the major cause of optimization was not
> functions, but d
Hi
I am posting in the topic related to the "non-numeric argument to binary
operator" as I got similar problem while running the netcdf code. I have
attached the file with this post. It is a climate data from NOAA site. The code
follows as:
library(survival)
library(RNetCDF)
library(ncdf)
ups...already found the solution
matrix2 <- matrix1[sample(samplenumber,replace=F),]
--
View this message in context:
http://r.789695.n4.nabble.com/random-selection-of-elements-from-a-matrix-tp3668574p3668594.html
You may find it easier to use the data.frame method for aggregate
instead of the formula method when you are using vectors of column
names. E.g.,
responseVars <- c("mpg", "wt")
byVars <- c("cyl", "gear")
aggregate(mtcars[responseVars], by=mtcars[byVars], FUN=median)
gives the same result
Hi!
How can I make a random selection of "n" row elements from a matrix.
The matrix was previously created from a table with different rows and
columns. However I want to keep all the information (columns), I just want
to reduce the number of observations.
Thanks,
Ana
Hi
I keep getting an error like this:
Error in `coef<-.corARMA`(`*tmp*`, value = c(18.3113452983211,
-1.56626248550284, :
Coefficient matrix not invertible
or like this:
Error in gls(archlogfl ~ co2, correlation = corARMA(p = 3)) : false
convergence (8)
with the gls function in nlme.
The f
Thank you for your reply.
As I said earlier, I want to plot the total number of birds counted for each
day, one level for each day.
I wanted to normalize the data, since I don't have data for an equal number
of hours on all days. For example
o
Ok...lets try again with some code...
---
Hello fellow R users,
I am having a problem finding the estimates for some overall treatment
effects for my mixed models using 'lme' (package nlme). I hope someone
can help.
Fi
David - I tried exactly the thing you did (and after that asked my
question to the forum):
> form <- as.formula(paste(
> "cbind(",
> paste( myvars, collapse=","),
> ") ~ group+mydate",
> sep=" ") )
And it did not wo
Thanks a lot!
actually, what I tried to do is very simple - just passing tons of
variable names into the formula. Maybe that "get" thing suggested by
Bert would work...
Dimitri
On Thu, Jul 14, 2011 at 4:01 PM, David Winsemius wrote:
> Dmitri:
>
> as.matrix makes a matrix out of the dataframe t
After Googling and trial and errors, the major cause of optimization was not
functions, but data setting.
Originally, I was using data.frame for likelihood calculation. Then, I changed
data.frame to vector and matrix for the same likelihood calculation. Now
convergence takes ~ 14 sec instead o
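The effect described is easy to reproduce (a sketch; the exact timings vary by machine, but repeated data.frame indexing carries far more overhead than matrix indexing):

```r
df <- as.data.frame(matrix(rnorm(1e5), ncol = 10))
m  <- as.matrix(df)
# repeated row access, as inside a likelihood called by optim()
system.time(for (i in seq_len(5000)) df[i %% 1000 + 1, ])  # slow
system.time(for (i in seq_len(5000)) m[i %% 1000 + 1, ])   # fast
```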
Dear All,
I've been trying to run a Weighted Least Squares (WLS) regression:
Dependent variables: a 60*200 matrix (*Rit*) with 200 companies and 60 dates
for each company
Independent variables: a 60*4 matrix (*Ft*) with 4 factors and 60 dates for
each factor
Weights: a 60*200 matrix (*Wit*) with
If I understood your question
x<-data.frame(matrix(rnorm(2000,10,10),ncol=50))
sapply(1:5,function(i) summary(lm(x[,i]~x[,i+10]+x[,50])))
Weidong Gu
On Thu, Jul 14, 2011 at 2:27 PM, Jon Toledo wrote:
>
> Hi,
> First let me thank you for the incredible help and resource that this forum
> is.
>
Why go to so much trouble? Why not fit a single full model and use it? Even
better why not use a quadratic penalty on the full model to get optimum
cross-validation?
Frank
nofunsally wrote:
>
> Hello,
> I'd like to sum the weights of each independent variable across linear
> models that have be
Dmitri:
as.matrix makes a matrix out of the dataframe that is passed to it.
As a further note I attempted and failed for reasons that are unclear
to me to construct a formula that would (I hoped) preserve the column
names which are being mangled in the posted effort:
form <- as.formula(past
Dmitri:
Look at my vars from myvars<-c("value1","value2")
It's just a character vector of length 2!
You can't cbind a character vector of length 2! These are not
references/pointers.
It's not at all clear to me what you ultimately want to do, but IF
it's: pass a character vector of names to be u
Thank you, David, it does work.
Could you please explain why? What exactly does changing it to "as matrix" do?
Thank you!
Dimitri
On Thu, Jul 14, 2011 at 3:25 PM, David Winsemius wrote:
>
> On Jul 14, 2011, at 3:05 PM, Dimitri Liakhovitski wrote:
>
>> Hello!
>>
>> I am aggregating using a formula
On Jul 14, 2011, at 3:05 PM, Dimitri Liakhovitski wrote:
Hello!
I am aggregating using a formula in aggregate - of the type:
aggregate(cbind(var1,var2,var3)~factor1+factor2,sum,data=mydata)
However, I actually have an object (vector of my variables to be
aggregated):
myvars<-c("var1","var2
Hello!
I am aggregating using a formula in aggregate - of the type:
aggregate(cbind(var1,var2,var3)~factor1+factor2,sum,data=mydata)
However, I actually have an object (vector of my variables to be aggregated):
myvars<-c("var1","var2","var3")
I'd like my aggregate formula (its "cbind" part) to b
Probably no hope of help until you do as the posting guide asks:
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and *** provide comm
Hi,
First let me thank you for the incredible help and resource that this forum is.
I am trying to compare the repeated measurement of more than 100 analytes that
have been take in 70 subjects at 2 time points adjusted for the time difference
of sample times(TimeDifferenceDays), therefore I wan
sjaffe riskspan.com> writes:
[snip snip]
> This can also be used to 'snap' the current view during an interactive
> session and restore it later during that same session, which could be quite
> useful.
>
> To save typing, a (trivial) pair of functions to encapsulate this:
>
> snap.view<- f
On 14/07/2011 12:46 PM, warmstron1 wrote:
I resolved this issue. It appears that "^" won't work for this case, but
"**" worked. I can't find any reference to this, but where "^" seems to be
used to raise a value to a numerical function, "**" is used for a y raised
to the power of x where x is a
I am fairly new at using R/programming in general so I apologize if I am
leaving crucial parts of the puzzle out, but here goes.
First and foremost this is the error I am receiving:
Error in muPriors[priors[, 1:2]] <- priors[, 3] :
NAs are not allowed in subscripted assignments
This occ
Hello fellow R users,
I am having a problem finding the estimates for some overall treatment
effects for my mixed models using 'lme' (package nlme). I hope someone
can help.
Firstly then, the model:
The data: Plant biomass (log transformed)
Fixed Factors: Treatment(x3 Dry, Wet, Control) Yea
Hi all,
I have just begun to use R and am hoping to receive some advice about the
problem I need to solve. I have a file containing xy points that I need to
find all significant clusters and write each of their xy coordinates to
file(total points ~ 75000 and sig. cluster = 2500 points. I want t
On Thu, Jul 14, 2011 at 10:33 AM, Mandans wrote:
> SQldf with sqlite and H2
>
> I have a large csv file (about 2GB) and wanted to import the file into R and
> do some filtering and analysis. Came across sqldf ( a great idea and product)
> and was trying to play around to see what would be the be
Terrific! This is great to know.
I first tried saving and restoring the entire set from par3d but this
produced some changes (eg bg) and also one must call par3d with
no.readonly=TRUE. Clearly this is the way to go if one has changed a variety
of rgl properties. But if one has only used the mouse
Yes, because from your previous posts, you appeared to have read in
the data as "character":
file=read.table("file.txt",fill=T,colClasses = "character",header=T)
But, of course, without a reproducible example, one cannot be sure.
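In that case the character columns can be converted before aggregating (a sketch, assuming columns 3 to 6 hold the numbers, as in the posted call):

```r
# the numeric columns came in as character; convert them first
file[, 3:6] <- lapply(file[, 3:6], as.numeric)
aggregate(file[, 3:6], by = list(file[, 2]), FUN = sum)
```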
-- Bert
On Thu, Jul 14, 2011 at 10:46 AM, Bansal, Vikas wrote:
On 07/14/2011 01:46 PM, Bansal, Vikas wrote:
> I have tried that also. But it is showing this error-
>
> aggregate(file[,3:6], by = list(file[,2]), FUN = sum)
>
> Error in FUN(X[[1L]], ...) : invalid 'type' (character) of argument
>
>
Farther down in your previous e-mail you state that you re
warmstron1 wrote:
>
> I solved this in two ways:
> 1. "**" was necessary to raise (-dummy + 1) to the power of B. "^"
> doesn't work here, for some reason.
> ...
>
Using which version R on which platform?
Most strange. The help page for "Arithmetic operators" clearly states in a
Note that "*
I have tried that also. But it is showing this error-
aggregate(file[,3:6], by = list(file[,2]), FUN = sum)
Error in FUN(X[[1L]], ...) : invalid 'type' (character) of argument
Thanking you,
Warm Regards
Vikas Bansal
Msc Bioinformatics
Kings College London
Bansal, Vikas kcl.ac.uk> writes:
> I am using this-
>
> aggregate(x = file[,3:6], by = list(file[,2]), FUN = "sum")
>
Better, although still not reproducible (please *do* read the posting
guide -- it is listed at the bottom of every R list post and is the
*first* google hit for "posting guid
I solved this in two ways:
1. "**" was necessary to raise (-dummy + 1) to the power of B. "^" doesn't
work here, for some reason.
2. I needed to use "as.complex" which greatly simplified my code and
produces the correct response. (I had to revisit math that I had not used
in many years.)
W
I resolved this issue. It appears that "^" won't work for this case, but
"**" worked. I can't find any reference to this, but where "^" seems to be
used to raise a value to a numerical function, "**" is used for a y raised
to the power of x where x is a computation.
From: Bansal, Vikas
Sent: Thursday, July 14, 2011 6:07 PM
To: Bert Gunter
Subject: RE: [R] Adding rows based on column value
Yes sir.I am trying.
I am using this-
aggregate(x
Thanks. I have installed PBAT on my computer.
--
View this message in context:
http://r.789695.n4.nabble.com/R-package-pbatR-tp3667844p3667907.html
I have checked it but did not get any results. Is there a way I can do it? I
will be very thankful to you.
From: Bert Gunter [gunter.ber...@gene.com]
Sent: Thursday, July 14, 201
On 14/07/2011 11:12 AM, sjaffe wrote:
After interacting with a 3d plot (eg plot3d, persp3d), is there a way to
capture the final settings of view angles, etc, so that the final plot could
be easily reproduced? The plot functions themselves just return a vector of
'ids'.
Yes, saving the result
Points taken and terms(formula) is used now. Thanks.
-Original Message-
From: William Dunlap [mailto:wdun...@tibco.com]
Sent: Thursday, July 14, 2011 11:56 AM
To: Pang Du; r-help@r-project.org
Subject: RE: [R] question on formula and terms.formula()
I think you should replace
terms.fo
Thank you so much for your suggestion, Bill.
The R program I try to modify needs match.call() for something else. But the
problem does seem to be caused by this statement as you suggested. Following
this clue, I find out that
terms.formula(formula)
does essentially what I want for "terms.formul
After interacting with a 3d plot (eg plot3d, persp3d), is there a way to
capture the final settings of view angles, etc, so that the final plot could
be easily reproduced? The plot functions themselves just return a vector of
'ids'.
--
View this message in context:
http://r.789695.n4.nabble.com/r
I think you should replace
terms.formula(formula)
by
terms(formula)
When terms() is given a formula object it will
execute terms.formula but for other classes of
inputs it will invoke the appropriate method.
E.g., your formula may already be a terms object,
in which case terms.formula(formula
?tapply (in base R)
?aggregate ?by (wrapper for tapply)
?ave (in base R -- based on tapply)
Also package plyr (and several others, undoubtedly).
Also google on "R summarize data by groups" or similar gets many relevant hits.
-- Bert
2011/7/14 Bansal, Vikas :
> Dear all,
>
> I have one prob
Dear all,
I have one problem and did not find any solution. (I have also attached the
problem in a text file because sometimes the column spacing is not good in
mail.)
I have a file (file.txt) attached with this mail. I am reading it using this
code to make a data frame (file)-
file=read.table("file.txt
Hi Simon
A combination of functions gDistance, gBuffer and gIntersects from package
rgeos should do the job. Also, have a look at www.naturalearthdata.com. They
have various shapefiles with coastlines and land polygons, though I don't know
how the resolution compares with the worldHires datab
I am guessing (from other evidence of lapses in attention to
documentation) that you failed to pay attention when you encountered
these sentences on the page you offered a link to:
"For analysis, this package provides a frontend to the PBAT software"
"For analysis, users must download PBAT
Dear All,
Does anybody have experience with R package pbatR
(http://cran.r-project.org/web/packages/pbatR/index.html)? I am trying to
use it to analyze the family-based case-control data, but the package
totally doesn’t work on my computer. I contacted the authors of the package,
but I haven’t hea
Hi All,
Does anybody know of any existing functions that will calculate distance
inland from a coastline?
It's possible to test if a lon,lat location is land or sea using
map.where(), but I need to add a buffer to this of say 2km, to allow for
points that are just on the coast, and below the res
Peter Maclean yahoo.com> writes:
>
> In glm() you can use the summary() function to recover
> the shape parameter (the reciprocal of the
> dispersion parameter). How do you recover the scale parameter?
> Also, in the given example, how I estimate
> and save the geometric mean of the predicted v
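A sketch of how those quantities can be pulled out of a fitted glm (assuming a Gamma-family model; the data here are made up):

```r
set.seed(1)
d <- data.frame(x = runif(100))
d$y <- rgamma(100, shape = 2, rate = 2 / exp(1 + d$x))
fit <- glm(y ~ x, family = Gamma(link = "log"), data = d)

disp  <- summary(fit)$dispersion   # dispersion estimate
shape <- 1 / disp                  # shape = reciprocal of dispersion
# geometric mean of the predicted values
gm <- exp(mean(log(predict(fit, type = "response"))))
```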
Hi,
I'm trying to plot some data (z) that is linked to lat&long coordinates
(x&y). These coordinates are not however on a regular grid. I also have some
shapefiles on which I would like to overlay the data. I can plot the
shapefiles (country/city outlines) and overplot the data, but only using
quil
Thanks a lot! Great help!
--
View this message in context:
http://r.789695.n4.nabble.com/Problem-with-x-labels-of-barplot-tp3667337p3667498.html
This is what I was looking for. When I initially read about model.avg
I didn't recognize it also provided variable scores.
Thank you kindly,
Mike
SQldf with sqlite and H2
I have a large csv file (about 2GB) and wanted to import the file into R and do
some filtering and analysis. Came across sqldf ( a great idea and product) and
was trying to play around to see what would be the best method of doing this.
csv file is comma delimited with
This is a hack which uses the output of the density function.
rrr <- cumsum(diff(cumsum(denspoints$y))*diff(denspoints$x))
lines(denspoints$x[-512],rrr)
I don't believe this is the best solution, because it is not a direct
estimate of the cumulative distribution function. I think there are methods fo
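When the raw data are at hand, base R's ecdf() gives a direct empirical estimate without going through density() (a sketch on simulated data):

```r
x <- rnorm(1000)
denspoints <- density(x)   # the kernel density estimate, as above
plot(denspoints)
# direct empirical cumulative distribution function of the same data
Fn <- ecdf(x)
plot(Fn)
```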
On 07/14/2011 10:35 PM, Jim Lemon wrote:
On 07/13/2011 10:08 PM, Kishorenalluri wrote:
Hi Jim,
Saving is not a problem. I wanted to load/read the columns from the
file followed by plotting the area plot using ggplot2.
I am a basic user. I am trying to reproduce the plot similar to the
example
g
On 07/13/2011 10:08 PM, Kishorenalluri wrote:
Hi Jim,
Saving is not a problem. I wanted to load/read the columns from the
file followed by plotting the area plot using ggplot2.
I am a basic user. I am trying to reproduce the plot similar to the example
given here.
http://processtrends.c
Hi Student (since you have no other name),
On Thu, Jul 14, 2011 at 6:42 AM, Economics Student
wrote:
> Dear R-helpers,
>
> In a data frame I have 100 securities,monthly closing value,from 1995 to
> present,which I have to
>
> 1. Sampling with replacement,make 50 samples of 10 securities each,each
On 11-07-14 7:51 AM, Don wrote:
Hello everyone,
i am currently creating a barplot.
This barplot takes a vector of ~200 datapoints.
Each datapoint represents one bar.
http://img96.imageshack.us/i/human1w.png/
(Ok as you see, it is not only one barplot, but a series of barplots).
Now, these barpl
Hi list,
this is my second try for first post on this list (I tried to post via email
and nothing appeared in my email-inbox, so now I try to use the
nabble-web-interface) - I hope that you will only have to read one post in
your inbox! Okay, my question ...
I was able to plot a histogram and add
Hello everyone,
i am currently creating a barplot.
This barplot takes a vector of ~200 datapoints.
Each datapoint represents one bar.
http://img96.imageshack.us/i/human1w.png/
(Ok as you see, it is not only one barplot, but a series of barplots).
Now, these barplots represent a human chromosome.
Hi,
I already have matched samples, which were matched using different software.
I need to calculate Abadie-Imbens standard errors, together with the average
treatment effect. I know that the Matching package enables me to calculate
these after a Matching procedure, but is there any way to do it o
Dear R-helpers,
In a data frame I have 100 securities,monthly closing value,from 1995 to
present,which I have to
1. Sampling with replacement,make 50 samples of 10 securities each,each
sample hence will be a data frame with 10 columns.
2. With uniform probabilty,mark a month from 2000 onwards as
Thanks! For testing purposes this rescaling works! But unfortunately due to
timing constraints I'm not able to do the rescaling of the data, so as I
mentioned I have to work on with unscaled data. So I have to calculate
$f(\vec x) = \sum_{i \in \mathrm{sv}} \mathrm{coefs}_i \langle \vec x_i, \vec x \rangle - \rho$
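That formula can be evaluated by hand from a fitted model (a sketch assuming an e1071::svm model with a linear kernel and scale = FALSE; check signs against predict(..., decision.values = TRUE), since sign conventions depend on the factor level ordering):

```r
library(e1071)
X <- as.matrix(iris[1:100, 1:4])
y <- factor(iris$Species[1:100])
m <- svm(X, y, kernel = "linear", scale = FALSE)

# f(x) = sum_i coefs_i <x_i, x> - rho, over the support vectors
x_new <- X[1, ]
f <- sum(m$coefs * (X[m$index, ] %*% x_new)) - m$rho
```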
I will be out of the office from 7 July to 14 July and will not have access
to my emails. In urgent cases please contact Edit Kárpáti
(karpati.e...@gyemszi.hu).
Best regards,
Péter Mihalicza
I will be out of the office from 7 July till 14 July with no access to my
emails.
In urgent cases please contact Ms. Ed
On 11-07-14 3:23 AM, Mark Heckmann wrote:
Thanks Duncan,
the problem now is that, the space between R code and R output is also
increased.
I would like to avoid this, i.e.
vertical space
R code
NO SPACE
R results
vertical space
Don't modify \topsep then, just put the spacing directly into the
Dear all,
I am searching for a possibility to view large data sets (e.g. stored in
ffdf objects) in a GUI window in a memory-efficient way. So far I looked
at gtkDfEdit (package RGtk2Extras) and gdf (package gWidgets). Both
operate (as far as I can see) on data frames stored in memory. gtkDfEd
Duncan's suggestion is probably the way to go, but I will just point out
that R does have a facility to perform a task when an error occurs. I
have my code set up to send me an email when my batch code fails.
(email() is a function I wrote that executes sql command to send email
via dbmail.)
.Err
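The message is cut off, but the mechanism described can be sketched with options(error = ...); email() stands in for the author's own mailing function and is only stubbed here:

```r
# stub for the author's helper that mails via dbmail
email <- function(to, subject, body) message("would mail: ", subject)

# run a handler whenever an error stops batch code
options(error = function() {
  email(to      = "me@example.com",
        subject = "batch job failed",
        body    = geterrmessage())
})
```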
?model.avg
Look at "relative importance".
> Message: 102
> Date: Wed, 13 Jul 2011 18:01:14 -0500
> From: Michael Just
> To: r-help
> Subject: [R] Sum weights of independent variables across models (AIC)
> Message-ID:
>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hello,
> I'd lik
Hi!
What about as.formula()?
Like this:
form <- as.formula(paste("num", y, "~MemberID", sep=""))
agg<-aggregate(form, right.a, sum)
Would it work as you expect to?
HTH,
Ivan
Le 7/13/2011 19:30, Daniel Nordlund a écrit :
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-
Thanks Duncan,
the problem now is that, the space between R code and R output is also
increased.
I would like to avoid this, i.e.
vertical space
R code
NO SPACE
R results
vertical space
TIA,
Mark
Am 14.07.2011 um 02:13 schrieb Duncan Murdoch:
> On 13/07/2011 7:14 PM, Mark Heckmann wrote:
>> I