No, you can't (at the moment), though it shouldn't be too hard to extend.
I can't run your example, though. I get:
Error in eval(expr, envir, enclos) : object 'M' not found
-thomas
Thomas Lumley
Professor of Biostatistics
University of Auckland
From: Chris
Hi
Thanks for your reply. I do have them installed it seems:
[dasneved@drdsv01zatcrh ~]$ yum list installed | grep xml2
*Note* Red Hat Network repositories are not listed below. You must run this
command as root to access RHN repositories.
libxml2.x86_64 2.7.6-21.el6_8.1 @RHEL6.8-Server-P
Well, have you looked to see what:
https://cran.r-project.org/web/views/Spatial.html
has to offer? And, if so, why did you not follow their advice to post
on the r-sig-geo list; if not, you should consider posting there
rather than here.
Cheers,
Bert
Bert Gunter
"The trouble with having an op
Whoops. I forgot to add the one line that I included later in the script.
(I extracted the prior code from a longer document). The code should run
now. It uses dplyr although this could have been done with alternative
methods.
How would you extend svyratio to include calculations for DEFT?
Hi, your code isn't runnable at
fpc= ~M + Nbar)
On Wed, Dec 7, 2016 at 5:03 PM, Chris Webb wrote:
> To Dr. Lumley or anyone who may know the answer,
>
> I am trying to obtain ratio estimates from Levy and Lemeshow's Sampling of
> Populations 4th ed. page 281. The results in the book
To Dr. Lumley or anyone who may know the answer,
I am trying to obtain ratio estimates from Levy and Lemeshow's Sampling of
Populations 4th ed. page 281. The results in the book are from STATA.
According to the STATA output, the DEFT is 0.830749
I can recreate all of the results except for DEFT.
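One way to get DEFT without extending svyratio itself is to take the ratio of the design-based SE to the SE under a simple-random-sampling design on the same data. A minimal sketch, assuming a data frame `df` with placeholder variables `psu`, `M`, `y`, and `x` (these names are illustrative, not from the book's example):

```r
library(survey)

# Hypothetical design; 'df', 'psu', 'M', 'y', 'x' are placeholders
des <- svydesign(ids = ~psu, fpc = ~M, data = df)
srs <- svydesign(ids = ~1, data = df)   # same data, treated as SRS

r_des <- svyratio(~y, ~x, des)
r_srs <- svyratio(~y, ~x, srs)

# DEFT = design-based SE / SE under SRS
deft <- as.numeric(SE(r_des) / SE(r_srs))
```

Whether this reproduces Stata's 0.830749 exactly will depend on how Stata handles the finite population correction in the SRS denominator.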
Hi Marna,
If we assume a sample size of 1, something like this:
dat[sample(which(dat$group!="C"),ceiling(14*0.4),TRUE),]
dat[sample(which(dat$group=="C"),floor(14*0.6),TRUE),]
Then just step through the two subsets to access your samples.
One problem is that you will not get exactly 40 or 60 %,
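Putting the two draws together, a sketch assuming a data frame `dat` with a `group` column (the 22-row layout and the target of 14 come from the thread; the `id` column is made up for illustration):

```r
set.seed(1)  # for reproducibility
dat <- data.frame(group = rep(c("A", "B", "C"), c(8, 8, 6)),
                  id = 1:22)

n_other <- ceiling(14 * 0.4)  # 6 from groups A and B
n_C     <- floor(14 * 0.6)    # 8 from group C (needs replacement: only 6 rows)

samp <- rbind(
  dat[sample(which(dat$group != "C"), n_other, replace = TRUE), ],
  dat[sample(which(dat$group == "C"), n_C, replace = TRUE), ]
)
nrow(samp)  # 14
```

Note the ceiling/floor split is what keeps the total at exactly 14 even though 14 * 0.6 is not a whole number.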
I don't know, the only thing with ldap in the name that I have
installed is "openldap".
On Wed, Dec 7, 2016 at 2:21 PM, Das Neves, David, Vodacom South Africa
wrote:
> Hi
>
> Thanks for your reply. I do have them installed it seems:
>
> [dasneved@drdsv01zatcrh ~]$ yum list installed | grep xml2
>
On Wed, 7 Dec 2016, Marc Girondot via R-help wrote:
Hi,
From the documentation of ?options
Options set in package parallel
These will be set when package parallel (or its namespace) is loaded if not
already set.
mc.cores:
an integer giving the maximum allowed number of additional R processes
As far as I know you only need libxml and libxml2-devel. Do you have
those installed?
--Ista
On Wed, Dec 7, 2016 at 8:00 AM, Das Neves, David, Vodacom South Africa
wrote:
> Hi
>
> I am trying to install the XML package on R 3.3.0 on RHEL. After it complained
> that the curl library was missing and
Hello Michael, I specified each column like this screenshot (A.C1 means
experiment A and first control, A.T3 means experiment A and third
treatment). I have two transposed files, one for coding and one for lncRNA.
On Wednesday, December 7, 2016 6:11 PM, Michael Dewey
wrote:
Hello everyone,
I am new to R and need help with a project I am currently running. I am
tracking about 24 great white sharks in Mossel Bay, South Africa. I wish to
summarize these sharks' movements in and around the bay. In order to do
that I have a shapefile of a map that I wish to import into R an
This sounds like a question better suited for R-sig-geo.
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062
On 12/6/16, 12:00 PM, "R-help on behalf of Vetter, Vanessa"
wrote:
>Hi everyone,
>I have a very large shapefile with
> On 07 Dec 2016, at 12:46 , Andrea Nygård Østvik wrote:
>
> Hi!
>
>
> I have a problem with RStudio running the whole script whenever I run one
> separate line at a time. It keeps on running the whole script over and over
> and does not read the separate line. I have an exam tomorrow, pleas
This sounds like an RStudio interface problem, not an R programming
problem, so a better place for help would be some resource provided by
RStudio.
That said, the phrase "does not read the separate line" suggests that
maybe your script is not a plain text file, so you could check that, and
make su
Hi Andrea,
The RStudio shortcut for running the current line or selection is Ctrl +
Enter. The shortcut for sourcing the whole script is Ctrl+Shift+Enter.
If you press the 'Source' button it will source the whole script - I
believe there's a Run button that will let you run the current
Hi
I am trying to install the XML package on R 3.3.0 on RHEL. After it complained
that the curl library was missing and we installed it, it continues to fail the
linking step:
gcc -m64 -std=gnu99 -shared -L/usr/lib64/R/lib -ldl -lpthread -lc -lrt -lcurl
-lidn -lssh2 -lssh2 -lssl -lcrypto -lssl -l
Dear R users,
I am happy to announce that the R package 'productivity: Indices of
Productivity Using Data Envelopment Analysis' is now available on CRAN
(https://cran.r-project.org/package=productivity).
Productivity allows computing various transitive measures of
productivity and profitabili
NIMBLE version 0.6-2 has been released on CRAN and at r-nimble.org.
NIMBLE is a system that allows you to:
- Write general hierarchical statistical models in BUGS code and
create a corresponding model object to use in R.
- Build Markov chain Monte Carlo (MCMC), particle filters, Monte
Carlo Ex
Hi!
I have a problem with RStudio running the whole script whenever I run one
separate line at a time. It keeps on running the whole script over and over
and does not read the separate line. I have an exam tomorrow, please help!!
Best Regards!
On Wed, Dec 7, 2016 at 7:04 AM, Jeff Newmiller wrote:
> Your error is in thinking that environment variables are the same thing as
> options. While environment variables might be used to set initial values of
> certain options, options are completely separate from environment variables.
> If y
Your error is in thinking that environment variables are the same thing as
options. While environment variables might be used to set initial values of
certain options, options are completely separate from environment variables.
If you want to change the option on the fly, then change the option
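A minimal illustration of that distinction in R (MY_VAR is a made-up name for demonstration):

```r
# Environment variables live in the process environment:
Sys.setenv(MY_VAR = "3")
Sys.getenv("MY_VAR")        # "3"

# Options live inside R and are set and read separately:
options(mc.cores = 2L)
getOption("mc.cores")       # 2

# Setting the option does not create a matching environment variable:
Sys.getenv("MC_CORES", unset = "not set")   # "not set"
```

Changing one on the fly never touches the other; any linkage only happens at startup, when a package chooses to initialize an option from an environment variable.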
If I understand this correctly you are choosing all the rows from each
of cod and lnc which contain .c (ie any character followed by a C) and
deleting the first column from each of cod and lnc. You then correlate
them so that you get the correlation between corresponding columns of
each. Since
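That column-wise pairing could be sketched like this, assuming two numeric data frames `cod` and `lnc` with matching column order (the toy data and names here are placeholders, not the poster's files):

```r
# Toy stand-ins for the coding / lncRNA expression tables
set.seed(1)
cod <- data.frame(A.C1 = rnorm(10), A.C2 = rnorm(10), A.T1 = rnorm(10))
lnc <- data.frame(A.C1 = rnorm(10), A.C2 = rnorm(10), A.T1 = rnorm(10))

# Keep only the ".C" (control) columns, as described above
ccols <- grep("\\.C", names(cod))

# Correlation between corresponding columns only
mapply(function(a, b) cor(a, b), cod[ccols], lnc[ccols])
```

This gives one correlation per matched column pair, rather than the full cross-correlation matrix `cor(cod, lnc)` would produce.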
Hi,
From the documentation of ?options
Options set in package parallel
These will be set when package parallel (or its namespace) is loaded if
not already set.
mc.cores:
an integer giving the maximum allowed number of additional R processes
allowed to be run in parallel to the current R proce
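In practice the parallel functions read mc.cores as a default when you don't pass it explicitly; a small sketch:

```r
library(parallel)

options(mc.cores = 2L)       # default for mclapply() and friends
getOption("mc.cores")        # 2

# mclapply() falls back to getOption("mc.cores", 2L) when mc.cores
# is not supplied (on Windows, forking requires mc.cores = 1)
res <- mclapply(1:4, function(i) i^2)
unlist(res)                  # 1 4 9 16
```

So setting the MC_CORES environment variable before R starts only matters insofar as it seeds this option; once R is running, change the option itself.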
Hi Jim, Rui and William;
I really appreciate your explanations and help. These are very helpful.
Regards,
Hayrettin
On Tue, Dec 6, 2016 at 4:06 PM, Jim Lemon wrote:
> Hi Greg,
> What is happening is easy to see:
>
> ph<-matrix(sample(1:100,40),ncol=4)
> colnames(ph)<-c("M1","X1","X2","X3")
>
Hello,
If 60% of the 14 samples come from group C, then 8.4 samples should come
from a group with 6 elements. Do you want sampling with replacement? If
so maybe the following will do.
perc <- c(0.4, 0.6)
tmp <- split(seq_len(nrow(dat)), dat$group == "C")
idx <- sapply(seq_along(tmp), function(i)
    sample(tmp[[i]], round(14 * perc[i]), replace = TRUE))
Hi R user,
I have samples with covariates for different classes, and I want to choose
samples from the different groups with different probabilities. For example,
I have a sample of size 22 with 3 classes:
groupA has 8 samples
groupB has 8 samples
groupC has 6 samples
I want to select a total of 14 samples
Hi All, I have 11 human RNA-seq data sets (control/treatment) on the effect
of various drugs on various cancers. I want to calculate the gene-lncRNA
correlations for all tumors considered together for a network. I did
differential expression analysis and prepared normalized values (RPKM) of
coding and lncod