Hello, a newbie here just trying to figure out where to start with R.
I want to build an algorithm that can detect 'duplicate' photos from a
series of photos - the common example is when you take multiple photos of a
group of people, hoping that one shot captured everyone smiling. Any batch
of kid
Dear R-User,
Appreciate any help.
Given that I have a dataframe of a tree population with three variables:
sp = species,
d0 = initial size,
grow = growth increment from initial size per year,
how can I calculate the future growth of each tree for the next 10 years?
The following R script was written,
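Not the original script (which was cut off in the digest) -- just a minimal sketch, assuming linear growth and a hypothetical data frame called trees holding the sp, d0 and grow columns described above:
years <- 1:10
future_size <- sapply(years, function(t) trees$d0 + t * trees$grow)  # size after t years
colnames(future_size) <- paste0("year", years)
head(cbind(trees, future_size))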
Hi all
I have a question: why and when do we use the odds ratio per standard deviation
instead of the odds ratio?
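A minimal sketch of how the two quantities relate, assuming a hypothetical logistic fit with a continuous predictor x (this illustrates the rescaling only, not the why/when):
fit <- glm(y ~ x, family = binomial, data = dat)   # hypothetical data
exp(coef(fit)["x"])                   # odds ratio per one-unit change in x
exp(coef(fit)["x"] * sd(dat$x))       # odds ratio per one-SD change in x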
Dear R-User,
Appreciate any help. It looks simple, but I don't have a clue.
Given that I have a dataframe of tree population with three variables:
sp=species ,
d0=initial_size
grow=growth increment from initial size per year
How can I calculate the future growth increment of each tree for th
Alexander Shenkin ufl.edu> writes:
>
> Hi Folks,
>
> I'm trying to load devtools in R 3.0.1 in order to run the dev version
> of lme4. I've updated devtools, and just installed Rtools30.exe.
> However, I get the following warning (in R-Studio, RGui, and R.exe, both
> x64 and i386):
There's
On Jun 11, 2013, at 5:25 PM, Kaptue Tchuente, Armel wrote:
> I'm trying to fit the gamma probability distribution to time series datasets
> using the following command gam<-fitdistr(x=hist2fit,"gamma") where hist2fit
> is the bar histogram of a sample distribution.
>
> The problem is that for
Hello everyone,
I'm trying to fit the gamma probability distribution to time series datasets
using the following command gam<-fitdistr(x=hist2fit,"gamma") where hist2fit is
the bar histogram of a sample distribution.
The problem is that for some points, it is not possible to fit the gamma
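A minimal sketch of basic fitdistr() usage from the MASS package, assuming a vector of raw positive observations (fitdistr expects the sample itself, not binned histogram counts); the data here are a stand-in:
library(MASS)
x <- rgamma(500, shape = 2, rate = 0.5)     # stand-in sample
gam <- fitdistr(x, densfun = "gamma")
gam$estimate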
Greg Snow <538...@gmail.com> writes:
> Some would argue that "big" and "well structured" are not compatible. Part
> of structuring a project well is knowing when and how to break it into
> smaller pieces, so those authors who are best at creating well structured R
> code will often split it betwe
Hello,
I have some measurements that I am trying to fit a model to. I also
have uncertainties for these measurements. Some of the measurements
are not well detected, so I'd like to use a limit instead of the
actual measurement. (I am always dealing with upper limits, i.e. left
censored data.)
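A minimal sketch, assuming the survival package and hypothetical vectors value (the measurement, or the upper limit when not detected) and detected (TRUE when actually measured):
library(survival)
status <- as.integer(detected)                 # 1 = observed, 0 = left-censored at the limit
fit <- survreg(Surv(value, status, type = "left") ~ 1, dist = "gaussian")
summary(fit)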
Hi Folks,
I'm trying to load devtools in R 3.0.1 in order to run the dev version
of lme4. I've updated devtools, and just installed Rtools30.exe.
However, I get the following warning (in R-Studio, RGui, and R.exe, both
x64 and i386):
-
WARNING: Rtools is required to build R pack
Hello,
Thanks for the help!
Your answer resolved my problem with the function I listed, but brought up
a larger question. How is the output of the importdata function stored for
use with other functions (as in, how do I call on that data for use with
other functions)? As a simple example I have a
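A minimal sketch of the usual pattern, assuming importdata() returns the scanned list: assign the return value to a name and pass that object to other functions.
pts <- importdata("~/path to/filename.xyzuvwrgb")
str(pts)             # inspect the stored components
plot(pts$x, pts$y)   # use them in any other function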
Thanks for those suggestions.
There was no previous part of the session. I opened R, then ran the script
as seen. In this case the
> rm(list = ls())
was superfluous - I just tend to have it at the beginning of scripts
to remove any rubbish if I have run previous stuff in the session.
It wou
Hello,
I believe you are confused about how to call a function in R. You don't
replace the argument in the function declaration; what you do is
call the function like this:
importdata("~/path to/filename.xyzuvwrgb")
leaving the function definition alone.
Hope this helps,
Rui Barr
I think the ggpairs equivalent is
ggpairs(dat1, upper=list(continuous="points"), axisLabels="show")
oddly enough.
ggpairs(dat1)
should default to the same graph as
plotmatrix(dat1)
but there seems to be a conflict between the default
axisLabels="internal" and density plots. Or something. Ther
Hi,
Try this:
lines1<- readLines("file1.txt")
lines1<- lines1[lines1!=""]
#In "file2.txt",
>or1|1234
ATCGGATTCAGG
>or2|347
GAACCTATCAATTTA
TATAA###this should be a single line
>or3|56
ATCGGAGATATAACCAATC
>or3|23
TTAACAAGAGAATAGACAAA
>or4|793
ATCTCTCTCCTCTCTCTCTA
>or7|1
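A minimal sketch of one way to join the wrapped sequence lines noted above into one line per record, assuming a FASTA-style file2.txt:
lines2 <- readLines("file2.txt")
lines2 <- lines2[lines2 != ""]                     # drop blank lines
hdr <- grepl("^>", lines2)                         # header lines start with ">"
rec <- cumsum(hdr)                                 # record number for every line
seqs <- tapply(lines2[!hdr], factor(rec[!hdr], levels = unique(rec[!hdr])),
               paste, collapse = "")               # concatenate wrapped lines
names(seqs) <- sub("^>", "", lines2[hdr])
seqs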
Some would argue that "big" and "well structured" are not compatible. Part
of structuring a project well is knowing when and how to break it into
smaller pieces, so those authors who are best at creating well structured R
code will often split it between several small files rather than one big
fil
Sarah Goslee writes:
> On Tue, Jun 11, 2013 at 11:06 AM, Thorsten Jolitz wrote:
>>
>> Hi List,
>>
>> I'm looking for a rather big, but well structured R file that contains
>> as much of R language features as possible (i.e. that uses a lot of the
>> functionality described in the 'R Reference Ca
Hello!
I am trying to do ancestral reconstruction under a split BiSSE model.
#phy is my tree
nodes<-c(755,620,602,448,6,340) #vector of nodes at which to split the
phylogeny
nodes.i<-match(nodes,phy$node.label)+length(phy$tip.label)
pars.b<-c(428.597, 90.777, 421.878, 81.815, 0.201, 2.900) #p
The data size isn't an issue. Can you send a reproducible example?
Max
On Jun 11, 2013, at 10:31 AM, Ferran Casarramona
wrote:
> Hello,
>
> I'm training a set of data with Caret package using an elastic net (glmnet).
> Most of the time train works ok, but when the data set grows in size I ge
I recommend the hexView package for setting up such conversions.
---
Jeff Newmiller
I would be very nervous about relying on an anova call here. It will
attempt a generalized likelihood ratio test, but gamm is using penalized
quasi likelihood and there is really no likelihood here (even without
the problem that if there was a likelihood the null hypothesis would
still be on th
Index the columns to select.
Let's say you want to select a set of columns, 2, 4, 6, 8.
Try something like this (not run):
mycols <- c(2, 4, 6, 8)
subset(mydata[, mycols], mydata$x == 3)
John Kane
Kingston ON Canada
> -Original Message-
> From: bcrom...@utk.edu
> Sent: Tue, 11 Jun 2013 09:18:25
Hi,
Maybe this helps:
dat1<- read.table(text="
x1 y1 x2 y2 x3 y3 output
2 100 190 99 1430 79 89
2 100 192 63 1431 75 69
2 100 192 63 1444 51 57
3 0 195 99 1499 50 74.5
3 0 198 98 1500 80 89
On Jun 11, 2013, at 9:18 AM, bcrombie wrote:
> How do I let R know that I always want to select the same columns in my
> subset functions (below), so that I don't have to keep copy/pasting the same
> selection? (thanks)
> devUni2 <- subset(devUni1, dind02 != 52,
> select=c(paidhre,earnhre,e
On Tue, Jun 11, 2013 at 2:14 PM, David Winsemius wrote:
>
> On Jun 11, 2013, at 9:01 AM, Bikash Agrawal wrote:
>
>> Is there any packages available in R, that can convert Bytes array to Float.
>> Using rJava we can do it. But it is kind of slow. Is there any R
>> specific packages.
>> I am having
Hello R-users,
I am trying to simulate from a truncated skew-normal distribution. I know
there are ways to simulate from a skew-normal distribution, such as rsn (sn)
or rsnorm (VGAM), but I could not find any command to simulate from a
truncated skew-normal distribution. Does anyone know how to do that
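A minimal rejection-sampling sketch, assuming the sn package; rtsn() is a hypothetical helper, and lower/upper are the truncation bounds:
library(sn)
rtsn <- function(n, xi = 0, omega = 1, alpha = 0, lower = -Inf, upper = Inf) {
  out <- numeric(0)
  while (length(out) < n) {
    draw <- rsn(n, xi = xi, omega = omega, alpha = alpha)   # unconstrained skew-normal draws
    out <- c(out, draw[draw >= lower & draw <= upper])      # keep only draws inside the bounds
  }
  out[seq_len(n)]
}
x <- rtsn(1000, xi = 0, omega = 1, alpha = 5, lower = 0, upper = 2)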
On Jun 11, 2013, at 9:01 AM, Bikash Agrawal wrote:
> Is there any packages available in R, that can convert Bytes array to Float.
> Using rJava we can do it. But it is kind of slow. Is there any R
> specific packages.
> I am having problem converting my bytes array to floating point.
> Could any
HI,
You could use:
result3 <- data.frame(result2[, -5],
                      read.table(text = as.character(result2$comment),
                                 sep = "|", fill = TRUE, na.strings = ""),
                      stringsAsFactors = FALSE)
colnames(result3)[5:7] <- paste0("DataComment", 1:3)
A.K.
From: Shreya Rawal
To: arun
Sent: Tuesday, June
Hi,
Try this:
dat1<- read.table(text="
DEPTH SALINITY DEPTH SALINITY
18 87 39.06 94 39.06
19 173 39.05 141 NA
20 260 39.00 188 39.07
21 312 38.97 207 39.03
22 1 39.36 1 39.35
23 10 39.36 10 39.33
24 20 39.36 20 39.33
25 30
I don't think I understand exactly what you want. Can you resend the attachment,
perhaps as a png or pdf file?
And you're right it does not recreate the plotmatrix plot. I find the ggpairs
output less than completely intuitive but I may be okay with it in a while.
OTOH I may have to ask RStud
On Jun 11, 2013, at 10:44 AM, Brian Smith wrote:
> Hmm...I think it used to work before, but it gives an error now. Here is
> some sample code:
>
> =
> library(ggplot2)
> Sample <- rep(c('A','B'),rep(10,2))
> Vals <- sample(1:1000,20)
> dataf <- as.data.frame(cbind(Sample,Vals))
It'
1. What does "common" mean?
(noting that 39.35 != 39.33 )
2. But:
?"["
## or easier, but less flexible
?subset
Also, spend some time with "An Introduction to R."
Unless I misunderstand, this is very basic, and you need to first put
in some time to learn R's basic procedures instead of posting h
On 11-06-2013, at 21:14, Anhai Jin wrote:
> Hi R users,
>
> I am trying to figure out if there is a package in R that can maximize
> likelihood function with EM algorithm. Right now, I have derived the
> log-likelihood function, which is a function of 9 indicator variables with 14
> paramete
On Tue, Jun 11, 2013 at 11:06 AM, Thorsten Jolitz wrote:
>
> Hi List,
>
> I'm looking for a rather big, but well structured R file that contains
> as much of R language features as possible (i.e. that uses a lot of the
> functionality described in the 'R Reference Card' and, if possible, S4
> clas
Hi,
On Tue, Jun 11, 2013 at 11:41 AM, Wobbe Gong wrote:
> #Hi, I am trying to run an MRPP with community data (spp-site-matrix). I
> use the following code:
>
> mzbtaxa_mrpp <- mrpp(mzbdist,mzbsites$Site)
Are you using mrpp() from the vegan package? It's good practice to say.
> #mzbdist being
Yes. I was able to run it in RStudio but it did seem much slower than in R.app
(on the Mac).
Note that the "it" that I ran still didn't give the same results as plotmatrix.
Thanks,
KW
--
On Jun 11, 2013, at 11:16 AM, John Kane wrote:
> Note that the code below might not work in RStudio. I a
John,
Thanks for that. Unfortunately it doesn't reproduce the chart in the plotmatrix
call from the original question.
That chart had what looked like densities (I think that is correct as I looked
at the plotmatrix code) down the diagonal.
I am not sure which options would give that result in
Hi R-helpers,
I inherited some code that I'm trying to use. As a very new R user I'm
having some confusion.
I have some input files in the form: filename.xyzuvwrgb which I'm trying to
import using:
importdata = function(filename) {
p = scan(filename,what=list(x = double(), y = double(), z =
Gavin et al.,
Thanks so much for the help. Unfortunately, the command
> anova(g1$lme, g2$lme)
gives "Error in eval(expr, envir, enclos) : object 'fixed' not found
and for bam (which is the one that can use a known ar1 term), the error is
> AR1 parameter rho unused with generalized model
Appa
Hi
I have a dataframe as below:
x1 y1 x2 y2 x3 y3 output
2 100 190 99 1430 79 89
2 100 192 63 1431 75 69
2 100 192 63 1444 51 57
3 0 195 99 1499 50 74.5
3 0 198 98 1500 80 89
300198
How do I let R know that I always want to select the same columns in my
subset functions (below), so that I don't have to keep copy/pasting the same
selection? (thanks)
devUni2 <- subset(devUni1, dind02 != 52,
select=c(paidhre,earnhre,earnwke,uhourse,hourslw,otc,ind02,dind02,occ00,docc00,lf
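A minimal sketch: store the selection once and reuse it, since subset()'s select argument also accepts a character vector held in a variable (column names copied from the truncated call above; extend as needed):
keep <- c("paidhre", "earnhre", "earnwke", "uhourse", "hourslw",
          "otc", "ind02", "dind02", "occ00", "docc00")
devUni2 <- subset(devUni1, dind02 != 52, select = keep)
devUni3 <- subset(devUni1, dind02 != 53, select = keep)   # hypothetical second subset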
Are there any packages available in R that can convert a bytes array to float?
Using rJava we can do it, but it is kind of slow. Are there any R-specific
packages? I am having a problem converting my bytes array to floating point.
Could anyone help me with this problem?
Thanks
Bikash
--
With Best Re
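A minimal base-R sketch, assuming the bytes sit in a raw vector and encode 4-byte IEEE single-precision values (adjust size and endian to your data):
bytes <- as.raw(c(0x00, 0x00, 0x80, 0x3f))    # example: 1.0 as a little-endian float32
vals <- readBin(bytes, what = "numeric", size = 4,
                n = length(bytes) %/% 4, endian = "little")
vals   # 1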
Hello, I'm trying to extract the common rows from a data frame, which is
something like this :
   DEPTH SALINITY DEPTH SALINITY
18    87    39.06    94    39.06
19   173    39.05   141       NA
20   260    39.00   188    39.07
21   312    38.97   207    39.03
22     1    39.36     1
#Hi, I am trying to run an MRPP with community data (spp-site-matrix). I
use the following code:
mzbtaxa_mrpp <- mrpp(mzbdist,mzbsites$Site)
#mzbdist being a distance object (Bray-Curtis similarity matrix) derived
from my sqrt transformed community data set, created with function
'vegdist', mzbsi
Hello,
I'm training a set of data with Caret package using an elastic net (glmnet).
Most of the time train works ok, but when the data set grows in size I get
the following error:
Error en { :
task 1 failed - "arguments imply differing number of rows: 9, 10"
and several warnings like this one:
Hi List,
I'm looking for a rather big, but well structured R file that contains
as much of R language features as possible (i.e. that uses a lot of the
functionality described in the 'R Reference Card' and, if possible, S4
classes too).
I want to check some code I wrote against such a file and
Hi R users,
I am trying to figure out if there is a package in R that can maximize
likelihood function with EM algorithm. Right now, I have derived the
log-likelihood function, which is a function of 9 indicator variables with 14
parameters. Is there a package that I can specify the log-likelih
Hi,
Try this:
lines1<- readLines(textConnection("gene1 or1|1234 or3|56 or4|793
gene4 or2|347
gene5 or3|23 or7|123456789"))
lines2<-readLines(textConnection(">or1|1234
ATCGGATTCAGG
>or2|347
GAACCTATCAATTTATATAA
>or3|56
ATCGGAGATATAACCAATC
>or3|23
TTAACAAGAGAATAGACAAA
>or4|793
D'oh!
On Tue, Jun 11, 2013 at 2:26 PM, arun wrote:
>
>
> Hi,
> dataf <- as.data.frame(cbind(Sample,Vals))
> str(dataf)
> #'data.frame':20 obs. of 2 variables:
> # $ Sample: Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
> # $ Vals : Factor w/ 20 levels "121","154","159",..: 20 12 13
Hi,
dataf <- as.data.frame(cbind(Sample,Vals))
str(dataf)
#'data.frame': 20 obs. of 2 variables:
# $ Sample: Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
# $ Vals : Factor w/ 20 levels "121","154","159",..: 20 12 13 1 2 14 18 5 17
10 ...
ggplot(dataf,aes(x=Vals,colour=Sample))+geom_
I am looking at the aov function in R. I see that it uses a modified QR
factorization routine dqrdc2, based on the Linpack routine dqrdc. Pivoting is
done differently than in the original Linpack routine.
My questions:
* Why is it necessary to modify the pivoting strategy? Something necessary
f
This is a problem with the normal distribution: 0 and 1 correspond to -Inf and
Inf under the qnorm function, which has no physical meaning. That is why NA
occurred in the sensitivity index. The author recommended either using a
truncated normal distribution or adjusting the number of samples to avoid the 0 an
I just wanted to point out that the construction:
dataf <- as.data.frame(cbind(Sample,Vals))
is **EVIL** .
Why?
cbind() constructs a matrix out of the separate vectors, and must
coerce columns of different types, as is the case here, to do so (a
matrix must be of one data type). Consequently
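A minimal sketch of the safer construction this reply is pointing toward, using the Sample/Vals vectors from the example:
Sample <- rep(c("A", "B"), each = 10)
Vals <- sample(1:1000, 20)
dataf <- data.frame(Sample = Sample, Vals = Vals)   # no matrix coercion, Vals stays numeric
str(dataf)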
>
Hmm...I think it used to work before, but it gives an error now. Here is
some sample code:
=
library(ggplot2)
Sample <- rep(c('A','B'),rep(10,2))
Vals <- sample(1:1000,20)
dataf <- as.data.frame(cbind(Sample,Vals))
myplot <- ggplot(dataf,aes(x=Vals,colour=Sample)) + geom_density()
mypl
On Tue, 2013-06-11 at 10:08 -0700, William Shadish wrote:
> Gavin et al.,
>
> Thanks so much for the help. Unfortunately, the command
>
> > anova(g1$lme, g2$lme)
>
> gives "Error in eval(expr, envir, enclos) : object 'fixed' not found
This is with mgcv:::gamm yes? Strange - did you load nlme f
Hi,
If the dataset is like this with the comments in the order:
dat2<-read.table(text="
Row_ID_N, Src_Row_ID, DataN1
1a, 1, This is comment 1
2a, 1, This is comment 2
3a, 2, This is comment 1
4a,
Hi Arun,
Thanks for your reply. Unfortunately the Comments are just text in the real
data. There is no way to differentiate based on the value of the Comments
column. I guess because of that reason I couldn't get your solution to work
properly. Do you think I can try it for a more general case whe
Hi Jim,
Thanks for you reply. Your solution works well for most of if the part
except that in the end its creating one column for all the comments and in
the result the comments needs to be in a separate column like DataComment1,
DataComment2 and so on.
Is there an option with which I can further
Hi,
Not sure if this is what you wanted.
mat1 <- matrix(c(1, 1, -1, -1, 1, -1, -1, -2, 1, 1, 1, 1), byrow = TRUE, nc = 4)
fun1 <- function(mat){
  matP <- mat
  matN <- mat
  matP[matP < 0] <- NA
  matN[matN > 0] <- NA
  resP <- rowSums(matP, na.rm = TRUE)/ncol(matP)
  resN <- rowSums(matN, na.rm = TRUE)/ncol(matN)
  cbind(pos = resP, neg = resN)
}
Note that the code below might not work in RStudio. I am getting an
intermittent crash when I use the ggpairs() command in RStudio and sometimes I
get a density plot and sometimes not. Also the command is taking 3-5 minutes
to execute.
This may just be a peculiarity of my machine but the c
Not quite sure if this is what you're after ... but perhaps it will help.
m <- matrix(c(1, 1, -1, -1, 1, -1, -1, -2, 1, 1, 1, 1), byrow=TRUE, ncol=4)
apply(m, 1, function(x) sum(x[x>0]))/dim(m)[2]
apply(m, 1, function(x) sum(x[x<0]))/dim(m)[2]
Jean
On Tue, Jun 11, 2013 at 7:18 AM, felice wrote
Hi,
I have some subroutines that use functions and subroutines from
Fortran modules.
In the f90 source code I used the statement:
use mymodule
and it compiles well with the R CMD SHLIB command.
Anyway, when I call dyn.load('myF90.so') from R I get the following error:
unable to load shared
Hello,
You can write your own function, allowing for a condition argument.
rowMeansCond <- function(x, cond = ">", na.rm= FALSE){
rowm <- function(x, cond = ">", na.rm = FALSE){
f <- function(x){
eval(parse(text = paste("x", cond, "0")))
Hi Keith,
ggpairs(dat1, upper = list(continuous = "density", combo = "box"))
appears to be what you want.
John Kane
Kingston ON Canada
> -Original Message-
> From: kw1...@gmail.com
> Sent: Tue, 11 Jun 2013 09:25:48 -0400
> To: r-help@r-project.org
> Subject: Re: [R] R-help Digest, Vo
Hi
I am using the package googleVis and the function gvisGeoChart
Is it possible to put a title on the map ?
Here is the call of the function :
library(googleVis)
G1 <- gvisGeoChart(PaysProjets, locationvar='Pays', colorvar='NbProj',
options=list(
region= "world",
displayMode="regions",
height=3
Hello,
When I use the function rowMeans, which is sum/n, can I divide it into 2
parts: sum(just positive values)/n and sum(just negative values)/n? I
need both for my regression but don't know how to do it.
For example, we have the matrix
1 1 -1 -1 -> rowMeans([1:3 , 2]) just positive -> 1
Dear R-help-list-readers,
I wonder if it is possible to calculate Rao's quadratic entropy based on
fuzzy coded trait data with R? As I have understood it, the packages 'FD'
and 'ade4' don't seem to support fuzzy coded data for calculating Rao's
quadratic entropy. Thank you very much fo
Thanks very much
On 11 Jun 2013, at 4:59 am, William Dunlap wrote:
> Try adding the argument
> na.action = na.exclude
> to your call to lm(). See help("na.exclude") for details.
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>
>> -Original Message-
>> From: r-help-
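A minimal sketch of the suggestion above, with hypothetical names y, x and dat:
fit <- lm(y ~ x, data = dat, na.action = na.exclude)
fitted(fit)    # padded with NA for the dropped rows, so lengths match dat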
Folks,
Sorry for butting in here. I ran the code from John Kane below and it worked
fine.
I did however get a deprecation message suggesting the use of ggpairs from the
GGally package to make this chart.
Unfortunately I haven't found the correct incantation to get the diagonal to
display th
Dear Martin,
Thank you for your answer. Here is the exact call to agnes():
setwd("E:/Hugo")
library(cluster)
load("mydata.rda")
tableauTani<-dist.binary(mydata, method = 4, diag = FALSE, upper = FALSE)
resAgnes.Tani<-agnes(tableauTani, diss = inherits(tableauTani,
"dist"),method = "ward")
classe.a
The
> rm(list = ls())
at the top of your snippet makes me wonder if you had loaded any packages
(like XLConnect) that use Java in a previous part of the session? I
believe you must designate the RAM allocation for Java prior to loading any
Java-related packages, and clearing out your objects wil
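A minimal sketch of that ordering, with an assumed 4 GB heap size:
options(java.parameters = "-Xmx4g")   # must run before any Java-backed package is attached
library(XLConnect)                    # or library(xlsx), etc.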
Unless you have detailed simulations to back up the performance of this
method I would avoid it. It violates several statistical principles.
Frank
Hari wrote
> Hello R geeks,
>
> Waiting for an reply.
>
> Thanks,
> Hari
-
Frank Harrell
Department of Biostatistics, Vanderbilt Universit
You may want loddsratio in the vcdExtra package
On 6/10/2013 12:27 PM, Vlatka Matkovic Puljic wrote:
Dear all,
I am using the Epi package to calculate odds ratios in my bivariate analysis.
How can I use twoby2 with variables that have 3 or more levels?
For example:
I have 4 level var (Age)
m=matri
Folks,
Any suggestions on how to estimate the following regression? I'm not even sure
if this kind of regression has a name:
y(t) = phi * y(t-1) + (1 - phi) * x(t) + e(t)
I need to determine phi, which has to be in (0, 1).
I don't know how to fit this into the lm() formulation.
thanks,
murali
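A minimal least-squares sketch for the single constrained parameter, assuming y and x are the numeric series from the question:
sse <- function(phi, y, x) {
  n <- length(y)
  e <- y[-1] - (phi * y[-n] + (1 - phi) * x[-1])   # residuals of the stated model
  sum(e^2)
}
fit <- optimize(sse, interval = c(0, 1), y = y, x = x)
fit$minimum   # estimate of phi in (0, 1)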
Dear R users,
I would like to source a file independently from the operating system,
but I cannot figure out how.
I apologize for the verbosity of this mail,
but English is not my mother tongue, so I cannot be concise and precise
as I can be in my own language.
I'm writing a script which will be
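A minimal sketch: file.path() builds a path with the separator appropriate for the current operating system, so the same source() call works everywhere (directory and file names here are hypothetical):
source(file.path("scripts", "helpers.R"))
source(file.path(Sys.getenv("HOME"), "project", "helpers.R"))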
Hi,
It would be better to provide a reproducible example.
set.seed(25)
all_dfs<-
list(df1=data.frame(col1=sample(1:40,20,replace=TRUE),col2=sample(20:40,20,replace=TRUE),col3=
sample(1:3,20,replace=TRUE)),df2=data.frame(col1=sample(30:60,20,replace=TRUE),col2=sample(35:65,20,replace=TRUE),col3=s
I am using some R scripts to reformat a large data set that needs to be
saved into xls format.
I am getting the Out of Memory Error (Java) despite having set a large
memory in the first line of the script ( on opening R and before loading
any libraries)
I am using R version 2.15.2 (2012-10-2