On 07/30/2012 11:00 PM, Luna wrote:
Dear R users,
I have a hard time interpreting the covariances in the parameter estimates
output (standardized), even in the documented example (PoliticalDemocracy).
Can anyone tell me if the estimated covariances are residual covariances
(unexplained by the model)?
Greetings.
I'm trying to understand a problem on a Dell laptop. Details below; I have
also uploaded the R working example that I pasted below.
http://pj.freefaculty.org/scraps/testSymbols.R
> sessionInfo()
R version 2.15.1 (2012-06-22)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_U
Hello,
I hope this helps.
p1 <- barplot(counts, main="Car Distribution by Gears and VS",
              xlab="Number of Gears", col=c("darkblue","red"),
              legend=rownames(counts), beside=TRUE, horiz=TRUE)
mtext(text=counts, las=1, side=2, outer=FALSE, at=p1)
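For reference, A.K.'s answer runs end-to-end like this (counts comes from the quoted question below; the pdf device is only so the example draws off-screen):

```r
# Build the grouped counts and draw the horizontal grouped barplot;
# barplot() returns the bar midpoints, which mtext() uses for label positions.
counts <- table(mtcars$vs, mtcars$gear)
pdf(tempfile(fileext = ".pdf"))   # off-screen device so this runs anywhere
p1 <- barplot(counts, main = "Car Distribution by Gears and VS",
              xlab = "Number of Gears", col = c("darkblue", "red"),
              legend = rownames(counts), beside = TRUE, horiz = TRUE)
mtext(text = counts, las = 1, side = 2, at = p1)  # one label per bar
dev.off()
```

With beside=TRUE, p1 is a matrix with one midpoint per cell of counts, which is why passing it to at= places a label next to each bar.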
A.K.
- Original Message -
From: Manish Gupta
On Aug 10, 2012, at 7:55 PM, Manish Gupta wrote:
How to write values on bars using mtext?
Grouped Bar Plot
counts <- table(mtcars$vs, mtcars$gear)
barplot(counts, main="Car Distribution by Gears and VS", xlab="Number of Gears",
col=c("darkblue","red"), legend=rownames(counts), beside=TRUE,
h
Dear R users,
Is there a way to apply a stopping rule in hierarchical clustering? I have a
dataset and I want to find the optimal number of clusters while doing
hierarchical clustering. I would be deeply obliged if someone who has applied
a stopping rule could send me the scripts. Thanks in advance.
How to write values on bars using mtext?
Grouped Bar Plot
counts <- table(mtcars$vs, mtcars$gear)
barplot(counts, main="Car Distribution by Gears and VS", xlab="Number of Gears",
col=c("darkblue","red"), legend=rownames(counts), beside=TRUE, horiz=TRUE)
mtext(counts)  # But the position is not at each bar
HI,
Not sure how you want to align the columns.
If you want to write the columns in fixed-width format, you can use
write.fwf() from library(gdata).
A.K.
- Original Message -
From: sharx
To: r-help@r-project.org
Cc:
Sent: Friday, August 10, 2012 9:39 PM
Subject: [R] Align columns i
HI,
This may also help:
someTags <- data.frame(tag_id = c(1, 2, 2, 3, 4, 5, 6, 6), lgth = 50*(1:8),
                       stage = factor(rep(".", 8), levels = c(".", "J")))
f2 <- function(x) {
  needsChanging <- with(x, is.na(match(tag_id, tag_id[duplicated(tag_id)])) & lgth < 300)
  x$stage[needsChanging] <- "J"
  x
}
f2(someTags)
On Aug 10, 2012, at 6:23 PM, Elaine Jones wrote:
I am running R version 2.15.1 in Windows XP
I am having problems with a function I'm trying to create to:
1. subset a data.frame based on function arguments (colname &
parmname)
2. rename the PARMVALUE column in the data.frame based on
On Aug 10, 2012, at 6:39 PM, sharx wrote:
Does anyone know of a way to specify the alignment of individual
columns in a
data frame so that after using write.table the columns are aligned
in the
file?
Do you mean by padding with spaces? set numzer.pad to the desired
width and then perhap
I do not know of any option in write.table() that would allow a
variable spacer, such as padding with spaces to make columns centered
or right aligned. Everything is just separated somehow. You could
look at ?format or ?sprintf which have padding/alignment options.
Once you had properly padded ch
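A minimal base-R sketch of that padding idea (the data frame, column names, and width below are invented for illustration):

```r
# Right-align every column to a fixed width with formatC(), then the result
# can be written without quotes so the columns line up in the file.
df <- data.frame(name = c("a", "bb", "ccc"), value = c(1, 22, 333))
padded <- as.data.frame(lapply(df, function(col)
  formatC(as.character(col), width = 8)), stringsAsFactors = FALSE)
# write.table(padded, "aligned.txt", quote = FALSE, row.names = FALSE)
```

formatC() right-justifies by default; pass flag = "-" for left alignment instead.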
Does anyone know of a way to specify the alignment of individual columns in a
data frame so that after using write.table the columns are aligned in the
file?
--
View this message in context:
http://r.789695.n4.nabble.com/Align-columns-in-data-frame-write-table-tp4640007.html
Sent from the R hel
On Aug 10, 2012, at 4:06 PM, andrej wrote:
I put "sos::findFn('fits')" into the search engine and it returned 0
results,
so how are you finding this?
'sos' is an R package. I'm guessing you did not install and load 'sos'
before typing that command at the console. (You might be able to ge
I am running R version 2.15.1 in Windows XP
I am having problems with a function I'm trying to create to:
1. subset a data.frame based on function arguments (colname & parmname)
2. rename the PARMVALUE column in the data.frame based on function
argument (xvar)
3. generate charts
p
HI,
Same result, with data.frame:
dat1<-data.frame(V1=v[1:3],V2=v[4:6],V3=v[7:9],V4=c(v[10],rep(0,2)))
sapply(dat1,cumsum)[3,]
V1 V2 V3 V4
6 15 24 10
sapply(dat1,sum)
V1 V2 V3 V4
6 15 24 10
A.K.
- Original Message -
From: David Winsemius
To: Michael Weylandt
Cc: "r-help@r-proj
I put "sos::findFn('fits')" into the search engine and it returned 0 results,
so how are you finding this?
Also: Why was my mailing list message rejected as a duplicate? I don't even
understand how these underground subscriptions work, am I off the mailing
list because it's a duplicate?
On 08/10/2012 03:46 PM, Frederic Fournier wrote:
Hello everyone,
I would like to parse very large xml files from MS/MS experiments and
create R objects from their content. (By very large, I mean going up to
5-10Gb, although I am using a 'small' 40M file to test my code.)
I'm not 100% sure of i
Your sum(tag_id==tag_id[i])==1, meaning tag_id[i] is the only entry with its
value, may be vectorized by the sneaky idiom
!(duplicated(tag_id,fromLast=FALSE) | duplicated(tag_id,fromLast=TRUE))
Hence f0() (with your code in a loop) and f1() are equivalent:
f0 <- function (tags) {
for (i in s
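The idiom can be checked directly against the loop form it replaces; the tag_id values below are borrowed from the someTags example elsewhere in this thread:

```r
tag_id <- c(1, 2, 2, 3, 4, 5, 6, 6)
# TRUE exactly where the value occurs only once in the vector
once <- !(duplicated(tag_id, fromLast = FALSE) | duplicated(tag_id, fromLast = TRUE))
# the element-by-element loop form it replaces
loop <- sapply(seq_along(tag_id), function(i) sum(tag_id == tag_id[i]) == 1)
identical(once, loop)   # TRUE
```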
Hello,
Generally those error messages refer to something that precedes them.
That is the case here: the parser is expecting a function's argument.
The function name is 'For' with uppercase 'F'. The first argument is
'i', then there should be a comma before a second argument.
The solution
As some additional information, I re-ran the model across the range of n = 50
to 150 (n being the 'top n' parameters returned by chi.squared), and this
time used a completely different subset of the data for both training and
test. Nearly identical results, with the typical train AUC about 0.98 and
Dear all,
The following function code fails with errors (see below):
RegPlots <- function (data, ContrVar, RespVar){
intNmbrRows<-length(RespVar);intNmbrCols<-lenght(ContrVar)
par(mfrow=c(intNmbrRows,intNmbrCols))
For(i in 1:intNmbrRows){
For (j in 1:intNmbrCols){
Certainly ... but this is of course limited to the few C coded
functions available. Back to apply-type stuff for, say, median as a
summary statistic.
-- Bert
On Fri, Aug 10, 2012 at 3:58 PM, David Winsemius wrote:
>
> On Aug 10, 2012, at 3:42 PM, Michael Weylandt wrote:
>
>> I wouldn't be surpri
Is it possible to adapt the 'spatial' function Kfn to analyze the correlation
between two 1D point processes? It's not obvious to me from the documentation;
maybe a pointer to an example would help me get started. Thanks in advance.
Red
Hi all,
I am working on a really big dataset and I would like to vectorize a
condition in an if loop to improve speed.
The original loop with the condition is currently written as follows:
if(sum(as.integer(tags$tag_id==tags$tag_id[i]))==1&tags$lgth[i]<300){
tags$stage[i]<-"J"
}
On Aug 10, 2012, at 3:42 PM, Michael Weylandt wrote:
I wouldn't be surprised if one couldn't get an *apply-free solution
by using diff(), cumsum() and selective indexing as well.
What about colSums on a matrix extended with the right number of zeros.
> colSums(matrix (c(v, rep(0, 3- length(
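David's line is cut off above; here is a sketch of the idea it describes (v and k are taken from the original question, the padding arithmetic is my reconstruction):

```r
v <- 1:10
k <- 3
pad <- (k - length(v) %% k) %% k          # zeros needed to reach a multiple of k
# reshape into k rows, one column per group, and sum down the columns
colSums(matrix(c(v, rep(0, pad)), nrow = k))
# [1]  6 15 24 10
```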
Hello,
A search using sos::findFn('fits') returned package FITSio as number three.
It would enable you to read FITS files. As for the question, how do you
do what you want to do, I don't know.
Hope this helps,
Rui Barradas
On 10-08-2012 15:50, andrej wrote:
Greetings!
I am still new to R
Hello everyone,
I would like to parse very large xml files from MS/MS experiments and
create R objects from their content. (By very large, I mean going up to
5-10Gb, although I am using a 'small' 40M file to test my code.)
My first attempt at parsing the 40M file, using the XML package, took more
I wouldn't be surprised if one couldn't get an *apply-free solution by using
diff(), cumsum() and selective indexing as well.
Cheers,
Michael
On Aug 10, 2012, at 5:07 PM, David Winsemius wrote:
>
> On Aug 10, 2012, at 12:57 PM, Bert Gunter wrote:
>
>> ... or perhaps even simpler:
>>
>>> sz
Oh yes, I stand corrected. I didn't look at your code carefully enough.
-- Bert
On Fri, Aug 10, 2012 at 3:07 PM, David Winsemius wrote:
>
> On Aug 10, 2012, at 12:57 PM, Bert Gunter wrote:
>
>> ... or perhaps even simpler:
>>
>>> sz <- function(x,k)tapply(x,(seq_along(x)-1)%/%k, sum)
>>> sz(1:10
Hi
Here is some code I used to produce an html table file within my
Sweave chunk .
I needed to produce html tables to go into Word as well as producing a pdf.
file.create(fhtml) # file name
# open to append
ff <- file(fhtml, "a+")
# Table
fchars <-
c('\n',
'\n',
On Aug 10, 2012, at 12:57 PM, Bert Gunter wrote:
... or perhaps even simpler:
sz <- function(x,k)tapply(x,(seq_along(x)-1)%/%k, sum)
sz(1:10,3)
0 1 2 3
6 15 24 10
Note that this works for k>n, where the previous solution does not.
sz(1:10,15)
0
55
I agree that it is more elegant, but
Hi
Here is a script I used some time ago to do a job; I cannot remember
which Windows OS it was. The first line is the general form:
# R CMD BATCH [options] infile [outfile]
c:\rw\bin\R CMD BATCH --no-restore --no-save
D:/Feh/R/FEH_Index_in.R D:/Feh/R/FEH_Batch.txt
the FEH_Batch.txt is
Read ?Startup _carefully_ (it's pretty dense!).
Does the .First file on the search path give you the functionality you seek?
-- Bert
On Fri, Aug 10, 2012 at 12:34 PM, Ryan Rene wrote:
>
> Hi all,
>
> I had a specific question about the loading of objects
> into R. I apologize in advance if I ha
Thanks David & Bert.
It turned out that what I actually wanted was much simpler.
my vector's elements are 0&1 and the right way to "summarize" it is
hist(which(v==1))
however, your replies were quite educational!
Thanks again,
Sam.
> * Bert Gunter [2012-08-10 12:57:40 -0700]:
>
>> sz <- function(
Hi, I have researched batch mode for Windows and could not find anything that
worked.
I know the code should be
$ R CMD BATCH inputfile.R outputfile.Rout
or
R < inputfile.R > outputfile.Rout
I tried these without success. I need detailed, step by step instructions on
how to do this. I have tried typing C:\R\bi
If you read my bug report, I just ran the same thing on both regular
mode and debug mode. That is why I think it is related to R base,
though there might be some other bugs related to glmulti or rJava.
Peng
On 08/10/2012 04:03 PM, peter dalgaard wrote:
> On Aug 10, 2012, at 21:23 , Zhang, P
Hi all,
I had a specific question about the loading of objects
into R. I apologize in advance if I have overlooked anything in the
manual but as far as I can tell I have yet to find a solution to my
problem.
I am on a Windows platform.
So what I am trying
to do is have R read in a binary fi
Ok. I will stop here. I have just created a bug report.
https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15013
If anyone is interested, please see if you can reproduce it.
Thanks,
Peng
On 08/10/2012 03:04 PM, peter dalgaard wrote:
> Not to spoil your fun, but this is getting a bit off-topic
Hi all R users,
I'm finding it a bit hard to interpret the output from the cajorls and VECM
functions. I'm trying to model a VECM with cointegration rank 6, and
therefore I get the variables ECT1, ECT2, ... ECT6 in my output. Are these
representing the estimates for my loading matrix or also
Per your suggestion I ran chi.squared() against my training data and to my
delight, found just 50 parameters that were non-zero influencers. I built
the model through several iterations and found n = 12 to be the optimum for
the training data.
However, results are still not so good for the test data. H
Dear Andrew,
Maximum likelihood estimation with missing data typically makes some
rather strong assumptions. If I am not mistaken, the default
covariance coverage in Mplus is .05, the fact that you need to set it
lower suggests you have some combinations of variables with less than
5% jointly pre
On Aug 10, 2012, at 21:23 , Zhang, Peng wrote:
> Ok. I will stop here. I have just created a bug report.
>
> https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15013
...which is exactly what you should NOT do, if the bug is likely to live in a
contributed package!!
-pd
>
> If anyone is i
... or perhaps even simpler:
> sz <- function(x,k)tapply(x,(seq_along(x)-1)%/%k, sum)
> sz(1:10,3)
0 1 2 3
6 15 24 10
Note that this works for k>n, where the previous solution does not.
> sz(1:10,15)
0
55
-- Bert
On Fri, Aug 10, 2012 at 12:37 PM, David Winsemius
wrote:
>
> On Aug 10, 201
Thanks Bill! Works great! Thanks again guys!
On Fri, Aug 10, 2012 at 2:43 PM, William Dunlap wrote:
> If you think about this as a runs problem you can get a loopless solution
> that I think is easier to read (once the requisite functions are defined).
>
> First define the function to canonicali
This is what my code looks like now. However, there is one thing I
can't/don't know how to fix.
I can't get it to be "once dead always dead", i.e., in any given row, after
a "D" or a "d" there should be only zeros.
I've tried applying a flag to break the loop if dead but I can't get it to
work.
Cou
On Aug 10, 2012, at 12:20 PM, Sam Steingold wrote:
I have a long numeric vector v (length N) and I want to create a shorter
vector of length N/k consisting of sums of k-subsequences of v:
v <- c(1,2,3,4,5,6,7,8,9,10)
N=10, k=3
===> [6,15,24,10]
I can, of course, iterate:
w <- vector(mode="num
I have a long numeric vector v (length N) and I want to create a shorter
vector of length N/k consisting of sums of k-subsequences of v:
v <- c(1,2,3,4,5,6,7,8,9,10)
N=10, k=3
===> [6,15,24,10]
I can, of course, iterate:
> w <- vector(mode="numeric",length=ceiling(N/k))
> for (i in 1:length(w)) w
Hi,
I recently tried to estimate a linear unconditional latent growth curve on
7 repeated measures using lavaan (most recent version):
modspec='
alpha =~ 1*read_g0 + 1*read_g1 + 1*read_g2 + 1*read_g3 + 1*read_g4 +
1*read_g5 + 1*read_g6
beta =~ 0*read_g0 + 1*read_g1 + 2*read_g2 + 3*read_g3 + 4*r
Not to spoil your fun, but this is getting a bit off-topic for R-help. If you
wish to continue the debugging process in public, I think you should move to
R-devel.
Also, it sounds like the problem is in the glmulti package, so you might want
to involve its maintainer at some point.
-pd
On A
If you think about this as a runs problem you can get a loopless solution
that I think is easier to read (once the requisite functions are defined).
First define the function to canonicalize the name
nickname <- function(x) sub(" .*", "", x)
then define some handy runs functions
isFirstInRun
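Bill's function definitions are truncated here; one plausible reconstruction of such a runs helper (my sketch, not necessarily his actual code):

```r
# keep only the first word of each name
nickname <- function(x) sub(" .*", "", x)
# TRUE where an element starts a new run of equal values
isFirstInRun <- function(x) c(TRUE, x[-1] != x[-length(x)])

teams <- c("New York Mets", "New York Yankees", "Boston Redsox")
isFirstInRun(nickname(teams))
# [1]  TRUE FALSE  TRUE
```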
Thanks so much, and thanks for the clarification. "New York" ---> "New"
should not match "Other New" because "New" is not the first.
Thanks so much, testing it on my data now.
On Fri, Aug 10, 2012 at 2:35 PM, Rui Barradas wrote:
> Hello,
>
> My code doesn't predict a point you've made clear in
Hello,
My code doesn't predict a point you've made clear in this post. Inline.
On 10-08-2012 19:05, Fred G wrote:
Thanks Arun. The only issue is that I need the code to be very
generalizable: the grep() really has to check whether the first string up
to the whitespace in a row (i.e. "New", "B
Thanks! It is interesting that Windows has pointed the problem to Java.
So it is probable that how I did my debugging led me in the wrong direction.
Since I was unsure how to debug S4 classes, I copied the source of the R
function from the glmulti package into my testing program.
The segmentation faul
Hi,
Try this:
dat1<-read.table(text="
ID, NAME, YEAR, SOURCE
1, New York Mets, 1900, ESPN
2, New York Yankees, 1920, Cooperstown
3, Boston Redsox, 1918, ESPN
4, Washington Nationals, 2010, ESPN
5, Detroit Tigers, 1990, ESPN
",sep=",",header=T
Hello,
Try the following.
d <- read.table(textConnection("
ID NAME YEAR SOURCE
1 'New York Mets' 1900 ESPN
2 'New York Yankees' 1920 Cooperstown
3 'Boston Redsox' 1918 ESPN
4 'Washington Nationals' 2010
Oh, okay. I just missed it completely since your data didn't have any missing
data so I could not visualise why it was there. I assume -999.9 was in older
data.
John Kane
Kingston ON Canada
> -Original Message-
> From: aprendizprog...@hotmail.com
> Sent: Fri, 10 Aug 2012 20:51:48 +0
Hi All,
As mentioned in the manual of SAMseq function of samr package, missing values
in the data are allowed in the input data matrix.
"x-- Feature matrix: p (number of features) by n (number of samples), one
observation per column (missing values allowed)"
When I try a matrix with missing
Thanks Arun. The only issue is that I need the code to be very
generalizable: the grep() really has to check whether the first string up
to the whitespace in a row (i.e. "New", "Boston", "Washington", "Detroit"
below) is the same as the first string up to the whitespace in the row
directly below it,
Hi everyone, my apologies in advance if I'm overlooking something simple in
this question. I am trying to use R's survey package to make a direct
method age-adjustment to some complex survey data. I have played with
postStratify, calibrate, rake, and simply multiplying the base weights by
the cor
Hi,
The problem solved!
thank you very much !!!
Kane,
the command > is.na(dados) <- dados == -999.9, substituting NA for the
missing values (-999.9)
> From: dcarl...@tamu.edu
> To: jrkrid...@inbox.com; aprendizprog...@hotmail.com; r-help@r-project.org
> Subject: RE: [R] help error histo
Hi all,
My code looks like the following:
inname = read.csv("ID_error_checker.csv", as.is=TRUE)
outname = read.csv("output.csv", as.is=TRUE)
#My algorithm is the following:
#for line in inname
#if first string up to whitespace in row in inname$name = first string up
to whitespace in row + 1 in in
Hello,
I am trying to figure out how to plot the profile likelihood curve of a GLM
parameter with 95% pCI's on the same plot. The example I have been trying
with is below. The plots I am getting are not the likelihood curves that I
was expecting. The y-axis of the plots is tau and I would like
Hi,
Try this:
dat <- read.table(text="
ParticipID ERP Electrode
1 s1 0.0370 FP1
2 s2 35.0654 FP2
3 s3 -3.3852 F4
4 s4 2.6119 P3
5 s5 0.1224 P4
6 s6 -5.3153 O1
7 s7 -3.88 F4
8 s8 -4.988
Okay, the data sets dat1 and dat2 are the same; dat1 just has fewer
covariates.
David, I understand your concern with the number of events and number of
variables I am using; however, 611 is only the number of unique times at
which the events occur, whereas there are 6987 events in my data of 77272
observatio
On 2012-08-10 15:42, Zhang, Peng wrote:
You are right. I am running Arch Linux. However, I obtained a
segmentation fault directly, so I didn't know where to find the bug.
> library("glmulti")
Loading required package: rJava
> testdata = cbind(Y=rnorm(100), data.frame(matrix(rnorm(100*50), ncol
=
Greetings!
I am still new to R but have been asked to look at doing astronomy with R.
I have a FITS file which contains an optical telescope image (it can be
viewed in SAOimageDS9).
I need to estimate the magnitude of a galaxy... and eventually other optical
sources.
How do I find the apparent mag
Thanks, David
I need an all-R solution for this, because the author.csv file is
exported from a database that enforces the HTML
encoding and the import into R may have to be repeated several times as
the database is updated.
-Michael
On 8/10/2012 12:40 PM, David L Carlson wrote:
It's not qu
On Fri, Aug 10, 2012 at 9:16 AM, S Ellison wrote:
>> > R in general tries hard to prohibit this behavior (i.e., including an
>> > interaction but not the main effect). When removing a main effect and
>> > leaving the interaction, the number of parameters is not reduced by
>> > one (as would be ex
It's not quite an R solution, but I just pasted your examples into a script
window in R and saved it as chars.html. Then I opened it in Firefox and
pasted the results here (with returns inserted to match your original).
> grep("&", author$lname, value=TRUE)
[1] "Frère de Montizon" "Lumière"
[3]
Hi Folks,
I'm using Sweave to generate png & pdf graphics that I then "Import &
Link" in a Word document. This let's me create sharable and editable
dynamic documents. They are dynamic in that I can regenerate figures
when the data changes, and have those figures automatically updated in
my Word
Sheesh! Yes.
... and in the case where B is a factor with k levels and x is
continuous, the model ~B:x yields k+1 parameters, which in default
contrasts would be a constant term, x, and k-1 interactions between x
and the corresponding k-1 "contrasts"(which they aren't really) for B.
~B*x would add
On Aug 10, 2012, at 15:47 , Jennifer Kaiser wrote:
>
> Model: poisson, link: log
>
> Response: sb_ek_ber
>
> Terms added sequentially (first to last)
>
>
> Df Deviance Resid. Df Resid. Dev Pr(>Chi)
> NULL 1237837 4.4998e+10
> ABWHALT_C 2 244
On 10.08.2012 11:52, baschti wrote:
Hi r-users,
I have a problem with image.plot. When I add two image.plots with add=TRUE,
the second is out of the border.
http://r.789695.n4.nabble.com/file/n4639881/test.png
The R help pages hint at using the par function with plt, but it did not
work.
Now that John has put your data into a readable format, there are a number of
issues with your histogram that don't make much sense. You have enlarged the
text of the labels and greatly enlarged the size of the title, but then printed
no title (you have cex.main=6 and main="") and you have set p
> > R in general tries hard to prohibit this behavior (i.e., including an
> > interaction but not the main effect). When removing a main effect and
> > leaving the interaction, the number of parameters is not reduced by
> > one (as would be expected) but stays the same, at least
> > when using
I've imported a .csv file where character strings that contained
accented characters were written as HTML
character entities. Is there a function that works on a vector to
translate them back to accented (latin1) characters?
Some examples:
> grep("&", author$lname, value=TRUE)
[1] "Frère de M
WinXP says
OS reports request to set locale to "en_US.UTF-8" cannot be honored.
Sigh.
JN
On 08/10/2012 11:40 AM, Frans Marcelissen wrote:
> Hi,
> I had the same problem under Linux. My friend Albert Jan Roskam knew the
> solution: add
> Sys.setlocale(category = "LC_ALL", locale = "en_US.UTF-8"
On Aug 10, 2012, at 8:09 AM, Joshua Wiley wrote:
On Fri, Aug 10, 2012 at 7:39 AM, Henrik Singmann
wrote:
Dear Michael and list,
R in general tries hard to prohibit this behavior (i.e., including an
interaction but not the main effect). When removing a main effect and
leaving the interaction,
In a Conjoint study, it's difficult for respondents to evaluate more than 6
product attributes at a time. Some studies require more attributes.
Often this is solved via the use of Adaptive Conjoint Analysis (ACA), in which
the questionnaire is modified for each individual respondent as the surv
Hi,
I had the same problem under Linux. My friend Albert Jan Roskam knew the
solution: add
Sys.setlocale(category = "LC_ALL", locale = "en_US.UTF-8")
I suppose this also works under windows.
Frans
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.
On Fri, Aug 10, 2012 at 7:39 AM, Henrik Singmann
wrote:
> Dear Michael and list,
>
> R in general tries hard to prohibit this behavior (i.e., including an
> interaction but not the main effect). When removing a main effect and
> leaving the interaction, the number of parameters is not reduced by o
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of aprendiz programa
> Sent: 10 August 2012 01:37
> To: r-help@r-project.org
> Subject: [R] help error histograma
>
>
> Hi,
> My error is: Erro em hist.default(dados[[1]], freq =
> > Thanks to R, the internet and pretentious, semi-literate public
> > servants around the world, there probably is now.
> >
> Wait a minute now. Perhaps it's some recursive compensation scheme/
>
Indeed, perhaps the general remark might be reconsidered. As Jane Austen said,
"Where any one
On Aug 10, 2012, at 4:07 AM, simona mancini wrote:
> Hi,
>
>
> I need to subset different levels of a vector in a dataset to create a new
> dataframe that contains only these. These observations are not numerical, so
> I can't use the subset() function (at least this is the response I get from
Thank you, but it is not a good idea to send attachments. I received them
because the email came directly to me, but probably others did not, because
the R-help list usually removes them to protect against viruses. It is better
to put everything in the email.
I have included your code
subset should work fine. My guess would be that Electrode is a character or
factor variable. Use str() to see what kind of variables you have in the data
set.
If we call the data set dat1 this works.
subset(dat1, dat1$Electrode =="FP1" | dat1$Electrode =="FP2" | dat1$Electrode
== "F4")
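The same subset can be written with %in%, which avoids the chain of | comparisons (dat1 below is sketched from the data shown elsewhere in the thread):

```r
# a small stand-in for the poster's data
dat1 <- data.frame(ParticipID = paste0("s", 1:6),
                   ERP = c(0.0370, 35.0654, -3.3852, 2.6119, 0.1224, -5.3153),
                   Electrode = c("FP1", "FP2", "F4", "P3", "P4", "O1"))
# keep only the rows whose Electrode is in the wanted set
subset(dat1, Electrode %in% c("FP1", "FP2", "F4"))
```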
Hello,
I don't see the problem.
d <- read.table(text="
ParticipID ERP Electrode
1 s1 0.0370 FP1
2 s2 35.0654 FP2
3 s3 -3.3852 F4
4 s4 2.6119 P3
5 s5 0.1224 P4
6 s6 -5.3153 O1
", header=TRUE)
str(d)
w
I am working on modifying a REDCap survey. The data dictionary column for
the response field has the following value.
1, Strongly disagree | 2, Disagree | 3, Agree | 4, Strongly Agree | 5, Don't
Know | 6, Refuse to Answer | 7, Not Applicable
I am wanting to convert this so that it looks as follows:
HI,
Try this:
n <- 100
dat1 <- data.frame(
  hunting.prev = sample(c("success","fail"), n, replace=TRUE),
  groupsize = sample(c("small","large"), n, replace=TRUE),
  dogs = sample(c("yes","no"), n, replace=TRUE),
  guns = sample(c("yes","no"), n, replace=TRUE))
mytable <- xtabs(~ hunting.prev + groupsize + dogs + guns, data=dat1)
ft
Hi,
I have a problem with the output of my anova.
> Tabelle2 <- data.frame(sb_ek_ber, JE, ABWHALT_C, ALLEINF,
> Alter_Ältester_inklVN_C, Alter_Jüngster_C_inkl_AlterNutz, ALTERKAU_C,
> Wuerfel, BEGKUNDE_C, Geoscore_C, GESCHL_VN,
+ NUTZART, NUTZKREIS, NICHTK_C_korrigiert, TARIFDAT_C, TARIFG
You are right. I am running Arch Linux. However, I obtained a
segmentation fault directly, so I didn't know where to find the bug.
> library("glmulti")
Loading required package: rJava
> testdata = cbind(Y=rnorm(100), data.frame(matrix(rnorm(100*50), ncol
= 50)))
> glmulti(Y~(X1+X2+X3+X4+X5+X6+X7+X8
Dear R users,
I'm struggling with applying a user-defined link function in lmer. For
analyzing data of a 2AFC psychophysical experiment, I would like to model my
binary data with a logistic function with a lower limit at 0.5 instead of 0. In
a previous question this has been described as a half
On 2012-08-10 06:10, Zhang, Peng wrote:
Thanks to both for your reply.
library(glmulti)
testdata = cbind(Y=rnorm(100), data.frame(matrix(rnorm(100*50), ncol = 50)))
glmulti(Y~(X1+X2+X3+X4+X5+X6+X7+X8+X9+X10+X11+X12+X13+X14+X15)*X16, data
= testdata, level = 2)
This is reproducible to get a seg
Hi,
I am new to using R for solving optimization problems. I have a set of
communication channels with limited capacity and two types of costs, fixed
and variable. Each channel has an expected gain for a single communication.
I want to determine the optimal number of communications for each channel
maximiz
HI,
I guess names() works if the zoo object has columns.
(https://stat.ethz.ch/pipermail/r-help/2006-November/117448.html)
x <- sin(1:4)
library(zoo)
x2<-zoo(as.matrix(x))
names(x2)<-"test"
names(x2)
#[1] "test"
x2
test
1 0.8414710
2 0.9092974
3 0.1411200
4 -0.7568025
x3<-zoo(x)
na
Hello,
I have a serious problem with odfWeave. I use odfWeave to produce on-line
reports on a web server. When the template file contains diaereses (ë) or
accents (é), odfWeave breaks with the infamous error "Unable to convert h.odt
to the current locale. You may need to process this file in a UTF
Yes,it's genomic alignment files.
Thanks for your help.
At 2012-08-10 18:10:02,"peter dalgaard" wrote:
>
>On Aug 10, 2012, at 08:24 , mengxin wrote:
>
>> Hi all:
>> I've got a ".bam" data file which was created by my partner under a Linux
>> system.
>> My system is Windows XP, and I wanna kn
On Fri, Aug 10, 2012 at 10:23 AM, Rui Barradas wrote:
> Hello,
>
> The main critique, I think, is that we assume a certain type of model
> where the times can decrease until zero. And that they can do so linearly.
> I believe that records can always be beaten but 40-50 years ago times were
> mea