On Mar 7, 2013, at 5:12 PM, Nicole Thompson wrote:
> Thank you, Peter, for your response.
>
> I can see then that comparative.data() is performing some operation
> that calls an invalid subset of either my phylo or data file
> (mammaldata or mammaltree).
>
> It would be helpful to understand wh
Hi John,
I tried this code
==
library(RCurl)
library(XML)
script <- getURL("www.r-bloggers.com")
and now I am getting an error like:
"Error in function (type, msg, asError = TRUE) : couldn't connect to host"
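For reference, a minimal sketch of the getURL()/htmlParse() pattern this code appears to be heading towards (not from the thread; the explicit "http://" scheme and the parse step are illustrative assumptions). The "couldn't connect to host" error itself usually points at a network, proxy, or firewall problem rather than the R code:
library(RCurl)
library(XML)
# fetch the page source as a character string, scheme written out explicitly
script <- getURL("http://www.r-bloggers.com")
# parse the fetched HTML text into a document object
doc <- htmlParse(script, asText = TRUE)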
From: John Kane [via R] [mailto:ml-node+s789695n4660
Hi Arun, thanks for the responses.
1) I added more data to the data set located at
http://stackoverflow.com/questions/11548368/making-multiple-plots-in-r-from-one-textfile
to more resemble the actual data set that I have. The
example data set is called tempdata. I have used the most recent code
s
thanks! It works.
I couldn't possibly have figured out such a trick myself...
soichi
2013/3/8 Jorge I Velez
> If I understood correctly,
>
> lapply(x, "[", 1:3)
>
> will do what you want.
>
> HTH,
> Jorge.-
>
>
> On Fri, Mar 8, 2013 at 5:05 PM, ishi soichi <> wrote:
>
>> hi. I have a list like
>>
>> x <- list(1:10,11:20,21:30)
If I understood correctly,
lapply(x, "[", 1:3)
will do what you want.
HTH,
Jorge.-
On Fri, Mar 8, 2013 at 5:05 PM, ishi soichi <> wrote:
> hi. I have a list like
>
> x <- list(1:10,11:20,21:30)
>
> It's a sort of a 3 x 10 matrix in list form.
> I would like to reduce the dimension of this lis
hi. I have a list like
x <- list(1:10,11:20,21:30)
It's a sort of a 3 x 10 matrix in list form.
I would like to reduce the dimension of this list.
It would be something like
list(1:3, 11:13, 21:23)
I tried
x[,1:3]
does not work, of course. Neither does
lapply(x, [1:3])
work...
Any suggestions?
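For completeness, a short worked version of the solution given in the reply above (lapply() with "[" and the index vector); the output comments show the expected result:
x <- list(1:10, 11:20, 21:30)
lapply(x, "[", 1:3)   # take the first three elements of each list component
# [[1]]
# [1] 1 2 3
# [[2]]
# [1] 11 12 13
# [[3]]
# [1] 21 22 23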
On Mar 7, 2013, at 4:47 AM, Kjetil Kjernsmo wrote:
> On Wednesday 6. March 2013 16.33.34 Peter Claussen wrote:
>> But you don't have enough data points to estimate all of the possible
>> interactions; that's why you have NA in your original results.
>
> Yes, but it seems to me that lm is doing
Thank you, Peter, for your response.
I can see then that comparative.data() is performing some operation
that calls an invalid subset of either my phylo or data file
(mammaldata or mammaltree).
It would be helpful to understand what "newNb" is, as
comparative.data() creates it while constructing
Hi All,
I'm on a Debian Linux 64-bit system.
I'm trying to install the netcdf interface; I tried both ncdf and ncdf4,
but when trying to build I received the error below
(I have netcdf installed on my machine and it is able to find it; no missing
.h files).
epy@epinux:~$ sudo R CMD INSTALL
--configure-args="-with-ne
Hi Bert and all,
Thanks a lot for your response. Bert's method works very well.
2013/3/8 Bert Gunter
> Use format() or formatC() to convert your numeric data to character
> and then "call write.table on that."
>
> e.g.
>
> > z <-formatC(pi,digits=10,format="f")
> > z
> [1] "3.14
thanks, appreciate it
On Fri, Mar 8, 2013 at 2:49 PM, arun wrote:
>
>
> Hi,
> If you look at ?cov(),
> there are options for 'use':
> set.seed(15)
> a=array(rnorm(9),dim=c(3,3))
> a[3,2]<- NaN
>
> cov(a,use="complete.obs")
> # [,1][,2] [,3]
> #[1,] 1.2360602 -0.321677
Hi,
If you look at ?cov(),
there are options for 'use':
set.seed(15)
a=array(rnorm(9),dim=c(3,3))
a[3,2]<- NaN
cov(a,use="complete.obs")
# [,1] [,2] [,3]
#[1,] 1.2360602 -0.32167789 0.8395953
#[2,] -0.3216779 0.08371491 -0.2185001
#[3,] 0.8395953 -0.21850006 0.57029
Hi all,
I have a matrix that has many NaN values. As soon as one of the columns has
a missing (NaN) value the covariance estimation gets thrown off.
Is there a robust way to do this?
Thanks,
Sachin
a = array(rnorm(9), dim = c(3, 3))
> a
            [,1]      [,2] [,3]
[1,] -0.79418236 0.7813952
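As a small addition to the reply above (not from the thread): besides use = "complete.obs", cov() also accepts use = "pairwise.complete.obs", which keeps more of the data when NaNs are scattered across columns:
set.seed(15)
a <- array(rnorm(9), dim = c(3, 3))
a[3, 2] <- NaN
cov(a, use = "complete.obs")           # drops every row that contains a NaN
cov(a, use = "pairwise.complete.obs")  # uses all complete pairs for each column pair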
yep, that did the trick.
Thanks,
Sachin
On Fri, Mar 8, 2013 at 1:24 PM, Jeff Newmiller wrote:
> Something along the lines of
>
> top100 <- A[match(B,A[,1]),]
>
> Please provide R code with sample data and desired output. See
> http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example
On 03/08/2013 02:22 PM, Sachinthaka Abeywardana wrote:
Hi all,
I have two dataframes. The first (A) contains all the stock prices for
today including today. So the first column is the stock Symbol and the
second column is the stock price. The second (B) is the symbol list in the
top 100 stocks.
Something along the lines of
top100 <- A[match(B,A[,1]),]
Please provide R code with sample data and desired output. See
http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example
---
Jeff Newmille
Presumably because no one is maintaining it. It is still in the archives, and
it is licensed under GPL. You could contact the author or revive it yourself.
Please read the Posting Guide before posting again. Repeating yourself is not
okay, nor is posting in HTML format.
-
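A rough sketch of one way to install a package that is only available in the CRAN archive, as mentioned above (the file name and version are a guess and would need to be checked against the archive index):
# the exact file name/version is a guess; check the listing at
# http://cran.r-project.org/src/contrib/Archive/ZIGP/ first
pkg_url <- "http://cran.r-project.org/src/contrib/Archive/ZIGP/ZIGP_4.1.tar.gz"
download.file(pkg_url, destfile = "ZIGP.tar.gz")
install.packages("ZIGP.tar.gz", repos = NULL, type = "source")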
Hi all,
I have two dataframes. The first (A) contains all the stock prices for
today including today. So the first column is the stock Symbol and the
second column is the stock price. The second (B) is the symbol list in the
top 100 stocks.
I want to pick out from dataframe A only the rows contai
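A tiny self-contained sketch of the match() approach from the reply above, with made-up data in place of the real stock tables:
A <- data.frame(Symbol = c("AAA", "BBB", "CCC", "DDD"),
                Price  = c(10.1, 20.2, 30.3, 40.4))
B <- c("DDD", "AAA")                 # hypothetical top-stock symbols
top100 <- A[match(B, A[, 1]), ]      # rows of A for the symbols in B, in B's order
top100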
Use format() or formatC() to convert your numeric data to character
and then "call write.table on that."
e.g.
> z <-formatC(pi,digits=10,format="f")
> z
[1] "3.1415926536"
If this still is not clear to you, I give up, as I do not know how to
make it any clearer. Perhaps someone else can.
-- Ber
Hi R users:
I am using the plm package for linear panel data analysis but encountered the
following message when I try the plm function to estimate a random-effects model with
an individual effect.
data.re.ind <- plm(X.RETURN. ~ IOB + IOBS,data=E,model="random",effect =
"individual")
Error in swar(objec
Dear Marino,
Maybe not the cleanest way to do it, but the following seems to work:
write.table(as.character(round(pi, 10)), "pi.txt", row.names = FALSE,
col.names = FALSE, quote = FALSE)
Best,
Jorge.-
On Fri, Mar 8, 2013 at 11:24 AM, Marino David wrote:
> Hi Bert,
>
> I read both options and
> Hi All,
>
> I'm on a Debian Linux 64-bit system.
> I'm trying to install the netcdf interface; I tried both ncdf and ncdf4
[error lines cut]
Hi Massimo,
things are getting confused because nc-config thinks that the netcdf
library is installed in one place, and you are telling R that it's
installed in a
Hi Bert,
I read both the options and write.table help pages, but I still can't manage to save
the data into a txt file with fixed precision.
To let you know more clearly what I want, I will still use the previous
simple example to illustrate.
I want to save pi into pi.txt file with 10 decimal places, that
i
Dear Sir or Madam,
I am Lili Puspita Rahayu, a student at Bogor Agricultural University.
I would like to ask why the package ZIGP (Zero-Inflated Generalized Poisson) is no
longer available.
Are there any other packages that can fit ZIGP models?
I am very
grateful for the assistance of R.
I am looking forward to hearing
Hi All,
I'm on a Debian Linux 64-bit system.
I'm trying to install the netcdf interface; I tried both ncdf and ncdf4,
but when trying to build I received the error below
(I have netcdf installed on my machine and it is able to find it; no missing
.h files).
epy@epinux:~$ sudo R CMD INSTALL
--configure-args="-with-ne
Hi Bert,
I want to save the data into a .txt file for processing by another program.
Thanks for the suggestion.
2013/3/8 Bert Gunter
> ?write.table
>
> which says, under details:
>
> "In almost all cases the conversion of numeric quantities is governed
> by the option "scipen" (see options), but with the
Hi Douglas,
index(res)
#[1] "2012-09-10 23:59:00 EDT" "2012-09-11 23:59:00 EDT"
#[3] "2012-09-12 02:15:00 EDT"
str(index(res))
#POSIXct[1:3], format: "2012-09-10 23:59:00" "2012-09-11 23:59:00" ...
When you use this:
strsplit(index(res)," ")
#Error in strsplit(index(res), " ") : non-character
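A small sketch of how to get past that error (strsplit() needs character input, so the POSIXct index has to be converted first); the last two lines show ways to keep only the date part:
strsplit(as.character(index(res)), " ")   # works: character input
format(index(res), "%Y-%m-%d")            # just the date part, as character
as.Date(index(res), tz = "EST5EDT")       # as Date; the tz matters near midnight (tz here is a guess)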
Katja Hebestreit uni-muenster.de> writes:
>
> Hello,
>
> optim hangs for some reason when called within the betareg function
> (from the betareg package).
>
> In this special case, the arguments which are passed to optim cause
> never ending calculations.
>
> I uploaded the arguments passed t
?write.table
which says, under details:
"In almost all cases the conversion of numeric quantities is governed
by the option "scipen" (see options), but with the internal equivalent
of digits=15. For finer control, use format to make a character
matrix/data frame, and call write.table on that. "
Hi all mailing listers,
I want to export data with a specified precision into a .txt file. How can I
do that? See below:
sprintf("%.10f",pi)
[1] "3.1415926536"
When I carry out write.matrix(pi,"pi.txt"), pi.txt contains 3.141592653589793115998,
not 10 decimal places as produced by sprintf("%.10f",pi).
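Pulling the earlier replies together, a short sketch of one way to get fixed precision into a text file (format the number as character first, then write that); the file name is just for illustration:
z <- formatC(pi, digits = 10, format = "f")   # "3.1415926536"
write.table(z, "pi.txt", row.names = FALSE, col.names = FALSE, quote = FALSE)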
Re-read
?"["
(always better to read the docs before archives...)
and note in particular:
"[[ can be applied recursively to lists, so that if the single index i
is a vector of length p, alist[[i]] is equivalent to
alist[[i1]]...[[ip]] providing all but the final indexing results in a
list."
I agr
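A tiny illustration of that recursive [[ behaviour (my own example, not from the message):
alist <- list(a = list(b = list(c = 42)))
alist[[c("a", "b", "c")]]   # same as alist[["a"]][["b"]][["c"]]
# [1] 42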
Hi Yao He,
this doesn't sound like R to me. I'd go for perl (or awk).
See e.g. here:
http://stackoverflow.com/questions/1729824/transpose-a-file-in-bash
HTH
Claudia
On Wed, 6 Mar 2013 22:37:14 +0800,
Yao He wrote:
> Dear all:
>
> I have a big data file of 6 columns and 6 rows like
Hi,
Try this:
betas<- c(0.01,0.01,0.01)
LData<- list(int=rep(1,10), date=
matrix(c(152:161,163:168,162:165),nrow=2,ncol=10,byrow=TRUE),
land=c(rep(0,4),1,0,1,1,0,0))
betas2<- c(0.01,1,2)
mapply(`*`,LData,betas2)
#$int
# [1] 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01
#$date
# [,1] [,
Dear R Users.
This seems like a simple task, but I'm stuck.
I have a list with 3 elements: (2 vectors and 1 matrix). I wish to extract
each of these data elements using index subscripts and multiply them with a
vector multiplier.
What I have:
> betas
[1] 0.01 0.01 0.01
> LData[1]
$int
[1]
Hello,
optim hangs for some reason when called within the betareg function
(from the betareg package).
In this special case, the arguments which are passed to optim cause
never ending calculations.
I uploaded the arguments passed to optim on:
https://www.dropbox.com/s/ud507gbpt3gkbcp/optim_argum
On 07/03/2013 2:07 PM, Not To Miss wrote:
Hi R users,
The margin parameter mar is measured in unit of lines, the size of which is
automatically adjusted during plotting. I am wondering how can get the size
of a line and how can I control the margin size by controlling the line
size? (I know I ca
Hi,
Try this:
library(xts)
Date1<- seq(as.POSIXct("2012-09-10 02:15:00",format="%Y-%m-%d %H:%M:%S"),
as.POSIXct("2012-09-12 02:15:00",format="%Y-%m-%d %H:%M:%S"), by="min")
length(Date1)
#[1] 2881
set.seed(15)
value<- rnorm(2881)
xt1<-xts(value,order.by=Date1)
res<-apply.daily(xt1,sum)
res1<- r
Hi R users,
The margin parameter mar is measured in unit of lines, the size of which is
automatically adjusted during plotting. I am wondering how can get the size
of a line and how can I control the margin size by controlling the line
size? (I know I can use mai to control the absolute size of ma
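Not an answer from the thread, but one way to inspect how large a margin line is (a sketch; with the default mex = 1, the margin width in inches is roughly the margin in lines times the line height):
par("csi")                # height of one line of default-sized text, in inches
par("mar") * par("csi")   # margins converted from lines to inches
par("mai")                # margins in inches as R reports them, for comparison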
Anna,
You're right. Something I am doing seems to be messing up RStudio every
once in a while. I rebooted it and the faceting is working just fine both
with just the required packages loaded and with my normal set of packages.
I had real trouble getting the legend.position command
I have an xts time series object that has date and time. I started with
1-minute data and used apply.daily(x, sum) to sum the data to one cumulative
value. This function works just fine; however, it leaves a time for the last
summed value, which looks like this: 2006-07-19 14:58:00. I need to just
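Not part of the thread as archived, but a minimal sketch of one way to drop the time-of-day from an apply.daily() result, by replacing the POSIXct index with a Date index (the object names follow the reply above):
library(xts)
Date1 <- seq(as.POSIXct("2012-09-10 02:15:00"), as.POSIXct("2012-09-12 02:15:00"), by = "min")
set.seed(15)
xt1 <- xts(rnorm(length(Date1)), order.by = Date1)
res <- apply.daily(xt1, sum)
index(res) <- as.Date(format(index(res), "%Y-%m-%d"))   # keep only the date of each daily sum
res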
Dear John and Ista,
Ista: Thank you so much for your help and for not shouting :-)
Sometimes one goes blind having stared at a script for too long. I realize
that the grouping is not needed at all - it is the remnant from another
figure I made showing two factor 1 -levels per plot instead of jus
On 07-03-2013, at 17:52, Heath Blackmon wrote:
> I have a large list of matrices and a vector that identifies the desired
> matrices that I would like to rbind. However, I am stuck on how to get
> this to work. I have written some code below to illustrate my problem:
>
> # 3 simple matrices
>
?do.call
## as in
do.call(rbind, list_of_matrices)
## Note that list_of_matrices must be a **list**.
-- Bert
On Thu, Mar 7, 2013 at 8:52 AM, Heath Blackmon wrote:
> I have a large list of matrices and a vector that identifies the desired
> matrices that I would like to rbind. However, I am
Not sure if this what you wanted.
do.call(rbind,(matrix.list[desired.matrices]))
# [,1] [,2] [,3]
#[1,] 1 4 7
#[2,] 2 5 8
#[3,] 3 6 9
#[4,] 19 22 25
#[5,] 20 23 26
#[6,] 21 24 27
A.K.
From: Heath Blackmon
To: r-
Hi all,
I am trying to estimate a few parameters using constrained maximum
likelihood in R, more specifically with constrOptim() from the stats
package. I am programming in Python and using R via RPy2.
In my model, I am assuming that the data follow the Beta-distribution, so I
created
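The question is cut off here, but for reference a minimal, generic constrOptim() sketch (nothing to do with the poster's Beta-likelihood; the constraints ui %*% theta - ci >= 0 here just keep both parameters non-negative):
fn <- function(p) (p[1] - 1)^2 + (p[2] - 2)^2          # toy objective to minimise
constrOptim(theta = c(0.5, 0.5), f = fn, grad = NULL,  # start strictly inside the feasible region
            ui = diag(2), ci = c(0, 0))                # constraints: p[1] >= 0, p[2] >= 0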
ggplot returned a ggplot object with no layers as indicated by the error:
"Error: No layers in plot"
You need to add the layers to the object.
So in your example:
p1 <- ggplot(asd.df, aes(x=Time, y=Values, colour=Country))
then add the points (or something else)
p1 + geom_point()
Thomas (Tom) Kell
I have a large list of matrices and a vector that identifies the desired
matrices that I would like to rbind. However, I am stuck on how to get
this to work. I have written some code below to illustrate my problem:
# 3 simple matrices
a<-matrix(1:9,3,3)
b<-matrix(10:18,3,3)
c<-matrix(19:27,3,3)
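Continuing the truncated example with the do.call() approach from the replies above (the selection vector is my guess at what followed):
matrix.list <- list(a, b, c)
desired.matrices <- c(1, 3)                    # hypothetical: pick the 1st and 3rd matrices
do.call(rbind, matrix.list[desired.matrices])  # stack the selected matrices row-wise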
With the original data.frame being df1:
df2 <- data.frame(matrix(rep(NA, nrow(df1)*ncol(df1)), nrow = nrow(df1)))
John Kane
Kingston ON Canada
> -Original Message-
> From: sahanasrinivasan...@gmail.com
> Sent: Th
On 2013-03-06 07:49, Nicole Thompson wrote:
Hello,
I'm doing a comparative analysis of mammal brain and body size data.
I'm following Charlie Nunn and Natalie Cooper's instructions for
"Running PGLS in R using caper".
I run into the following error when I create my comparative dataset,
combinin
If I try facet_wrap(~factor1, ncol = 2) I get no faceting at all. Strange.
John Kane
Kingston ON Canada
> -Original Message-
> From: istaz...@gmail.com
> Sent: Thu, 7 Mar 2013 12:14:01 -0500
> To: a...@ecology.su.se
> Subject: Re: [R] ggpliot2: reordering of factors in facets facet.gr
If you are just copying, why not:
opdf <- tab
On Thu, Mar 7, 2013 at 12:23 PM, Sahana Srinivasan <
sahanasrinivasan...@gmail.com> wrote:
> Hi, I am trying to create a data frame using the dimensions of another data
> frame that I have input. This is the code I am using:
>
> tab is the data fra
Hi, I am trying to create a data frame using the dimensions of another data
frame that I have input. This is the code I am using:
tab is the data frame that is input.
c.leng<-length(tab[,1]); r.leng<-length(tab[1,]);
opdf<-data.frame(ncol=c.leng, nrow=r.leng);
a<-1;
while(a<=c.leng)
{
opdf[[1]][a]
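A short sketch of a variant of the construction suggested in the surrounding replies (an empty data frame with the same dimensions as tab); the example tab is made up:
tab  <- data.frame(x = 1:4, y = letters[1:4])                         # stand-in input
opdf <- as.data.frame(matrix(NA, nrow = nrow(tab), ncol = ncol(tab))) # empty frame, same shape
dim(opdf)   # 4 2, matching dim(tab)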
Hi Anna,
On Thu, Mar 7, 2013 at 10:16 AM, Anna Zakrisson wrote:
>
> Hi everyone (again),
> before you all start screaming that the reordering of factors has been
> discusse on several threads and is not particular to ggplot2, hear me out.
I'm sorry you have been traumatized like this! I promise
Hello everybody,
I am relatively new to R and struggling with the following problem: I want
to estimate a system of equations in R using the plm() command.
Unfortunately the data have a panel structure which should be exploited
during the estimation process. Hence I decided for a fixed/random effe
Here is one example:
http://gallery.r-enthusiasts.com/graph/Colored_Dendrogram_79
Kevin
On Thu, Mar 7, 2013 at 8:22 AM, Johannes Radinger <
johannesradin...@gmail.com> wrote:
> Hi,
>
> is there a way to color the branches or text label of the branches of
> dendrograms e.g. from hclust() accord
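Beyond the gallery link, a minimal sketch of colouring dendrogram labels by a grouping variable with dendrapply(); the grouping here is simply cutree() on a built-in data set, purely for illustration:
hc     <- hclust(dist(USArrests))
groups <- cutree(hc, k = 4)                 # grouping variable, one value per label
colLab <- function(n) {
  if (is.leaf(n)) {
    lab <- attr(n, "label")
    # colour the label by its group; pch = NA suppresses the leaf symbol
    attr(n, "nodePar") <- list(lab.col = groups[lab], pch = NA)
  }
  n
}
dend <- dendrapply(as.dendrogram(hc), colLab)
plot(dend)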
On Mar 7, 2013, at 9:33 AM, "Creighton, Sean" wrote:
>>
>> as.numeric(ImpVol[1,5,57]) == 0.0001
> [1] FALSE
>>
>> as.numeric(ImpVol[1,5,57])
> [1] 1e-04
>>
>> 0.0001
> [1] 1e-04
>>
>
>
> Any tips?
> Thanks
> Sean
See R "Super FAQ" 7.31:
http://cran.r-project.org/doc/FAQ/R-FAQ.html#Wh
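A tiny illustration of what that FAQ entry is about, and the usual tolerance-based comparison (a generic example, not the poster's ImpVol data):
x <- 0.1 + 0.2
x == 0.3                    # FALSE: binary floating point carries rounding error
isTRUE(all.equal(x, 0.3))   # TRUE: comparison within a numerical tolerance
abs(x - 0.3) < 1e-12        # TRUE: explicit tolerance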
Dear R users,
I would like to draw your attention to 'cec2013', a new package
providing R wrappers for the 28 benchmark functions defined in the
Special Session and Competition on Real-Parameter Single Objective
Optimization at CEC-2013 (http://www.cec2013.org/).
The focus of this package is to p
FAQ 7.31
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
+ 32 2 525 02 51
+ 32 54 43 61 85
thierry.onkel...@inbo.be
www.inbo.be
To
In the recent SIAM Review, vol 54, No 3, pp 597-606, Robert Vanderbei
does a nice analysis of daily temperature data. This uses publicly
available data. A version of the paper is available at
http://arxiv.org/pdf/1209.0624
and there is a presentation at
http://www.princeton.edu/~rvdb/tex/talk
>
> as.numeric(ImpVol[1,5,57]) == 0.0001
[1] FALSE
>
> as.numeric(ImpVol[1,5,57])
[1] 1e-04
>
> 0.0001
[1] 1e-04
>
Any tips?
Thanks
Sean
R 2.15.3
windows 7
Hi.
I have a problem running the code (under Windows 7, 32bits):
> require(XLConnect)
> require(gWidgets)
> options(guiToolkit="tcltk")
> require(gWidgetstcltk)
> gfile()
This problem occurs in R 2.15.2 & 2.15.3 versions but not in the 2.15.1.
The result o
Perhaps
http://stackoverflow.com/questions/1395528/scraping-html-tables-into-r-data-frames-using-the-xml-package
may be of help
John Kane
Kingston ON Canada
> -Original Message-
> From: antony.akk...@ge.com
> Sent: Wed, 6 Mar 2013 19:23:24 -0800 (PST)
> To: r-help@r-project.org
> Subje
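For reference, a minimal sketch of the table-scraping approach discussed at that link, using readHTMLTable() from the XML package (the URL here is purely illustrative):
library(XML)
url    <- "http://en.wikipedia.org/wiki/World_population"  # illustrative page containing tables
tables <- readHTMLTable(url)   # returns a list of data frames, one per HTML table
length(tables)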
Hi,
directory<- "/home/arunksa111/dados" #renamed directory to dados
filelist<-function(directory,number,list1){
setwd(directory)
filelist1<-dir(directory)
direct<-dir(directory,pattern = paste("MSMS_",number,"PepInfo.txt",sep=""),
full.names = FALSE, recursive = TRUE)
list1<-lapply(direct, func
GREAT! Thank you! Will try this!
Anna
Anna Zakrisson Braeunlich
PhD student
Department of Ecology Environment and Plant Sciences
Stockholm University
Svante Arrheniusv. 21A
SE-106 91 Stockholm
Sweden
Lives in Berlin.
For paper mail:
Katzbachstr. 21
D-10965, Berlin - Kreuzberg
Germany/Deutschlan
Hi everyone (again),
before you all start screaming that the reordering of factors has been
discussed on several threads and is not particular to ggplot2, hear me out.
I can easily reorder my x-axis factor in facet.grid() in ggplot2. What I
cannot reorder are the factors represented on the strips.
Looking good. I think the function in this post is what you want. It worked on
your code for me.
http://stackoverflow.com/questions/13297155/add-floating-axis-labels-in-facet-wrap-plot
.
John Kane
Kingston ON Canada
> -Original Message-
> From: a...@ecology.su.se
> Sent: Thu, 07 M
Hi Irucka,
Regarding the first question.
If you look at the output of temp1, it already strips off any NA that was left
in the columns. It is always good to give an example dataset that is similar to the
real dataset.
Here, I am guessing the situation is similar to this:
temp1<-lapply(temp,function
Or if you just want to see what the table will look like when you read it
but not clutter your workspace by assigning it, you can do
View(head(read.table(filename),20))
-- David
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Sar
I have competing-risks data where a patient may die from either AIDS or
Cancer. I want to compare the cox model for each of the event of interest
with a competing risk model. In the competing risk model the cumulative
incidence function is used directly. I used the jackknife (pseudovalue) of
the c
> So, there are at least two points of confusion here, one is
> how coef() differs from effects() in the case of fractional
> factorial experiments, and the other is the factor 1/4
> between the coefficients used by Wu & Hamada and the values
> returned by effects() as I would think from theory
Hi,
is there a way to color the branches or text label of the branches of
dendrograms e.g. from hclust() according to a grouping variable. Here
I have something in mind like:
http://www.sigmaaldrich.com/content/dam/sigma-aldrich/life-science/biowire/biowire-fall-2010/proteome-figure-1.Par.0001.Ima
I refer to a multivariate model. For example, I have two groups (control
and test) and multiple variables measured for each (V1, V2, V3... Vn). I
wasn't sure if there was any way to conduct power analysis other than
conducting it as you would with a single variable and just account for
multiple t
On Wednesday 6. March 2013 14.50.23 Ben Bolker wrote:
>Just a quick thought (sorry for removing context): what happens if
> you use sum-to-zero contrasts throughout, i.e.
> options(contrasts=c("contr.sum", "contr.poly")) ... ?
Ah, I've got it now, this pointed me in the right direction. Thanks
Just to add another option to what Arun has provided below. That approach is
very generalizable to data frames with >2 columns, where you want to filter
based upon a finding a maximum value (or other perhaps more complex criteria)
within one or more grouping columns and return all of the columns
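The option being referred to is not shown in this snippet, but here is a tiny sketch of that kind of filtering (keep, within each group, the rows carrying the group maximum); the data frame is made up:
df <- data.frame(g = c("a", "a", "b", "b", "b"),
                 v = c(1, 3, 2, 5, 5),
                 other = 1:5)
df[df$v == ave(df$v, df$g, FUN = max), ]   # all columns, rows holding each group's max (ties kept)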
On Thu, Mar 7, 2013 at 5:47 AM, Kjetil Kjernsmo wrote:
> On Wednesday 6. March 2013 16.33.34 Peter Claussen wrote:
>> But you don't have enough data points to estimate all of the possible
>> interactions; that's why you have NA in your original results.
>
> Yes, but it seems to me that lm is doing
On Mar 6, 2013, at 10:50 PM, Charles Determan Jr wrote:
> Generic question... I am familiar with generic power calculations in R,
> however a lot of the data I primarily work with is multivariate. Is there
> any package/function that you would recommend to conduct such power
> analysis? Any rec
On Mar 6, 2013, at 8:52 PM, pdbarry wrote:
> I am working on creating a program for some simulations I need to do and I
> want to execute a Perl script that I wrote using the system() command in R.
> I have spent a couple days trying to figure this out and it appears that my
> problem occurs whe
Dear Giovanni,
I apologize for this late reply! I was testing and reading a lot of stuff. I
tried your suggestions and the problem of singularity in the regressor cross
product vanishes when using the Group Mean function 'pgm' instead of 'pvcm'.
Nevertheless, I found the collinearity in the regre
Hi,
I have managed to get rid of the facet labels (so do not spend your time
explaining that to me). There was some old code out there which did not
work. My only remaining issue is how to add the axis labels to the plot
without labels.
Anna
Summ <- ddply(mydata, .(factor3,factor1), summariz
Thank you, Michael!!! That was so nice of you. It cleared up everything.
Elisa
> From: michael.weyla...@gmail.com
> Date: Thu, 7 Mar 2013 11:31:59 +
> Subject: Re: [R] Error: no 'dimnames' attribute for array
> To: eliza_bo...@hotmail.com
> CC: r-help@r-project.org
>
> On Thu, Mar 7, 2013 at 11:1
On Thu, Mar 7, 2013 at 11:19 AM, eliza botto wrote:
> Thankyou very much M. Weylandt. i was actually more interested in knowing
> about the error.
Let's talk you through it then:
As you said before you have
b1 <- c(1L, 2L, 6L, 7L, 12L, 16L, 17L, 20L, 21L, 23L, 25L, 34L, 46L,
48L, 58L, 64L, 65L,
Thank you very much, M. Weylandt. I was actually more interested in knowing about
the error. I got the point. Thank you.
elisa
> From: michael.weyla...@gmail.com
> Date: Thu, 7 Mar 2013 11:15:36 +
> Subject: Re: [R] Error: no 'dimnames' attribute for array
> To: eliza_bo...@hotmail.com
> CC: r-h
On Thu, Mar 7, 2013 at 11:02 AM, eliza botto wrote:
>
> Dear XpeRts,
> I prepared a no qoute Character string by the following command
>
> s<-noquote(paste (b1, collapse=","))
>
> where, b1 is the vector of 24 intergers.
>
>> dput(b1)
>
> c(1L, 2L, 6L, 7L, 12L, 16L, 17L, 20L, 21L, 23L, 25L, 34L, 4
Dear XpeRts,
I prepared a no-quote character string with the following command
s<-noquote(paste (b1, collapse=","))
where b1 is a vector of 24 integers.
> dput(b1)
c(1L, 2L, 6L, 7L, 12L, 16L, 17L, 20L, 21L, 23L, 25L, 34L, 46L, 48L, 58L, 64L,
65L, 68L, 82L, 97L, 98L, 101L, 113L, 115L)
> dp
On Wednesday 6. March 2013 16.33.34 Peter Claussen wrote:
> But you don't have enough data points to estimate all of the possible
> interactions; that's why you have NA in your original results.
Yes, but it seems to me that lm is doing the right thing, or at least the
expected thing, here, the NA
On 06.03.2013 22:20, David L Carlson wrote:
Actually, the http://www.sussex.ac.uk/its/pdfs/SPSS_Exact_Tests_20.pdf file
indicates that for small samples and a one-way chi square test, SPSS uses a
multinomial distribution to tabulate the distribution of chi square for a given
N, K, and probab
Thanks for that. Eliano
2013/3/7 David Winsemius [via R]
>
> On Mar 6, 2013, at 3:44 PM, Eliano wrote:
>
> > Thanks. Btw are you able to help with my issue? Thanks, Eliano
>
> I'm sorry, I was too busy answering the question from 'Eliano' over
> on StackOverflow. I didn't have time to addre
I have tried to remove the strips completely using either.
theme(strip.background = element_blank())
or
theme(strip.text.x = element_blank(),
strip.text.y = element_blank())
with no success.
I also have the problem that A, B, C, D, E and F are stations at sea.
Therefore, I would need a legend
Hi
maybe
index <- which(is.na(dataset1$V2))
y <- dataset2$V1[index]
plot(y~x)
Regards
Petr
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of e-letter
> Sent: Thursday, March 07, 2013 8:28 AM
> To: r-help@r-project.org
> Subj
Hi
Not sure if it solves all possible sensor misbehaviour, but
changing the start of each jump to NA or 0, summing the differences, and adding them to
the start can help you polish your data.
> x
[1] NA NA 246 251 250 255 5987 5991 5994 5999
xd<-diff(x)
xd[xd>10]<-NA
xd[is.na(xd)]<-0
> cumsum(xd)
[