Re: [R] Pairwise table from columns

2016-06-01 Thread PIKAL Petr
Hi

Keep your replies on the list; you can get more and better replies there.
Do not post in HTML.

A combination of values is easily achieved, e.g. by expand.grid:

expand.grid(mat[,1], mat[,2])
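
If it is every ordered pair from a single column that you are after (as your
{(1,2), (1,3), ...} example suggests), a sketch along the same lines:

cmb <- expand.grid(first = mat[,1], second = mat[,1])
cmb <- cmb[cmb$first != cmb$second, ]   # drop the self-pairs (1,1), (2,2), ...
cmb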

Regards
Petr


From: ameneh deljoo [mailto:amene.del...@gmail.com]
Sent: Tuesday, May 31, 2016 3:45 PM
To: PIKAL Petr 
Subject: Re: [R] Pairwise table from columns

Thanks for your reply.
I want a matrix with all possible combinations of the data. For example, a
combination of columns 1 and 2:
{(1,2)(1,3)(1,4)(2,1)(2,3)(2,4)...}

On Tue, May 31, 2016 at 3:19 PM, PIKAL Petr <petr.pi...@precheza.cz> wrote:
Hi

Your message is rather scrambled and, to be honest, not easy (for me) to
understand.

Having a two-column matrix:

> mat<-matrix(1:8, 4,2)
> mat
     [,1] [,2]
[1,]    1    5
[2,]    2    6
[3,]    3    7
[4,]    4    8

You can calculate e.g. a distance matrix:

> dist(mat, diag=T, upper=T)
         1        2        3        4
1 0.000000 1.414214 2.828427 4.242641
2 1.414214 0.000000 1.414214 2.828427
3 2.828427 1.414214 0.000000 1.414214
4 4.242641 2.828427 1.414214 0.000000

But from your description I do not understand how you want to reshape your data.

Example, please.

Regards
Petr

> -Original Message-
> From: R-help 
> [mailto:r-help-boun...@r-project.org] On 
> Behalf Of ameneh
> deljoo
> Sent: Tuesday, May 31, 2016 12:13 PM
> To: r-help@r-project.org
> Subject: [R] Pairwise table from columns
>
> Hi Group,
> I have a large data set of individual pairwise values (100 rows) that I
> need to reshape into a pairwise matrix for Mantel tests of similarity of
> these values.
> I need this matrix for a Pathfinder network analysis.
>
> I have different data (words), such as:
>
>
>   living thing   0
>   animal         1
>   blood          2
>   bird           3
>   feathers       4
>   robin          5
>   chicken        6
>
>   I need the final matrix to be formatted based on the similarity:
>
>       A1   A2   A3   A4
>   A1   0   32   40   32
>   A2  32    0   49   38
>   A3  40   49    0   53
>   A4  32   38   53    0
>
>
> Are there any functions/packages that will make this easier? Thanks Ameneh
>
>   [[alternative HTML version deleted]]
>



Re: [R] Application of "merge" and "within"

2016-06-01 Thread peter dalgaard
Notice that within-group processing is intended. I'd try

> first <- function(x) x[1]
> # bl = first value of b within each (G, a) group, replicated per row by ave()
> s  <- within(q, {bl <- ave(b, paste(G,a), FUN=first); db <- b - bl})

Or perhaps

q <- within(q, Ga <- paste(G,a))                # group identifier
tbl <- with(q, tapply(b, Ga, first))            # one baseline value per group
s <- within(q, {bl <- tbl[Ga]; db <- b - bl})   # look up baseline, take difference

-pd


On 28 May 2016, at 22:53 , Duncan Murdoch  wrote:

> On 27/05/2016 7:00 PM, Santosh wrote:
>> Dear Rxperts!
>> 
>> Is there a way to compute relative values.. using within().. function?
>> 
>> Any assistance/suggestions are highly welcome!!
>> Thanks again,
>> Santosh...
>> ___
>> A sample dataset and the computation "outside" within()  function is shown..
>> 
>> q <- data.frame(GL = rep(paste("G",1:3,sep = ""),each = 50),
>>G  = rep(1:3,each = 50),
>>D = rep(paste("D",1:5,sep = ""),each = 30),
>>a = rep(1:15,each = 10),
>>t = rep(seq(10),15),
>>b = round(runif(150,10,20)))
>> r <- subset(q,!duplicated(paste(G,a)),sel=c(G,a,b))
>> names(r)[3] <- "bl"
>> s <- merge(q,r)
>> s$db <- s$b-s$bl
>> 
>>> head(s,5)
>>G  a GL  D  t  b bl db
>> 1   1  1 G1 D1  1 13 13  0
>> 2   1  1 G1 D1  2 16 13  3
>> 3   1  1 G1 D1  3 19 13  6
>> 4   1  1 G1 D1  4 12 13 -1
>> 5   1  1 G1 D1  5 19 13  6
> 
> Just use
> 
> s <- within(s, db <- b - bl)
> 
> Duncan Murdoch
> 

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: pd@cbs.dk  Priv: pda...@gmail.com



Re: [R] sandwich package: HAC estimators

2016-06-01 Thread T.Riedle
Thank you very much. I have applied the example to my case and get the
following results:

crisis_bubble4<-glm(stock.market.crash~crash.MA+bubble.MA+MP.MA+UTS.MA+UPR.MA+PPI.MA+RV.MA,family=binomial("logit"),data=Data_logitregression_movingaverage)
> summary(crisis_bubble4)

Call:
glm(formula = stock.market.crash ~ crash.MA + bubble.MA + MP.MA + 
UTS.MA + UPR.MA + PPI.MA + RV.MA, family = binomial("logit"), 
data = Data_logitregression_movingaverage)

Deviance Residuals: 
Min   1Q   Median   3Q  Max  
-1.7828  -0.6686  -0.3186   0.6497   2.4298  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   -5.2609     0.8927  -5.893 3.79e-09 ***
crash.MA       0.4922     0.4966   0.991  0.32165
bubble.MA     12.1287     1.3736   8.830  < 2e-16 ***
MP.MA        -20.0724    96.9576  -0.207  0.83599
UTS.MA       -58.1814    19.3533  -3.006  0.00264 **
UPR.MA      -337.5798    64.3078  -5.249 1.53e-07 ***
PPI.MA       729.3769    73.0529   9.984  < 2e-16 ***
RV.MA        116.0011    16.5456   7.011 2.37e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 869.54  on 705  degrees of freedom
Residual deviance: 606.91  on 698  degrees of freedom
AIC: 622.91

Number of Fisher Scoring iterations: 5

> coeftest(crisis_bubble4)

z test of coefficients:

              Estimate Std. Error z value  Pr(>|z|)
(Intercept)   -5.26088    0.89269 -5.8933 3.786e-09 ***
crash.MA       0.49219    0.49662  0.9911  0.321652
bubble.MA     12.12868    1.37357  8.8300 < 2.2e-16 ***
MP.MA        -20.07238   96.95755 -0.2070  0.835992
UTS.MA       -58.18142   19.35330 -3.0063  0.002645 **
UPR.MA      -337.57985   64.30779 -5.2494 1.526e-07 ***
PPI.MA       729.37693   73.05288  9.9842 < 2.2e-16 ***
RV.MA        116.00106   16.54560  7.0110 2.366e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> coeftest(crisis_bubble4,vcov=NeweyWest)

z test of coefficients:

              Estimate Std. Error z value Pr(>|z|)
(Intercept)   -5.26088    5.01706 -1.0486  0.29436
crash.MA       0.49219    2.41688  0.2036  0.83863
bubble.MA     12.12868    5.85228  2.0725  0.03822 *
MP.MA        -20.07238  499.37589 -0.0402  0.96794
UTS.MA       -58.18142   77.08409 -0.7548  0.45038
UPR.MA      -337.57985  395.35639 -0.8539  0.39318
PPI.MA       729.37693  358.60868  2.0339  0.04196 *
RV.MA        116.00106   79.52421  1.4587  0.14465
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> waldtest(crisis_bubble4, vcov = NeweyWest,test="F")
Wald test

Model 1: stock.market.crash ~ crash.MA + bubble.MA + MP.MA + UTS.MA + 
UPR.MA + PPI.MA + RV.MA
Model 2: stock.market.crash ~ 1
  Res.Df Df      F  Pr(>F)
1    698
2    705 -7 2.3302 0.02351 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> waldtest(crisis_bubble4, vcov = NeweyWest,test="Chisq")
Wald test

Model 1: stock.market.crash ~ crash.MA + bubble.MA + MP.MA + UTS.MA + 
UPR.MA + PPI.MA + RV.MA
Model 2: stock.market.crash ~ 1
  Res.Df Df  Chisq Pr(>Chisq)
1    698
2    705 -7 16.311    0.02242 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Do you agree with the methodology? I read in a book that it is also possible to
use vcov=vcovHAC in the coeftest() function. Nevertheless, I am not sure what
kind of HAC covariance I generate with this command. Which weights does this
command apply, which bandwidth and which kernel?

Kind regards

From: Achim Zeileis 
Sent: 31 May 2016 17:19
To: T.Riedle
Cc: r-help@r-project.org
Subject: Re: [R] sandwich package: HAC estimators

On Tue, 31 May 2016, T.Riedle wrote:

> Many thanks for your feedback.
>
> If I get the code for the waldtest right I can calculate the Chi2 and
> the F statistic using waldtest().

Yes. In a logit model you would usually use the chi-squared statistic.

> Can I use the waldtest() without using bread()/ estfun()? That is, I
> estimate the logit regression using glm() e.g. logit<-glm(...) and
> insert logit into the waldtest() function.
>
> Does that work to get chi2 under HAC standard errors?

I'm not sure what you mean here but I include a worked example. Caveat:
The data I use are cross-section data with an overly simplified set of
regressors. So none of this makes sense for the application - but it shows
how to use the commands.

## load AER package which provides the example data
## and automatically loads "lmtest" and "sandwich"
library("AER")
data("PSID1976", package = "AER")

## fit a simple logit model and obtain marginal Wald tests
## for the coefficients and an overall chi-squared statistic
m <- glm(participation ~ education, data = PSID1976, family = binomial)
summary(m)
anova(m, test = "Chisq")

## replicate the same statistics with coeftest() and lrtest()
coeftest(m)
lrtest(m)

## the likel

Re: [R] sandwich package: HAC estimators

2016-06-01 Thread Achim Zeileis

On Wed, 1 Jun 2016, T.Riedle wrote:

Thank you very much. I have applied the example to my case and get 
following results:



Do you agree with the methodology?


Well, this is how you _can_ do what you _wanted_ to do. I already 
expressed my doubts about several aspects. First, some coefficients and 
their standard errors are very large which may (or may not) hint at 
problems that are close to separation. Second, given the increase in the 
standard errors, the autocorrelation appears to be substantial and it 
might be good to try to capture these autocorrelations explicitly rather 
than just correcting the standard errors.


I read in a book that it is also possible to use vcov=vcovHAC in the 
coeftest() function.


Yes. (I also mentioned that in my e-mail yesterday, see below.)

Nevertheless, I am not sure what kind of HAC covariance I generate with this
command. Which weights does this command apply, which bandwidth and which
kernel?


Please consult vignette("sandwich", package = "sandwich") for the details.
In short: Both vcovHAC and kernHAC use the quadratic spectral kernel with
Andrews' parametric bandwidth selection. The latter function uses
prewhitening by default while the former does not. In contrast, NeweyWest
uses a Bartlett kernel with Newey & West's nonparametric lag/bandwidth
selection and prewhitening by default.
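
For reference, a minimal sketch of those defaults spelled out explicitly,
reusing the purely illustrative PSID1976 logit from yesterday's message (the
settings shown are just the documented defaults, not a recommendation):

library("AER")                  # also loads "lmtest" and "sandwich"
data("PSID1976", package = "AER")
m <- glm(participation ~ education, data = PSID1976, family = binomial)

coeftest(m, vcov = vcovHAC)     # quadratic spectral kernel, Andrews bandwidth, no prewhitening
coeftest(m, vcov = kernHAC)     # quadratic spectral kernel, Andrews bandwidth, prewhitening
coeftest(m, vcov = NeweyWest)   # Bartlett kernel, Newey & West lag selection, prewhitening

## the kernHAC defaults written out
coeftest(m, vcov = kernHAC(m, kernel = "Quadratic Spectral",
                           bw = bwAndrews, prewhite = 1))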



Kind regards

From: Achim Zeileis 
Sent: 31 May 2016 17:19
To: T.Riedle
Cc: r-help@r-project.org
Subject: Re: [R] sandwich package: HAC estimators

On Tue, 31 May 2016, T.Riedle wrote:


Many thanks for your feedback.

If I get the code for the waldtest right I can calculate the Chi2 and
the F statistic using waldtest().


Yes. In a logit model you would usually use the chi-squared statistic.


Can I use the waldtest() without using bread()/ estfun()? That is, I
estimate the logit re

[R] Help needed to format data for boxplot time-series

2016-06-01 Thread Thomas Adams
All:

I have used R in combination with GRASS GIS spatial data (using spgrass)
many times in the past to generate a 'time series' of boxplots, to show
variations over time. But I have a new problem, not involving spatial data,
but rather, true time-series data (snippet shown below). So, what I want to
do is to generate a 'time-series' of boxplots based on the column
'valid_time' for the 'value' column data. What I cannot figure out is how
to either select or format the data for the series of individual boxplots.
Somehow it seems I need to use reshape; do I group the data within a loop?
That does not seem efficient. The full set of data covers a 30-day
period at 6-hourly time steps, with 9320 rows.

Data

lid|ens_num|basis_time|valid_time|value
MDBV1|ens01|2016-04-19 06:00:00|2016-04-21 00:00:00|1431.4787995285
MDBV1|ens01|2016-04-20 18:00:00|2016-04-21 00:00:00|740.777643846512
MDBV1|ens02|2016-04-20 18:00:00|2016-04-21 00:00:00|740.78561401
MDBV1|ens03|2016-04-20 18:00:00|2016-04-21 00:00:00|740.777441774178
MDBV1|ens04|2016-04-20 18:00:00|2016-04-21 00:00:00|740.777441774178
MDBV1|ens01|2016-04-19 06:00:00|2016-04-21 06:00:00|1430.25545361671
MDBV1|ens01|2016-04-20 18:00:00|2016-04-21 06:00:00|673.404235368919
MDBV1|ens02|2016-04-20 18:00:00|2016-04-21 06:00:00|673.404370083809
MDBV1|ens03|2016-04-20 18:00:00|2016-04-21 06:00:00|673.404235368919
MDBV1|ens04|2016-04-20 18:00:00|2016-04-21 06:00:00|673.404235368919
MDBV1|ens01|2016-04-19 06:00:00|2016-04-21 12:00:00|1429.0170196373
MDBV1|ens01|2016-04-20 18:00:00|2016-04-21 12:00:00|602.801441559601
MDBV1|ens02|2016-04-20 18:00:00|2016-04-21 12:00:00|602.801239487267
MDBV1|ens03|2016-04-20 18:00:00|2016-04-21 12:00:00|602.801441559601
MDBV1|ens04|2016-04-20 18:00:00|2016-04-21 12:00:00|602.801441559601
MDBV1|ens01|2016-04-19 06:00:00|2016-04-21 18:00:00|1427.75029553108
MDBV1|ens01|2016-04-20 18:00:00|2016-04-21 18:00:00|532.976794630909
MDBV1|ens02|2016-04-20 18:00:00|2016-04-21 18:00:00|532.976727273464
MDBV1|ens03|2016-04-20 18:00:00|2016-04-21 18:00:00|532.97639048624
MDBV1|ens04|2016-04-20 18:00:00|2016-04-21 18:00:00|532.976895667076
MDBV1|ens01|2016-04-19 06:00:00|2016-04-22 00:00:00|1426.44531239624
MDBV1|ens01|2016-04-20 18:00:00|2016-04-22 00:00:00|467.520648461056
MDBV1|ens02|2016-04-20 18:00:00|2016-04-22 00:00:00|467.520513746166
MDBV1|ens03|2016-04-20 18:00:00|2016-04-22 00:00:00|467.520379031277
MDBV1|ens04|2016-04-20 18:00:00|2016-04-22 00:00:00|467.520783175945
MDBV1|ens01|2016-04-19 06:00:00|2016-04-22 06:00:00|1425.14127226563
MDBV1|ens01|2016-04-20 18:00:00|2016-04-22 06:00:00|408.103669752502
MDBV1|ens02|2016-04-20 18:00:00|2016-04-22 06:00:00|408.105117937565
MDBV1|ens03|2016-04-20 18:00:00|2016-04-22 06:00:00|408.102255246162
MDBV1|ens04|2016-04-20 18:00:00|2016-04-22 06:00:00|408.193086760426
MDBV1|ens01|2016-04-19 06:00:00|2016-04-22 12:00:00|1423.73767783165
MDBV1|ens01|2016-04-20 18:00:00|2016-04-22 12:00:00|356.017269114971
MDBV1|ens02|2016-04-20 18:00:00|2016-04-22 12:00:00|356.245105671883
MDBV1|ens03|2016-04-20 18:00:00|2016-04-22 12:00:00|355.568634854126
MDBV1|ens04|2016-04-20 18:00:00|2016-04-22 12:00:00|357.646308916569
MDBV1|ens01|2016-04-19 06:00:00|2016-04-22 18:00:00|1422.30188653908
MDBV1|ens01|2016-04-20 18:00:00|2016-04-22 18:00:00|310.664962696362
MDBV1|ens02|2016-04-20 18:00:00|2016-04-22 18:00:00|310.956081572628
MDBV1|ens03|2016-04-20 18:00:00|2016-04-22 18:00:00|310.891788891602
MDBV1|ens04|2016-04-20 18:00:00|2016-04-22 18:00:00|311.764674018288
MDBV1|ens01|2016-04-19 06:00:00|2016-04-23 00:00:00|1420.79065490837
MDBV1|ens01|2016-04-20 18:00:00|2016-04-23 00:00:00|271.319441647482
MDBV1|ens02|2016-04-20 18:00:00|2016-04-23 00:00:00|271.90585556159
MDBV1|ens03|2016-04-20 18:00:00|2016-04-23 00:00:00|272.571818617964
MDBV1|ens04|2016-04-20 18:00:00|2016-04-23 00:00:00|272.197900602722
MDBV1|ens01|2016-04-19 06:00:00|2016-04-23 06:00:00|1419.24197253838
MDBV1|ens01|2016-04-20 18:00:00|2016-04-23 06:00:00|238.587209240341
MDBV1|ens02|2016-04-20 18:00:00|2016-04-23 06:00:00|238.386618769836
MDBV1|ens03|2016-04-20 18:00:00|2016-04-23 06:00:00|246.312821885538
MDBV1|ens04|2016-04-20 18:00:00|2016-04-23 06:00:00|237.956154179716
MDBV1|ens01|2016-04-19 06:00:00|2016-04-23 12:00:00|1417.63953892746
MDBV1|ens01|2016-04-20 18:00:00|2016-04-23 12:00:00|209.872343232489
MDBV1|ens02|2016-04-20 18:00:00|2016-04-23 12:00:00|209.899606158257
MDBV1|ens03|2016-04-20 18:00:00|2016-04-23 12:00:00|215.785316521025
MDBV1|ens04|2016-04-20 18:00:00|2016-04-23 12:00:00|208.711723941135
MDBV1|ens01|2016-04-19 06:00:00|2016-04-23 18:00:00|1415.99035924988
MDBV1|ens01|2016-04-20 18:00:00|2016-04-23 18:00:00|184.638914114666
MDBV1|ens02|2016-04-20 18:00:00|2016-04-23 18:00:00|184.57322371
MDBV1|ens03|2016-04-20 18:00:00|2016-04-23 18:00:00|189.508672138071
MDBV1|ens04|2016-04-20 18:00:00|2016-04-23 18:00:00|183.818062614059
MDBV1|ens01|2016-04-19 06:00:00|2016-04-24 00:00:00|1414.29375993118
MDBV1|ens01|2016-04-20 18:00:00|2

Re: [R] Help needed to format data for boxplot time-series

2016-06-01 Thread PIKAL Petr
Hi

It is preferable to use output of

dput(yourdata) or dput(yourdata[1:20,])

so that we can use your data.

From your description maybe

boxplot(split(yourdata$value, yourdata$valid_time))

can give you what you want.
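
An equivalent formula-interface sketch, assuming the snippet above was read
into R with something like read.table(file, sep = "|", header = TRUE)
("flows.txt" below is only a placeholder file name):

yourdata <- read.table("flows.txt", sep = "|", header = TRUE)
# one box per valid_time; the formula interface does the grouping
boxplot(value ~ valid_time, data = yourdata, las = 2,
        xlab = "valid_time", ylab = "value")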

Regards
Petr

> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Thomas
> Adams
> Sent: Wednesday, June 1, 2016 2:07 PM
> To: r-help@r-project.org
> Subject: [R] Help needed to format data for boxplot time-series
>
> All:
>
> I have used R in combination with GRASS GIS spatial data (using spgrass)
> many times in the past to generate a 'time series' of boxplots, to show
> variations over time. But I have a new problem, not involving spatial data, 
> but
> rather, true time-series data (snippet shown below). So, what I want to do is
> to generate a 'time-series' of boxplots based on the column 'valid_time' for
> the 'values' column data. What I can not figure out is how to either select or
> format the data for the series of individual boxplots.
> Somehow it seems I need to use reshape; do I group the data within a loop?
> This does not seem efficient. The full set of data I have covers a 30 day 
> period
> at 6-hourly time steps with 9320 rows
>

Re: [R] Help needed to format data for boxplot time-series

2016-06-01 Thread Thomas Adams
Petr and David,

Thank you so much! Both approaches do precisely what I need. I knew there
had to be a very simple way to do this, but I am still very much a novice
and struggle with data management at times. Also, thank you for the
suggestion to use dput(yourdata) or dput(yourdata[1:20,]) -- I knew such a
thing existed and searched for it, but just could not recall the 'dput'
command name.

Regards,
Tom

On Wed, Jun 1, 2016 at 7:23 AM, PIKAL Petr  wrote:

> Hi
>
> It is preferable to use output of
>
> dput(yourdata) or dput(yourdata[1:20,])
>
> so that we can use your data.
>
> From your description maybe
>
> boxplot(split(yourdata$value, yourdata$valid_time))
>
> can give you what you want.
>
> Regards
> Petr
>
> > -Original Message-
> > From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Thomas
> > Adams
> > Sent: Wednesday, June 1, 2016 2:07 PM
> > To: r-help@r-project.org
> > Subject: [R] Help needed to format data for boxplot time-series
> >
> > All:
> >
> > I have used R in combination with GRASS GIS spatial data (using spgrass)
> > many times in the past to generate a 'time series' of boxplots, to show
> > variations over time. But I have a new problem, not involving spatial
> data, but
> > rather, true time-series data (snippet shown below). So, what I want to
> do is
> > to generate a 'time-series' of boxplots based on the column 'valid_time'
> for
> > the 'values' column data. What I can not figure out is how to either
> select or
> > format the data for the series of individual boxplots.
> > Somehow it seems I need to use reshape; do I group the data within a
> loop?
> > This does not seem efficient. The full set of data I have covers a 30
> day period
> > at 6-hourly time steps with 9320 rows
> >

[R] Training set in Self organizing Map

2016-06-01 Thread ch.elahe via R-help
Hi all,
I want to use a Self-Organizing Map in R for my data. I want my training set to
be the following subset of my data:
 

subdf=subset(df,Country%in%c("US","FR"))
Next I need to change this subset to a matrix, but I get the following error:
 
data_train_matrix=as.matrix(scale(subdf))
error in colMeans(x,na.rm=TRUE):'x' must be numeric
 
Can anyone help me to solve that?
Thanks for any help
Elahe



[R] Installing miniCRAN on Debian

2016-06-01 Thread G . Maubach
Hi All,

I am installing miniCRAN on Debian GNU Linux 8 Jessie (Linux analytics7
4.5.0-0.bpo.2-amd64 #1 SMP Debian 4.5.4-1~bpo8+1 (2016-05-13) x86_64 GNU/Linux) 
and R 3.3.0 

-- cut --
> sessionInfo()
R version 3.3.0 (2016-05-03)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Debian GNU/Linux 8 (jessie)

locale:
 [1] LC_CTYPE=de_DE.UTF-8       LC_NUMERIC=C               LC_TIME=de_DE.UTF-8
 [4] LC_COLLATE=de_DE.UTF-8     LC_MONETARY=de_DE.UTF-8    LC_MESSAGES=de_DE.UTF-8
 [7] LC_PAPER=de_DE.UTF-8       LC_NAME=C                  LC_ADDRESS=C
[10] LC_TELEPHONE=C             LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base 

loaded via a namespace (and not attached):
[1] tools_3.3.0
-- cut --

After running

sudo apt-get install libssl-dev libcurl4-openssl-dev libxml2-dev libhunspell-dev

and calling

install.packages(pkgs = "miniCRAN", repos = "http://cran.csiro.au",
dependencies = TRUE)

I get the message

- ANTICONF ERROR ---
Configuration failed because hunspell was not found. Try installing:
 * deb: libhunspell-dev (Debian, Ubuntu, etc)
 * rpm: hunspell-devel (Fedora, CentOS, RHEL)
 * brew: hunspell (Mac OSX)
If hunspell is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a hunspell.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'

Running

find / -name hunspell.pc

gives

/usr/lib/x86_64-linux-gnu/pkgconfig/hunspell.pc

and running

find / -name pkg-config

gives

/usr/share/bash-completion/completions/pkg-config

How do I need to configure R correctly to get miniCRAN running?
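
For reference, the fallback suggested by the ANTICONF message would presumably
look like this from within R (a sketch only: the INCLUDE_DIR below is an
assumption about where libhunspell-dev puts its headers, and the LIB_DIR is
taken from the find output above):

Sys.which("pkg-config")   # "" would mean the pkg-config binary itself is not on the PATH
install.packages("hunspell",
                 configure.vars = "INCLUDE_DIR=/usr/include/hunspell LIB_DIR=/usr/lib/x86_64-linux-gnu")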

Kind regards

Georg



Re: [R] Training set in Self organizing Map

2016-06-01 Thread Jeff Newmiller
You did not send a sample of your data using dput. Before doing that, I
suggest peeling apart your troublesome line of code yourself:

str( as.matrix( scale( subdf ) ) )
str( scale( subdf ) )
str( subdf )

And then think about what the scale function does. Does it make sense to ask it 
to scale character or factor data? Could you perhaps exclude some of the 
columns that don't belong in the scaled data? 
-- 
Sent from my phone. Please excuse my brevity.

On June 1, 2016 7:39:30 AM PDT, "ch.elahe via R-help"  
wrote:
>Hi all,
>I want to use Self Organizing Map in R for my data. I want my training
>set to be the following subset of my data:
> 
>
>subdf=subset(df,Country%in%c("US","FR"))
>next I should change this subset to a matrix but I get the following
>error:
> 
>data_train_matrix=as.matrix(scale(subdf))
>error in colMeans(x,na.rm=TRUE):'x' must be numeric
> 
>Can anyone help me to solve that?
>Thanks for any help
>Elahe
>

[[alternative HTML version deleted]]



Re: [R] Training set in Self organizing Map

2016-06-01 Thread Ulrik Stervbo
Hi Elahe,

if you look at your subdf, you will see that the column Country - which is
not numeric - is still present. You might have other non-number columns,
but this I cannot tell.

scale expects a numeric matrix. You give it a data.frame which is silently
cast to a matrix. A matrix can only have one type - unlike the data.frame -
so the presence of the non-numeric columns results in a matrix of type
character. Calculating means of characters is not possible, hence the error.

You need your data.frame to consist only of numeric types - then scale will
proceed without complaints.
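
A minimal sketch of that selection step (assuming subdf is the subset from
your post; which columns are numeric of course depends on your data):

num_cols <- sapply(subdf, is.numeric)            # TRUE only for numeric columns
data_train_matrix <- as.matrix(scale(subdf[, num_cols]))
str(data_train_matrix)                           # should now be a numeric matrix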

Best wishes,
Ulrik



On Wed, 1 Jun 2016 at 16:41 ch.elahe via R-help 
wrote:

> Hi all,
> I want to use Self Organizing Map in R for my data. I want my training set
> to be the following subset of my data:
>
>
> subdf=subset(df,Country%in%c("US","FR"))
> next I should change this subset to a matrix but I get the following error:
>
> data_train_matrix=as.matrix(scale(subdf))
> error in colMeans(x,na.rm=TRUE):'x' must be numeric
>
> Can anyone help me to solve that?
> Thanks for any help
> Elahe
>
>

[[alternative HTML version deleted]]



[R] Contrast matrix with a continuous variable

2016-06-01 Thread Carlos Alvarez Roa

Hi all,

I was wondering if someone could help me design a contrast matrix when
you have a continuous variable (Days).

My model looks like this:

model<-lme(Y~A*B*Days, data=data_over_time)

The factor A has two levels (A1 and A2) and factor B has three levels (B1,
B2, and B3). I measured the response variable Y every two or three days over
70 days (Days).

I need to look at only a few comparisons over the 70 days, such as:

A1 and B1 vs A2 and B1,
A2 and B2 vs A2 and B1,
A1 and B2 vs A2 and B2

I could use the function contrast from the package contrast to design the
matrix with all three comparisons. I know how to do it for specific days at a
time. This would give me the first comparison for day 1.

a=contrast(model,
   a=list(A='A1',B='B1',Days=1),
   b=list(A='A2',B='B1',Days=1)
   )

However, I need to run the comparison over the 70 days, not at
individual time points one at a time.

I was wondering if someone could help me design this contrast matrix.

Any help would be much appreciated.

Cheers

[[alternative HTML version deleted]]



[R] Unable to update R software to 3.3.0

2016-06-01 Thread Sunish Kumar Bilandi
Hi Team,

I am using RedHat 5 and installed R using YUM (R version 3.2.3). Now I want to
update the R version to 3.3.0, but I am unable to do that. Is there any
alternative way to do this?

Hope to hear from your side.

Regards,


Sunish Bilandi
Business Analyst, CIDA-01
Evalueserve

Office: +91 124 412/4154000 (Extn. 1994)
Mobile: 9811937267
Fax: +91 124 406 3430
sunish.bila...@evalueserve.com




[R] non-ergodic analysis

2016-06-01 Thread Poizot Emmanuel

Dear all,

I'm looking for a tool to perform non-ergodic covariance and
correlation analysis. The purpose is to study the spatial
autocorrelation of a variable.
Regards


*Emmanuel Poizot*
Cnam/Intechmer



[R] TwitteR - Number of tweets from multiple locations

2016-06-01 Thread Juho Kiuru
Hi all, I am new to R and TwitteR and would love to get some advice from
you.

I managed to get a list of tweets containing the word 'innovation' tweeted in
Helsinki with the following script:

searchTwitter('innovation', n=1, geocode='60.1920,24.9458,30mi',
since="2016-01-01", until="2016-05-31")

However, I was wondering whether it is possible to involve multiple locations
in the script, so that the result would be the number of tweets in each location.

For example, something like this:
Location Tweets
Helsinki 300
Berlin 400
Barcelona 500

Another problem I faced is setting the time span I would like to have
the count of tweets from. I tried to set the time span from the beginning
of this year to the end of May, but it seems like I only get tweets from the
last week of May.

Thanks in advance,
Juho

[[alternative HTML version deleted]]



Re: [R] Unable to update R software to 3.3.0

2016-06-01 Thread Marc Schwartz

> On Jun 1, 2016, at 1:33 AM, Sunish Kumar Bilandi 
>  wrote:
> 
> Hi Team,
> 
> I am using RedHat 5 and installed R using YUM, (R version 3.2.3) Now I want 
> to update R version tp 3.3.0, but I am unable to do that, Is there any 
> alternate to do this?
> 
> Hope to hear from your side.
> 
> Regards,
> 
> 
> Sunish Bilandi
> Business Analyst, CIDA-01
> Evalueserve


Hi,

First, RHEL and related distributions (e.g. Fedora) have a dedicated R-SIG
list:
  
  https://stat.ethz.ch/mailman/listinfo/r-sig-fedora

Future queries in this domain should be submitted there, as many of the RH 
package maintainers (e.g. Tom Callaway, aka Spot) read that list.

For R 3.3.0, it would appear that it is about a day away from being available 
for release:

  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-6fc2c863b0

So for now, it would be available via the EPEL testing repos.

Otherwise, you can wait until it is available via release in the next day or 
so, or download the RPMS directly here:

  http://koji.fedoraproject.org/koji/buildinfo?buildID=762521

Regards,

Marc Schwartz



Re: [R] Trimming time series to only include complete years

2016-06-01 Thread Morway, Eric
Hello Jeff, thank you very much for following up with me on this. It
definitely helped me get on my way with my analysis. It figures you're from
UC Davis (I'm guessing from your email address); I've been helped out by
them often!  -Eric


Eric Morway
Hydrologist
2730 N. Deer Run Rd.
Carson City, NV 89701
(775) 887-7668



On Mon, May 30, 2016 at 3:15 PM, Jeff Newmiller 
wrote:

> Sorry, I put too many bugs (opportunities for excellence!) in this on my
> first pass on this to leave it alone :-(
>
> isPartialWaterYear2 <- function( d ) {
>   dtl <- as.POSIXlt( d )
>   wy1 <- cumsum( ( 9 == dtl$mon ) & ( 1 == dtl$mday ) )
>   # any 0 in wy1 corresponds to first partial water year
>   result <- 0 == wy1
>   # if last day is not Sep 30, mark last water year as partial
>   if ( 8 != dtl$mon[ length( d ) ]
>  | 30 != dtl$mday[ length( d ) ] ) {
> result[ wy1[ length( d ) ] == wy1 ] <- TRUE
>   }
>   result
> }
>
> dat2 <- dat[ !isPartialWaterYear2( dat$Date ), ]
>
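> A quick check on the example data from the original post (a sketch; the
> resulting range should be 2010-10-01 through 2011-09-30):
>
> dat <- data.frame(Date = seq(as.Date("2010-01-01"),
>                              as.Date("2011-11-05"), by = "day"))
> dat$Q <- rnorm(nrow(dat))
> dat2 <- dat[ !isPartialWaterYear2( dat$Date ), ]
> range(dat2$Date)
>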
> On Sat, 28 May 2016, Jeff Newmiller wrote:
>
> # read about POSIXlt at ?DateTimeClasses
>> # note that the "mon" element is 0-11
>> isPartialWaterYear <- function( d ) {
>>  dtl <- as.POSIXlt( dat$Date )
>>  wy1 <- cumsum( ( 9 == dtl$mon ) & ( 1 == dtl$mday ) )
>>  ( 0 == wy1  # first partial year
>>  | (  8 != dtl$mon[ nrow( dat ) ] # end partial year
>>& 30 != dtl$mday[ nrow( dat ) ]
>>) & wy1[ nrow( dat ) ] == wy1
>>  )
>> }
>>
>> dat2 <- dat[ !isPartialWaterYear( dat$Date ), ]
>>
>> The above assumes that, as you said, the data are continuous at one-day
>> intervals, such that the only partial years will occur at the beginning and
>> end. The "diff" function could be used to identify irregular data within
>> the data interval if needed.
>>
>> On Fri, 27 May 2016, Morway, Eric wrote:
>>
>> In bulk processing streamflow data available from an online database, I'm
>>> wanting to trim the beginning and end of the time series so that daily
>>> data
>>> associated with incomplete "water years" (defined as extending from Oct
>>> 1st
>>> to the following September 30th) is trimmed off the beginning and end of
>>> the series.
>>>
>>> For a small reproducible example, the time series below starts on
>>> 2010-01-01 and ends on 2011-11-05.  So the data between 2010-01-01 and
>>> 2010-09-30 and also between 2011-10-01 and 2011-11-05 is not associated
>>> with a complete set of data for their respective water years.  With the
>>> real data, the initial date of collection is arbitrary, could be 1901 or
>>> 1938, etc.  Because I'm cycling through potentially thousands of
>>> records, I
>>> need help in designing a function that is efficient.
>>>
>>> dat <-
>>>
>>> data.frame(Date=seq(as.Date("2010-01-01"),as.Date("2011-11-05"),by="day"))
>>> dat$Q <- rnorm(nrow(dat))
>>>
>>> dat$wyr <- as.numeric(format(dat$Date,"%Y"))
>>> is.nxt <- as.numeric(format(dat$Date,"%m")) %in% 1:9
>>> dat$wyr[!is.nxt] <- dat$wyr[!is.nxt] + 1
>>>
>>>
>>> function(dat) {
>>>   ...
>>>   returns a subset of dat such that dat$Date > -09-30 & dat$Date <
>>> -10-01
>>>   ...
>>> }
>>>
>>> where the years between - are "complete" (no missing days).  In
>>> the
>>> example above, the returned dat would extend from 2010-10-01 to
>>> 2011-09-30
>>>
>>> Any offered guidance is very much appreciated.
>>>
>>> [[alternative HTML version deleted]]
>>>
>>>
>>>
>>
>> ---
>> Jeff NewmillerThe .   .  Go
>> Live...
>> DCN:Basics: ##.#.   ##.#.  Live
>> Go...
>>  Live:   OO#.. Dead: OO#..  Playing
>> Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
>> /Software/Embedded Controllers)   .OO#.   .OO#.
>> rocks...1k
>>
>>
>>
> ---
> Jeff NewmillerThe .   .  Go Live...
> DCN:Basics: ##.#.   ##.#.  Live
> Go...
>   Live:   OO#.. Dead: OO#..  Playing
> Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
> /Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
> ---
>


[R] Antwort: Re: Unable to update R software to 3.3.0

2016-06-01 Thread G . Maubach
Hi all,

I did it today on Debian GNU Linux 8 Jessie this way:

vim /etc/apt/sources.list
deb http://cran.uni-muenster.de/bin/linux/debian jessie-cran3
ESC;:wq

apt-get update
apt-get install r-base r-base-dev

This worked for me.

When installing R packages from within R I found that R needed the 
following:

apt-get install libssl-dev libcurl4-openssl-dev libhunspell-dev 
libxml2-dev 

You might wish to install these also.

HTH.

Kind regards

Georg




Von:Marc Schwartz 
An: Sunish Kumar Bilandi , 
Kopie:  R-help 
Datum:  01.06.2016 17:18
Betreff:Re: [R] Unable to update R software to 3.3.0
Gesendet von:   "R-help" 




> On Jun 1, 2016, at 1:33 AM, Sunish Kumar Bilandi 
 wrote:
> 
> Hi Team,
> 
> I am using RedHat 5 and installed R using YUM, (R version 3.2.3) Now I 
want to update R version tp 3.3.0, but I am unable to do that, Is there 
any alternate to do this?
> 
> Hope to hear from your side.
> 
> Regards,
> 
> 
> Sunish Bilandi
> Business Analyst, CIDA-01
> Evalueserve


Hi,

First, RHEL and related distributions (e.g. Fedora), have a dedicated 
R-SIG list:
 
  https://stat.ethz.ch/mailman/listinfo/r-sig-fedora

Future queries in this domain should be submitted there, as many of the RH 
package maintainers (e.g. Tom Callaway, aka Spot) read that list.

For R 3.3.0, it would appear that it is about a day away from being 
available for release:

  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-6fc2c863b0

So for now, it would be available via the EPEL testing repos.

Otherwise, you can wait until it is available via release in the next day 
or so, or download the RPMS directly here:

  http://koji.fedoraproject.org/koji/buildinfo?buildID=762521

Regards,

Marc Schwartz




[R] Antwort: RE: Variable labels and value labels

2016-06-01 Thread G . Maubach
Hi Petr,

I am looking for a general procedure that I can use with any package of R.

From my experience so far, it will probably happen that I need a
procedure from a package other than Hmisc or memisc, and my solution
should still work then, so I need a package-independent way to do it.

Kind regards

Georg



Von:PIKAL Petr 
An: "g.maub...@weinwolf.de" , 
"r-help@r-project.org" , 
Datum:  31.05.2016 14:56
Betreff:RE: [R] Variable labels and value labels



Hi

see in line

> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
> g.maub...@weinwolf.de
> Sent: Tuesday, May 31, 2016 2:01 PM
> To: r-help@r-project.org
> Subject: [R] Variable labels and value labels
>
> Hi All,
>
> I am using R for social sciences. In this field I am used to use short 
variable
> names like "q1" for question 1, "q2" for question 2 and so on and label 
the
> variables like q1 : "Please tell us your age" or q2 : "Could you state 
us your
> household income?" or something similar indicating which question is 
stored
> in the variable.
>
> Similar I am used to label values like 1: "Less than 18 years", 2 : "18 
to
> 30 years", 3 : "31 to 60 years" and 4 : "61 years and more".

Seems to me that this is a job for factors:

nnn <- sample(1:4, 20, replace=TRUE)
q1 <-factor(nnn, labels=c("Less than 18 years", "18 to 30 years", "31 to 
60 years","61 years and more"))

You can store such variables in data.frame with names "q1" to "qwhatever" 
and possibly "Subject"

And you can store annotation of questions in another data frame with 2 
columns e.g. "Question" and "Description"

Basically it is an approach similar to database and in R you can merge 
those two data.frames by ?merge.
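
A minimal sketch of that two-data.frame idea (all names below are illustrative):

answers  <- data.frame(Subject = 1:20,
                       q1 = factor(sample(1:4, 20, replace = TRUE), levels = 1:4,
                                   labels = c("Less than 18 years", "18 to 30 years",
                                              "31 to 60 years", "61 years and more")))
codebook <- data.frame(Question = "q1",
                       Description = "Please tell us your age",
                       stringsAsFactors = FALSE)
codebook$Description[codebook$Question == "q1"]   # look up the wording of q1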
>
> I know that the packages Hmisc and memisc have a functionality for this 
but
> these labeling functions are limited to the packages they were defined 
for.

That seems strange to me. What prevents you from using the functions from Hmisc?

Regards
Petr

> Using the question tests as variable names is possible but very 
inconvenient.
>
> I there another way for labeling variables and values in R?
>
> Kind regards
>
> Georg Maubach
>



Re: [R] Antwort: Re: Unable to update R software to 3.3.0

2016-06-01 Thread Marc Schwartz

> On Jun 1, 2016, at 12:02 PM, g.maub...@weinwolf.de wrote:
> 
> Hi all,
> 
> I did it today on Debian GNU Linux 8 Jessie this way:
> 
> vim /etc/apt/sources.list
> deb http://cran.uni-muenster.de/bin/linux/debian jessie-cran3
> ESC;:wq
> 
> apt.get update
> apt-get install r-base r-base-dev
> 
> This worked for me.
> 
> When installing R packages from within R I found that R needed the 
> following:
> 
> apt-get install libssl-dev libcurl4-openssl-dev libhunspell-dev 
> libxml2-dev 
> 
> You probably might to wish to install this also.
> 
> HTH.
> 
> Kind regards
> 
> Georg


Georg,

As Sunish noted in his post, he is using Red Hat Enterprise Linux (RHEL), which 
is an RPM-based Linux distribution, as opposed to Debian and its derivatives
like Ubuntu, which use different pre-compiled binaries (.Deb).

Your ability to upgrade on Debian is not relevant to his issue, as a completely 
different infrastructure (RPM based repositories) is required for RHEL if one 
wishes to install pre-compiled binaries, as opposed to building from source, 
which is also an option if one wishes.

Regards,

Marc


> 
> 
> 
> 
> Von:Marc Schwartz 
> An: Sunish Kumar Bilandi , 
> Kopie:  R-help 
> Datum:  01.06.2016 17:18
> Betreff:Re: [R] Unable to update R software to 3.3.0
> Gesendet von:   "R-help" 
> 
> 
> 
> 
>> On Jun 1, 2016, at 1:33 AM, Sunish Kumar Bilandi 
>  wrote:
>> 
>> Hi Team,
>> 
>> I am using RedHat 5 and installed R using yum (R version 3.2.3). Now I 
> want to update R to version 3.3.0, but I am unable to do that. Is there 
> any alternative way to do this?
>> 
>> Hope to hear from your side.
>> 
>> Regards,
>> 
>> 
>> Sunish Bilandi
>> Business Analyst, CIDA-01
>> Evalueserve
> 
> 
> Hi,
> 
> First, RHEL and related distributions (e.g. Fedora) have a dedicated 
> R-SIG list:
> 
>  https://stat.ethz.ch/mailman/listinfo/r-sig-fedora
> 
> Future queries in this domain should be submitted there, as many of the RH 
> package maintainers (e.g. Tom Callaway, aka Spot) read that list.
> 
> For R 3.3.0, it would appear that it is about a day away from being 
> available for release:
> 
>  https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-6fc2c863b0
> 
> So for now, it would be available via the EPEL testing repos.
> 
> Otherwise, you can wait until it is available via release in the next day 
> or so, or download the RPMS directly here:
> 
>  http://koji.fedoraproject.org/koji/buildinfo?buildID=762521
> 
> Regards,
> 
> Marc Schwartz

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Antwort: Re: Variable labels and value labels

2016-06-01 Thread G . Maubach
Hi Jim,

many thanks for the hint.

When looking at the documentation I did not see how I can control which 
value gets which label. Is it possible to define this?

Kind regards

Georg




From:    Jim Lemon 
To:      g.maub...@weinwolf.de, r-help mailing list , 

Date:    01.06.2016 03:59
Subject: Re: [R] Variable labels and value labels



Hi Georg,
You may find the "add.value.labels" function in the prettyR package 
useful.

Jim

On Tue, May 31, 2016 at 10:00 PM,   wrote:
> Hi All,
>
> I am using R for social sciences. In this field I am used to using short
> variable names like "q1" for question 1, "q2" for question 2 and so on,
> and to labeling the variables like q1 : "Please tell us your age" or
> q2 : "Could you state your household income?" or something similar,
> indicating which question is stored in the variable.
>
> Similarly, I am used to labeling values like 1 : "Less than 18 years",
> 2 : "18 to 30 years", 3 : "31 to 60 years" and 4 : "61 years and more".
>
> I know that the packages Hmisc and memisc have a functionality for this,
> but these labeling functions are limited to the packages they were
> defined for. Using the question texts as variable names is possible but
> very inconvenient.
>
> Is there another way of labeling variables and values in R?
>
> Kind regards
>
> Georg Maubach
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] TwitteR - Number of tweets from multiple locations

2016-06-01 Thread K. Elo

Hi Juho!

01.06.2016, 14:40, Juho Kiuru wrote:

Hi all, I am new to R and TwitteR and would love to get some advice from
you.

I managed to get a list of tweets containing the word 'innovation' tweeted in
Helsinki with the following script:

searchTwitter('innovation', n=1, geocode='60.1920,24.9458,30mi',
since="2016-01-01", until="2016-05-31")

However, I was wondering whether it is possible to include multiple locations in the
script, so that the result would be the number of tweets in each location.

For example, something like this:
Location Tweets
Helsinki 300
Berlin 400
Barcelona 500


Have you read the documentation ('?searchTwitter')? The argument 
'geocode' cannot be a list, only a single value.


A possible workaround could be to filter the tweets based on geocode after 
you have received them. However, since your search phrase is rather 
general, you would receive a lot of "noise" as well.


Another possibility would be to write a loop with three 'searchTwitter' 
calls using different geocodes. Here you could speed up the data 
collection by setting 'n' to e.g. 100 or 500.
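
Something along the following lines might work (an untested sketch: it assumes
you have already authenticated with setup_twitter_oauth(), and the Berlin and
Barcelona coordinates are only rough, illustrative values):

library(twitteR)

# Helsinki geocode taken from the call above; the other two are illustrative only.
geocodes <- c(Helsinki  = "60.1920,24.9458,30mi",
              Berlin    = "52.5200,13.4050,30mi",
              Barcelona = "41.3851,2.1734,30mi")

# One searchTwitter() call per location; the count is simply the number of
# tweets returned (capped by 'n' and by the ~7-day search window).
counts <- sapply(geocodes, function(g)
  length(searchTwitter("innovation", n = 500, geocode = g)))

data.frame(Location = names(counts), Tweets = counts)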


You should, however, consider the fact that geocodes are often suppressed 
by the user, so you would most certainly get only a limited number 
of tweets.



Another problem I faced is in setting the time span I would like to have
the count of tweets from. I tried to set the time span from the beginning
of this year to end of May, but it seems like I get only tweets from the
last week of May.


If I remember correctly this is not related to twitteR, but to the 
Twitter API itself. The API limits your search results to contain tweets 
from the last 7 days only.


HTH,
Kimmo

--
Kimmo Elo
Åbo Akademi University, Finland / German studies
University of Turku, Finland / DIGIN - Digital Humanities Network

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Optim():Error in solve.default(crossprod(hm, xm), crossprod(hm, ym))

2016-06-01 Thread nourhaine nefzi
Dear members;
I am stuck trying to find optimal parameters using the optim() function. I would be 
very grateful if you could help me with this:

I have the following equation:

R_p,t+1 = rf + beta * r_t+1    (1)

where R_p,t+1 is the return of the portfolio, rf is the risk-free rate and r_t+1 is
the return of a strategy. beta has the following expression:

beta = x0 + x1*A + x2*B + x3*C + x4*D   (estimated using the Generalized Method of
Moments (GMM)).
A, B, C and D are the risk factors related to r_t+1.

My objective is to find the optimal values of x1, x2, x3 and x4 that maximize 
the utility function of the investor.
The code is then :
ret<-cbind(ret)  #ret= rt+1
factors<-cbind(A,B,C,D)

func<-function(x,ret,factors) {
df <- data.frame(A=factors$A*x[1],B=factors$B*x[2],C=factors$C*x[3], 
D=factors$D*x[4])
H<-as.matrix(factors)
HH<-matrix(H,179,4)
m <- gmm(ret~., data=df, HH)
b<- coef(m)
beta<- b[1]+b[2]*factors$A+b[3]*factors$B+b[4]*factors$C+b[5]*D
beta=cbind(beta)

r=RF+beta*ret  #equation (1)
#Annual Sharpe ratio of the portfolio
averp<-mean(r)*12
sigmap<-sqrt(12)*sd(r)
Sharpe<-averp/sigmap

#Calculating utility
u<-1/nrow(r)*sum((1+r)^(1-5)/(1-5))
obj<-u
result <- 
list(obj=obj,u=u,beta=beta,r=r,averp=averp,sigmap=sigmap,Sharpe=Sharpe)
return(result)
}

#Catching the obj from the function
Final<-function(x,ret,factors){
bra<-func(x,ret,factors)
#print(bra$obj)
return(-bra$obj)
}
p<-optim(par = c(0,1,2,3),Final,method="Nelder-Mead",ret=ret,factors=factors)
bra<-func(x=p$par,ret=ret,factors=factors)

When I run the code, I get the following error:

Error in solve.default(crossprod(hm, xm), crossprod(hm, ym)) :
  Lapack routine dgesv: system is exactly singular: U[2,2] = 0

Could you please help me! Thank you in advance.
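
One thing worth checking first (a small diagnostic sketch, not part of the code
above, assuming 'factors' is a data frame holding the columns A, B, C and D):
"system is exactly singular" from solve() usually means the regressor/instrument
matrix is rank deficient, i.e. some of its columns are collinear.

H <- as.matrix(factors)   # the matrix handed to gmm() above
qr(H)$rank                # should equal ncol(H) if the columns are independent
round(cor(H), 3)          # entries close to +/-1 point to collinear risk factors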


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] SEM GFI

2016-06-01 Thread VINAY KULKARNI via R-help
Hi,
Please find below the code:
Thanks, Vinay
> library(sem)
> data1 <- read.csv("data_1.csv")
> corr <- cor(data1)
> model1 <- specifyModel()
brand1_Pric_ind   ->  Val_brand1,       lamb1,  NA
Dist_brand1       ->  Val_brand1,       lamb2,  NA
brand1_like_me    ->  Val_brand1,       lamb3,  NA
brand2_Def_Drink  ->  Val_brand1,       lamb4,  NA
brand2_Like       ->  brand2_Def_Drink, lamb5,  NA
brand2_Pleasure   ->  brand2_Def_Drink, lamb6,  NA
brand1_Like       ->  brand1_like_me,   lamb7,  NA
brand1_Love       ->  brand1_like_me,   lamb8,  NA
brand1_P4WC       ->  brand1_Like,      lamb9,  NA
brand1_P4WC       ->  brand1_Love,      lamb10, NA
brand1_Energy     ->  brand1_P4WC,      lamb11, NA
brand1_Different  ->  brand1_P4WC,      lamb12, NA
brand1_Pric_ind   <-> brand1_Pric_ind,  the1,   NA
Dist_brand1       <-> Dist_brand1,      the2,   NA
brand1_like_me    <-> brand1_like_me,   the3,   NA
brand2_Def_Drink  <-> brand2_Def_Drink, the4,   NA
brand2_Like       <-> brand2_Like,      the5,   NA
brand2_Pleasure   <-> brand2_Pleasure,  the6,   NA
brand1_Like       <-> brand1_Like,      the7,   NA
brand1_Love       <-> brand1_Love,      the8,   NA
brand1_P4WC       <-> brand1_P4WC,      the9,   NA
brand1_Energy     <-> brand1_Energy,    the10,  NA
brand1_Different  <-> brand1_Different, the11,  NA
brand1_like_me    <-> brand2_Def_Drink, the12,  NA
brand1_Like       <-> brand1_Love,      the13,  NA
Val_brand1        <-> Val_brand1,       1,      NA

> opt <- options(fit.indices = c("GFI", "AGFI", "RMSEA", "NFI", "NNFI", "CFI", "RNI", "IFI", "SRMR", "AIC", "AICc", "BIC", "CAIC"))
> sem.model1 <- sem(model1, corr, 36)
> summary(sem.model1)




  From: Bert Gunter 
 To: VINAY KULKARNI  
Cc: "r-help@r-project.org" 
 Sent: Wednesday, 1 June 2016 2:16 AM
 Subject: Re: [R] SEM GFI
   
Probably impossible to answer without your following the posting guide
and posting your code, etc.

Cheers,

Bert
Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )


On Tue, May 31, 2016 at 11:28 AM, VINAY KULKARNI via R-help
 wrote:
> Hi,
> I am replicating in R an SEM model that was estimated in SAS using PROC CALIS.
> I used the sem package in R but am not getting the same GFI as in SAS
> (approximately a 15% difference), and one link is insignificant in R but
> significant in SAS.
> I searched online in different blogs but was not able to find a solution.
> Please let me know what might be the reason.
> Thanks, Vinay
>
>
>
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


  

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Fwd: model specification using lme

2016-06-01 Thread li li
Thanks, Thierry, for the reply. I think I now have a better understanding of
the specification of the random effects when using the lme function.
Are my interpretations below correct?

random = ~ 1 | individual
    (same random intercept, no random slope)

random = ~ 1 + method | individual
    (same random intercept and same random slope)

random = ~ 1 + method:time | individual
    (same random intercept and a different random slope for each method)

random = ~ 1 + method + method:time | individual
    (a different random intercept and a different random slope for each method)

The summary results from the lme function show whether the slopes for the
three methods are equal (parallelism). I also want to test the hypotheses
that each of the fixed slopes (corresponding to the three methods) equals
0; can I use the multcomp package for that purpose? I am confused about how
to make the correct specifications in the glht function to test these
hypotheses.

Hanna


> summary(mod1)
Linear mixed-effects model fit by REML
 Data: one
       AIC      BIC    logLik
  304.4703 330.1879 -140.2352

Random effects:
 Formula: ~1 + time | individual
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev       Corr
(Intercept) 0.2487869075 (Intr)
time        0.0001841179 -0.056
Residual    0.3718305953

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | method
 Parameter estimates:
        3        1        2
      1.0 26.59750 24.74476

Fixed effects: reponse ~ method * time
                 Value Std.Error DF   t-value p-value
(Intercept)   96.65395  3.528586 57 27.391694  0.0000
method2        1.17851  4.856026 57  0.242689  0.8091
method3        5.87505  3.528617 57  1.664973  0.1014
time           0.07010  0.250983 57  0.279301  0.7810
method2:time  -0.12616  0.360585 57 -0.349877  0.7277
method3:time  -0.08010  0.251105 57 -0.318999  0.7509
 Correlation:
              (Intr) methd2 methd3 time   mthd2:
method2      -0.726
method3      -0.999  0.726
time         -0.779  0.566  0.779
method2:time  0.542 -0.712 -0.542 -0.696
method3:time  0.778 -0.566 -0.779 -0.999  0.696

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-2.67575293 -0.51633192  0.06742723  0.59706762  2.81061874

Number of Observations: 69
Number of Groups: 7



-- Forwarded message --
From: Thierry Onkelinx 
Date: 2016-05-30 4:40 GMT-04:00
Subject: Re: [R] model specification using lme
To: li li 
Cc: r-help 



Dear Hanna,

None of the models is correct if you want the same random intercept for
the different methods but a different random slope per method.

You can use random = ~ 1 + time:method | individual

The easiest way to get alpha_0 and tau_i is to apply post-hoc contrasts.
That is fairly easy to do with the multcomp package.

alpha_0 = (m1 + m2 + m3) / 3
m1 = intercept
m2 = intercept + method2
m3 = intercept + method3
hence alpha_0 = intercept + method2/3 + method3/3

m1 = alpha_0 + tau_1
hence tau_1 = m1 - alpha_0 = -method2/3 - method3/3
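
A minimal sketch of how such contrasts might be passed to glht() (untested; it
assumes the fixed effects are ordered (Intercept), method2, method3, time,
method2:time, method3:time, as in the summary output above, and it also includes
the per-method slope tests that were asked about):

library(multcomp)

K <- rbind(
  "alpha_0 (grand mean)" = c(1,  1/3,  1/3, 0, 0, 0),  # intercept + method2/3 + method3/3
  "tau_1 (method 1)"     = c(0, -1/3, -1/3, 0, 0, 0),  # m1 - alpha_0
  "slope method 1 = 0"   = c(0,  0,    0,   1, 0, 0),  # time
  "slope method 2 = 0"   = c(0,  0,    0,   1, 1, 0),  # time + method2:time
  "slope method 3 = 0"   = c(0,  0,    0,   1, 0, 1)   # time + method3:time
)
summary(glht(mod1, linfct = K))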

Best regards,

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey

2016-05-29 21:23 GMT+02:00 li li :


> Hi all,
>   For the following data, I consider the following random intercept and
> random slope model. Denote as y_ijk the response value from *j*th
> individual within *i*th method at time point *k*. Assume the following
> model for y_ijk:
>
>   y_ijk= (alpha_0+ tau_i +a_j(i))+(beta_i+b_j(i)) T_k + e_ijk
>
>
> Here alpha_0 is the grand mean;
>   tau_i is the fixed effect for ith method;
>   a_j(i) is random intercept corresponding to the *j*th individual
> within *i*th method, assumed to be common for all three methods;
>   beta_i is the fixed slope corresponding to the ith method;
>   b_j(i) is the random slope corresponding to jth individual for
> the ith method, assumed to be different for different methods;
>   T_k is the time corresponding to y_ijk;
>   e_ijk is the residual.
>
> For this model, I consider the three specification using  the lme function
> as follows:
>
>
> mod1 <- lme(fixed= reponse ~ method*time, random=~ 1 +time | individual,
> data=one, weights= varIdent(form=~1|method),
> control = lmeControl(opt = "optim"))
>
> mod2 <- lme(fixed= reponse ~ method*time, random=~ 0 +time | individual,
> data=one, weights= varIdent(form=~1|method),
> control = lmeControl

Re: [R] Variable labels and value labels

2016-06-01 Thread Jim Lemon
Hi Georg,
add.value.labels simply creates an attribute named "value.labels" for
the sorted values of the vector passed to it. The value labels passed
become the names of this attribute in the sorted order. The function
is intended to mimic a factor in reverse. While the factor adds
sequential numeric values to the original values, add.value.labels
adds names to the values passed. It was intended to be a mnemonic for
numeric values that perhaps should have been coded as character. If I
wrote this function now, it would probably look like this:

value.labels <- function(x, labels) {
  if (missing(labels)) {
    return(attr(x, "value.labels"))
  } else {
    attr(x, "value.labels") <- sort(unique(x))
    lenvallab <- length(attr(x, "value.labels"))
    if (length(labels) > lenvallab) {
      cat("More value labels than values, only the first",
          lenvallab, "will be used\n")
      labels <- labels[1:lenvallab]
    }
    names(attr(x, "value.labels")) <- labels
    return(x)
  }
}

age<-sample(1:5,100,TRUE)
value.labels(age)
age<-value.labels(age,c("0-19","20-39","40-59","60-79","80+"))
age
value.labels(age)
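
For the variable-label part of the original question, the same attribute idea
can be used without any extra package (the attribute name "variable.label"
below is just an example, not a standard):

q1 <- c(25, 40, 63)
attr(q1, "variable.label") <- "Please tell us your age"
attr(q1, "variable.label")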

Jim

On Thu, Jun 2, 2016 at 3:37 AM,   wrote:
> Hi Jim,
>
> many thanks for the hint.
>
> When looking at the documentation I did not see how I can control which
> value gets which label. Is it possible to define this?
>
> Kind regards
>
> Georg
>
>
>
>
> From:    Jim Lemon 
> To:      g.maub...@weinwolf.de, r-help mailing list ,
>
> Date:    01.06.2016 03:59
> Subject: Re: [R] Variable labels and value labels
>
>
>
> Hi Georg,
> You may find the "add.value.labels" function in the prettyR package
> useful.
>
> Jim
>
> On Tue, May 31, 2016 at 10:00 PM,   wrote:
>> Hi All,
>>
>> I am using R for social sciences. In this field I am used to using short
>> variable names like "q1" for question 1, "q2" for question 2 and so on,
>> and to labeling the variables like q1 : "Please tell us your age" or
>> q2 : "Could you state your household income?" or something similar,
>> indicating which question is stored in the variable.
>>
>> Similarly, I am used to labeling values like 1 : "Less than 18 years",
>> 2 : "18 to 30 years", 3 : "31 to 60 years" and 4 : "61 years and more".
>>
>> I know that the packages Hmisc and memisc have a functionality for this,
>> but these labeling functions are limited to the packages they were
>> defined for. Using the question texts as variable names is possible but
>> very inconvenient.
>>
>> Is there another way of labeling variables and values in R?
>>
>> Kind regards
>>
>> Georg Maubach
>>
>> __
>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Making an if condition variable ?

2016-06-01 Thread ce

Dear all,

I want to make an if condition variable like :

a = 10
CONDITION = " a > 0 "

if ( CONDITION ) print(" a is bigger" ) 

I tried get, getElement and eval, without success.

Thanks

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Making an if condition variable ?

2016-06-01 Thread Jim Lemon
Hi ce,

a<-10
condition<-expression("a>0")
if(eval(parse(text=condition))) cat("a>0\n")

Jim

On Thu, Jun 2, 2016 at 12:30 PM, ce  wrote:
>
> Dear all,
>
> I want to make an if condition variable like :
>
> a = 10
> CONDITION = " a > 0 "
>
> if ( CONDITION ) print(" a is bigger" )
>
> I tried get , getElement , eval without success ?
>
> Thanks
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Making an if condition variable ?

2016-06-01 Thread Ista Zahn
if ( eval(parse(text=CONDITION ))) print(" a is bigger" )

Best,
Ista
On Jun 1, 2016 10:32 PM, "ce"  wrote:

>
> Dear all,
>
> I want to make an if condition variable like :
>
> a = 10
> CONDITION = " a > 0 "
>
> if ( CONDITION ) print(" a is bigger" )
>
> I tried get , getElement , eval without success ?
>
> Thanks
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Making an if condition variable ?

2016-06-01 Thread Jeff Newmiller
Beware of getting too "meta" in your programming... it is rarely worth it. Just 
write the code and move on with life. That is the beauty of a scripting 
language. 
-- 
Sent from my phone. Please excuse my brevity.

On June 1, 2016 7:30:29 PM PDT, ce  wrote:
>
>Dear all,
>
>I want to make an if condition variable like :
>
>a = 10
>CONDITION = " a > 0 "
>
>if ( CONDITION ) print(" a is bigger" ) 
>
>I tried get , getElement , eval without success ?
>
>Thanks
>
>__
>R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Making an if condition variable ?

2016-06-01 Thread Richard M. Heiberger
a <- 10
CONDITION <-  (a > 0)

if ( CONDITION ) print(" a is bigger" )

On Wed, Jun 1, 2016 at 10:30 PM, ce  wrote:
>
> Dear all,
>
> I want to make an if condition variable like :
>
> a = 10
> CONDITION = " a > 0 "
>
> if ( CONDITION ) print(" a is bigger" )
>
> I tried get , getElement , eval without success ?
>
> Thanks
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] httr package syntax (PUT)

2016-06-01 Thread Jared Rodecker
Greetings fellow R users.

I'm struggling with the syntax of submitting a PUT request.

I'm trying to insert a few PUT requests into some legacy R code that I have
that performs daily ETL on a small database. These requests will add users
to an email mailing list in MailChimp.


I have been able to get my GET requests formatted into syntax that R
(specifically the httr package) accepts:

GET("
https://us10.api.mailchimp.com/3.0/lists/list_id_X/members/MEMBER_HASH_###";,
query = list(apikey = 'XX'))


However, when I try to do something similar for PUT requests, this simple
syntax isn't working - you can't just pass the API key and/or the requested
parameters directly through the URL. I get a 401 error if I use the same
syntax I used for GET.


I believe that I need to use the config argument to pass the API key (either
using authenticate() or add_headers()) and the requested parameters in the body
to get the PUT request to work, but I can't get the syntax right - this
gives a 400 error:


auth <- authenticate("anystring", "XX", type = "basic")

parms <- '[{"email_address" : "some_u...@domain.com", "status_if_new" :
"subscribed"}]'

PUT("
https://us10.api.mailchimp.com/3.0/lists/list_id_X/members/MEMBER_HASH_###
",config=auth,body=parms,encode="json")


If anyone can point me to a more fleshed-out example that would be
amazing... but even just some tips on how to get more information about my error
message, to help me troubleshoot my syntax, would also be a big help. I've
also been trying httpPUT (from the RCurl package) but am also
struggling with the syntax there.
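
For what it is worth, the pattern that usually works with httr looks roughly
like the sketch below (untested, with placeholder list id, member hash and API
key). The main difference from the attempt above is that the body is a named
list describing a single JSON object, rather than a JSON-array string, so that
encode = "json" can serialize it:

library(httr)

auth <- authenticate("anystring", "API_KEY_PLACEHOLDER", type = "basic")

resp <- PUT("https://us10.api.mailchimp.com/3.0/lists/LIST_ID/members/MEMBER_HASH",
            auth,
            body = list(email_address = "some_u...@domain.com",
                        status_if_new = "subscribed"),
            encode = "json")

status_code(resp)             # HTTP status of the response
content(resp, as = "parsed")  # parsed body, often contains the API's error detail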


Thanks!


Jared


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Unable to update R software to 3.3.0

2016-06-01 Thread Loris Bennett
Hi Sunish,

Sunish Kumar Bilandi  writes:

> Hi Team,
>
> I am using RedHat 5 and installed R using yum (R version 3.2.3). Now I
> want to update R to version 3.3.0, but I am unable to do that. Is
> there any alternative way to do this?
>
> Hope to hear from your side.
>
> Regards,
>
>
> Sunish Bilandi
> Business Analyst, CIDA-01
> Evalueserve

You don't say what the problem is, but I'm running Scientific Linux 6.7,
which is based on the corresponding version of Red Hat, and have found
that I cannot install R 3.3.0, because the version of zlib available is
too old.  R 3.3.0 requires zlib >= 1.2.5, whereas the version in the SL
repositories is 1.2.3.

So if this is the problem, then you either have to install a newer version
of zlib from source or switch to RH7, which comes with zlib 1.2.7.

Cheers,

Loris

-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin Email loris.benn...@fu-berlin.de

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Unable to update R software to 3.3.0

2016-06-01 Thread Loris Bennett
Loris Bennett  writes:

> Hi Sunish,
>
> Sunish Kumar Bilandi  writes:
>
>> Hi Team,
>>
>> I am using RedHat 5 and installed R using yum (R version 3.2.3). Now I
>> want to update R to version 3.3.0, but I am unable to do that. Is
>> there any alternative way to do this?
>>
>> Hope to hear from your side.
>>
>> Regards,
>>
>>
>> Sunish Bilandi
>> Business Analyst, CIDA-01
>> Evalueserve
>
> You don't say what the problem is, but I'm running Scientific Linux 6.7,
> which is based on the corresponding version of Red Hat, and have found
> that I cannot install R 3.3.0, because the version of zlib available is
> too old.  R 3.3.0 requires zlib >= 1.2.5, whereas the version in the SL
> repositories is 1.2.3.
>
> So if this is the problem, then you either have to install a newer version
> of zlib from source or switch to RH7, which comes with zlib 1.2.7.

I forgot to say that for RH7, R 3.3.0 is available from the EPEL
repository, whereas for RH5 or RH6 you will have to install R from
source.

Cheers,

Loris

-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin Email loris.benn...@fu-berlin.de

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] Making an if condition variable ?

2016-06-01 Thread Martin Maechler
> Jim Lemon 
> on Thu, 2 Jun 2016 13:03:01 +1000 writes:

> Hi ce,

> a<-10
> condition<-expression("a>0")
> if(eval(parse(text=condition))) cat("a>0\n")

While this may answer the question asked,
the above is *not* good advice, excuse me, Jim :

> fortune(106)

If the answer is parse() you should usually rethink the question.
   -- Thomas Lumley
  R-help (February 2005)

> fortune(181)

Personally I have never regretted trying not to underestimate my own future 
stupidity.
   -- Greg Snow (explaining why eval(parse(...)) is often suboptimal, answering 
a question
  triggered by the infamous fortune(106))
  R-help (January 2007)

-

Good advice would emphasize using expressions rather than
strings; yes, that requires a bit more sophistication.

But it's worth it.
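
For the original example, an expression-based version (one possible sketch)
could look like this:

a <- 10
cond <- quote(a > 0)    # an unevaluated expression, not a character string
if (eval(cond)) print("a is bigger")

## or keep the condition as a reusable object evaluated where needed:
check <- function(cond, envir = parent.frame()) isTRUE(eval(cond, envir = envir))
if (check(quote(a > 0))) print("a is bigger")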
Martin


> 
> Jim

> On Thu, Jun 2, 2016 at 12:30 PM, ce  wrote:
>> 
>> Dear all,
>> 
>> I want to make an if condition variable like :
>> 
>> a = 10
>> CONDITION = " a > 0 "
>> 
>> if ( CONDITION ) print(" a is bigger" )
>> 
>> I tried get , getElement , eval without success ?
>> 
>> Thanks
>> 
>> __
>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.

> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.