[R] coxph and variables

2013-04-02 Thread sylvain
Dear list,

I am quite new to the world of biostatistics and I encounter some issues in
the precise understanding of the coxph function of the survival package. I
have a set of survival data (patients who had, or died from, breast cancer)
and I would like to see which variables might cause death. When trying

summary(coxph(Surv(Time_to_distant_recurrence_yrs, !Distant_recurrence) ~
  as.factor(Herceptincat) + as.factor(nodeCat_all), data = her2.matrix))

I obtain
Call:
coxph(formula = Surv(Time_to_distant_recurrence_yrs, !Distant_recurrence) ~
    as.factor(Herceptincat) + as.factor(nodeCat_all), data = her2.matrix)

  n= 231, number of events= 53

                             coef exp(coef) se(coef)      z Pr(>|z|)
as.factor(Herceptincat)1  -0.5891    0.5548   0.2805 -2.100  0.03570 *
as.factor(nodeCat_all)1    0.9718    2.6426   0.6195  1.569  0.11672
as.factor(nodeCat_all)2    1.9713    7.1803   0.6101  3.231  0.00123 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

                          exp(coef) exp(-coef) lower .95 upper .95
as.factor(Herceptincat)1     0.5548     1.8025    0.3202    0.9614
as.factor(nodeCat_all)1      2.6426     0.3784    0.7847    8.8989
as.factor(nodeCat_all)2      7.1803     0.1393    2.1720   23.7368

Concordance= 0.699  (se = 0.039 )
Rsquare= 0.096   (max possible= 0.911 )
Likelihood ratio test= 23.33  on 3 df,   p=3.444e-05
Wald test            = 21.76  on 3 df,   p=7.312e-05
Score (logrank) test = 24.46  on 3 df,   p=2.001e-05
I think this means that the two variables I tested are of interest for
building the model, with p-values around 1e-5. It also means that having
Herceptincat = 1 is significantly different from having Herceptincat = 0,
and that having a nodeCat_all of 2 is significantly different from a
nodeCat_all equal to 0. However, this does not tell me much about the
variables alone.
summary(coxph(Surv(Time_to_distant_recurrence_yrs, !Distant_recurrence) ~
  as.factor(Herceptincat), data = her2.matrix))

Call:
coxph(formula = Surv(Time_to_distant_recurrence_yrs, !Distant_recurrence) ~
    as.factor(Herceptincat), data = her2.matrix)

  n= 231, number of events= 53

                             coef exp(coef) se(coef)      z Pr(>|z|)
as.factor(Herceptincat)1  -0.4326    0.6488   0.2789 -1.551    0.121

                          exp(coef) exp(-coef) lower .95 upper .95
as.factor(Herceptincat)1     0.6488      1.541    0.3756     1.121

Concordance= 0.568  (se = 0.035 )
Rsquare= 0.011   (max possible= 0.911 )
Likelihood ratio test= 2.45  on 1 df,   p=0.1177
Wald test            = 2.41  on 1 df,   p=0.1208
Score (logrank) test = 2.44  on 1 df,   p=0.118
So, if I only consider one variable (Herceptincat), this does not seem very
interesting. However, nodeCat_all is far more interesting (its
single-variable output is below). Does it mean that nodeCat_all is enough
for me to build a model, and that I don't have to take Herceptincat into
account? I am a bit lost between the global p-values and those that account
for only one variable. It is even worse with factor variables because you
miss a p-value for the first category! I thank you all for your help!
Call:
coxph(formula = Surv(Time_to_distant_recurrence_yrs, !Distant_recurrence) ~
    as.factor(nodeCat_all), data = her2.matrix)

  n= 231, number of events= 53

                            coef exp(coef) se(coef)     z Pr(>|z|)
as.factor(nodeCat_all)1   0.8525    2.3455   0.6173 1.381  0.16726
as.factor(nodeCat_all)2   1.8288    6.2264   0.6068 3.014  0.00258 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

                        exp(coef) exp(-coef) lower .95 upper .95
as.factor(nodeCat_all)1     2.346     0.4263    0.6995     7.865
as.factor(nodeCat_all)2     6.226     0.1606    1.8956    20.452

Concordance= 0.661  (se = 0.036 )
Rsquare= 0.078   (max possible= 0.911 )
Likelihood ratio test= 18.84  on 2 df,   p=8.111e-05
Wald test            = 17.26  on 2 df,   p=0.0001787
Score (logrank) test = 19.9  on 2 df,   p=4.785e-05
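To decide whether a variable can be dropped, the usual approach is to compare the two nested Cox models directly with a likelihood-ratio test via anova(), which also gives a single p-value for a multi-level factor as a whole. A sketch on simulated data (her2.matrix is not available here, so the data below are made up; the variable names only mimic the post):

```r
library(survival)

## Made-up stand-in for her2.matrix (names mimic the post, data are simulated)
set.seed(1)
n <- 231
Herceptincat <- rbinom(n, 1, 0.5)
nodeCat_all  <- sample(0:2, n, replace = TRUE)
time  <- rexp(n, rate = exp(-0.6 * Herceptincat + 0.9 * (nodeCat_all == 2)) / 5)
event <- rbinom(n, 1, 0.25)

fit_node <- coxph(Surv(time, event) ~ factor(nodeCat_all))
fit_full <- coxph(Surv(time, event) ~ factor(nodeCat_all) + factor(Herceptincat))

## Likelihood-ratio test: does adding Herceptincat improve the node-only model?
print(anova(fit_node, fit_full))
```

This directly answers "is nodeCat_all enough?" on the real data: a non-significant LRT means Herceptincat adds little once nodeCat_all is in the model.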





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Breusch-Pagan heteroskedasticity test for mixed models - does it exist ?

2014-08-06 Thread sylvain willart
Hi everyone,

I have been asked to perform a heteroskedasticity test for a model.
Usually I use lmtest::bptest and it works fine, but the model I have to
test was estimated with nlme::lme, and bptest complains about it (no
R-squared, I guess?).

So I wonder: is there a bptest for mixed models, or is it only available
for OLS-estimated models?
Pinheiro and Bates only perform a graphical check in their book, then
correct the variance-covariance matrix and test whether the corrected model
significantly improves on the uncorrected one. Is that the only way to go?

Thanks for any inputs,

Sylv
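To my knowledge there is no canned bptest() for lme objects, but the spirit of the test — regressing squared residuals on the fitted values — can be reproduced by hand. A hedged sketch on the Orthodont data that ships with nlme (an ad-hoc auxiliary regression, not an exact Breusch-Pagan implementation):

```r
library(nlme)

## Fit a simple mixed model on a built-in dataset
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)

## BP-style auxiliary regression: squared normalized residuals on fitted values
r2  <- resid(fm, type = "normalized")^2
aux <- lm(r2 ~ fitted(fm))
summary(aux)  # a clearly significant slope hints at heteroskedasticity
```

The Pinheiro-Bates route (refit with a weights = varPower() or similar variance function, then compare models with anova()) remains the more principled option within nlme itself.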




[R] S4 class and Package NAMESPACE [NC]

2008-07-25 Thread Sylvain ARCHENAULT
Hello,

I am trying to build an R package with S4 classes. I proceed as usual (as
when no S4 classes were involved, i.e. without a NAMESPACE file) and I get
errors in classes referring to other classes in the package.
So I use a NAMESPACE file to declare the classes; that works fine, but now
I have to declare each function in the NAMESPACE file. Since I have a lot
of functions, this is not very convenient.

Is there a way to export all methods in one statement? Or is there a tool
that can help generate the NAMESPACE file?

Thanks for your help,
Sylvain.

PS: Could you please CC me in your replies since I do not subscribe to the 
list.
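For reference, the NAMESPACE mechanism does support pattern-based exports as well as S4-specific directives; a minimal sketch (the class and method names below are placeholders, not taken from the original package):

```r
## In the package's NAMESPACE file:
exportPattern("^[^.]")            # export every object not starting with a dot
exportClasses("MyClass")          # placeholder S4 class name
exportMethods("show", "summary")  # export S4 methods by their generic
```

Tools such as roxygen2 can also generate the NAMESPACE file from structured comments in the R sources.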



[R] how to import map data (maptools?) from a html set of 'coords'

2010-03-02 Thread sylvain willart
Dear R users,

I would like to draw a map and import it into the maptools/spatstat packages.

The 'raw data' I have come from a web page (...) and are
basically a list of coordinates of a polygon.

I would like to know how to import them into R; I checked the maptools
package, but all the examples use existing .dbf files.

I just have a (series of) text file(s) looking like this:

For example, for the French Region Burgundy:

alt="Bourgogne"
coords="208,121,211,115,221,113,224,115,225,120,229,122,232,128,251,125,255,
130,256,136,266,138,268,148,267,154,263,160,267,168,267,180,262,
175,256,178,254,184,248,184,243,187,237,187,232,185,234,181,227,
171,216,171,212,166,211,155,208,149,208,135,211,132,213,125,208,
121">
any idea welcome,

sylvain

(If anyone is interested in that type of data, it is available on the INSEE
website, along with loads of information on the population and economy of
each region.)



Re: [R] how to import map data (maptools?) from a html set of 'coords'

2010-03-03 Thread sylvain willart
Hi
thanks for your reply,
I'll try to explain my request better: the data do not come from a file
with a specific extension; these are just some lines I copy-pasted from an
HTML source file.

The web page is:
http://www.insee.fr/fr/ppp/bases-de-donnees/donnees-detaillees/duicq/accueil.asp
It displays an (interactive) map of France with all the regions.

To access the source: Edit/Source, or Ctrl+U in a web browser.

Around the middle of the HTML source file there is an HTML object called
map ( ... ) with a set of coordinates representing the
polygons of each region.

These coordinates are just point locations, x1,y1,x2,y2,x3,y3..., that
draw polygons. They are not proper longitudes or latitudes, and their
"origin" is just the corner of the image the HTML file
generates...

I am aware these are not "real" geographic data (that's why I didn't
post my question to r-sig-geo; it looks more like a problem of
graphics), but these are the coordinates one needs to "draw" a map (and
eventually import it into a more specific package like spatstat).

So, what I would like to do is use those coordinates to draw such a
map, and eventually use that map for distance or area calculations (which
do not need to be extremely precise...).

sylvain


2010/3/3 Michael Denslow :
> Hi Sylvian,
>
>
> On Tue, Mar 2, 2010 at 1:15 PM, sylvain willart
>  wrote:
>> Dear R users,
>>
>> I would like to draw map and import it in maptools/spatstat packages.
>>
>> The 'raw data' I have come from a web page (...) and are
>> basically a list of coordinates of a polygon.
>>
>> I would like to know how to import them in R; I checked the maptools
>> packages, but all the examples use existing .dbf files.
>>
>> I just have a (serie of) text file(s) looking like this:
>>
>> For example, for the French Region Burgundy:
>>
>> > alt="Bourgogne"
>> coords="208,121,211,115,221,113,224,115,225,120,229,122,232,128,251,125,255,
>> 130,256,136,266,138,268,148,267,154,263,160,267,168,267,180,262,
>> 175,256,178,254,184,248,184,243,187,237,187,232,185,234,181,227,
>> 171,216,171,212,166,211,155,208,149,208,135,211,132,213,125,208,
>> 121">
>
> It is not clear (to me) from your example what kind of file this is.
> Maybe XML, it does not look like GML. readOGR() in the rgdal package
> may be a better route to explore, but you need to determine what file
> structure is first.
>
>> any idea welcome,
>>
>> sylvain
>>
>> (If anayone is interested with that type of data, they're available at
>> the INSEE website
>
> I can not easily find an example on this site. Perhaps you could
> provide a direct link to the file. Lastly, I suspect that the
> r-sig-geo mailing list would get you some better answers.
>
> Michael
>
> --
> Michael Denslow
>
> I.W. Carpenter Jr. Herbarium [BOON]
> Department of Biology
> Appalachian State University
> Boone, North Carolina U.S.A.
> -- AND --
> Communications Manager
> Southeast Regional Network of Expertise and Collections
> sernec.org
>
> 36.214177, -81.681480 +/- 3103 meters
>



Re: [R] how to import map data (maptools?) from a html set of 'coords'

2010-03-03 Thread sylvain willart
SOLVED,

example from the "Nord-Pas-de-Calais" region:


v <- 
c(237,55,236,58,229,57,214,57,203,55,197,54,187,48,179,46,179,35,180,31,184,26,197,23,201,24,202,31,207,34,213,31,216,37,219,41,228,46,234,47,237,55)

seqx <- seq(1,length(v),by=2)
seqy <- seq(2,length(v),by=2)

vx <- c()
for (j in seqx) {
vx <- c(vx,v[j])
  }
vy <- c()
for (j in seqy) {
vy <- c(vy,v[j])
  }
plot(vx,-vy)
polygon(vx,-vy,border="red")
####
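For what it's worth, the x/y de-interleaving above can also be done without loops; a sketch of the same computation using matrix() (same v as in the post):

```r
## Same interleaved x1,y1,x2,y2,... vector as in the post
v <- c(237,55,236,58,229,57,214,57,203,55,197,54,187,48,179,46,179,35,
       180,31,184,26,197,23,201,24,202,31,207,34,213,31,216,37,219,41,
       228,46,234,47,237,55)

m <- matrix(v, ncol = 2, byrow = TRUE)  # column 1 = x, column 2 = y
plot(m[, 1], -m[, 2])                   # minus sign: image y grows downwards
polygon(m[, 1], -m[, 2], border = "red")
```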

thanks for your tips,

Sylvain
Now, I just have to read it using maptools...

2010/3/3 sylvain willart :
> Hi
> thanks for your reply,
> I'll try to better explain my request...
> the data do not come from a file with a specific extension, this is
> just some lines I copied pasted from a source html file
>
> The web page is:
> http://www.insee.fr/fr/ppp/bases-de-donnees/donnees-detaillees/duicq/accueil.asp
> it displays an (interactive) map of France with all the regions
>
> to access the source: edit/source , or Ctrl+U in a web browser
>
> By the middle of the html source file, there is an html object called
> map ( ... ) with a set of coordinates representing the
> polygons of each region,
>
> These coordinates are just location of points: x1,y1,x2,y2,x3,y3...
> that draw polygons. They are not proper longitude or latitude and
> their "origine" is just the corner of the image the html file
> generates...
>
> I am aware those are not "real" geographic data (That's why I didn't
> post my question to sig-geo, it looks more like a problem of
> graphics), but these are the coordinates one need to "draw" a map (and
> eventually import it to a more specific package like spatstat)
>
> So, what I would like to do is: using those coordinates to draw such a
> map, and eventually use that map for distance or area calculus (which
> do not need to be extremely precise...)
>
> sylvain
>
>
> 2010/3/3 Michael Denslow :
>> Hi Sylvian,
>>
>>
>> On Tue, Mar 2, 2010 at 1:15 PM, sylvain willart
>>  wrote:
>>> Dear R users,
>>>
>>> I would like to draw map and import it in maptools/spatstat packages.
>>>
>>> The 'raw data' I have come from a web page (...) and are
>>> basically a list of coordinates of a polygon.
>>>
>>> I would like to know how to import them in R; I checked the maptools
>>> packages, but all the examples use existing .dbf files.
>>>
>>> I just have a (serie of) text file(s) looking like this:
>>>
>>> For example, for the French Region Burgundy:
>>>
>>> >> alt="Bourgogne"
>>> coords="208,121,211,115,221,113,224,115,225,120,229,122,232,128,251,125,255,
>>> 130,256,136,266,138,268,148,267,154,263,160,267,168,267,180,262,
>>> 175,256,178,254,184,248,184,243,187,237,187,232,185,234,181,227,
>>> 171,216,171,212,166,211,155,208,149,208,135,211,132,213,125,208,
>>> 121">
>>
>> It is not clear (to me) from your example what kind of file this is.
>> Maybe XML, it does not look like GML. readOGR() in the rgdal package
>> may be a better route to explore, but you need to determine what file
>> structure is first.
>>
>>> any idea welcome,
>>>
>>> sylvain
>>>
>>> (If anayone is interested with that type of data, they're available at
>>> the INSEE website
>>
>> I can not easily find an example on this site. Perhaps you could
>> provide a direct link to the file. Lastly, I suspect that the
>> r-sig-geo mailing list would get you some better answers.
>>
>> Michael
>>
>> --
>> Michael Denslow
>>
>> I.W. Carpenter Jr. Herbarium [BOON]
>> Department of Biology
>> Appalachian State University
>> Boone, North Carolina U.S.A.
>> -- AND --
>> Communications Manager
>> Southeast Regional Network of Expertise and Collections
>> sernec.org
>>
>> 36.214177, -81.681480 +/- 3103 meters
>>
>



[R] black cluster in salt and pepper image

2010-03-08 Thread Sylvain Sardy

Hi,

On a lattice, I have binary 0/1 data. 1s are rare and may form clusters.

I would like to know the size/length of the largest cluster. Any help
warmly welcome,

Sylvain.
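One way to get the largest cluster size is connected-component labeling; contributed packages offer this too (e.g. raster's clump()), but a self-contained base-R sketch, assuming 4-connectivity on a 0/1 matrix (not from the original thread), is:

```r
## Largest 4-connected cluster of 1s on a binary matrix, via iterative
## flood fill (base R only).
largest_cluster <- function(mat) {
  lab <- matrix(0L, nrow(mat), ncol(mat))  # component label per cell
  cur <- 0L
  for (i in seq_len(nrow(mat))) {
    for (j in seq_len(ncol(mat))) {
      if (mat[i, j] == 1 && lab[i, j] == 0L) {
        cur <- cur + 1L
        stack <- list(c(i, j))
        while (length(stack) > 0) {
          p <- stack[[length(stack)]]
          stack[[length(stack)]] <- NULL
          if (p[1] < 1 || p[1] > nrow(mat) || p[2] < 1 || p[2] > ncol(mat)) next
          if (mat[p[1], p[2]] != 1 || lab[p[1], p[2]] != 0L) next
          lab[p[1], p[2]] <- cur
          stack <- c(stack, list(p + c(1, 0)), list(p - c(1, 0)),
                     list(p + c(0, 1)), list(p - c(0, 1)))
        }
      }
    }
  }
  if (cur == 0L) return(0L)
  max(tabulate(lab[lab > 0]))
}

m <- rbind(c(1, 1, 0, 0),
           c(0, 1, 0, 1),
           c(0, 0, 0, 1),
           c(1, 0, 0, 1))
largest_cluster(m)  # the largest cluster here has 3 cells
```

For 8-connectivity, the diagonal neighbours would be pushed onto the stack as well.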



[R] Counting non-empty levels of a factor

2009-11-08 Thread sylvain willart
Hi everyone,

I'm struggling with a little problem for a while, and I'm wondering if
anyone could help...

I have a dataset (from retailing industry) that indicates which brands
are present in a panel of 500 stores,

store , brand
1 , B1
1 , B2
1 , B3
2 , B1
2 , B3
3 , B2
3 , B3
3 , B4

I would like to know how many brands are present in each store,

I tried:
result <- aggregate(MyData$brand , by=list(MyData$store) , nlevels)

but I got:
Group.1 x
1 , 4
2 , 4
3 , 4

which is not exactly the result I expected.
I would like to get something like:
Group.1 x
1 , 3
2 , 2
3 , 3

Looking around, I found I can delete empty levels of a factor using:
problem.factor <- problem.factor[, drop = TRUE]
But this solution isn't handy for me, as I have many stores and would have
to subset my data for each store before dropping the empty levels.

Nor can I simply count the lines for each store, because the same brand
can appear several times in each store (several products for the same
brand, and/or several weeks of observation).

I used to do this calculation using SAS with:
proc freq data = MyData noprint ; by store ;
 tables  brand / out = result ;
run ;
(the cool thing was I got a database I can merge with MyData)

any idea for doing that in R ?

Thanks in advance,

Kind Regards,

Sylvain Willart,
PhD Marketing,
IAE Lille, France
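For the record, a compact way to count distinct brands per store — robust to repeated rows — is tapply() with unique(); a sketch using the toy data from the post:

```r
MyData <- data.frame(store = c(1, 1, 1, 2, 2, 3, 3, 3),
                     brand = c("B1", "B2", "B3", "B1", "B3", "B2", "B3", "B4"))

## Distinct brands per store (duplicates and empty factor levels don't matter)
res <- tapply(MyData$brand, MyData$store, function(x) length(unique(x)))
res  # store 1: 3, store 2: 2, store 3: 3
```

The result can be turned into a data frame and merged back onto MyData, mirroring the SAS proc freq output mentioned in the post.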



Re: [R] Counting non-empty levels of a factor

2009-11-08 Thread sylvain willart
Thanks a lot for those solutions.
Both work great, and they do slightly different (but both very
interesting) things.
Moreover, I learned about the length() function ... one more to add to
my personal cheat sheet.
Kind Regards

2009/11/8 David Winsemius :
>
> On Nov 8, 2009, at 9:11 AM, David Winsemius wrote:
>
>>
>> On Nov 8, 2009, at 8:38 AM, sylvain willart wrote:
>>
>>> Hi everyone,
>>>
>>> I'm struggling with a little problem for a while, and I'm wondering if
>>> anyone could help...
>>>
>>> I have a dataset (from retailing industry) that indicates which brands
>>> are present in a panel of 500 stores,
>>>
>>> store , brand
>>> 1 , B1
>>> 1 , B2
>>> 1 , B3
>>> 2 , B1
>>> 2 , B3
>>> 3 , B2
>>> 3 , B3
>>> 3 , B4
>>>
>>> I would like to know how many brands are present in each store,
>>>
>>> I tried:
>>> result <- aggregate(MyData$brand , by=list(MyData$store) , nlevels)
>>>
>>> but I got:
>>> Group.1 x
>>> 1 , 4
>>> 2 , 4
>>> 3 , 4
>>>
>>> which is not exactly the result I expected
>>> I would like to get sthg like:
>>> Group.1 x
>>> 1 , 3
>>> 2 , 2
>>> 3 , 3
>>
>> Try:
>>
>> result <- aggregate(MyData$brand , by=list(MyData$store) , length)
>>
>> Quick, easy and generalizes to other situations. The factor levels got
>> carried along identically, but length counts the number of elements in the
>> list returned by tapply.
>
> Which may not have been what you asked for, as this demonstrates. You
> probably want the second solution:
> mydata2 <- rbind(MyData, MyData)
>> result <- aggregate(mydata2$brand , by=list(mydata2$store) , length)
>> result
>  Group.1 x
> 1       1 6
> 2       2 4
> 3       3 6
>
>> result <- aggregate(mydata2$brand , by=list(mydata2$store) , function(x)
>> nlevels(factor(x)))
>> result
>  Group.1 x
> 1       1 3
> 2       2 2
> 3       3 3
>
>>>
>>> Looking around, I found I can delete empty levels of factor using:
>>> problem.factor <- problem.factor[,drop=TRUE]
>>
>> If you reapply the function, factor, you get the same result. So you could
>> have done this:
>>
>> > result <- aggregate(MyData$brand , by=list(MyData$store) , function(x)
>> > nlevels(factor(x)))
>> > result
>>  Group.1 x
>> 1       1 3
>> 2       2 2
>> 3       3 3
>>
>>
>>
>>> But this solution isn't handy for me as I have many stores and should
>>> make a subset of my data for each store before dropping empty factor
>>>
>>> I can't either counting the line for each store (N), because the same
>>> brand can appear several times in each store (several products for the
>>> same brand, and/or several weeks of observation)
>>>
>>> I used to do this calculation using SAS with:
>>> proc freq data = MyData noprint ; by store ;
>>> tables  brand / out = result ;
>>> run ;
>>> (the cool thing was I got a database I can merge with MyData)
>>>
>>> any idea for doing that in R ?
>>>
>>> Thanks in advance,
>>>
>>> King Regards,
>>>
>>> Sylvain Willart,
>>> PhD Marketing,
>>> IAE Lille, France
>>>
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>> David Winsemius, MD
>> Heritage Laboratories
>> West Hartford, CT
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> David Winsemius, MD
> Heritage Laboratories
> West Hartford, CT
>
>



[R] length of a density curve (or any curve)

2009-12-04 Thread sylvain willart
Hello R users,

When I type

d <- density(MyData$x)

I obtain a density object I can plot,

But I wonder if there is a way to easily compute the length of the
density curve?

(I imagine I could compute the distances between the 512 equally
spaced points using their x and y, but is there a smarter way?)

Regards,

SW



Re: [R] length of a density curve (or any curve)

2009-12-04 Thread sylvain willart
Yes, sure (and I just did it again),
but I can't see an answer... did I miss something?

regards,

SW

2009/12/4 milton ruser :
> hi Sylvain,
>
> did you try ?density
>
> regards
>
> milton
>
> On Fri, Dec 4, 2009 at 7:19 AM, sylvain willart 
> wrote:
>>
>> Hello R users,
>>
>> When I type
>>
>> d <- density(MyData$x)
>>
>> I obtain a density object I can plot,
>>
>> But I wonder if there is a way to easily compute the length of the
>> density curve ?
>>
>> ( I imagine I could compute the distances between the 512 equally
>> spaced points using their x and y, but does it exist a smarter way ?)
>>
>> Regards,
>>
>> SW
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>



Re: [R] length of a density curve (or any curve)

2009-12-04 Thread sylvain willart
Thanks for your answer,

@Dennis Murphy: no, I don't know the functional form; this is purely
empirical data.

@Ted Harding: thank you for your lines of code; they are indeed a
pretty smart way...

SW
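Ted's line-segment suggestion (quoted below) can be made self-contained and checked against a finer grid; a sketch on simulated data:

```r
set.seed(42)
x <- rnorm(1000)

## Polyline length of the density curve: sum of the segment lengths
curve_len <- function(d) sum(sqrt(diff(d$x)^2 + diff(d$y)^2))

d_lo <- density(x, n = 512)    # the default grid size
d_hi <- density(x, n = 8192)   # a finer grid for a more accurate length
c(lo = curve_len(d_lo), hi = curve_len(d_hi))
```

The two values should agree closely, confirming that the default n = 512 is usually fine for this purpose.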

2009/12/4 Ted Harding :
> True enough -- ?density does not address the issue of computing
> the length of the curve!
>
> One simple way of implementing the idea you first thought of
> would be on the following lines:
>
>  d <- density(MyData$x)
>  sum(sqrt(diff(d$x)^2 + diff(d$y)^2))
>
> which simply sums the lengths of the line-segments. You would
> get a better approximation to the ideal length by increasing
> the value of 'n' in the call to density() (perhaps as a separate
> calculation, since a relatively small value of 'n' is likely
> to be adequate for plotting, but possibly inadequate for the
> accurate computation of the length).
>
> Hoping this helps,
> Ted.
>
> On 04-Dec-09 12:41:22, sylvain willart wrote:
>> Yes, sure (and I just did it again)
>> but I can't see an answer... did I miss sthg ?
>>
>> regards,
>>
>> SW
>>
>> 2009/12/4 milton ruser :
>>> hi Sylvain,
>>>
>>> did you try ?density
>>>
>>> regards
>>>
>>> milton
>>>
>>> On Fri, Dec 4, 2009 at 7:19 AM, sylvain willart
>>> 
>>> wrote:
>>>>
>>>> Hello R users,
>>>>
>>>> When I type
>>>>
>>>> d <- density(MyData$x)
>>>>
>>>> I obtain a density object I can plot,
>>>>
>>>> But I wonder if there is a way to easily compute the length of the
>>>> density curve ?
>>>>
>>>> ( I imagine I could compute the distances between the 512 equally
>>>> spaced points using their x and y, but does it exist a smarter way ?)
>>>>
>>>> Regards,
>>>>
>>>> SW
>>>>
>>>> __
>>>> R-help@r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide
>>>> http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>>
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> 
> E-Mail: (Ted Harding) 
> Fax-to-email: +44 (0)870 094 0861
> Date: 04-Dec-09                                       Time: 13:02:27
> -- XFMail --
>



[R] Reshaping Data for bi-partite Network Analysis

2013-04-13 Thread sylvain willart
Hello

I have a dataset of people spending time in places. But most people don't
hang out in all the places.

it looks like:

> Input<-data.frame(people=c("Marc","Marc","Joe","Joe","Joe","Mary"),
+  place=c("school","home","home","sport","beach","school"),
+  time=c(2,4,3,1,5,4))
> Input
  people  place time
1   Marc school    2
2   Marc   home    4
3    Joe   home    3
4    Joe  sport    1
5    Joe  beach    5
6   Mary school    4

In order to import it within R's igraph, I must use graph.incidence(), but
the data needs to be formatted that way:

>
Output<-data.frame(school=c(2,0,4),home=c(4,3,0),sport=c(0,1,0),beach=c(0,5,0),
+row.names=c("Marc","Joe","Mary"))
> Output
     school home sport beach
Marc      2    4     0     0
Joe       0    3     1     5
Mary      4    0     0     0

The dataset is fairly large (a couple of hundred people and places), and I
would very much appreciate it if someone could point me to a routine or
function that could transform my Input dataset into the required Output.

Thank you very much in advance

Regards

Sylvain

PS: sorry for cross-posting this on statnet and then on R help list, but I
received a message from statnet pointing out the question was more related
to general data management than actual network analysis. Which is true
indeed...




Re: [R] Reshaping Data for bi-partite Network Analysis [SOLVED]

2013-04-13 Thread sylvain willart
Wow!
So many thanks, Arun and Rui.
Works like a charm; problem solved.


2013/4/13 arun 

> Hi,
> Try this;
> library(reshape2)
> res<-dcast(Input,people~place,value.var="time")
> res[is.na(res)]<-0
>  res
> #  people beach home school sport
> #1    Joe     5    3      0     1
> #2   Marc     0    4      2     0
> #3   Mary     0    0      4     0
>
> #or
>  xtabs(time~.,Input)
> #        place
> # people  beach home school sport
> #   Joe       5    3      0     1
> #   Marc      0    4      2     0
> #   Mary      0    0      4     0
>
> A.K.
>
>
>
> 
>  From: sylvain willart 
> To: r-help ; sylvain willart <
> sylvain.will...@gmail.com>
> Sent: Saturday, April 13, 2013 5:03 PM
> Subject: [R] Reshaping Data for bi-partite Network Analysis
>
>
> Hello
>
> I have a dataset of people spending time in places. But most people don't
> hang out in all the places.
>
> it looks like:
>
> > Input<-data.frame(people=c("Marc","Marc","Joe","Joe","Joe","Mary"),
> +  place=c("school","home","home","sport","beach","school"),
> +  time=c(2,4,3,1,5,4))
> > Input
>   people  place time
> 1   Marc school    2
> 2   Marc   home    4
> 3    Joe   home    3
> 4    Joe  sport    1
> 5    Joe  beach    5
> 6   Mary school    4
>
> In order to import it within R's igraph, I must use graph.incidence(), but
> the data needs to be formatted that way:
>
> >
>
> Output<-data.frame(school=c(2,0,4),home=c(4,3,0),sport=c(0,1,0),beach=c(0,5,0),
> +row.names=c("Marc","Joe","Mary"))
> > Output
>      school home sport beach
> Marc      2    4     0     0
> Joe       0    3     1     5
> Mary      4    0     0     0
>
> The Dataset is fairly large (couple hundreds of people and places), and I
> would very much appreciate if someone could point me to a routine or
> function that could transform my Input dataset to the required Output,
>
> Thank you very much in advance
>
> Regards
>
> Sylvain
>
> PS: sorry for cross-posting this on statnet and then on R help list, but I
> received a message from statnet pointing out the question was more related
> to general data management than actual network analysis. Which is true
> indeed...
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>




[R] Which analysis for a set of dummy variables alone ?

2014-02-04 Thread sylvain willart
Dear R-users,

I have a dataset I would like to analyze and plot.
It consists of 100 dummy variables (0/1) for about 2,000,000 observations.
There is absolutely no quantitative variable, nor anything I could use as
a dependent variable for a regression analysis.

Actually, the dataset represents the patronage of 2 million customers
across 100 stores. A variable equals 1 if the consumer goes to the store,
0 if he doesn't, with no further information.

As the variables look like factors (0/1), I thought I could go for a
Multiple Correspondence Analysis (MCA). However, the resulting plot shows
two points for each variable (one for 1 and one for 0), which is not
easily interpretable. (Or is there a method for not plotting certain
points in MCA?)

I also tried to consider my dataset as a bipartite network
(consumer-store). However, the plot is not really insightful, as I am
especially looking for links between stores (along the lines of "if a
consumer goes to that store, he probably also goes to this one...").

So, I have a simple question: which method would you choose for computing
and plotting the links between a set of dummy variables?
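One simple option is to skip MCA and compute the store-to-store associations directly: with the data as an n-by-100 0/1 matrix, crossprod() gives co-visit counts, which can then be turned into a similarity such as Jaccard. A sketch on made-up data (5 toy stores instead of 100; all names are placeholders):

```r
## Made-up data: X is a customers-by-stores 0/1 matrix
set.seed(1)
X <- matrix(rbinom(200 * 5, 1, 0.3), nrow = 200,
            dimnames = list(NULL, paste0("S", 1:5)))  # 200 customers, 5 stores

co  <- crossprod(X)  # co[i, j] = number of customers visiting both stores i and j
jac <- co / (outer(diag(co), diag(co), "+") - co)  # Jaccard similarity in [0, 1]
round(jac, 2)
```

The co (or jac) matrix can then be fed to igraph::graph_from_adjacency_matrix(..., weighted = TRUE) to draw a one-mode store network, which addresses "if a consumer goes to that store, he probably also goes to this one" more directly than MCA does.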

Thanks in advance

Sylvain
PhD Marketing
Associate Professor University of Lille - FR




[R] Extract part of a list based on the value of one of the subsubscript (weird, but with a reproducible example for clarity sake)

2014-03-17 Thread sylvain willart
Dear R (list)-users

I'm trying to extract part of a list based on the value of one of its
sub-elements.

As those things are usually quite complicated to explain, let me provide a
simple reproducible example that closely matches my problem (although my
actual list has a few thousand elements):

## EXAMPLE
MyFullList<-list()
v1<-rep(c("A","B","C","D","E"),2)
v2<-c(rep(1,5),rep(2,5))
for (i in 1:10){
  MyFullList[[i]]<-density(runif(10,0,1))
  MyFullList[[i]][8]<-i
  MyFullList[[i]][9]<-v1[i]
  MyFullList[[i]][10]<-v2[i]
}
## end example

Now, my problem is that I would like to extract, into a new list, the part
of the full list whose 9th sub-element has a given value, let's say "B".
This new list has to include the S3 density objects (stored in the first 7
sub-elements).
Here's what I tried (and the errors I got)

### TRIALS
MyList_v1_B<-MyFullList[MyFullList[[]][9]=="B"]
# error invalid subscript type 'symbol'
MyList_v1_B<-MyFullList[MyFullList[][9]=="B"]
# no errors, but returns an empty list ???
MyList_v1_B<-MyFullList[MyFullList[,9]=="B"]
# error incorrect number of dimensions
### end trials (for now)

Obviously, I'm missing something,
And I would appreciate any clue to help me perform this task

# Here is my R.version info, although I'm not sure it's relevant here
> R.version
platform   x86_64-unknown-linux-gnu
arch   x86_64
os linux-gnu
system x86_64, linux-gnu
status
major  2
minor  15.2
year   2012
month  10
day26
svn rev61015
language   R
version.string R version 2.15.2 (2012-10-26)
nickname   Trick or Treat

thanks
Sylvain Willart




Re: [R] Extract part of a list based on the value of one of the subsubscript (weird, but with a reproducible example for clarity sake)

2014-03-17 Thread sylvain willart
It works like a charm,
Plus, this method (a logical vector for list extraction) opens a wide range of
possibilities for me,
thanks a million Duncan


2014-03-17 12:55 GMT+01:00 Duncan Murdoch :

> On 14-03-17 7:27 AM, sylvain willart wrote:
>
>> Dear R (list)-users
>>
>> I'm trying to extract part of a list based on the value of one of the
>> subsubscript element
>>
>> As those things are usually quite complicated to explain, let me provide a
>> simple reproducible example that closely matches my problem (although my
>> actual list has a few thousands elements):
>>
>> ## EXAMPLE
>> MyFullList<-list()
>> v1<-rep(c("A","B","C","D","E"),2)
>> v2<-c(rep(1,5),rep(2,5))
>> for (i in 1:10){
>>MyFullList[[i]]<-density(runif(10,0,1))
>>MyFullList[[i]][8]<-i
>>MyFullList[[i]][9]<-v1[i]
>>MyFullList[[i]][10]<-v2[i]
>> }
>> ## end example
>>
>> Now, my problem is that I would like to extract, in a new list, a part of
>> the full list based on the value of it's 9th subscript, let say, "B". This
>> new list has to include S3-densities objects (stored in the first 7
>> sub-elements)
>>
>
> You'll need to do this in two steps, you can't do it using only indexing.
>  The problem is that logical indexing requires a logical vector, and it's
> not easy to get one of those when you are starting with a list.  So here's
> how to do it:
>
> First create a new character vector containing the 9th subscript of each
> element, e.g.
>
> ninths <- sapply(MyFullList, function(x) x[[9]])
>
> You use sapply so that the result is coerced to a character vector, and
> x[[9]] rather than x[9] so that each result is a scalar character.
>
> Do your test on that, and use the result to index the full list:
>
> MyFullList[ninths == "B"]
>
> This returns a list containing the cases where the test evaluates to TRUE.
>
> Duncan Murdoch
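Putting Duncan's two steps together into a self-contained sketch, using the thread's own reproducible example:

```r
# Rebuild the example list from the original post
MyFullList <- list()
v1 <- rep(c("A", "B", "C", "D", "E"), 2)
v2 <- c(rep(1, 5), rep(2, 5))
for (i in 1:10) {
  MyFullList[[i]] <- density(runif(10, 0, 1))
  MyFullList[[i]][8]  <- i
  MyFullList[[i]][9]  <- v1[i]
  MyFullList[[i]][10] <- v2[i]
}

# Step 1: sapply() coerces the 9th sub-element of each entry
# into a plain character vector
ninths <- sapply(MyFullList, function(x) x[[9]])

# Step 2: a logical comparison on that vector indexes the list
MyList_v1_B <- MyFullList[ninths == "B"]

length(MyList_v1_B)  # 2, since v1 is "B" at positions 2 and 7
```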




[R] [R-pkgs] ODB : connecting OpenOffice Base with R

2011-06-20 Thread Sylvain Mareschal
The recently released "ODB" package was developed to manage HSQL 
databases embedded in .odb files (the default when creating a database 
with OpenOffice Base) via R.




BRIEFLY

The goal of this package is to access OpenOffice databases via R, to 
process the data stored in them, or to automate building them from scratch 
or updating them.


The package provides five main functions:
- odb.create, to create a new .odb file from a template.
- odb.open, to produce an "odb" connection to a temporary copy of the 
.odb file.

- odb.close, to close the connection and update the .odb file.
- odb.read, to import data from the database to R via "SELECT" SQL 
queries built by the useR.
- odb.write, to update the database via "INSERT" or "CREATE" SQL queries 
built by the useR.


A few other functions are also provided to manage .odb specificities such 
as comments on table fields and stored queries. Some wrappers are also 
provided to insert a data.frame directly into a database table without 
writing the SQL query, to list the table names and fields, or to export the 
database to a .sql file.


Other wrappers may be added in future versions to help users not 
familiar with the SQL language.




TYPICAL USE

connection <- odb.open("file.odb")
data <- odb.read(connection, "SELECT * FROM table WHERE id < 15")
odb.write(connection, "UPDATE table SET field='peach' WHERE id = 5")
odb.close(connection)



TECHNICAL CONSIDERATIONS

.odb files, like any other OpenDocument files, are ZIP archives containing 
the HSQL files. To establish the connection, the .odb file is unzipped 
via the "zip" shell command if available, and the connection is made via 
the RJDBC interface. The "odb" object produced inherits from the 
"DBIConnection" class, so all functions provided in the DBI package 
may be used directly on it to manage the database. The odb.read and 
odb.write functions are only wrappers around such DBI functions, handling 
frequent issues such as charset and factor considerations.


Note that the database files are copied to a temporary directory, so any 
update made to the database is not written to the .odb file until the 
odb.close call; simultaneous access to the same database (via R and 
OpenOffice) should therefore be avoided.




Any suggestion or comment may be sent back to this email address.

Sylvain Mareschal

___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages



[R] conflict when passing an argument to a function within an other function

2011-11-27 Thread Sylvain Antoniazza
Dear all,

I have a problem with this piece of code which I don't really see how to
overcome and I think it might also be of more general interest:

map(xlim=c(-10,40),ylim=c(30,60)
,mar=rep(0,4)+.1
,fill=T
,col="darkseagreen"
,bg="skyblue"
)

The idea is to use the map function from the maps package to draw a map of
Europe. By default, with the argument fill=T, the polygons are drawn with
borders, and I want them without borders. This is where the problems start.

The polygon function used inside map has a border argument that can
deal with that issue; the problem is that this argument cannot be passed to
map directly, because map also has a border argument for another purpose. (I
tried with the density argument for polygon passed through map, and there it
works, because map doesn't have a density argument itself.)

Is there any way to pass a border argument to the polygon function within
map?

Thanks in advance for your help.

Cheers, Sylvain.

PS: for the moment I use a trick with the fg argument passed to par, which
lets me set a colour for the borders of the polygons.

-- 
*Sylvain Antoniazza*
PhD student & assistant
University of Lausanne, Switzerland

http://www.unil.ch/dee/page55408_fr.html
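For the record, a minimal sketch of the fg workaround mentioned in the PS (assuming the maps package is installed): setting par(fg = ...) to the fill colour makes the polygon borders blend into the fill.

```r
library(maps)

# Set the foreground colour (used for polygon borders) to the fill
# colour, so the borders become invisible against the polygons
old_par <- par(fg = "darkseagreen")
map(xlim = c(-10, 40), ylim = c(30, 60),
    mar = rep(0, 4) + .1,
    fill = TRUE, col = "darkseagreen", bg = "skyblue")
par(old_par)  # restore the previous graphics parameters
```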




[R] import contingency table

2012-05-28 Thread sylvain willart
hello everyone,

I often work with contingency tables that I create from data.frames (with
the table() function),

but a friend sent me an Excel sheet which *already is* a contingency
table (just a simple two-way table!)

Any clue on how to import it into R (keeping the row names and column names)?

Every tutorial I come across only mentions the table transformation, but
never the import of such data.

I only found read.ftable() but couldn't get it to work.

any help appreciated

Sylv



Re: [R] import contingency table

2012-05-28 Thread sylvain willart
Thanks Rui,
but my problem is not reading an xls file (I already converted it to CSV),
but rather reading a contingency table into R and telling R it
actually is a contingency table, not a data.frame...

file below, if it helps...

Sylv

,AUC,Alin,BLG,BrDep,CRF,CrfMkt,CAS,Casto,Confo,ElecDep,Geant,Halle,KIA,LerMrl,Match,METRO,MNP,SimpMkt
Strasbg,4,0,0,2,3,0,0,6,2,1,2,1,0,2,3,2,3,6
Paris,0,0,0,0,10,1,5,2,4,0,5,1,0,0,0,3,7,7
Brest,3,0,0,2,8,0,5,9,4,0,5,0,2,0,0,0,0,0
Lyon,0,0,0,1,4,2,8,2,3,0,5,1,0,0,0,0,4,5
Nice,3,0,0,0,3,2,5,1,2,0,2,0,0,0,0,2,2,0
Limg,3,0,0,1,4,2,3,0,0,0,3,0,0,0,0,1,0,4
Toulse,0,0,0,1,5,4,3,2,2,0,5,0,0,0,0,2,1,5
Nancy,0,0,0,2,3,1,1,8,2,0,2,0,1,0,2,3,2,4
Lille,0,0,0,0,6,8,0,0,2,2,3,1,0,1,5,1,2,6
Mtplier,0,0,0,0,7,3,4,1,0,1,4,0,0,0,0,1,6,3
Aix,0,4,0,0,9,2,5,1,0,0,5,0,0,0,0,1,7,5
Senart,0,0,0,1,10,3,5,0,5,0,6,0,0,0,0,0,3,3
Grenbl,0,0,0,0,3,2,5,3,1,0,5,0,0,0,0,0,0,4
Angers,0,0,0,2,8,0,4,0,4,0,4,0,2,0,0,0,3,3
Brdx,3,0,0,2,4,3,3,0,1,0,5,0,2,0,0,1,3,4
Dijon,0,0,0,1,8,2,5,3,4,0,5,0,0,0,0,2,1,0
Rouen,3,0,0,1,2,0,2,0,3,1,2,1,2,0,0,0,0,6

2012/5/28 Rui Barradas :
> Hello,
>
> Try function read.xls in library gdata
> Also, a good way of avoiding such doubts is
>
> library(sos)
> findFn('xls')
>
> It returns read.xls as the first line.
>
> Hope this helps,
>
> Rui Barradas
>



Re: [R] import contingency table

2012-05-28 Thread sylvain willart
no,
the problem is that the rows in my file do not correspond to individuals,
but are variables, just like the columns;
my file is already a contingency table, with each cell being a frequency.

here is a sample of it:
***
,AUC,Alin,BLG,BrDep,CRF,CMkt,CAS,Casto,Confo,ElDep,Geant,Halle,KIA,LMrl,Match,MET,MNP,SM,
Strasbg,4,0,0,2,3,0,0,6,2,1,2,1,0,2,3,2,3,6
Paris,0,0,0,0,10,1,5,2,4,0,5,1,0,0,0,3,7,7
Brest,3,0,0,2,8,0,5,9,4,0,5,0,2,0,0,0,0,0
Lyon,0,0,0,1,4,2,8,2,3,0,5,1,0,0,0,0,4,5
Nice,3,0,0,0,3,2,5,1,2,0,2,0,0,0,0,2,2,0
Limg,3,0,0,1,4,2,3,0,0,0,3,0,0,0,0,1,0,4
Toulse,0,0,0,1,5,4,3,2,2,0,5,0,0,0,0,2,1,5
Nancy,0,0,0,2,3,1,1,8,2,0,2,0,1,0,2,3,2,4
Lille,0,0,0,0,6,8,0,0,2,2,3,1,0,1,5,1,2,6
Mtplier,0,0,0,0,7,3,4,1,0,1,4,0,0,0,0,1,6,3
Aix,0,4,0,0,9,2,5,1,0,0,5,0,0,0,0,1,7,5
Senart,0,0,0,1,10,3,5,0,5,0,6,0,0,0,0,0,3,3
Grenbl,0,0,0,0,3,2,5,3,1,0,5,0,0,0,0,0,0,4
Angers,0,0,0,2,8,0,4,0,4,0,4,0,2,0,0,0,3,3
Brdx,3,0,0,2,4,3,3,0,1,0,5,0,2,0,0,1,3,4
Dijon,0,0,0,1,8,2,5,3,4,0,5,0,0,0,0,2,1,0
Rouen,3,0,0,1,2,0,2,0,3,1,2,1,2,0,0,0,0,6
**

I know how to read it into a df or a matrix,
and if it were a df or matrix, I could turn it into a table,
but this is already a contingency table.

For example, the first number, "4", is the number of people being in
city "Strasbg" (first row) and working at "AUC" (first column) (this
is Auchan actually).

I do not have the original file where each row would be an individual;
I just have that flat file, with variables on the rows, variables
on the columns, and frequencies in each cell.
And I wonder how to read it into R, telling it that this is a
frequency/contingency table ...

I can't believe there is no way of getting around it (or maybe the sun
struck too heavily on my head) ...

Sylv

2012/5/28 Nicolas Iderhoff :
> Wouldn't it work for you to read the data into a matrix/df so you can 
> transform it into a table()?
> if you're worried about the names of the cols/rows, you can always do
> read.table(..)[,1] to get the row names for example and put them into the 
> matrix with rownames()
>
> Am 28.05.2012 um 13:49 schrieb sylvain willart:
>
>> there is no indication in ?table on how to read in a contingency
>> table (only on how to transform a data frame or matrix into a
>> contingency table);
>> when I read my file with read.table() and run is.table() I get
>> "FALSE" for an answer, and the function as.table() leads to an error
>> message.
>> Sylv
>>
>>
>> 2012/5/28 Nicolas Iderhoff :
>>> Try
>>>
>>> ?table
>>>
>>>

[R] Building C code for R with cygwin [C1]

2008-04-07 Thread sylvain . archenault
Hello,

I would like to build a shared lib for R using the cygwin environment. After 
installing the needed tools (mainly perl, make and compilers), I ran 
R CMD SHLIB hw.c

But I get the following error :
$ R CMD SHLIB hw.c
c:/PROGRA~1/R/R-26~1.2/src/gnuwin32/MakeDll:82: *** multiple target 
patterns.  Stop.

Googling it didn't give me any hints.

Thanks for your help.
Sylvain Archenault.
*
This message and any attachments (the "message") are con...{{dropped:13}}



Re: [R] Building C code for R with cygwin [NC]

2008-04-07 Thread sylvain . archenault
Ok thanks, I understand what's wrong.





From: [EMAIL PROTECTED], 07/04/08 10:26 AM
To: Sylvain ARCHENAULT/fr/[EMAIL PROTECTED]
Cc: r-help@r-project.org
Subject: Re: [R] Building C code for R with cygwin [C1]






And I guess you installed R for Windows and then tried to do this?

You must either
1) Build R under cygwin as a Unix-alike or
2) Use the Rtools make if building R for Windows.

R for Windows is a Windows program, and cygwin's make does not accept 
Windows' file paths any more (it used to).

On Mon, 7 Apr 2008, [EMAIL PROTECTED] wrote:

> Hello,
>
> I would like to build shared lib for R using cygwin environment. After
> installing needed tools (mainly perl, make and compilers), I ran
> R CMD SHLIB hw.c
>
> But I get the following error :
> $ R CMD SHLIB hw.c
> c:/PROGRA~1/R/R-26~1.2/src/gnuwin32/MakeDll:82: *** multiple target
> patterns.  Stop.
>
> Google it didn't give me any hints.

But the 'R Installation and Administration Manual' told you firmly that 
this is not going to work.

> Thanks for your help.
> Sylvain Archenault.
-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK  Fax:  +44 1865 272595




[R] Problem with garchFit [C1]

2008-04-07 Thread sylvain . archenault
Hello,

I have problems with garchFit. When trying to fit a time series, I got the 
following error:

> garchFit(~garch(1,1),data=x,trace=F)
Error in solve.default(fit$hessian) : 
  Lapack routine dgesv: system is exactly singular


The method works fine on series generated by garchSim. What could be the 
problem with the time series I use?

Thanks,
Sylvain
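No reply appears in this archive. A frequent suggestion for this symptom (an assumption on my part, not from the thread) is to check the scale of the data: garchFit's optimizer can end up with a singular Hessian when the returns are numerically tiny, and rescaling them (e.g. to percentage magnitudes) sometimes avoids it. A minimal sketch, assuming the fGarch package:

```r
library(fGarch)

set.seed(123)
spec <- garchSpec()                       # default GARCH(1,1) specification
x <- as.numeric(garchSim(spec, n = 500))  # stand-in for the poster's series

# Rescaling tiny returns to percentage-like magnitudes is often better
# conditioned for the optimizer than the raw series
fit <- garchFit(~ garch(1, 1), data = 100 * x, trace = FALSE)
coef(fit)
```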



Re: [R] [R-SIG-Finance] how to study the lead and lag relation of two time series?

2009-01-22 Thread Sylvain Barthelemy
Dear Michael,

David Ruelle wrote a very interesting paper on "Recurrence plots of
dynamical systems" that you should read, and I remember simple lead/lag
methods for detecting random or deterministic systems.

I think that you should take a look at this very interesting paper on
"Lead-lag cross-sectional structure and detection of
correlated-anticorrelated regime shifts": http://tinyurl.com/b6cw5m

Regards.

Sylvain

______
Sylvain Barthélémy
Research Director, TAC
Applied Economic & Financial Research
Tel: +33.(0).299.393.140 - Fax: +33.(0).299.393.189
E-mail: ba...@tac-financial.com
www.tac-financial.com | www.sylbarth.com


-Original Message-
From: r-sig-finance-boun...@stat.math.ethz.ch
[mailto:r-sig-finance-boun...@stat.math.ethz.ch] On behalf of Michael
Sent: Thursday 22 January 2009 02:18
To: r-help; r-sig-fina...@stat.math.ethz.ch
Subject: [R-SIG-Finance] how to study the lead and lag relation of two time
series?

Hi all,

Is there a way to study the lead and lag relation of two time series?

Let's say I have two time series, At and Bt. Is there a systematic way
of concluding whether it's A leading B or B leading A and by how much?

Thanks!
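Besides the papers above, a standard first check in base R (offered here as a sketch, not part of the original reply) is the sample cross-correlation function ccf(): the lag at which it peaks estimates who leads whom, and by how much.

```r
# ccf(A, B) at lag k estimates cor(A[t + k], B[t]); a peak at a
# positive lag k therefore means B leads A by k periods
set.seed(1)
B <- as.numeric(arima.sim(list(ar = 0.5), n = 300))
A <- c(rep(0, 3), head(B, -3)) + rnorm(300, sd = 0.1)  # A is B delayed by 3

r <- ccf(A, B, lag.max = 10, plot = FALSE)
r$lag[which.max(r$acf)]  # 3: B leads A by 3 periods
```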

___
r-sig-fina...@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
-- Subscriber-posting only.
-- If you want to post, subscribe first.



[R] step and glm : keep all models

2008-01-24 Thread Sylvain Bertrand
Hi everyone,

I'm running the following command:

> step(mydata.glm, direction="both", trace=T)

which returns the model with the lowest AIC.
Now I'd like to get the list of all the models (or at least the formulas)
that were used in between.

I've noticed the "keep" option, but couldn't find any help on how to use it.

Would anyone here be able to help me with this?

Thank you.

Regards,
Sylvain
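No reply is archived here; for reference, a sketch based on ?step: keep is a function(model, aic) applied to every model visited, and its results are collected in the $keep component of the returned fit.

```r
# Any glm works as a toy example; mtcars ships with R
fit <- glm(mpg ~ wt + hp + qsec, data = mtcars)

out <- step(fit, direction = "both", trace = FALSE,
            keep = function(model, aic) list(formula = formula(model),
                                             aic = aic))

out$keep            # one column per model visited
out$keep["aic", ]   # AIC of each intermediate model
```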




[R] Seasonal adjustment with seas - Error: X-13 has returned a non-zero exist status

2020-09-17 Thread Weber Sylvain (HES)
Dear all,

I am using function "seas" from package "seasonal" for seasonal adjustment of 
time series. I am using the latest version of R (4.0.2, 64-bit).
The following lines provide a small working example:
## EXAMPLE ##
library(seasonal)
m <- seas(AirPassengers)
m

On my private laptop, everything is going smoothly, and I obtain the following 
output:
## OUTPUT ##
Call:
seas(x = AirPassengers)

Coefficients:
  Weekday  Easter[1] AO1951.May  MA-Nonseasonal-01  
 -0.002950.017770.100160.11562  
   MA-Seasonal-12  
  0.49736  

But on my office computer, I receive the following error message after issuing 
" m <- seas(AirPassengers)":
## ERROR ##
Error: X-13 has returned a non-zero exist status, which means that the current 
spec file cannot be processed for an unknown reason.
In addition: Warning message:
In system(cmd, intern = intern, wait = wait | intern, show.output.on.console = 
wait,  :
  running command 'C:\Windows\system32\cmd.exe /c 
"\\hes-nas-prairie.hes.adhes.hesge.ch/Home_S/sylvain.weber/Documents/R/win-library/4.0/x13binary/bin/x13ashtml.exe"
 C:\Users\SYLVAI~1.WEB\AppData\Local\Temp\Rtmpe0MDE7\x132d0cb8579b9/iofile -n 
-s' had status 1

A similar issue has already been mentioned in various blogs, for instance:
https://rdrr.io/github/christophsax/seasonal/src/travis/test-x13messages.R
http://freerangestats.info/blog/2015/12/21/m3-and-x13
https://github.com/christophsax/seasonal/issues/154
but I couldn't find any solution yet.

Does anyone have any idea why this issue is occurring?

Thanks very much.

Sylvain



Re: [R] Seasonal adjustment with seas - Error: X-13 has returned a non-zero exist status

2020-09-18 Thread Weber Sylvain (HES)
Dear Bill, Jeff, and Bert,

Thanks so much for the replies.
Bill is absolutely right. The problem came from the UNC path. 
I could solve the issue by adding the following lines to my Rprofile:
myPaths <- .libPaths()
myPaths <- c('M:/R/win-library/4.0', myPaths[2])  # 'M:/R/win-library/4.0' is the traditional DOS path corresponding to the UNC path \\...
.libPaths(myPaths)

Thanks again!

Sylvain


From: Bill Dunlap
Sent: Thursday, 17 September 2020 18:35
To: Weber Sylvain (HES)
Cc: r-help@R-project.org
Subject: Re: [R] Seasonal adjustment with seas - Error: X-13 has returned a
non-zero exist status

The problem might be due to using a UNC path (//machine//blah/...) instead of 
a traditional DOS path ("H:/blah/...").

E.g., if my working directory has a UNC path, cmd.exe will not work as expected:
> getwd()
[1] "server/dept/devel/bill-sandbox"
> system("C:\\WINDOWS\\SYSTEM32\\cmd.exe /c echo foo bar", intern=TRUE)
[1] "'server\\dept\\devel\\bill-sandbox'"                           
[2] "CMD.EXE was started with the above path as the current directory."
[3] "UNC paths are not supported.  Defaulting to Windows directory."   
[4] "foo bar"                          

You can test that by mapping 
"\\hes-nas-prairie.hes.adhes.hesge.ch/Home_S/sylvain.weber/Documents/R/win-library/4.0/x13binary/bin/x13ashtml.exe"
 to a drive letter and using letter-colon everywhere instead of "\\...".

-Bill
