Re: [R] RODBC results from stored procedure

2010-10-14 Thread ang

I know this thread is from a while back, but hopefully I can still get some
help on this.

I also used RODBC to connect to a SQL Server, and my stored procedure
returns a result set (one that is not stored as a temp or permanent table).

So I was wondering: is there a way to access this result set, or am I
forced to store the results in a table before I can access them?

Some options I have considered:
a) modifying the stored procedures to insert the results into tables
b) creating temp tables (I am trying to stay away from this, as it is less
dynamic and would require defining many tables if I wanted to run some ad
hoc analyses)

The reason I do not want to modify the queries or create temp/permanent tables
is that the SQL end is maintained by a separate team, and I am simply using
R to connect to and run analyses on the data.  I am trying to keep my side as
self-contained and independent as possible.  Any suggestions or advice would be
greatly appreciated.
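Concretely, the kind of call I have in mind looks like this (a sketch only: the DSN name and procedure name are placeholders for my actual ones):

```r
library(RODBC)

# Placeholder DSN and procedure name -- substitute your own.
ch <- odbcConnect("myDSN")

# SET NOCOUNT ON suppresses the "n rows affected" messages that SQL Server
# can emit ahead of the real result set, which otherwise confuses the driver.
res <- sqlQuery(ch, "SET NOCOUNT ON; EXEC dbo.myStoredProc")

odbcClose(ch)
head(res)
```

If the procedure really does return a plain result set, sqlQuery should hand it back as a data frame without any intermediate table.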

Thanks a lot,
Adrian
-- 
View this message in context: 
http://r.789695.n4.nabble.com/RODBC-results-from-stored-procedure-tp897462p2996173.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ThinkCell type waterfall charts in R?

2011-01-10 Thread ang

I was actually looking to create a similar type of graph in R.
My plan is to approach it with a stacked column chart, hiding the 'net'
series and showing only the increases/decreases in value.

I'll post an update later on what I come up with.
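In case it helps anyone else, here is a rough sketch of that stacked-column trick in base R (the numbers are made up): each bar stacks an invisible "base" segment under a visible "change" segment.

```r
vals <- c(100, 30, -20, 15)               # made-up start value plus three changes
cum  <- cumsum(vals)                      # running totals: 100 130 110 125
prev <- c(0, head(cum, -1))               # total before each bar

base <- c(pmin(prev, cum), 0)             # invisible support under each bar
vis  <- c(abs(vals), tail(cum, 1))        # visible bar heights + final total

# col/border of NA make the bottom (base) segment invisible.
barplot(rbind(base, vis),
        col = c(NA, "steelblue"), border = c(NA, "black"),
        names.arg = c("Start", "A", "B", "C", "Total"))
```

Decreases show up as blocks hanging down from the previous running total, which is the waterfall look.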


-- 
View this message in context: 
http://r.789695.n4.nabble.com/ThinkCell-type-waterfall-charts-in-R-tp881466p3208123.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] ThinkCell type waterfall charts in R?

2011-01-11 Thread ang

Hi Jim,

I looked through the plotrix documentation, and the waterfall plot comes
from the stackpoly function, right?  I'm not sure I can modify stackpoly to
create the plot I want, since stackpoly draws a line plot and fills the area
underneath with color.

I haven't played with all the options yet, but what I was looking for is
closer to staircase.plot, except with vertical columns instead of horizontal
bars.  Would you happen to know of any packages or existing plots that could
easily be modified to do this?

Thanks in advance for your help.

Adrian
-- 
View this message in context: 
http://r.789695.n4.nabble.com/ThinkCell-type-waterfall-charts-in-R-tp881466p3209490.html
Sent from the R help mailing list archive at Nabble.com.



[R] Coloring Continents/Regions of World Map

2011-05-13 Thread ang
Dear All,

This may be a silly question, but I have tried searching through a few of
the existing threads already, and haven't found any information on this.

I am trying to color code entire continents/regions (Asia, North America,
Africa, etc.) based on the number of relevant companies.  The data would
look like: 

World Location  Companies
North America   4848
Western Europe  1972
East Asia   1373
Northern Europe 313
Southern Europe 276
Middle East 167
Pacific 209
Southern Asia   156
SouthEast Asia  141
Eastern Europe  100
South America   55
Southern Africa 65
Caribbean   26
Northern Africa 42
Central Asia8
Western Africa  17
Eastern Africa  6
Central Africa  1
Central America 1

The names/regions certainly won't match those in R exactly, but if I could
aggregate this data on my end to at least plot the numbers for each continent,
that would be great.

map('world', plot = FALSE)$names gives a list of all the regions, but there
are over 2000 of them, and I can't figure out a way to aggregate them.
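What I have so far is along these lines (a sketch only: the grep pattern is a hand-built, partial grouping that I would have to extend and check against the actual region names):

```r
library(maps)

# All region names known to the world map (~2000 of them).
regions <- map("world", plot = FALSE)$names

# Hypothetical hand-built grouping: pick out some Northern Africa countries.
n.africa <- grep("Algeria|Egypt|Libya|Morocco|Tunisia", regions, value = TRUE)

map("world")                                  # base map
map("world", regions = n.africa, fill = TRUE, # shade one group
    col = "orange", add = TRUE)
```

The tedious part is building one such name vector per continent/region; if anyone knows of a ready-made lookup from country to continent, that would save a lot of grep-ing.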

Any help would be greatly appreciated.

Thanks,
Adrian

--
View this message in context: 
http://r.789695.n4.nabble.com/Coloring-Continents-Regions-of-World-Map-tp3520895p3520895.html
Sent from the R help mailing list archive at Nabble.com.



[R] Converting matrices into row vectors and saving as ASCII text

2009-09-19 Thread Xi Ang

Hi

I have some data with these dimensions:
5 3 100

which correspond to the x, y, and time dimensions, for a variable, p.

I need the data in this format: 100 rows (1 row per time unit), and 15
values in each row. 

I have attempted to reshape my data

> dim(data)
5 3 100

> attr(data, 'dim') <- c(dim(data)[3], dim(data)[1] * dim(data)[2])

So I get data with 100 rows, 15 columns.

I need to use this data outside of R, so I have to save it as an ASCII
file that retains the row-column structure of the data, but I do not know
how to.

It would be ideal if I could end up with a text file that also has an
additional column that labels which time unit (1-100) the row belongs to,
i.e.

1   a1,1  a1,2 ... a1,15
2   a2,1  a2,2 ... a2,15
3   a3,1  a3,2 ... a3,15
4   a4,1  a4,2 ... a4,15
.
.
.
99
100
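To be explicit, the layout above could presumably be built with something like this (using a small stand-in array in place of my real data):

```r
p <- array(seq_len(5 * 3 * 100), dim = c(5, 3, 100))   # stand-in for my data

# Put the time dimension first, then flatten each 5x3 slice into one
# row of 15 values (column-major order within each slice).
m <- matrix(aperm(p, c(3, 1, 2)), nrow = dim(p)[3])

out <- cbind(time = seq_len(nrow(m)), m)               # leading time-unit column
dim(out)                                               # 100 rows, 16 columns
```

The aperm step matters: simply reassigning dim() leaves the values in their original storage order, which is not one-row-per-time-slice.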

Any suggestions would be appreciated.

Thanks
Xi


-- 
View this message in context: 
http://www.nabble.com/Converting-matrices-into-row-vectors-and-saving-as-ASCII-text-tp25523562p25523562.html
Sent from the R help mailing list archive at Nabble.com.



Re: [R] Converting matrices into row vectors and saving as ASCII text

2009-09-19 Thread Xi Ang


Thanks for your reply.

Is there a way I can save the data to an ASCII file without losing the
row/column structure?
I have tried save(...) and write.table(...), but the output file seems to
jumble up the order of the matrix.

Thanks
Xi
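For the record, what I was attempting looks roughly like this (small stand-in array; write.table keeps the row/column layout as long as the matrix itself is built in the right order first):

```r
XYT <- array(1:150, dim = c(5, 3, 10))                 # stand-in data

# One row per time slice, 15 values per row.
m <- matrix(aperm(XYT, c(3, 1, 2)), nrow = dim(XYT)[3])

# Prepend a time index and write a plain whitespace-separated text file.
write.table(cbind(time = seq_len(nrow(m)), m),
            file = "p_by_time.txt",
            row.names = FALSE, col.names = FALSE)
```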


David Winsemius wrote:
> 
> XYT <- array(1:150, dim = c(3, 5, 10))
> XYbyT <- matrix(apply(XYT, 3, I), ncol = 10)
> 
> ...or even...
> 
> XYbyT <- matrix(XYT, ncol = 10)
> 
> --  
> David.
> 
> On Sep 19, 2009, at 1:11 PM, Xi Ang wrote:
> 
>>
>> Hi
>>
>> I have some data with these dimensions:
>> 5 3 100
>>
>> which correspond to the x, y, and time dimensions, for a variable, p.
>>
>> I need the data in this format: 100 rows (1 row per time unit), and 15
>> values in each row.
>>
>> I have attempted to reshape my data
>>
>>> dim(data)
>> 5 3 100
>>
>>> attr(data, 'dim') <- c(dim(data)[3], dim(data)[1] * dim(data)[2])
>>
>> So I get data with 100 rows, 15 columns.
>>
>> I need to use this data outside of R, and so have to save it as an  
>> ASCII
>> file that retains the row-column structure of the data, but I do not  
>> know
>> how to.
>>
>> It would be ideal if I could end up with a text file that also has an
>> additional column that labels which time unit (1-100) the row  
>> belongs to,
>> i.e.
>>
>> 1   a1,1  a1,2 ... a1,15
>> 2   a2,1  a2,2 ... a2,15
>> 3   a3,1  a3,2 ... a3,15
>> 4   a4,1  a4,2 ... a4,15
>> .
>> .
>> .
>> 99
>> 100
>>
>> Any suggestions would be appreciated.
>>
>> Thanks
>> Xi
>>
>>
>> -- 
>> View this message in context:
>> http://www.nabble.com/Converting-matrices-into-row-vectors-and-saving-as-ASCII-text-tp25523562p25523562.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
> 
> David Winsemius, MD
> Heritage Laboratories
> West Hartford, CT
> 

-- 
View this message in context: 
http://www.nabble.com/Converting-matrices-into-row-vectors-and-saving-as-ASCII-text-tp25523562p25526729.html
Sent from the R help mailing list archive at Nabble.com.



[R] A Gamma-GLM with log link

2009-12-07 Thread choonhong ang
Hi,

I have a set of data (144,122 records in total), and I would like to use a
Gamma GLM with a log link to set up a model.

IC is the number of records; IL is the paid amount.

The table below shows that I have
30.578% of the data in the "1 - 1000" paid-amount level,
20.320% of the data in the "1001 - 2000" paid-amount level,
and so on.

My question is: could I use the whole data set to model, or should I
perhaps use only the data up to a 10,000 paid amount?
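For context, the model I have in mind is along these lines (a sketch only: `claims` and the predictor names are placeholders for my actual data):

```r
# Hypothetical data frame `claims` holding the paid amount IL
# and some rating factors factor1, factor2.
fit <- glm(IL ~ factor1 + factor2,
           family = Gamma(link = "log"),
           data   = claims)
summary(fit)
```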

Level IL        Avg IL     Sum IC        %
1 - 1000          539.60   44,069   30.578%
1001 - 2000     1,444.81   29,285   20.320%
2001 - 3000     2,457.72   15,343   10.646%
3001 - 4000     3,473.40    8,497    5.896%
4001 - 5000     4,496.47    5,838    4.051%
5001 - 6000     5,476.28    3,831    2.658%
6001 - 7000     6,482.82    2,889    2.005%
7001 - 8000     7,500.97    2,323    1.612%
8001 - 9000     8,492.07    1,772    1.230%
9001 - 10000    9,542.60    1,736    1.205%
10001 - 11000  10,490.19    1,291    0.896%
11001 - 12000  11,516.00    1,104    0.766%
12001 - 13000  12,508.65      915    0.635%
13001 - 14000  13,501.24      869    0.603%
14001 - 15000  14,562.99      876    0.608%
15001 - 16000  15,502.50      650    0.451%
16001 - 17000  16,498.90      585    0.406%
17001 - 18000  17,511.23      573    0.398%
18001 - 19000  18,529.04      512    0.355%
19001 - 20000  19,605.73      615    0.427%
21001 - 22000  21,518.71      448    0.311%
22001 - 23000  22,489.74      389    0.270%
23001 - 24000  23,493.52      340    0.236%
24001 - 25000  24,603.33      413    0.287%
25001 - 26000  25,516.57      324    0.225%
26001 - 27000  26,514.70      297    0.206%
27001 - 28000  27,509.62      272    0.189%
28001 - 29000  28,486.77      238    0.165%
29001 - 30000  29,591.02      312    0.216%
30001 - 31000  30,460.08      238    0.165%
31001 - 32000  31,527.67      240    0.167%
32001 - 33000  32,526.25      213    0.148%
33001 - 34000  33,496.41      208    0.144%
34001 - 35000  34,556.44      235    0.163%
35001 - 36000  35,476.62      190    0.132%
36001 - 37000  36,512.92      155    0.108%
37001 - 38000  37,524.77      191    0.133%
38001 - 39000  38,469.21      152    0.105%
39001 - 40000  39,554.93      176    0.122%
40001 - 41000  40,501.28      171    0.119%
41001 - 42000  41,521.06      182    0.126%
42001 - 43000  42,525.54      156    0.108%
43001 - 44000  43,541.32      118    0.082%
44001 - 45000  44,549.38      131    0.091%
45001 - 46000  45,513.78      125    0.087%
46001 - 47000  46,532.88      128    0.089%
47001 - 48000  47,528.92      121    0.084%
48001 - 49000  48,472.52      115    0.080%
49001 - 50000  49,684.14      191    0.133%
50001 - 51000  50,556.99      104    0.072%
51001 - 52000  51,527.56      119    0.083%
52001 - 53000  52,519.82      120    0.083%
53001 - 54000  53,504.29      105    0.073%
54001 - 55000  54,527.36      126    0.087%
55001 - 56000  55,566.20       94    0.065%
56001 - 57000  56,533.86       98    0.068%
57001 - 58000  57,494.78      112    0.078%
58001 - 59000  58,555.27      100    0.069%
59001 - 60000  59,592.80      134    0.093%
60001 - 61000  60,540.90       94    0.065%
61001 - 62000  61,488.11       93    0.065%
62001 - 63000  62,543.60       95    0.066%
63001 - 64000  63,509.94       95    0.066%
64001 - 65000  64,578.59       97    0.067%
65001 - 66000  65,486.95       82    0.057%
66001 - 67000  66,518.80       72    0.050%
67001 - 68000  67,507.95       85    0.059%
68001 - 69000  68,516.98       86    0.060%
69001 - 70000  69,541.52       89    0.062%
70001 - 71000  70,519.49       77    0.053%
71001 - 72000  71,488.88       74    0.051%
72001 - 73000  72,483.43       72    0.050%
73001 - 74000  73,489.59       82    0.057%
74001 - 75000  74,601.14       92    0.064%
75001 - 76000  75,467.29       78    0.054%
76001 - 77000  76,550.53       74    0.051%
77001 - 78000  77,525.18       76    0.053%
78001 - 79000  78,481.88       74    0.051%
79001 - 80000  79,555.44       78    0.054%
80001 - 81000  80,443.24       72    0.050%
81001 - 82000  81,504.18       67    0.046%
82001 - 83000  82,510.68       78    0.054%
83001 - 84000  83,483.89       78    0.054%
84001 - 85000  84,566.99       84    0.058%
85001 - 86000  85,504.36       86    0.060%
86001 - 87000  86,584.49       67    0.046%
87001 - 88000  87,540.17       60    0.042%
88001 - 89000  88,483.13       73    0.051%
89001 - 90000  89,532.23       70    0.049%
90001 - 91000  90,521.94       73    0.051%
91001 - 92000  91,597.56       62    0.043%
92001 - 93000  92,499.65       75    0.052%
93001 - 94000  93,515.79       64    0.044%
94001 - 95000  94,524.84       76    0.053%
95001 - 96000  95,469.49       81    0.056%
96001 - 97000  96,492.13       55    0.038%
97001 - 98000  97,454.56       62    0.043%
98001 - 99000  98,493.23       57    0.040%
99001 -

[R] help - read SAS into R

2010-08-31 Thread choonhong ang
Hi All,

How can I read SAS data directly into R?
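For example, if the data can be exported from SAS as a transport (.xpt) file, the foreign package should be able to read it (the file name below is a placeholder):

```r
library(foreign)

# Read a SAS XPORT (transport-format) file; returns a data frame,
# or a list of data frames if the file holds several data sets.
dat <- read.xport("mydata.xpt")
str(dat)
```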

Thank you

[[alternative HTML version deleted]]



[R] vglm

2010-08-31 Thread choonhong ang
Hi All,

Could anybody help me understand what this error means?


> mydata <- read.table("C:/Documents and Settings/angieb/Desktop/CommercialGL/cl_ilf_claimdata.csv", header = TRUE, sep = ",")
> names(mydata)
[1] "ILFTable"    "liabLimit"   "AnnAggLimit" "DedAmt"      "Loss"        "TIL"
> fit <- vglm(Loss ~ 1, pareto1(location = alpha), trace = TRUE, crit = "c")
Error in eval(expr, envir, enclos) : object "Loss" not found
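My guess at the problem: the call never tells vglm where to find Loss, since it has no data argument. Passing mydata explicitly should make the column visible (a sketch of the corrected call):

```r
library(VGAM)

# Same call, but pointing vglm at mydata via data =, so the Loss
# column in that data frame can be found by the formula.
fit <- vglm(Loss ~ 1, pareto1(location = alpha),
            data = mydata, trace = TRUE, crit = "c")
```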



[R] vglm help

2010-09-03 Thread choonhong ang
Hi All,

I am using vglm and the fitted values are NA.  As noted in the R
documentation for pareto1, if the estimate of k is less than or equal to
unity, the fitted values will be NA.

What does NA mean?

How do I solve it?

How do I get the k estimate?  (In the R documentation, the estimate of
alpha is fit@extra.)
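What I have tried so far, roughly (fit is the vglm object from my earlier call; I am not sure these are the right accessors):

```r
is.na(fitted(fit))   # NA means "not available": the fitted value is undefined here
coef(fit)            # the parameter estimates on vglm's working scale
fit@extra            # the slot where, per the pareto1 docs, the location estimate sits
```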


Thank you



[R] Insurance data in library(MASS)

2009-02-23 Thread choonhong ang
I have used the Insurance data from the R library, and I have 2 questions.
I use the following:
> library(MASS)
> data(Insurance)
> m1 <- glm(Claims ~ District + Group + Age + offset(log(Holders)), data = Insurance, family = poisson)
> summary(m1)

Call:
glm(formula = Claims ~ District + Group + Age + offset(log(Holders)),
family = poisson, data = Insurance)
Deviance Residuals:
 Min1QMedian3Q   Max
-2.46558  -0.50802  -0.03198   0.5   1.94026
Coefficients:
 Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.810508   0.032972 -54.910  < 2e-16 ***
District20.025868   0.043016   0.601 0.547597
District30.038524   0.050512   0.763 0.445657
District40.234205   0.061673   3.798 0.000146 ***
Group.L  0.429708   0.049459   8.688  < 2e-16 ***
Group.Q  0.004632   0.041988   0.110 0.912150
Group.C -0.029294   0.033069  -0.886 0.375696
Age.L   -0.394432   0.049404  -7.984 1.42e-15 ***
Age.Q   -0.000355   0.048918  -0.007 0.994210
Age.C   -0.016737   0.048478  -0.345 0.729910
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 236.26  on 63  degrees of freedom
Residual deviance:  51.42  on 54  degrees of freedom
AIC: 388.74
 (1) In the results above, what are Group.L, Group.Q, Group.C, Age.L, Age.Q,
and Age.C?

 (2) When I copy the Insurance data in csv format (as shown in the
attachment) and run the same procedure, the result differs from the one
above.  Why?
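For what it's worth, here is how I have been poking at the factor coding (Group and Age are ordered factors in this data set, which I suspect is why the .L/.Q/.C columns appear):

```r
library(MASS)
data(Insurance)

class(Insurance$Group)        # "ordered" "factor"
contrasts(Insurance$Group)    # .L, .Q, .C: linear/quadratic/cubic polynomial contrasts
```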


Re: [R] Insurance data in library(MASS)

2009-02-24 Thread choonhong ang
Hi,

In the results shown, District 1 is used as the base category.  How do I
make District 4 the base category instead?
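What I have been trying, based on the relevel help page (a sketch):

```r
library(MASS)
data(Insurance)

# Make District 4 the reference (base) level, then refit the model.
Insurance$District <- relevel(Insurance$District, ref = "4")
m2 <- glm(Claims ~ District + Group + Age + offset(log(Holders)),
          data = Insurance, family = poisson)
```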

On Mon, Feb 23, 2009 at 11:05 AM, choonhong ang wrote:

> I have used the insurance data from R library and I have 2 questions:
> I use the following:
> >library(MASS)
> >data(Insurance)
> > m1=glm(Claims ~ District + Group + Age + offset(log(Holders)),data =
> Insurance, family = poisson)
> >summary(m1)
>
> Call:
> glm(formula = Claims ~ District + Group + Age + offset(log(Holders)),
> family = poisson, data = Insurance)
> Deviance Residuals:
>  Min1QMedian3Q   Max
> -2.46558  -0.50802  -0.03198   0.5   1.94026
> Coefficients:
>  Estimate Std. Error z value Pr(>|z|)
> (Intercept) -1.810508   0.032972 -54.910  < 2e-16 ***
> District20.025868   0.043016   0.601 0.547597
> District30.038524   0.050512   0.763 0.445657
> District40.234205   0.061673   3.798 0.000146 ***
> Group.L  0.429708   0.049459   8.688  < 2e-16 ***
> Group.Q  0.004632   0.041988   0.110 0.912150
> Group.C -0.029294   0.033069  -0.886 0.375696
> Age.L   -0.394432   0.049404  -7.984 1.42e-15 ***
> Age.Q   -0.000355   0.048918  -0.007 0.994210
> Age.C   -0.016737   0.048478  -0.345 0.729910
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> (Dispersion parameter for poisson family taken to be 1)
> Null deviance: 236.26  on 63  degrees of freedom
> Residual deviance:  51.42  on 54  degrees of freedom
> AIC: 388.74
>  (1) In the result above, what is Group.L, Group.Q, Group.C, Age.L, Age.Q,
> Age.C ?
>
>  (2) When I copy the Insurance data in csv format (as shown in the
> attachement) and run the same procedure the result shown is different from
> above result, why ?
>



[R] R question - combine values

2009-02-25 Thread choonhong ang
District a is the baseline, and since we observe that the difference between
Districts a and b is not significant, we can choose to combine these 2 levels.
How do I write code to combine these 2 values?
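The kind of recoding I have in mind (mydata as in the model call below; a sketch):

```r
# Merge levels "a" and "b" of District into a single level "ab":
# assigning duplicate names in levels<- collapses those levels.
lev <- levels(mydata$District)
lev[lev %in% c("a", "b")] <- "ab"
levels(mydata$District) <- lev

m2 <- glm(Claims ~ District + Group + Age + log(Holders),
          family = poisson, data = mydata)
```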

> m1 <- glm(Claims ~ District + Group + Age + log(Holders), family = poisson, data = mydata)
> summary(m1)

Call:
glm(formula = Claims ~ District + Group + Age + log(Holders),
family = poisson, data = mydata)

Deviance Residuals:
Min 1Q Median 3Q Max
-2.553115 -0.471819 0.002411 0.455274 1.800739

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.52 0.689162 -4.031 5.56e-05 ***
Districtb 0.119942 0.079861 1.502 0.133125
Districtc 0.228371 0.144503 1.580 0.114019
Districtd 0.571661 0.248792 2.298 0.021576 *
Group>2l 0.794721 0.180354 4.406 1.05e-05 ***
Group1-1.5l -0.003496 0.127947 -0.027 0.978202
Group1.5-2l 0.379190 0.055856 6.789 1.13e-11 ***
Age>35 -1.074971 0.389480 -2.760 0.005780 **
Age25-29 -0.332131 0.129512 -2.564 0.010333 *
Age30-35 -0.539815 0.160138 -3.371 0.000749 ***
log(Holders) 1.201696 0.144135 8.337 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 4236.68 on 63 degrees of freedom
Residual deviance: 49.45 on 53 degrees of freedom
AIC: 388.77


[R] select Intercept coefficients only

2009-02-27 Thread choonhong ang
Hi friends,

Is there a function to select only the intercept coefficient?

When I use coefficients() it shows me all the coefficients, but I only want
a specific coefficient.
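What I am after is something like this (m1 being a fitted model; the coefficient names are just examples):

```r
coef(m1)["(Intercept)"]            # just the intercept
coef(m1)[c("District2", "Age.L")]  # or any named subset of coefficients
```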



[R] comment on this book "A Handbook of Statistical Analyses Using R by Brian S. Everitt (Author), Torsten Hothorn (Author)"

2009-03-02 Thread choonhong ang
Is this book a good reference to learn R for statistical analysis ?

A Handbook of Statistical Analyses Using R, by Brian S. Everitt and
Torsten Hothorn.



[R] detect outliers and high leverage points

2009-03-03 Thread choonhong ang
Hi friends,

How do I detect outliers and high-leverage points for a GLM?

Could I use plot(model), with
(i) the "Residuals vs Fitted" graph to detect the outliers, and
(ii) the "Residuals vs Leverage" graph to detect the high-leverage points,
and then remove those points from the data and re-run the model?
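Concretely, what I have been looking at is along these lines (model being a fitted glm object):

```r
plot(model, which = 5)        # the Residuals vs Leverage diagnostic panel

h <- hatvalues(model)         # leverages
r <- rstandard(model)         # standardized deviance residuals
which(h > 2 * mean(h))        # rule-of-thumb high-leverage points
which(abs(r) > 2)             # candidate outliers
```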



[R] Model confusion

2008-03-10 Thread Christopher De Ang

Hi,
 
I'm wondering whether there are any rules on which variables should be used
for factor analysis/loading.  For instance, increased reputation leads to an
increase in the volume of posts, so does volume need to be included in the
factor loading?

Independent variable          Dependent variable
reputation              ->    Volume (based on the no. of postings)
social interaction ties ->    Volume

Anyway, when I load it, it gives me an eigenvalue of 0.999.  What should I
do about it?
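For concreteness, the loading step looks roughly like this (made-up data standing in for my survey variables):

```r
set.seed(1)
posts <- data.frame(reputation = rnorm(50),
                    ties       = rnorm(50))
posts$volume <- posts$reputation + posts$ties + rnorm(50)

eigen(cor(posts))$values      # eigenvalues of the correlation matrix
factanal(posts, factors = 1)  # one-factor solution with loadings
```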
 
Thanks,
Christopher