Hi,
I believe you want
eval(parse(text="pi/2"))
A word of warning is exemplified in
eval(parse(text="library(fortunes) ; fortune(106)"))
HTH,
baptiste
On 19 October 2011 19:30, Erin Hodgess wrote:
> Dear R People:
>
> Suppose I have the following:
>
> "pi/2"
>
> and I would like it to be 1.57.
I would imagine that you could parse+evaluate it like you asked about on
another thread; this isn't tested though. Does that work in your context?
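For completeness, the parse-and-eval approach does work here; a minimal check:

```r
# Turn the string "pi/2" into the numeric value it denotes
s <- "pi/2"
y <- eval(parse(text = s))
y  # 1.570796
stopifnot(abs(y - pi/2) < 1e-12)
```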
Michael Weylandt
On Oct 19, 2011, at 2:30 AM, Erin Hodgess wrote:
> Dear R People:
>
> Suppose I have the following:
>
> "pi/2"
>
> and I would
You are misusing SQL variable substitution. Just embed the list in the SQL
statement itself.
paste("SELECT * FROM this_table WHERE this_column IN (", paste(org_table$id,
collapse=","),")",sep="")
Better yet, use a SQL join so you can do this sequence in one SQL statement.
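To make this concrete, a hedged sketch (org_table is a stand-in data frame with made-up ids):

```r
# Hypothetical example: org_table$id holds the values for the IN (...) list
org_table <- data.frame(id = c(1, 2, 3, 4, 5))
sql <- paste0("SELECT * FROM this_table WHERE this_column IN (",
              paste(org_table$id, collapse = ","), ")")
sql  # "SELECT * FROM this_table WHERE this_column IN (1,2,3,4,5)"
```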
-
Hi, I have a problem. R shows:
Error in 1/ue : non-numeric argument to binary operator.
Here is the code:
# simulation of tempered stable processes by compound Poisson approximation
tsp <- function(n, e, a, c, lema) {
  x <- numeric(n)
  for (i in 1:n) {
    repeat {
      w <- runif(1)
      v <- runif(1)
      x <- e*
Hello,
The code below works fine up until I try to use the "IN" statement in
the last line. The proper SQL format is:
SELECT * FROM this_table WHERE this_column IN (1,2,3,4,5)
But, I think I may be getting something like:
SELECT * FROM this_table WHERE this_column IN c(1,2,3,4,5)
Which makes
Dear R People:
Suppose I have the following:
"pi/2"
and I would like it to be 1.57.
Using as.numeric, here is my result:
> as.numeric("pi/2")
[1] NA
Warning message:
NAs introduced by coercion
>
Is there a way to produce the numeric result, please?
Thanks,
Erin
--
Erin Hodgess
Associa
Hi, I am a relative newbie to R, so thanks in advance for the patience.
I am interested in changing a table with year data into a format that
is friendlier for making bar charts. I currently have a table with the
same year appearing a number of times as separate rows. I want to
change this so th
That worked great thank you.
--
View this message in context:
http://r.789695.n4.nabble.com/Ordering-of-stack-in-ggplot-package-ggplot2-tp3917159p3917520.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org mailing list
The p-value as I understand it, is the probability of seeing a phenomenon at
least as extreme as the one which you observed assuming the null hypothesis
is true. In other words, if you assume there is no difference (which is the
NULL for you here I believe), and there is an 81% chance of seeing dat
Dear Gavin,
My apologies for the delay in responding to your request for further
information I have been travelling for work since you replied and have only
just returned to email contact.
The output from the traceback is as follows
# This is the capscale model that I called
> beetlecap <-capsc
Hi:
I think the problem is that you're trying to append the predicted
probabilities as a new variable in the (one-line) data frame, when in
fact a vector of probabilities (its length = the number of ordered levels
of the response) is output for each new observation. Here's a reproducible
example hacked from the
Hi Vivian,
This may be naive given the method (I am unfamiliar with glasso), but
what about simple subtraction? If it restricts to 0, you believe you
have .3, then just: obs - .3 and restrict to 0 again? Here is a
little example (assuming .3 correlation, but using glasso with the
covariance matr
I checked your equations with some made-up values:
crossover <- 50
fullin <- 120
fullout <- 20
x <- 62
and got 0.6261906 --- which was the value returned by the function in
question.
With x <- 41 I got 0.3094857 --- again the same as the value returned by
the function.
So you seem to have der
I've only been using R on and off for 9 months and started using the
glasso package for sparse covariance estimation. I know the concept is
to shrink some of the elements of the covariance matrix to zero.
However, say I have a dataset that I know has some underlying
"baseline" covariance/co
On Oct 18, 2011, at 7:59 PM, swonder03 wrote:
I'm trying to reproduce the 3rd graph on the page of this site:
http://learnr.wordpress.com/2009/03/17/ggplot2-barplots/ . However,
the data
below produces a ggplot with the stacks sorted in alphabetical order
from
the bottom up. I'd like the st
Hi:
levels(df.m2$Region)
[1] "Africa" "Americas" "Asia" "Europe" "Oceania"
Reorder your Region factor to the following:
df.m2$Region <- factor(df.m2$Region, levels = c('Europe', 'Asia',
'Americas', 'Africa', 'Oceania'))
Then recopy the code from the definitio
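To make the releveling concrete, a minimal sketch (the df.m2 column names are assumed from the thread; the values are made up):

```r
library(ggplot2)
# Toy stand-in for df.m2: in a stacked bar chart, the stacking follows
# the order of the factor levels, so relevel Region to control it
df.m2 <- data.frame(
  Region = c("Africa", "Americas", "Asia", "Europe", "Oceania"),
  value  = c(10, 20, 30, 40, 5),
  year   = "2000"
)
df.m2$Region <- factor(df.m2$Region,
                       levels = c("Europe", "Asia", "Americas", "Africa", "Oceania"))
p <- ggplot(df.m2, aes(x = year, y = value, fill = Region)) +
  geom_bar(stat = "identity")
```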
On Oct 18, 2011, at 6:27 PM, Cem Girit wrote:
Hello,
I cannot access the r-help website although after
registration I am getting all the posts sent to the side. Each time
I click
on the "Visit Subscriber List" on the
https://stat.ethz.ch/mailman/listinfo/r-help site, I get
Dear R-Help listers,
I am trying to estimate a proportional odds logistic regression model
(or ordered logistic regression) and then make predictions by
supplying a hypothetical x vector. However, somehow this does not
work. I guess I must have missed something here. I first used the polr
functio
Hello, I am so glad to write to you.
I am now writing my M.Sc. thesis in Applied Statistics, titled "Data
Mining Classifiers and Predictive Models Validation and Evaluation".
I am planning to compare several DM classifiers like "NN, kNN, SVM, Dtree, and
Naïve Bayes" according to their
I'm trying to reproduce the 3rd graph on the page of this site:
http://learnr.wordpress.com/2009/03/17/ggplot2-barplots/ . However, the data
below produces a ggplot with the stacks sorted in alphabetical order from
the bottom up. I'd like the stacks to be in the order "Europe", "Asia",
"Americas, "
Hi Mike
Thanks for your comments. I had the code trying to project() the lat, long
so all fixed now.
Thanks for your help and pointing out the R-sig-Geo mailing list which I'll
use in the future should I have other questions.
Thanks
Kate
--
View this message in context:
http://r.789695.n4.nab
Hello,
I cannot access the r-help website although after
registration I am getting all the posts sent to the side. Each time I click
on the "Visit Subscriber List" on the
https://stat.ethz.ch/mailman/listinfo/r-help site, I get "R-help roster
authentication failed." error. Any i
Hello all,
I am quite new to R, with the goal of using it for a project in my business
course. I am attempting to run a Monte Carlo simulation of futures prices based
on a random walk whereby the given volatility (I will use historical
volatility in this case, say 12%) is Levy-distributed, equally
On 19/10/11 13:57, Erin Hodgess wrote:
Dear R People:
Suppose I have the following character string:
f1
[1] "(1/30)*(20-x)"
My goal is to end up with
y<- (1/30)*(20-x)
How would I do this, please?
I've been experimenting with eval, but no good so far.
As usual, I have the feeling that th
Dear R People:
Suppose I have the following character string:
> f1
[1] "(1/30)*(20-x)"
My goal is to end up with
y <- (1/30)*(20-x)
How would I do this, please?
I've been experimenting with eval, but no good so far.
As usual, I have the feeling that this is something really simple, but
I can
Hi,
Thanks! So, you're saying I should output the descriptives from the individual
data files, and manually do the stats to get the combined estimates (or, I can
use the files in mplus or spss to run it in those programs)? Is there no
automated process in any of the mi or mi-related packages tha
Mike,
You can retrieve each of imputed data sets and use Rubin's rule for
combined analysis. I am not sure how to do combined analysis of cov,
but mean and SE would be estimable.
For mi package to get individual copies of imputed data
?mi.completed
HTH
Weidong Gu
On Tue, Oct 18, 2011 at 5:1
Tena koe Erin
http://biostat.mc.vanderbilt.edu/wiki/Main/UseR-2012 has the contact person on
the front page ...
Peter Alspach
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Erin Hodgess
> Sent: Wednesday, 19 October 2011
I am looking for a way to bucket data in 2 dimensions using a weighting that
will linearly allocate based on the proximity of the reference value. I'm
sure the functionality probably already exists but couldn't find an example:
Example Data
Yrs,strike,value
0.75,105,100
1.25,102.5,200
Time Buck
Hi, all,
I'm running multiple imputation to handle missing data and I'm running into a
problem. I can generate the MI data sets in both amelia and the mi package
(they look fine), but I can't figure out how to get pooled results. The
examples from the mi package, zelig, etc., all seem to go righ
Or do you want each number separated?
> data <- textConnection("010101001110101
+ 10101001010
+ 01001010010"
+ )
> result <- as.matrix(read.fwf(data, rep(1, 15)))
> result
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15
[1,] 0 1 0 1 0 1 0 0 1 1 1 0 1 0 1
[2,]
On Oct 18, 2011, at 2:53 PM, Dennis Murphy wrote:
Prompted by David's xtabs() suggestion, one way to do what I think the
OP wants is to
* define day and unit as factors whose levels comprise the full range
of desired values;
* use xtabs();
* return the result as a data frame.
Something like
x
Hi Brian,
Take a look at ?scan
> x <- scan(file.choose(), what = 'list')
Read 3 items
> x
[1] "010101001110101" "10101001010" "01001010010"
> as.matrix(x)
[,1]
[1,] "010101001110101"
[2,] "10101001010"
[3,] "01001010010"
HTH,
Jorge
On Tue, Oct 18, 2011 at 3:09 PM, Brian Ts
Would readLines() work?
Michael
On Tue, Oct 18, 2011 at 3:09 PM, Brian Tsai wrote:
> hi all,
>
> i have a file of the following format that i want to read into a matrix:
>
> 010101001110101
> 10101001010
> 01001010010
> ...
>
> it has no headers or row names.
>
> I tried to use read.tabl
hi all,
i have a file of the following format that i want to read into a matrix:
010101001110101
10101001010
01001010010
...
it has no headers or row names.
I tried to use read.table(), but it doesn't let me leave the column
separator empty (specifying sep='' means whitespac
Thanks you for the quick and helpful replies. Problem solved.
Jonny
On Tue, Oct 18, 2011 at 11:33 AM, David Winsemius wrote:
>
> On Oct 18, 2011, at 2:24 PM, Sarah Goslee wrote:
>
> Hi Jonny,
>>
>> On Tue, Oct 18, 2011 at 1:02 PM, Jonny Armstrong
>> wrote:
>>
>>> I am analyzing the spatial distr
Then read in a million lines, scan back for the break, write out the data,
delete from the buffer, then read the next million lines into the buffer.
On Tuesday, October 18, 2011, johannes rara wrote:
> Thank you Jim for your kind reply. My intention was to split one 14M
> file into less than 15 t
Hello,
# Full disclosure. I am not sure if my problem is a bug(s) in the code, or a
fundamental misunderstanding on my part about what I am trying to do with
these statistics. I am not familiar with maximum likelihood tests.
# I currently have two vectors
Aequipecten<-c(0, 0, 1, 0, 0, 0, 0, 0,
Prompted by David's xtabs() suggestion, one way to do what I think the
OP wants is to
* define day and unit as factors whose levels comprise the full range
of desired values;
* use xtabs();
* return the result as a data frame.
Something like
x <- data.frame( day = factor(rep(c(4, 6), each = 8),
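A complete version of that sketch, with made-up counts (the full-range levels for day and unit are the assumption doing the work here):

```r
# Define day and unit as factors whose levels cover the full desired range
x <- data.frame(
  day   = factor(rep(c(4, 6), each = 8), levels = c(4, 6)),
  unit  = factor(c(1:8, seq(2, 16, 2)), levels = 1:16),
  value = 1:16
)
# xtabs() fills the missing day/unit combinations with zeros
tab <- xtabs(value ~ day + unit, data = x)
out <- as.data.frame(tab)   # long format: day, unit, Freq
head(out)
```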
This is just scientific notation, so
8.15e-01 is the same as:
> 8.15*10^-1
[1] 0.815
niki wrote:
>
> Dear all,
>
> i have done some regression analyses but i do not understand the p value.
> These are the results
>
>t-value p value
> geno.1
On Oct 18, 2011, at 2:24 PM, Sarah Goslee wrote:
Hi Jonny,
On Tue, Oct 18, 2011 at 1:02 PM, Jonny Armstrong
wrote:
I am analyzing the spatial distribution of fish in a stream. The
stream is
divided into equally sized units, and the number of fish in each
unit is
counted. My problem is tha
Here is one option:
a<- data.frame(day=c(rep(4,8),rep(6,8)),unit=
c((1:8),seq(2,16,2)),value=round(runif(16,1,34),0)) #approx your data
b<- data.frame(day=c(rep(4,16),rep(6,16)),unit= 1:16) #fake df
b1<-merge (a,b, by=c('day','unit'),all.y=T)
b1$value[is.na(b1$value)]<-0
Hi Jonny,
On Tue, Oct 18, 2011 at 1:02 PM, Jonny Armstrong
wrote:
> I am analyzing the spatial distribution of fish in a stream. The stream is
> divided into equally sized units, and the number of fish in each unit is
> counted. My problem is that my dataset is missing rows where the count in a
>
Thank you Jim for your kind reply. My intention was to split one 14M
file into less than 15 text files, each of them having ~1M lines. The
idea was to make sure that one "sequence"
GG!KK!KK! --sequence start
APE!KKU!684!
APE!VAL!!
APE!UASU!!
APE!PLA!1!
APE!E!10!
APE!TPVA!17122009!
APE!STAP!1!
GG!K
I am analyzing the spatial distribution of fish in a stream. The stream is
divided into equally sized units, and the number of fish in each unit is
counted. My problem is that my dataset is missing rows where the count in a
unit equals zero. I need to create zero data for the missing units.
For ex
Hello,
I have two numeric vectors in R and used cor.test function with them.
Is it possible in R to know how much contributed a particular row of the
vectors to the total correlation value and significance?
Of course I just could take out that row from the vectors and run the test
again to see how
Dear all,
i have done some regression analyses but i do not understand the p value.
These are the results
              estimate std.error t-value  p-value
geno.1          -0.229     0.978  -0.234 8.15e-01
geno.5           0.647     1.146   0.565 5.73e-01
stress:geno.5   -1.337     1.022  -1.307 1.92e
Please look at the RExcel project
rcom.univie.ac.at
followup should probably be on that mailing list.
Rich
On Tue, Oct 18, 2011 at 10:32 AM, Wensui Liu wrote:
> dear listers,
>
> right now, we are trying to use r to implement sas dde function, e.g.
> interact with excel. however, we can't fin
The contact person is:
Stephania McNeal-Goddard
email: stephania.mcneal-godd...@vanderbilt.edu
phone: (615)322-2768
Vanderbilt University School of Medicine
Department of Biostatistics
S-2323 Medical Center North
Nashville, TN 37232-2158
On Tue, 2011-10-18 at 12:41 -0400, David Winsemius wrote:
On Oct 18, 2011, at 12:25 PM, Erin Hodgess wrote:
Dear R People:
Do you know who the contact person is for UseR 2012, please?
I'm trying to get together some numbers for funding (sorry for the
Funny, it was the first hit on a Google search with term "useR2012"
http://biostat.mc.vanderbilt.
Dear R People:
Do you know who the contact person is for UseR 2012, please?
I'm trying to get together some numbers for funding (sorry for the tackiness).
Thanks,
Erin
--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto:
Dear R helpers,
I have a ts object, dadosvar, and want to run a VAR.
These are my data:
> dadosvar[1:15,]
dl rp igpm ereal crescpib jurosreal
[1,] 32.31 NA 39.07 419.59 NA 7025.95
[2,] 32.00 NA 40.78 596.57 NA 13401.25
[3,] 32.70 NA 45.71 867.6
Your original code was not directly vectorizable (and I should have
noted that): this should work:
fnc <- function(x, y) {
z <- outer(y, x, FUN = "^")
exp(-x) * colSums(z)
}
y <- c(2,3,5)
integrate(fnc, 0, 3, y)
This should work and sorry for the initial confusion.
Michael
On Tue, Oct 1
Hello everybody
I want to use rls() on a multi-dimensional function, where parts of it are
modeled using a spline.
I tried to condense my problem into the following code example,
which tries to fit the y-values of a spline interpolation:
--
thanks for the help and pointer.
I am modifying it like this
x=which(n[,1]==n[,2])
n=n[-x,]
to get rid of combinations which will generate '0' or ratio of 1.
Thanks once again.
sharad
--
View this message in context:
http://r.789695.n4.nabble.com/calculating-ratios-from-all-combinations-tp39125
Hello ,
I was taking a look at your website and I noticed that one of the links you
suggest in http://www.daba.lv/Adreses/RS_GPS.shtml isn't working properly. The
link in question is this one:
http://instruct1.cit.cornell.edu/~agl1/Hexagone.html. You'll find a similar
resource published
here:
I thought that you wanted a separate file for each of the breaks
"GG!KK!KK!". If you want to read in some large number of lines and
then break them so that they have that many lines, you can do the same
thing, except scanning from the back for a break. So if your input
file has 14M breaks in it,
Dear R users, I need help with my heatmap. I would really appreciate some help.
Given the matrix:
> head(x)
       A    B  C  D  time
[1,]   0    8  0  0     1
[2,]   0  160  0  0     2
[3,]   0  175  0  0     3
[4,]   0  253  0  0     4
[5,]  79  212  0  0     5
[6,]   6  105  0  0     6
and call:
## Heatmap ---
dear listers,
right now, we are trying to use r to implement sas dde function, e.g.
interact with excel. however, we can't find a way to call vba from R?
any insight is appreciated.
I'm not sure how to easily get that data from google (see Michael's
message), but it's available from yahoo.
getSymbols('TCS.NS', src='yahoo')
I've found that historical stock data from Yahoo is typically cleaner and
more reliable than from Google. The other main difference is that Yahoo
provide
How about adding an additional argument to fun?
R> fun <- function(x, y) exp(-x)*sum(y^x)
R> y <- c(2,3,5)
R> integrate(fun, 0, 3, y)
[1] 346.853 with absolute error < 3.9e-12
Michael
On Tue, Oct 18, 2011 at 10:01 AM, Freddy Hernandez Barajas
wrote:
> Hello all R users
>
> I want to calculate th
Hi
I'm using filled.contour in R with a 19x19 matrix, so the plot isn't
very "smooth". Are there any functions in R to make it smoother? I can't
have more observations because it takes too long.
Thanks, Knut
--
View this message in context:
http://r.789695.n4.nabble.com/filled-contour-with-few-
Use the stringsAsFactors = FALSE argument for read.table() so the
strings stay character vectors; then you can convert them directly
yourself.
Michael
On Tue, Oct 18, 2011 at 9:40 AM, Martin Batholdy
wrote:
> Ok, I think that would work – thanks!
>
> However, in my case I read a data.frame via r
On Tue, Oct 18, 2011 at 03:40:27PM +0200, Martin Batholdy wrote:
> Ok, I think that would work – thanks!
>
> However, in my case I read a data.frame via read.table().
> So some of the columns get transformed to factors automatically –
> I don't generate the factor-variables as in the example, so
Hello all R users
I want to calculate this univariate integral:
exp(-x)*sum(y^x) respect to x from 0 to 3 where y is a vector y=(2,3,5).
In fact, the original y vector has a large number of elements, but I
illustrate here with 3 elements.
I know that I can resolve this problem doing
fun <- function(x)
In your code you had a loop over the variable col, but it was never used.
Anyways, just modify the line:
n <- n[-length(n)] # Throw out unwanted columns
to also throw out values with 0's. Perhaps:
idxZeros <- apply(d, 1, function(x) any( abs(x-0) < 1e-08)) # Identify
rows with zeros
n <- n[!idxZ
Thanks for your reply.
Let me make an example then:
m<- c(150, 400, 500,750,800, NA)
How can I use cut to generate the m_group as c(0,0.4755,1, 0.2275,0,0):
Breaks 331.04 476.07 608.66 791.5
NA
m_group0 x 1
On Oct 18, 2011, at 7:35 AM, Martin Batholdy wrote:
Dear R-list,
I currently have to convert a data.frame with several factor-
variables to a numeric matrix.
Now the problem is that the order of the factor labels doesn't match
the order I would like to use.
for example, let's assume I
Ok, I think that would work – thanks!
However, in my case I read a data.frame via read.table().
So some of the columns get transformed to factors automatically –
I don't generate the factor-variables as in the example, so I can't control how
the levels are ordered (or can I?).
On 18.10.2011,
Thanks Jim for your help. I tried this code using readLines and it
works but not in the way I wanted. It seems that this code is trying to
separate all records from a text file so that I'm getting over 14 000
000 text files. My intention is to get only 15 text files, all except
one containing 1 000 000
Dear all,
I know there have been various questions posted over the years about loops but
I'm afraid that I'm still stuck. I am using Windows XP and R 2.9.2.
I am generating some data using the multivariate normal distribution (within
the 'mnormt' package). [The numerical values of sanad and cov
Add levels= to your factor() call.
E.g.,
x1 <- factor(rep(1:4, 5), labels=c("slightly disagree", "disagree",
"agree", "slightly agree"), levels = c(2,1,4,3))
as.numeric(x1)
[1] 2 1 4 3 2 1 4 3 2 1 4 3 2 1 4 3 2 1 4 3
Michael
On Tue, Oct 18, 2011 at 7:35 AM, Martin Batholdy
wrote:
> Dear R-lis
Thanks Michael,
that's what I wanted; I did not quite catch which variable was unused.
Another thing: my variable values are on a log scale, so it generates '0's
instead of '1'; how do I get rid of those columns?
Thanks for your patience
Sharad
--
View this message in context:
http://r.789695.n4.n
Dear R users,
Apologies for the total beginner's question. I was wondering whether
you could tell me if there is a structural equation modelling function
that can handle binary data i.e. in similar manner to the GLM function
with a binomial family.
Best wishes,
Mario
Ben,
this is a continuation of the query i posted on:
http://r.789695.n4.nabble.com/GLM-and-Neg-Binomial-models-td3902173.html
I cannot give you a direct example (big dataset) of what i did aside from
what i have written:
fitpoisson <- glm((RESPONSE) ~ A + B +
offset(log(LENGTH)) + offset(lo
Sorry,
I didn't notice a mistake... Sepal.Length is a dependent variable
and Sepal.Width is an independent one...
---
Regards
Alex
2011/10/18 Alexander Lebedev
> *Dear experts,*
>
> Please excuse me for disturbing... Right now I am struggling with GLM a
> bit... Would you be so kind to provide m
Dear Ben,
First of all, many thanks for your reply. I am highly appreciative of that.
I am still unsure about some issues
The dispersion parameter is that which is estimated by
sum(residuals(fit,type="pearson")^2)/fit$df.res. This is what a quasipoisson
model estimates. This corresponds
*Dear experts,*
Please excuse me for disturbing... Right now I am struggling with GLM a
bit... Would you be so kind to provide me a solution on using nuisance
variables. The problem is that I have data on Depression (volumetric
measurements of different brain regions) and I want to include age, ge
Hi
How to get p-value and the standard error in PLS
I have used the following function to calculate PLS
fit1 <- mvr(formula=Y~X1+X2+X3+X4, data=Dataset, ncomp=4)
Please help me
--
View this message in context:
http://r.789695.n4.nabble.com/getting-p-value-and-standard-error-in-PLS-tp3914760p
Duncan Murdoch gmail.com> writes:
>
> On 11-10-18 4:30 AM, Seref Arikan wrote:
> > Hi Dan,
> > I've tried the log likelihood, but it reaches zero again, if I work with say
> > 1000 samples.
> > I need an approach that would scale to quite large sample sizes. Surely I
> > can't be the first one t
Without knowing the calculation you want to run, I can't give you any
more direction than this, but it sounds like you need to take a step
back and rethink your problem in terms of vectorization. If you can do
so, outer() might be able to help as well as direct vectorwise
calculation.
If it's enti
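A tiny illustration of outer() replacing a double loop (unrelated to the poster's actual calculation, which we never see):

```r
# outer() evaluates a function over every pair (x[i], y[j]) at once
x <- 1:3
y <- c(10, 20)
outer(x, y, FUN = "+")
#      [,1] [,2]
# [1,]   11   21
# [2,]   12   22
# [3,]   13   23
```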
Dear Ondrej,
You might use the linearHypothesis() function in the car package.
Best,
John
John Fox
Senator William McMaster
Professor of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox
I believe it's because they are not made available for download as a csv file.
Compare:
https://www.google.com/finance/historical?q=NSE:TCS
with
https://www.google.com/finance/historical?q=NASDAQ:AAPL
You'll see that for AAPL, there is an option to export prices on the
right hand side: that's
Use 'readLines' instead of 'read.table'. We want to read in the text
file and convert it into separate text files, each of which can then
be read in using 'read.table'. My solution assumes that you have used
readLines. Trying to do this with data frames gets messy. Keep it
simple and do it in t
Rui,
I suggest you read the following tutorial to give you an introduction to
foreach:
http://cran.r-project.org/web/packages/doMC/vignettes/gettingstartedMC.pdf
Regarding your question, I would suspect that you cannot assign values to
a variable inside foreach and use them "outside"
Thanks Jim,
I tried to convert this solution into my situation (.txt file as an input);
zz <- file("myfile.txt", "r")
fileNo <- 1 # used for file name
buffer <- NULL
repeat{
input <- read.csv(zz, as.is=T, nrows=100, sep='!',
row.names=NULL, na.strings="")
if (length(input) == 0) break
Let's do it in two parts: first create all the separate files (which
if this what you are after, we can stop here). You can change the
value on readLines to read in as many lines as you want; I set it to 2
just for testing.
x <- textConnection("APE!KKU!684!
APE!VAL!!
APE!UASU!!
APE!PLA!1!
APE!E!1
Let me rephrase your question:
"How do I get the results that are printed by print(survfit())"
Answer: read the help file "?print.survfit"
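For instance (using the survival package's lung data as a stand-in), the quantities print.survfit() displays can also be pulled out directly:

```r
library(survival)
# The numbers shown by print(fit) are available from summary():
fit <- survfit(Surv(time, status) ~ 1, data = lung)
summary(fit)$table   # records, n, events, median, confidence limits, ...
```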
D_Tomas hotmail.com> writes:
> I have fitted a Negative Binomial model (glm.nb) and a Poisson model (glm
> family=poisson) to some count data. Both have the same explanatory variables
> & dataset
>
> When I call sum(fitted(model.poisson)) for my GLM-Poisson model, I obtain
> exactly the same n
I have a data set like this in one .txt file (cols separated by !):
APE!KKU!684!
APE!VAL!!
APE!UASU!!
APE!PLA!1!
APE!E!10!
APE!TPVA!17122009!
APE!STAP!1!
GG!KK!KK!
APE!KKU!684!
APE!VAL!!
APE!UASU!!
APE!PLA!1!
APE!E!10!
APE!TPVA!17122009!
APE!STAP!1!
GG!KK!KK!
APE!KKU!684!
APE!VAL!!
APE!UASU!!
APE!
On 11-10-18 7:34 AM, Vikram Bahure wrote:
Hi,
I am a new user in R.
I wanted to study the code for some R commands.
For example, as I was studying PCA analysis there is a command in R, as
"princomp". Normally if we type the command we get the code behind the
function, but I am not able to get
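For the record, one standard way to see the code behind an S3 generic such as princomp():

```r
# princomp() is an S3 generic; the code lives in its methods
methods(princomp)                   # lists e.g. princomp.default
getS3method("princomp", "default")  # prints the actual source
# stats:::princomp.default works too
```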
Dear R-list,
I currently have to convert a data.frame with several factor-variables to a
numeric matrix.
Now the problem is that the order of the factor labels doesn't match the order I
would like to use.
for example, let's assume I have this factor-variable in my data-frame:
x <- factor(re
Hi,
I am a new user in R.
I wanted to study the code for some R commands.
For example, as I was studying PCA analysis there is a command in R, as
"princomp". Normally if we type the command we get the code behind the
function, but I am not able to get for this one.
*> princomp
function (x, ...)
Dear r-helpers,
I have a query regarding use of contrasts in MANOVA.
summary(manova(model))
gives me only result of test for overall difference.
Would you be so kind and give me a hint how to get the same test statistics
(e.g.Pillai's) and P values for the predefined contrasts?
Best regards
Ondrej
Thank you again.
Nicola
2011/10/18 Uwe Ligges
>
>
> On 18.10.2011 12:23, Nicola Sturaro Sommacal wrote:
>
>> Thank you very much for your reply. You confirm what I suppose.
>>
>> Can you give me a reference of that you wrote? I need it for a report.
>>
>> Thanks again.
>>
>>
>> PS: sorry Uwe
On 18.10.2011 12:23, Nicola Sturaro Sommacal wrote:
Thank you very much for your reply. You confirm what I suppose.
Can you give me a reference of that you wrote? I need it for a report.
Thanks again.
PS: sorry Uwe for the previous reply, not to the list.
... where I replied it is in the
Thank you very much for your reply. You confirm what I suppose.
Can you give me a reference of that you wrote? I need it for a report.
Thanks again.
PS: sorry Uwe for the previous reply, not to the list.
2011/10/18 Uwe Ligges
>
>
> On 18.10.2011 10:37, Nicola Sturaro Sommacal wrote:
>
>> He
Please see the posting guide, and supply the information you were
asked for (and read the relevant manuals).
What OS?
What locale?
What graphics device?
No extra support is needed: R handles Czech characters perfectly well
in a Czech locale (or any UTF-8 locale provided you have the correct
f
On 11-10-18 4:30 AM, Seref Arikan wrote:
Hi Dan,
I've tried the log likelihood, but it reaches zero again, if I work with say
1000 samples.
I need an approach that would scale to quite large sample sizes. Surely I
can't be the first one to encounter this problem, and I'm sure I'm missing
an optio
On 18.10.2011 10:37, Nicola Sturaro Sommacal wrote:
Hello everybody.
My issue arise when I build a package with my functions. This package is for
personal purposes only and it will not submitted to CRAN.
Anyway, this may be an opportunity for myself to clear the S3 methods
concept. I read the