> Good Afternoon,
>
>I have a small problem with the following code.
>
> # The x.sub$Time[[1]] 2006-10-31 19:03:01 EST
>
> # when put in the variable star gives me
> star<-x.sub$Time[[1]]
> print(star)
> print(x.sub$Time[[1]])
> [1] 1 36 32 -
>
>
> I do not understand why.
Why what?
Hi
Just order your table output.
xx<-sample(letters[1:5], 100, replace=T)
yy<-table(xx)
barplot(yy[order(yy, decreasing=T)])
Regards
Petr
>
>
> Hi,
>
> I am working on categorical data with a column for disease name (category).
>
> My input data is
> [1] Acute lymphoblastic leukemia (child
Hi Kimmo,
You can try the "layer" function from the "latticeExtra" library:
densityplot(~PV1CIV, groups=SGENDER, data=ISGFINC2,
lwd=2, col=1, lty=c(1,2), pch=c("+","o"),
key=list(text=list(lab=levels(ISGFINC2$SGENDER), col=1),
space="bottom", columns=2, border=T, lines=T, lwd=2,
lty=c(
Hi!
I have used the following command:
densityplot(~PV1CIV, groups=SGENDER, data=ISGFINC2,
lwd=2, col=1, lty=c(1,2), pch=c("+","o"),
key=list(text=list(lab=levels(ISGFINC2$SGENDER), col=1),
space="bottom", columns=2, border=T, lines=T, lwd=2,
lty=c(1,2), col=1), ref=T, plot.points=F)
to
Just to confirm that strapply seems to be the fastest.
I also made a mistake in the formula before; as has already been suggested,
the corrected version is
c(1, 1/60) %*% strapply(x, "\\d+.?\\d+", as.numeric, simplify = TRUE)
The only issue now is that I need to convert my data to a format th
Hi Vihan,
Perhaps quantile in "stats" package...
Type ?quantile; you will get a lot of information.
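For example, a minimal sketch (the column name "value" is only an assumption):
dat <- read.csv("foo.csv")              # assumes a header and a numeric column named "value"
quantile(dat$value, probs = 0.999)      # see ?quantile for the interpolation types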
Regards,
Pascal
- Original Message -
From: Vihan Pandey
To: r-help@r-project.org
Cc:
Sent: Thursday, 8 March 2012, 15:08
Subject: [R] Doing Mathematica Quantile[] function in R
Hi all,
Hi,
I am working on categorical data with a column for disease name (category).
My input data is
[1] Acute lymphoblastic leukemia (childhood)
[2] Adiponectin levels
[3] Adiponectin levels
[4] Adiponectin levels
[5] Adiponectin leve
hi
I plot a series of observations taken every minute over a day, as in the
attachment below:
plot(wnd,type='l',lty=1,col='red',lwd=1,xlab=xxlab,ylab=yylab,ylim=YY)
In the figure, the x-axis tick labels show the index of the data points.
How can I change them to, for example, 1h 2h 3h 4h and so on?
--
TANG Jie
Email: to
Hello world,
I'm pretty new to computer code: for example, I consider it a small
victory that I (all by myself!) managed to ssh into the server at my
lab from home and copy a file onto my desktop. Be gentle. I have
primarily used R for running some pretty mid-level statistics
(creating distance mat
Hi all,
I am an R newbie trying to do some calculations I do in Mathematica in
R on a GNU/Linux system.
The main thing I am interested in doing is taking a 0.999 quantile on
a data set in a file whose data are normally distributed, say foo.csv.
e.g in Mathematica if I have something like this :
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Ista Zahn
> Sent: Wednesday, March 07, 2012 6:55 PM
> To: Greg Snow
> Cc: r-help@r-project.org; Markus Elze
> Subject: Re: [R] gsub: replacing double backslashes with single backsl
Hi Chintanu,
Try this:
legend (locator(1), "Important ones", box.col=NA)
Regards,
Pascal
- Original Message -
From: Chintanu
To: r-help@r-project.org
Cc:
Sent: Thursday, 8 March 2012, 14:52
Subject: [R] legend
Hi,
A very simple thing that I'm unable to do. I did look at the help but .
Hi,
A very simple thing that I'm unable to do. I did look at the help but
While putting a legend on a plot, I don't wish to have the enclosing
border surrounding the words (as given below).
Tried to use the following, but didn't help :
legend (locator(1), border=FALSE, fill=FALSE, "Import
On 08/03/2012 02:07, Hasan Diwan wrote:
I have a bunch of clean timeseries data obtained from a sensor and I'd
like to apply a Kalman Filter to it to smooth it out. Through a few
days of Googling, reading papers, implementing such a filter in
various languages, I finally realised that it may be
RODBC is a library that sends SQL statements through an ODBC DSN. I don't see
anything to suggest that the problem is in R or RODBC, but it might be in ODBC
or your (unidentified) database. If you don't agree, please make your example
reproducible (sample data, complete R code, ODBC DSN creation
Hello to R users,
I am wondering if there is an easy way to perform a cross-power spectral
density estimation of two time series (x and y) using Welch's method.
Both packages "bspec" and "oce" provide a function to calculate the PSD with
Welch's method, but only for a single time series.
Thank y
Perhaps you should dput() your data so this is reproducible...
Michael
On Wed, Mar 7, 2012 at 10:15 AM, RMSOPS wrote:
> Good Afternoon,
>
> I have a small problem with the following code.
>
> # The x.sub$Time[[1]] 2006-10-31 19:03:01 EST
>
> # when put in the variable star gives me
> star<-x.sub$T
You are chasing your tail. You have already achieved your goal, but you don't
seem to understand that.
The three characters
C:\
are represented in R as
"C:\\"
so when you see the latter, the former is what is actually already in memory.
"C:\"
is not legal R code (it is an unterminated strin
On Wed, Mar 7, 2012 at 12:57 PM, Greg Snow <538...@gmail.com> wrote:
>
> The issue here is the difference between what is contained in a string
> and what R displays to you.
>
> The string produced with the code:
>
> > tmp <- "C:\\"
>
> only has 3 characters (as David pointed out), the third of whi
Look at the packages dlm, fkf, or sspir for various Kalman filter
implementations. I use dlm regularly and it's great.
On Wed, Mar 7, 2012 at 8:07 PM, Hasan Diwan wrote:
> I have a bunch of clean timeseries data obtained from a sensor and I'd
> like to apply a Kalman Filter to it to smoothe it o
Hi,
Suppose we have a general function that returns a logical indicating
which values in 'x' are found in 'l', throwing an error if none are
found and a warning if only some are found:
"checkFun" <- function(l, x)
{
xinl <- x %in% l
if (! any(xinl)) stop("none of the x values found in l")
Dear Michael,
effect() works with lmer(). Just load lme4 after the effects package. See the
penultimate example in ?effect.
I hope this helps,
John
John Fox
Sen. William McMaster Prof. of Social Statistics
Department of Sociology
McMaster Univers
I have a bunch of clean timeseries data obtained from a sensor and I'd
like to apply a Kalman Filter to it to smooth it out. Through a few
days of Googling, reading papers, implementing such a filter in
various languages, I finally realised that it may be built into R. So
I did a "??kalman" at the
Hi Richard,
Could you provide a reproducible example?
Regards,
Pascal
From: GAO Richard
To: r-help@r-project.org
Sent: Thursday, 8 March 2012, 10:10
Subject: [R] ADL in auto.arima [SEC=UNOFFICIAL]
Hi,
I am trying to run ADL model by using auto.arima in packa
Hello,
Anamika Chaudhuri-2 wrote
>
>>
>> Hi All:
>>
>> I am using R to calculate exact 95% confidence interval using Clopper
>> Pearson method. I am using the following code but it seems to get into a
>> loop and not get out of it, it goes on forever although I am looping it
>> only 10 times acr
Hello again.
Ben quant wrote
>
> Hello,
>
> In case anyone is interested in a faster solution for lots of columns.
> This
> solution is slower if you only have a few columns. If anyone has anything
> faster, I would be interested in seeing it.
>
> ### some mockup data
> z.dates =
> c("2007-03
Hi,
I am trying to run an ADL model using auto.arima in the "forecast" package.
I put two time series, x and xreg, into the call, but got the message: Error
in nsdiffs(xx) : Non seasonal data. Can anyone tell me how to use it?
Thanks
Richard
Hi,
I would like to use the effect() function (actually a slightly modified version
of it) on the output of the lmer() function in the lme4 package. But the
effects package requires the nlme package, which is incompatible with lme4.
Workaround?
If it messes up your data, then it indicates that your data is messed
up to start with. Go to the source of your data and see if they can
put field separators. If not, then if the data is column-wise fixed
format, then use read.fwf to read in the data. If the data is as your
mail shows it, then R
Hi Michael
Thanks for the report and digging into the actual XML documents
that are sent.
It turns out that if I remove the redundant namespace definitions
and just use a single one on the node, all is apparently fine.
I've put a pre-release version of the SSOAP package that does this at
http://w
On Mar 7, 2012, at 3:39 PM, David Winsemius wrote:
On Mar 7, 2012, at 3:16 PM, uday wrote:
Hi Sarah, thanks for the reply.
your method works
what I did
load (" data.2005")
load ( "data.2006")
then
data.new<- c( or we can save it as a new .RData also.
That looks failure prone. You have remove
Dear Peter and Tamre,
I took a closer look at this today, and the infrastructure was there to do
the univariate tests even with a singular SSP matrix, so I modified
Anova.mlm() to accommodate this case. The updates are in the development
version of the car package on R-Forge.
An example (checked
On Mar 7, 2012, at 4:32 PM, karena wrote:
If a text file has rows of variable lengths. How to read the file
into R?
I think some people may suggest using 'fill=T', however, it sort of
messes
the data up, for example, in the text file:
abcd
1234
1 86
1
If a text file has rows of variable lengths, how can I read the file into R?
I think some people may suggest using 'fill=T', however, it sort of messes
the data up, for example, in the text file:
abcd
1234
1 86
12 0
110
If I read in the
On Mar 6, 2012, at 6:19 PM, FU-WEN LIANG wrote:
Thanks for your advice, David.
I did read the help for survreg and using the followings to calculate.
survreg's scale =1/(rweibull shape)
survreg's intercept = log(rweibull scale)
However, the scale in rweibull has been transformed by exp(
On Mar 7, 2012, at 3:35 PM, Anamika Chaudhuri wrote:
Hi All:
I am using R to calculate exact 95% confidence interval using Clopper
Pearson method. I am using the following code but it seems to get
into a
loop and not get out of it, it goes on forever although I am
looping it
only 10 times
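For what it's worth, a minimal sketch of one way to get the exact (Clopper-Pearson) interval directly, with made-up counts:
x <- 7; n <- 10                      # made-up successes and trials for one site
binom.test(x, n)$conf.int            # exact (Clopper-Pearson) 95% confidence interval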
Michael, thanks for your answer!
The attribute solution has its problems though.
First, how it will work:
foo <- function(){
w <- gtkWindow()
da <- gtkDrawingArea()
w$add(da)
asCairoDevice(da)
print(dev.cur())
gObjectSetData(da, "dev.number", data = dev.cur())
dev.set(gObjectGetD
>
> Hi All:
>
> I am using R to calculate exact 95% confidence interval using Clopper
> Pearson method. I am using the following code but it seems to get into a
> loop and not get out of it, it goes on forever although I am looping it
> only 10 times across 63 sites with 10 observations per site. I
On Mar 7, 2012, at 3:16 PM, uday wrote:
Hi Sarah, thanks for the reply.
your method works
what I did
load (" data.2005")
load ( "data.2006")
then
data.new<- c( or we can save it as a new .RData also.
That looks failure prone. You have removed the context of your
original question ( and PLEASE
On Wed, Mar 07, 2012 at 03:14:05PM -0500, Alexander Shenkin wrote:
> Hello,
>
> I need to take a dot product of each row of a dataframe and a vector.
> The number of columns will be dynamic. The way I've been doing it so
> far is contorted. Is there a better way?
>
> dotproduct <- function(
as.matrix(df) %*% vec
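For example, with made-up data (all columns must be numeric):
df  <- data.frame(a = 1:3, b = 4:6, c = 7:9)   # made-up data frame
vec <- c(1, 10, 100)
as.matrix(df) %*% vec                          # one dot product per row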
Michael
On Mar 7, 2012, at 3:14 PM, Alexander Shenkin wrote:
> Hello,
>
> I need to take a dot product of each row of a dataframe and a vector.
> The number of columns will be dynamic. The way I've been doing it so
> far is contorted. Is there a better way?
>
>dot
Now I got the results I wanted.
Thank you all.
On Wed, Mar 7, 2012 at 2:51 PM, AAsk wrote:
> > x <- -1:4
> > x<0 # returns TRUE (1) or FALSE (0)
> [1] TRUE FALSE FALSE FALSE FALSE FALSE
> > x+as.numeric(x<0)
> [1] 0 0 1 2 3 4
>
On Mar 7, 2012, at 11:41 AM, Oritteropus wrote:
Hi,
I need to sample randomly my dataset for 1000 times. The sample need
to be
the 80%. I know how to do that, my problem is that not only I need
the 80%,
but I also need the corresponding 20% each time. Is there any way to
do
that?
Alterna
On Wed, Mar 07, 2012 at 08:41:35AM -0800, Oritteropus wrote:
> Hi,
> I need to sample randomly my dataset for 1000 times. The sample need to be
> the 80%. I know how to do that, my problem is that not only I need the 80%,
> but I also need the corresponding 20% each time. Is there any way to do
> t
Hi Sarah, thanks for the reply.
Your method works. What I did:
load("data.2005")
load("data.2006")
then
data.new <- c(data.2005, data.2006), or we can save it as a new .RData also.
The NULL entries I can remove by selecting a particular element number,
e.g. data.new<- c( data.2005[1:3], data.2006[4:6]
Hey everybody,
I am trying to run the grid-bootstrap procedure of Bruce Hansen, but I am a bit
overwhelmed... How do I have to adjust the script in order to make it run on
my own time series?
I set the required inputs:
dat<-read.table("c:/users/Vaio/Documents/Gauss/Bai-Perron/Real.txt")
dat = ts(dat, s
Hello,
I need to take a dot product of each row of a dataframe and a vector.
The number of columns will be dynamic. The way I've been doing it so
far is contorted. Is there a better way?
dotproduct <- function(dataf, v2) {
apply(t(t(as.matrix(dataf)) * v2), 1, sum) #contorted!
}
You can use load() to load them both, if they do not have objects with
identical names, then save() to make a new RData file.
I'm not clear on what you mean by a NULL file, though. If you know
which objects you want to get rid of, you can do that with rm().
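A rough sketch of that sequence (the output file name is just a placeholder):
load("data.2005.RData")
load("data.2006.RData")
# rm(unwanted_object)                        # drop anything you don't want, by name
save(list = ls(), file = "combined.RData")   # everything left goes into one file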
Sarah
On Wed, Mar 7, 2012 at 12:46 PM,
On Wed, Mar 7, 2012 at 2:41 PM, Ajay Askoolum wrote:
> Thank you. Yes, I did look at the help but could not get my expressions to
> work.
>
> You used a data.frame with the matrix values I gave. Your solution also works
> when dealing with a matrix (this may be obvious to seasoned R users, but was
>
You can install R 2.14 from this ppa
https://launchpad.net/~marutter/+archive/rrutter
In the terminal type:
sudo add-apt-repository ppa:marutter/rrutter
sudo apt-get update
And then choose the packages you want to install.
You could make a vector containing the number of TRUE values that
makes up 80% of your data, and the number of FALSE values that makes
up 20% of your data. Use sample() to reorder it, then use it to divide
your dataset.
If you had provided a reproducible example, I could write you code.
Sarah
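A minimal, untested sketch of that idea ('mydata' is a placeholder for your dataset):
n   <- nrow(mydata)
k   <- round(0.8 * n)
idx <- sample(c(rep(TRUE, k), rep(FALSE, n - k)))   # reshuffled TRUE/FALSE vector
train <- mydata[idx, ]                              # the 80%
test  <- mydata[!idx, ]                             # the matching 20%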
On
I have two .RData files, e.g. data.2005.RData & data.2006.RData.
I would like to combine these two different data sets and make a single RData
file.
In both files there are some NULL entries, and I would like to remove these
NULL entries as well.
$ : NULL
$ : NULL
$ : num [1:43285, 1:8] -21.1
Dear R experts...I am a simple student who is getting lost because of this
amazing but stressful program.
I have been spending the last 4 weeks trying to understand how to create
a matrix to analyze the possible
dependence between my variables.
To be more precise, I have 91 variables to analyz
Hi,
I need to randomly sample my dataset 1000 times. Each sample needs to be
80% of the data. I know how to do that; my problem is that I need not only the
80%, but also the corresponding 20% each time. Is there any way to do that?
Alternatively, I was thinking of something like the setdiff() functio
Hi, thank you very much, sir.
options(help_type="text") is working to get the help. Thanks a lot.
Naveen Verma
Department of Pharmacoinformatics
Hi,
I am new to R and I am not sure if I am doing something wrong.
I have a table with 4500x24 (rows x columns) elements. The rows are data
related to each of the individuals (A, B, C, ...) located in the
columns.
Example:
A B C D E F
1 5.651296 5.480589 4.253070 3.515593 6.045253 5.916222
4.1
> x <- -1:4
> x<0 # returns TRUE (1) or FALSE (0)
[1] TRUE FALSE FALSE FALSE FALSE FALSE
> x+as.numeric(x<0)
[1] 0 0 1 2 3 4
The simplest method would be:
x[x<0] <- x[x<0]+1
x <- -1:4
x
# [1] -1 0 1 2 3 4
x[x<0] <- x[x<0]+1
x
# [1] 0 0 1 2 3 4
I think where Val got confused is in thinking that
if(x<0)
is applied separately to each element of x, one at a time.
What actually happens, of course, i
On Wed, Mar 7, 2012 at 2:29 PM, Daniel Nordlund wrote:
>> -Original Message-
>> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
>> On Behalf Of Gabor Grothendieck
>> Sent: Wednesday, March 07, 2012 8:52 AM
>> To: Alaios
>> Cc: R help
>> Subject: Re: [R] GPS handlin
Thank you. Yes, I did look at the help but could not get my expressions to work.
You used a data.frame with the matrix values I gave. Your solution also works
when dealing with a matrix (this may be obvious to seasoned R users, but was
not to me).
> x<-matrix(c(57,91,31,61,16,84,3,99,85,47,21,6,57
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Gabor Grothendieck
> Sent: Wednesday, March 07, 2012 8:52 AM
> To: Alaios
> Cc: R help
> Subject: Re: [R] GPS handling libraries or (String manipulation)
>
> On Wed, Mar 7, 2012 a
Try
> ifelse( x < 0, x + 1, x)
[1] 0 0 1 2 3 4
See also ?ifelse.
HTH,
Jorge.-
On Wed, Mar 7, 2012 at 2:12 PM, Val <> wrote:
> Hi All,
>
> I have one difficulty in using the conditional if statement
>
> Assume ,
>
> x <- -1:4
> x
> [1] -1 0 1 2 3 4
>
> if x is lees than want I want to ad
How about, say, order()? Did you even try to look
in the help, under sort or order since both appear
as keywords in your question?
> dput(x)
structure(list(V1 = c(57L, 84L, 21L, 61L), V2 = c(91L, 3L, 6L,
16L), V3 = c(31L, 99L, 57L, 84L), V4 = c(61L, 85L, 91L, 3L),
V5 = c(16L, 47L, 31L, 99L)),
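For instance, with the matrix from the post (sorting rows by the first column gives the order 3 1 4 2):
m <- matrix(c(57, 91, 31, 61, 16,
              84,  3, 99, 85, 47,
              21,  6, 57, 91, 31,
              61, 16, 84,  3, 99), nrow = 4, byrow = TRUE)
m[order(m[, 1]), ]                       # sort rows by the first column
m[do.call(order, as.data.frame(m)), ]    # sort by all columns, left to right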
You need ifelse() instead of if().
On Wed, Mar 7, 2012 at 2:12 PM, Val wrote:
> Hi All,
>
> I have one difficulty in using the conditional if statement
>
> Assume ,
>
> x <- -1:4
> x
> [1] -1 0 1 2 3 4
>
> if x is lees than want I want to add 1 and I used the following command
> if(x<0) {x
What R expression will cause a matrix to be sorted by row in ascending/descending order?
Given this matrix:
57 91 31 61 16
84 3 99 85 47
21 6 57 91 31
61 16 84 3 99
I want to end with this:
21 6 57 91 31
57 91 31 61 16
61 16 84 3 99
84 3 99 85 47
The 'order' of the sort is: 3 1 4 2
Also, what R expression will give m
Hi All,
I have one difficulty in using the conditional if statement
Assume ,
x <- -1:4
x
[1] -1 0 1 2 3 4
if x is less than zero I want to add 1, and I used the following command
if(x<0) {x=x+1}
Warning message:
In if (x < 0) { :
the condition has length > 1 and only the first element
I would certainly agree that given the post below, it is almost
impossible to offer much more than vague suggestions: simple, short,
reproducible examples per the posting guide rather than lengthy
complex code is required (although there are clever, perseverant folks
who may give it a go).
However
Did you try the survival package?
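A rough sketch with that package (all object and column names are placeholders, and time is assumed to be in days):
library(survival)
fit <- survfit(Surv(time, status) ~ group, data = d)   # one curve per group
s1  <- summary(fit, times = 365)                       # KM estimates for both curves at 1 year
diff(s1$surv)                                          # difference between the two estimates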
On Wed, 07-Mar-2012 at 10:50AM -0500, Jason Connor wrote:
|> I thought this would be trivial, but I can't find a package or function
|> that does this.
|>
|> I'm hoping someone can guide me to one.
|>
|> Imagine a simple case with two survival curves (e.g. tr
On Thu, Mar 8, 2012 at 4:50 AM, Jason Connor wrote:
> I thought this would be trivial, but I can't find a package or function
> that does this.
>
> I'm hoping someone can guide me to one.
>
> Imagine a simple case with two survival curves (e.g. treatment & control).
>
> I just want to calculate th
?by ?aggregate
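For example, made-up data and a weighted mean per group without explicit loops:
d <- data.frame(g = c("a", "a", "b", "b"), x = c(1, 2, 3, 4), w = c(1, 2, 1, 3))
sapply(split(d, d$g), function(s) weighted.mean(s$x, s$w))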
On Tue, Mar 6, 2012 at 4:14 PM, Walter Anderson wrote:
> I needed to compute a complicated cross tabulation to show weighted means
> and standard deviations and the only method I could get that worked uses a
> series of nested for next loops. I know that there must be a better wa
Google is really useful for questions like this.
http://cran.r-project.org/bin/linux/ubuntu/
On Mar 7, 2012, at 19:06 , henk harmsen wrote:
> i have a data frame with 2 columns of dates.
> with str(dataframe) i have ensured myself that they were indeed formatted
> as dates.
> one column has NA's in it.
>
> the aim is now to make a third column that chooses date1 if it is a date,
> and c
On Tue, Mar 6, 2012 at 8:55 PM, Byerly, Mike M (DFG)
wrote:
>
> estimates <-
c(67.42,30.49,32.95,23.53,10.26,6.03,23.53,0.93,50.72,24.2,25.84,18.54,
7.16,3.6,9.35,0.33,87.28,37.25,40.16,28.59,13.77,8.92,40.74,1.68,48.28,23.09,
24.49,17.7,6.63,3.28,7.79,0.26,91.63,38.74,41.6,29.74,14.49,9.51,44.1
I have a data frame with 2 columns of dates.
With str(dataframe) I have made sure that they were indeed formatted
as dates.
One column has NA's in it.
The aim is now to make a third column that takes date1 if it is a date,
and date2 if date1 is NA.
I am trying
df$date3=ifelse(is.na(d
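One common workaround, since ifelse() drops the Date class, is plain indexing; a sketch with the column names from the post:
df$date3 <- df$date1
df$date3[is.na(df$date1)] <- df$date2[is.na(df$date1)]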
On 06.03.2012 22:22, yushimito wrote:
I have a problem with WinBUGS.
The code executes perfectly; the only problem is that the stats option
isn't available (greyed out) and I don't know how to make it work.
This is a question for a bugs mailing list!
Maybe if I run WinBUGS through R
The issue here is the difference between what is contained in a string
and what R displays to you.
The string produced with the code:
> tmp <- "C:\\"
only has 3 characters (as David pointed out), the third of which is a
single backslash, since the 1st \ escapes the 2nd and the R string
parsing r
On Mar 7, 2012, at 12:13 PM, Lucas wrote:
Thank you.
What could be a "User Error"? Where could I be making a mistake?
Please consider what you are requesting. Attempting to enumerate the
possible errors would start with data input errors, progress through
transformational errors and fin
Hi
try
glm(Response~ .^2, data=yourdata.frame)
For all predictors (.) and 2-way interactions (^2).
You might also want to see ?drop.terms and ?formula for automating the
construction of all model combinations.
Side note: R is not SAS (fortunately). Interaction is denoted ":", X*Y
is shorthand for
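For instance, a small illustration of the formula shorthand (y, X, Y and d are placeholders):
glm(y ~ X * Y, data = d)         # X*Y expands to X + Y + X:Y
glm(y ~ X + Y + X:Y, data = d)   # the same model, written out in full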
On 03/07/2012 04:58 PM, Prof Brian Ripley wrote:
On 07/03/2012 15:45, Duncan Murdoch wrote:
On 12-03-07 7:59 AM, Miklós Emri wrote:
Dear experts,
I have to install ggplot2 packages for R 2.11 and 2.12 but this is
available for 2.14 only.
My question: are there urls for previous R version which
You wrote:
> Works for me:
>> e <- system("date", intern=TRUE)
>> e
>[1] "Wed Mar 7 08:58:32 GMTST 2012"
I suspect you have cygwin\bin in your PATH variable,
as that does not look like Windows date command:
> system("cmd.exe /c date /T")
Wed 03/07/2012
> shell("date /T")
Wed 0
You did not provide reproducible data since you didn't give us the "test"
file or any values for gene4 and gene5. You should read the posting
guideline (particularly the use of dput) and the documentation for package
ca.
I'm assuming that you want to plot supplementary rows (as described in the
ca
Thank you.
What could be a "User Error"? Where could I be making a mistake?
I cannot use an lm because my data are not normal; they are count
data. So my first option was Poisson, but I had severe overdispersion
problems, so I used the negative binomial as an option.
Thank you for taking the t
I have a box set up with Kubuntu as the OS. I didn't perform
the R install but was told the version of R available via the
apt-get command was 2.13.1. Is there any way to get 2.14.0
in that same manner?
On Wed, Mar 7, 2012 at 11:51 AM, Mark Heckmann wrote:
> Hello,
>
> I have an empty environment named env.
> Now I want the locally created objects in some function (foo) to appear in
> env.
> Within the function I want to have straight forward code, no assign operation
> or env$x etc. in front o
Wow... that is WAY better!
Thanks Gabor!
On Wed, Mar 7, 2012 at 8:51 AM, Gabor Grothendieck
wrote:
> On Wed, Mar 7, 2012 at 11:28 AM, Alaios wrote:
>> Dear all,
>> I would like to ask you if R has a library that can work with different GPS
>> formats
>>
>> For example
>> I have a string of thi
On Mar 7, 2012, at 11:03 AM, Dan Abner wrote:
Hi everyone,
What is the easiest way to remove the word Average and strip leading
and trailing blanks from the character vector (d5.Region) below?
  .nrow.d5.        d5.Region
1         1  Central Average
2         2  Coastal Average
3         3     East Average
On Wed, Mar 7, 2012 at 11:28 AM, Alaios wrote:
> Dear all,
> I would like to ask you if R has a library that can work with different GPS
> formats
>
> For example
> I have a string of this format
>
> N50° 47.513 E006° 03.985
> and I would like to convert to GPS decimal format.
>
> that means for
Hello,
I have an empty environment named env.
Now I want the locally created objects in some function (foo) to appear in env.
Within the function I want to have straightforward code, no assign operation
or env$x etc. in front of every command.
What is the best way to do that?
Example:
Take a look at:
http://cran.r-project.org/web/views/Spatial.html
But I've always just parsed the string...
This is from the last time I did this; it's not quite the same, but you
can see the similarities.
## if data is presented as 43°02'46.60059" N, we need to split on the °
symbol, ' and ".
to.deci
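An untested sketch of that kind of parser (the name to.decimal is a hypothetical completion):
to.decimal <- function(s) {                             # e.g. s <- "43°02'46.60059\" N"
  p <- as.numeric(strsplit(s, "[°'\" ]+")[[1]][1:3])    # degrees, minutes, seconds
  p[1] + p[2] / 60 + p[3] / 3600
}
to.decimal("43°02'46.60059\" N")                        # about 43.04628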
On Mar 7, 2012, at 15:02 , Lucas wrote:
> Hi Pascal.
>
> I applied my analysis over time. I have 25 fire seasons; each season starts
> in November and ends in April (our summer)
Hey, why are you worrying about regression coefficients. _Everything_ is
upside-down at your place... ;-)
> , so I
Dear all,
I would like to ask you if R has a library that can work with different GPS
formats
For example
I have a string of this format
N50° 47.513 E006° 03.985
and I would like to convert to GPS decimal format.
that means for example converting the part N50° 47.513
to 50 + 47/60 + 513/3600.
?as.numeric
> as.numeric(c(TRUE, FALSE))
[1] 1 0
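Or, if numeric column positions such as 1:3 are needed rather than 0/1 values, a small sketch:
grp <- c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE)   # made-up group indicator
which(grp)      # 1 2 3
which(!grp)     # 4 5 6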
On Wed, Mar 7, 2012 at 8:02 AM, Ed Siefker wrote:
> I am trying to use the coXpress function from
> the coXpress package. This function requires
> numerical vectors indicating which columns
> are in which group.
>
> The problem is, I can only fi
On 07/03/2012 11:02 AM, Ed Siefker wrote:
I am trying to use the coXpress function from
the coXpress package. This function requires
numerical vectors indicating which columns
are in which group.
The problem is, I can only figure out how
to get a logical structure, not a numerical one.
In other
I thought this would be trivial, but I can't find a package or function
that does this.
I'm hoping someone can guide me to one.
Imagine a simple case with two survival curves (e.g. treatment & control).
I just want to calculate the difference in KM estimates at a specific time
point (e.g. 1 year
Hi again,
Thanks for the responses. The latter solution does the trick!
I had tinkered around the numeric -> character route & tried as.Date a few
different ways, but needed guidance to the bullseye.
Thanks again!
I am trying to use the coXpress function from
the coXpress package. This function requires
numerical vectors indicating which columns
are in which group.
The problem is, I can only figure out how
to get a logical structure, not a numerical one.
In other words, coXpress wants something like:
"1:3"
Hadley's package stringr is wonderful for all things string.
library(stringr)
?str_trim
and
?str_replace are what you want. (the base R equivalent of these two
would be ?gsub and some regular expressions)
str_trim(str_replace(d5.Region, 'Average', ''))
should do the trick.
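In base R that would be roughly:
gsub("^\\s+|\\s+$", "", gsub("Average", "", d5.Region))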
hope that helps,
On Wed, Mar 07, 2012 at 09:00:14AM -0700, Ben quant wrote:
> Hello,
>
> I have two matrices. They both have different row names and column names,
> but they have some common row names and column names. The row names and
> column names that are the same are what I am interested in. I also want the
Sarah,
Thanks! Great stuff...
Tom
On Wed, Mar 7, 2012 at 10:07 AM, Sarah Goslee wrote:
> Quoting from today's PhD Comics, available at:
> http://www.phdcomics.com/comics.php?f=1476
>
> What the methodology section says: "Analysis was performed using a
> commercially available software package."
Hi everyone,
What is the easiest way to remove the word Average and strip leading
and trailing blanks from the character vector (d5.Region) below?
  .nrow.d5.        d5.Region
1         1  Central Average
2         2  Coastal Average
3         3     East Average
4         4