Quoting "Hutchinson,David [PYR]" <[EMAIL PROTECTED]>:
I wrote a simple function to change values of a matrix or vector to NA
based on the element value being -9 or -99. I don't understand
why the function returns a unit vector (NA) instead of setting all
values in the vector which have -9
Sorry, there was a stupid cut & paste mistake (missing parentheses in
return statement...)
ConvertMissingToNA <- function(values)
{
  values[values == -9 | values == -99] <- NA
  return(values)
}
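A quick check of the corrected function (assuming, as the question suggests, that -9 and -99 are the missing-value codes):
ConvertMissingToNA(c(1, -9, 5, -99, 7))
# [1]  1 NA  5 NA  7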
Peter
Hi all,
I'm currently working through "The Analysis of Time Series" by Chris
Chatfield. In order to also get a better understanding of R, I play
around with the examples and exercises (no homework or assignment,
just self-study!).
Exercise 2.1 gives the following dataset (sales figures for 4 we
sorry, as Mark Leeds pointed out to me, the row/column numbers were
mixed up in my example... happens when you cut & paste like mad from
your history... it should read as follows:
sales2.1 <- c(153,189,221,215,302,223,201,173,121,106,86,87,108,
133,177,241,228,283,255,238,164,128,108,87,74,95,
1
On Tue 23 Oct 07, 10:56 AM, Gad Abraham <[EMAIL PROTECTED]> said:
> caffeine wrote:
>> I'd like to fit an ARMA(1,1) model to some data (Federal Reserve Bank
>> interest rates) that looks like:
>> ...
>> 30JUN2006, 5.05
>> 03JUL2006, 5.25
>> 04JUL2006, N < here!
>> 05JUL2006, 5.
Hello,
Is there a way to get the current line number in an R script?
As a silly example, if I have the following script and a function called
getLineNumber (suppose one exists!), then the result would be 3.
1 # This is start of script
2
3 print( getLineNumber() )
4
5 # End of script
Thanks for
Hello,
I am working on a GUI, which is working well so far.
I am working on a Windows 7 machine with 64 bit R (Microsoft R Open 3.3.2)
Essentially:
1) a VBS executable is used to open the GUI leaving the R terminal running
in the background but not showing using:
CreateObject("Wscript.Shell").Ru
ay be considered a Windows question - I will look further
into opening a process running in the background using DOS commands.
Thank you for your time.
On Wed, Apr 19, 2017 at 12:14 PM, Duncan Murdoch
wrote:
> On 19/04/2017 11:44 AM, Brad P wrote:
>
>> Hello,
>>
>> I am working on
Hello,
Keep in mind I am VERY new to using R... What I am trying to do is package
hundreds of files into a new sub-directory. I was able to accomplish this
with the code below. HOWEVER, I have come to find that instead of merely
having to name the new sub-directory after the 7-digit numeric prefix
dinate=y)
xg <- make.surface.grid(grid.l)
out.p <- as.surface(xg,z)
plot.surface(out.p,type="p")
tried:
grid_new.l <- list(abcissa=c(-15.0,-10.),ordinate=y)
xg_new <- make.surface.grid(grid_new.l)
out_new.p <- predict.surface(out.p,xg_new)
results in this prompt:
predict.surfa
rmatted emails are not passed
> along.
>
>
>
> On Tue, Dec 25, 2018 at 4:13 AM M P wrote:
>
>> Hello,
>> I used commands below to obtain a surface, can plot it and all looks as
>> expected.
>> How do I evaluate values at new point. I tried as below but
Actually, let's set it
grid_new.l <- list(abcissa=c(-15.0,-14.5),ordinate=y)
to avoid out of bounds
On Tue, Dec 25, 2018 at 4:41 PM M P wrote:
> Thanks, Eric, for looking into that.
> The values are below and, since I subset, the new abcissa has a smaller range
> grid_new.l <-
You could convert your data from a wide format to a long format using
the reshape function in base R:
DF2 <- reshape(DF, direction="long",
               idvar=names(DF)[1:3],
               varying=c("site1_elev", "site1_temp", "site2_elev", "site2_temp"),
               v.names=c("elev", "temp"),
               times=1:2)
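A hypothetical DF matching those column names (the original data frame is not shown), on which the call above runs as-is:
DF <- data.frame(id = 1:2, year = c(2000, 2001), region = c("A", "B"),
                 site1_elev = c(100, 120), site1_temp = c(15, 16),
                 site2_elev = c(200, 220), site2_temp = c(10, 11))
# After the reshape() call above, DF2 has one row per id/year/region and site,
# with 'time' indicating the site (1 or 2) and columns 'elev' and 'temp'.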
Hi,
You could use an anonymous function to operate on each `year-block' of
your dataset, then assign the result as a new column:
d <- data.frame(year=c(rep(2001, 3), rep(2002, 3)),
num=c(25,75,150,30,85,95))
d$diff <- unlist(by(d$num, d$year, function(x) x - x[1]))
d
year n
Hi,
As Jeff said, more than one grouping variable can be supplied, and there
is an example at the bottom of the help page for ave(). The same goes
for by(), but the order that you supply the grouping variables becomes
important. Whichever grouping variable is supplied first to by() will
chang
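For example, ave() with two grouping variables (made-up data):
d2 <- data.frame(g1 = rep(c("a", "b"), each = 4),
                 g2 = rep(1:2, 4),
                 x  = 1:8)
d2$grp_mean <- ave(d2$x, d2$g1, d2$g2, FUN = mean)   # group means by g1 and g2
d2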
# [1] -5.052890 -9.967671
# [[2]]
# [1] 2.648870 -2.629866
xyplot(y ~ x | name, data = my.df,
type = c("p", "g"),
scales = list(relation="free", limits=holdRange))
Philip
On 6/11/2016 9:59 AM, Naresh Gurbuxani wrote:
I want to draw a lattice graph where d
Hi,
It may help that:
aggregate(DF$total, list(DF$note, DF$id, DF$month), mean)
should give you means broken down by time slice (note), id and month.
You could then subset means for GA or GB from the aggregated dataframe.
Philip
On 27/11/2016 3:11 AM, lily li wrote:
Hi R users,
I'm trying
Hi Val,
The by() function could be used here. With the dataframe dfr:
# split the data by first name and check for more than one last name for
each first name
res <- by(dfr, dfr['first'], function(x) length(unique(x$last)) > 1)
# make the result more easily manipulated
res <- as.table(res)
res
in one pass.
If your data set is really big (running out of memory big) then you
might want to investigate the data.table or sqlite packages, either of
which can be combined with dplyr to get a standardized syntax for
managing larger amounts of data. However, most people actually aren't
aggregate(), tapply(), do.call(), rbind() (etc.) are extremely useful
functions that have been available in R for a long time. They remain
useful regardless of what plotting approach you use - base graphics,
lattice or the more recent ggplot.
Philip
On 22/02/2017 8:40 AM, C W wrote:
Hi Carl,
Hi Luigi,
I'm afraid I don't understand your toy data as you've described it, but
if you really don't have run 2 for target A, and don't have run 1 for
target B, why not just create another factor that reflects this, and
plot that?
my.data$clus2 <- with(my.data, interaction(cluster, target))
Dear group:
sorry for my beginner's question, but I'm rather new to R and was searching high
and low without success:
I have a data frame (df) with variables in the rows and observations in the
columns like (the actual data frame has 15 columns and 1789 rows):
early1 early2 early3 ear
Dear group,
I am browsing the web to find suitable software for dynamic network analysis. I
came across the sna-package under R, but this seems to do static analysis only.
Is this right? Or is there a separate package for dynamic network analysis?
Thanks
You've already been pointed to options(digits=);
here's another way: since your data appear to be limited to 2 decimals,
why not select your noise from UNIF(0, 0.001)?
More importantly, are you really trying to do correlation between the
values you're showing us? What do you hope to learn from su
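A minimal sketch of that suggestion, with hypothetical data standing in for yours:
x <- round(rnorm(20), 2)                               # values limited to 2 decimals
x_noisy <- x + runif(length(x), min = 0, max = 0.001)  # tiny uniform noise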
Try
sortedx1=ttx1[order(-ttx1$obs),]
(and ask yourself where obs lives)
-Peter Ehlers
Hyo Lee wrote:
Hi guys,
I need your help.
I'm trying to sort the data by the variable "obs".
This is how I tried to sort the data below.
The problem is, I have a variable name "obs"; this is.. a counter va
You don't need to attach. But you do need to mention what kind of
object ttx1 is. I had (foolishly) assumed that it was dataframe,
but I see that it's a matrix. So try this:
ttx1[order(-ttx1[,"obs"),]
-Peter Ehlers
Hyo Lee wrote:
Ok. I just figured out what the problem was.
I had to attach()
Oops, missed a square bracket:
ttx1[order(-ttx1[,"obs"]),]
-Peter Ehlers
P Ehlers wrote:
You don't need to attach. But you do need to mention what kind of
object ttx1 is. I had (foolishly) assumed that it was dataframe,
but I see that it's a matrix. So try this:
tt
Hi Martin,
Thanks for the help. Just to make sure I understand correctly.
The below steps are for creating an example table similar to the one that I
read from file.
n <- 22638
m <- 80914
nnz <- 30 # no idea if this is realistic for you
set.seed(101)
ex <- cbind(i = sample(n,nnz, replace=TR
Pallavi
On Tue, Oct 27, 2009 at 8:34 PM, Martin Maechler wrote:
> >>>>> "PP" == Pallavi P
> >>>>> on Tue, 27 Oct 2009 18:13:22 +0530 writes:
>
>PP> Hi Martin,
>PP> Thanks for the help. Just to make sure I understand correctly.
&
Martin Maechler wrote:
> >>>>> "PP" == Pallavi P
> >>>>> on Wed, 28 Oct 2009 16:30:25 +0530 writes:
>
>PP> Hi Martin,
> PP> I followed your example on my set of data. Which has non zero
> values in
>PP> 30
As I wrote earlier:
"I had to add the rectangles= and points= arguments to
auto.key to get the same key as you had earlier."
and the relevant line in the code was:
auto.key = list(space = 'right', rectangles=TRUE, points=FALSE)
-Peter Ehlers
Peng Cai wrote:
Hello Peter and David,
Thanks f
If your group sizes are not too large, I would use jittered stripcharts.
They're more informative than boxplots and much less subject to
misinterpretation. One warning, I'm not fond of the default pch=0.
-Peter Ehlers
DispersionMap wrote:
What ways are there to plot categorical vs numerical dat
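A minimal sketch of a jittered stripchart on made-up grouped data (with pch set explicitly, per the warning above):
grp <- rep(c("A", "B", "C"), each = 20)
val <- rnorm(60, mean = rep(1:3, each = 20))
stripchart(val ~ grp, method = "jitter", vertical = TRUE, pch = 16)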
I want to generate R code to determine the real root of the polynomial
x^3-2*x^2+3*x-5, using an initial guess of 1 with Newton's method.
Help please...
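A minimal sketch of Newton's method for this polynomial, starting from x = 1, plus a built-in cross-check:
f  <- function(x) x^3 - 2*x^2 + 3*x - 5
fp <- function(x) 3*x^2 - 4*x + 3          # derivative
x <- 1
for (i in 1:100) {
  step <- f(x) / fp(x)
  x <- x - step
  if (abs(step) < 1e-12) break
}
x                            # approximate real root (about 1.84)
polyroot(c(-5, 3, -2, 1))    # all three roots, for comparison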
Dear R-users,
I have a problem in making a chi-square density function curve.
I have something like curve(dchisq(x, df))
from help, x is vector of quantiles, df is the degree of freedom.
I do not understand what vector of quantiles is, what do I need to put in?
Thanks!
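In curve(), the x is supplied automatically over the plotting range, so you only need to choose the degrees of freedom and the range; for example (df = 5 is arbitrary):
curve(dchisq(x, df = 5), from = 0, to = 20, ylab = "density")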
Hi!
I'm reading a tab-separated CSV file with:
test1 <- read.table("data.txt", header=TRUE)
It's in the following format:
Date_Time qK qL vL vP ...
0 30 22 110 88 ...
...
(BTW: It seems to me R shifts the column descriptions by one.)
Anyway, I would like to Fourier-transform one colum
>
> > Anyway, I would like to Fourier-transform one column. So I say:
> >> fft(test1$vP)
> > Error in levels(x)[x] : invalid subscript type 'complex'
> >> test1$vP[1:10]
> > [1] 110 108 116 118 114 120 117 111 95 118
> > 166 Levels: - 0 1 10 100 101 102 103 104 105 106 107 108 109 11 110 111
>
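The error occurs because vP was read as a factor (note the "-" among its levels), and fft() cannot handle factors. A minimal sketch of the usual fix, on a small hypothetical factor:
vP <- factor(c("110", "108", "-", "116", "118"))
vP_num <- as.numeric(as.character(vP))   # "-" becomes NA (with a warning)
fft(vP_num[!is.na(vP_num)])              # FFT on the clean numeric values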
In Canada we use sawmills to process logs. Or we carve
them into totem poles. Or we carve dugout canoes.
Of course, sometimes we burn them in order to keep warm.
Peter Ehlers
On 2013-01-04 10:50, peter dalgaard wrote:
On Jan 4, 2013, at 18:41 , jim holtman wrote:
what type of logs are you tr
Hello,
It should be easy, but I cannot figure out how to use the apply function. I am
trying to replace negative values in an array with these values + 24.
Would appreciate help. Thanks,
Mark
shours <- apply(fhours, function(x){if (x < 0) x <- x+24})
Error in match.fun(FUN) : argument "FUN" is missing,
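No apply() is needed here; vectorized indexing (or ifelse()) does it directly. A minimal sketch with made-up data:
fhours <- c(-3, 5, -1, 10, 23)
shours <- ifelse(fhours < 0, fhours + 24, fhours)
# or, modifying in place:
fhours[fhours < 0] <- fhours[fhours < 0] + 24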
On 2013-04-09 7:35, kais...@med.uni-duesseldorf.de wrote:
There are two misspellings in the German error message for the Friedman test:
Fehler in friedman.test.default(cont$score, group = cont$goup, blocks =
cont$cont) :
y, Gruppen und blöcke müssen die sekbe Länge haben
The correct spellin
Dear all,
I'm looking for a test to identify significant differences
(p-value) between 2 raster maps.
Can you suggest a way to solve this problem?
Thank you in advance.
Paul
hich.min(fit2$Cp)]),], res)
names(lasso)[4:6] <- c("X1.pval", "X2.pval", "X3.pval")
return(lasso)
# for output
#
}
# function end
# now use F.1 in a loop...
it = 3
out <- vector("
Hello,
my series of dates look like
[1] "2012-05-30 18:30:00 UTC" "2012-05-30 19:30:00 UTC"
[3] "2012-05-30 20:30:00 UTC" "2012-05-30 21:30:00 UTC"
[5] "2012-05-30 22:30:00 UTC" "2012-05-30 23:30:00 UTC"
[7] "2012-05-31 00:30:00 UTC" "2012-05-31 01:30:00 UTC"
[9] "2012-05-31 02:30:00 UTC
That works perfectly, thanks a lot,
Mark
On Thu, Dec 13, 2012 at 11:34 AM, arun wrote:
> Hi,
> Try this:
> seq1 <- seq(from=as.POSIXct("2012-05-30 18:30:00", tz="UTC"),
>             to=as.POSIXct("2012-05-31 02:30:00", tz="UTC"), by="1 hour")
> seq2 <- seq(from=as.POSIXct("2012-05-31 00:30:00", tz="UTC"), to=as
I have very basic knowledge of R. Kindly help me to convert a dot file to a gml
file. Can you suggest some social-network datasets that include edge
weights and edge timestamps?
Thanks and regards,
Vasanthi P.
pls1_nipals_mod <- function(X, y, a, it = 50, tol = 1e-08, scale = FALSE)
{
Xh <- scale(X, center = TRUE, scale = scale)
yh <- scale(y, center = TRUE, scale = scale)
T <- NULL
P <- NULL
C <- NULL
W <- NULL
for (h in
Hello All,
I have multiple "list of lists" in the form of
Mylist1[[N]][[K]]$Name_i,
with N=1..6, K=1..3, and i=1..7. Each Name_i is a matrix. I have 30 of these
objects Mylist1, Mylist2, ...
I would like to merge these lists by each Name_i using rbind, but I couldn't
figure out how to do i
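A minimal sketch of one way to do the merge, assuming the objects are named Mylist1 ... Mylist30, the matrices are called Name_1 ... Name_7, and all matrices for a given name have matching columns (all of these names are assumptions based on the description):
all_lists <- mget(paste0("Mylist", 1:30))            # assumed object names
name_i    <- paste0("Name_", 1:7)                    # assumed element names
merged <- sapply(name_i, function(nm) {
  do.call(rbind, lapply(all_lists, function(ml)      # over Mylist1..Mylist30
    do.call(rbind, lapply(ml, function(n_lvl)        # over N = 1..6
      do.call(rbind, lapply(n_lvl, `[[`, nm))))))    # over K = 1..3
}, simplify = FALSE)
# merged$Name_1 now holds every Name_1 matrix rbind-ed together, and so on.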
Jun Shen wrote:
Hi,
I have this symmetric matrix, at least I think so.
col1 col2 col3
[1,] 0.20 0.05 0.06
[2,] 0.05 0.10 0.03
[3,] 0.06 0.03 0.08
or
structure(c(0.2, 0.05, 0.06, 0.05, 0.1, 0.03, 0.06, 0.03, 0.08
), .Dim = c(3L, 3L), .Dimnames = list(NULL, c("var1", "var2",
"var3")))
B
Hi,
I need some help integrating a function that returns a vector.
I have a vector-valued function in which each element is different, and,
naturally, integrate() does not work.
I checked the article of U. Ligges and J. Fox (2008) about code
optimization "How Can I Avoid This Loop or Make It Faster?
group2) <- c("group", "dim1", "dim2", "dim3")
combined <- rbind(group1, group2)
combined[,2:4] <- combined[,2:4] > .1
ctables <- xtabs(~., data = combined)
loglm(~group+dim1+dim2+dim3, data=ctables)
Call:
loglm(formula = ~group + dim1 + dim2 + dim
Try
x1<-matrix(1,3,1)%x%x
y1<-y%x%matrix(1,3,1)
Z<-cbind(x1,y1)
And later you need to move towards list and matrix
On Mon, Nov 8, 2010 at 11:15 AM, abotaha wrote:
>
> Hello,
>
> I have two data.
>
> x<-c(1, 2, 3)
> y<-c(4,5,6)
>
> How do I create a 3-by-3 matrix from these two, such that
>
>
Jannis wrote:
Hi Sachin,
please read the posting guide and include a reproducible example of what
you want to do.
For your first question you should have a look at ?axis. Supplying the
'at' argument with the positions of the desired marks and the 'labels'
with text strings like '10.000$' sh
Chris Carleton wrote:
Hi List,
I'm trying to get a density estimate for a point of interest from an npudens
object created for a sample of points. I'm working with 4 variables in total
(3 continuous and 1 unordered discrete - the discrete variable is the
character column in training.csv). When I
jim holtman wrote:
increase the margins on the plot:
par(mar=c(4,7,2,1))
plot(1:5,y,ylab='',yaxt='n' );
axis(2, at=y, labels=formatC(y,big.mark=",",format="fg"),las=2,cex=0.1);
That's what I would do, but if you want to see how cex works,
use cex.axis=0.5. Check out ?par.
-Peter Ehlers
Morris Anglin wrote:
I have R version 2.9.1 on my computer and the analysis is not working
because I need to update to R version 2.12.0, the latest release.
The person in charge of IT tried to download R version 2.12.0, but the .exe file referenced in the install isn't
there. What might we be doing
Vadim Patsalo wrote:
Patrick and Bert,
Thank you both for your replies to my question. I see how my naïve expectations
fail due to floating-point arithmetic. However, I still believe there is an
underlying problem.
It seems to me that when asked,
c(7.7, 7.8, 7.9) %in% seq(4, 8, by=0.1)
[1] TRUE
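The usual workaround is to compare at a fixed number of decimals instead of relying on exact equality of doubles:
round(c(7.7, 7.8, 7.9), 1) %in% round(seq(4, 8, by = 0.1), 1)
# [1] TRUE TRUE TRUE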
Hi,
I have used Vegan to construct an NMDS ordination plot. I plotted sites of
three forest types with the site number in it. My reviewer has asked me to
use different symbols for each of the forest types.
Can anyone show me how I can do this in R in simple steps? I have used the
options like ordi
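A minimal sketch of one way to do this, assuming an ordination object 'ord' from metaMDS() and a grouping factor 'forest_type' (both names are placeholders for your own objects):
library(vegan)
plot(ord, type = "n")                                   # empty ordination frame
points(ord, display = "sites",
       pch = c(16, 17, 15)[as.numeric(forest_type)])    # one symbol per forest type
legend("topright", legend = levels(forest_type), pch = c(16, 17, 15))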
Dear all, is there a way to loop the rp.doublebutton function in the rpanel
package? The difficulty I'm having lies with the variable name argument.
library(rpanel)
if (interactive()) {
draw <- function(panel) {
plot(unlist(panel$V),ylim=0:1)
panel
}
panel <- rp.control(V=as.l
Den wrote:
Dear R community
Recently, dear Henrique Dallazuanna literally saved me solving one
problem on data transformation which follows:
(n_, _n, j_, k_ signify numbers)
SOURCE DATA:
id cycle1 cycle2 cycle3 … cycle_n
1 c c c c
1 m
Den wrote:
Dear Dennis
Thank you very much for your comprehensive reply and for the time you've
spent dealing with my e-mail.
Your kind explanation made things clearer for me.
After your explanation it looks simple.
lapply with chosen options takes small part of cycle with same id
(eg. df[df$id==
MM wrote:
Hello,
Is the "std.dev" component of ls.diag( lsfit(x,y) ) the sample standard
deviation of the residuals of the fit?
I have
ls.diag(lsfit(xx,yy))$std.dev
different from
sd(lsfit(xx,yy)$residuals)
where xx and yy are vectors of 5 elements.
Compare
ls.diag(lsfit(xx,yy))$std
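They differ because ls.diag()$std.dev divides the residual sum of squares by the residual degrees of freedom (n minus the number of coefficients), whereas sd() divides by n - 1. A minimal sketch:
set.seed(1)
xx <- 1:5
yy <- 2 * xx + rnorm(5)
fit <- lsfit(xx, yy)
ls.diag(fit)$std.dev                              # sqrt(RSS / (n - 2))
sd(fit$residuals)                                 # sqrt(RSS / (n - 1))
sqrt(sum(fit$residuals^2) / (length(xx) - 2))     # matches ls.diag(fit)$std.dev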
Johnson, Cedrick W. wrote:
That worked. Stupid me forgot that I had the stock ticker 'F' assigned
in my workspace.
Well.. guess I'll hit myself with a 2x4 now.. Thanks for your help guys..
No, don't do that. Instead, calculate how much time you saved by typing
'F' instead
of 'FALSE' and how
racters 1 to 3, 12 to 15 and 17 to the end.
That was a great tip, though, because it led me to strsplit, which can do
what I want, however somewhat awkwardly:
y <- "a b c d e f g h i j k l m n o p q r s t u v w x y z"
paste(unlist(strsplit(y, delim))[c(1:3,12:15,17:26)], collapse=delim)
equal. - Sarah
With na.omit around the column, but it is showing other values in the F.WW
column other than 200525, along with NA. I was hoping that this would omit all
the NA's, and show all the rows that P$F.WW=200525. I believe it did with the
previous version of R.
P[na.omit(P$F.WW)=
On 2011-05-05 0:47, Russ Abbott wrote:
Hi,
I'm having trouble with quantmod's addTA plotting functions. They seem to
work fine when run from the command line. But when run inside a function,
only the last one run is visible. Here's an example.
test.addTA<- function(from = "2010-06-01") {
already knows the information one is looking for. If
you don't know it, the help files are not very helpful. This is a good
example. In fact, it's two good examples. I didn't know that I had to
look at another page, and I (still) don't know what it means to wrap
plot calls in
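"Wrapping in plot()" means something like the sketch below: quantmod's add* indicators only draw when their result is printed, which does not happen automatically inside a function, so you call plot() (or print()) on them explicitly. The symbol and indicator choices here are just placeholders:
library(quantmod)
test.addTA <- function(from = "2010-06-01") {
  spy <- getSymbols("SPY", from = from, auto.assign = FALSE)
  chartSeries(spy, TA = NULL)
  plot(addTA(EMA(Cl(spy), n = 20), on = 1, col = "blue"))   # explicit plot()
  plot(addTA(EMA(Cl(spy), n = 50), on = 1, col = "red"))
}
test.addTA()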
Gene,
David has given you the preferred code. I just want to
point out that the $-accessor is often not the best
thing to use. Both dat[["y"]] and dat[, "y"] will work
just fine.
Peter Ehlers
On 2011-05-05 12:06, David Winsemius wrote:
On May 5, 2011, at 1:08 PM, Gene Leynes wrote:
This is
On 2011-05-05 14:20, Schatzi wrote:
I do not want smoothing as the data should have jumps (it is weight left in
feeding bunker). I was thinking of maybe using a histogram-like function and
then averaging that. Not sure if this is possible.
(It would be useful to include your original request -
On 2011-05-27 0:48, Coen van Hasselt wrote:
Hello,
I would like to change the background color in only -one- of the strips in a
multipanel lattice xyplot, from the default yellow-brown color.
Until now, I only managed to change the background strip color in all of the
strips using the par.settin
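A minimal sketch of the usual approach: supply a custom strip function that forwards to strip.default() with a per-panel bg (made-up data; panel 2 is singled out here):
library(lattice)
d <- data.frame(x = 1:30, y = rnorm(30), g = rep(c("a", "b", "c"), each = 10))
xyplot(y ~ x | g, data = d,
       strip = function(which.panel, ...) {
         strip.default(which.panel = which.panel,
                       bg = ifelse(which.panel == 2, "lightblue", "grey90"), ...)
       })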
. Party
1 John x F
2 Mary s S
3 Katie x O
4 Sarah p L
5 Martin x O
6 Angelika x F
7
Thank you all,
tried all options and it gives me exactly what I needed! Many many thanks
again)
to Bert,
oh, I see, yes, next time I will do that.
Kristina
Scott Chamberlain wrote:
This thread seems freakishly similar to what you are asking. Scott
Even to the point of including the same typo as well as proof
that neither poster bothered to read the posting guide.
Great spot, Scott!
Peter Ehlers
http://tolstoy.newcastle.edu.au/R/help/06/07/3
Dennis Murphy wrote:
Hi:
On Wed, Mar 2, 2011 at 1:52 PM, John Smith wrote:
Hello All,
I try to use the attached code to produce a cross over plot. There are 13
subjects, 7 of them in for/sal group, and 6 of them in sal/for group. But
in
xyplot, all the subjects are listed in both subgraphs.
LouiseS wrote:
Hi
I'm new to R and most things I want to do I can do, but I'm stuck on how to
weight a sample. I have had a look through the posts but I can't find
anything that addresses my specific problem. I want to scale up a
sample which has been taken based on a single variable (perf
Marius Hofert wrote:
Dear expeRts,
How can I increase the space between the ticks and the labels in the wireframe
plot
below? I tried some variations with par.settings=list(..) but it just didn't
work.
Marius,
I tried setting the 'distance' parameter, but that was less
than satisfactory. On
Timothy W. Hilton wrote:
To clarify the trouble I'm having with ylab.right, I am not getting an
error message; the right-side label just does not appear on the plot.
Maybe this is mac-specific. On Windows, the label shows up
just fine. You might be able to make it appear by adjusting
the 'vjust
Alaios wrote:
That's the problem
Even a 10*10 matrix does not fit on the screen (10 columns do not fit in one
screen row) and thus I do not get a well-aligned matrix printed.
I don't see why you would want to do this, but you
could always invoke two instances of R and create
one matrix in o
Hi,
I want to apologize in advance if this has already been asked. I
wasn't able to find any information, either on Google or from a local
list search.
I'm running an R shell from a linux command line, in an xterm window.
Whenever I print a data frame, only the first couple of columns are
printed s
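Printed output wraps at getOption("width") characters, so a common fix is simply to widen that to match the terminal:
options(width = 200)   # or whatever your xterm actually holds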
Dear Community,
my program below runs quite slowly and I'm not sure whether the http-requests are
to blame for this. Also, while running, it gradually increases the memory usage
enormously. After the program finishes, the memory is not freed. Can someone
point out a problem in the code? Sorry my bas
Hello,
I have a problem writing a variable to an existing file.
Below is a part of my script and how it fails.
I can't find "create.var.ncdf" in help
Thanks for any help.
Mark
nc <- open.ncdf(ncname, readunlim=FALSE, write=TRUE )
missing <- 1.e+30
xdim <- nc$dim[["west_east"]]
ydim <- nc$dim[["s
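For what it's worth, with the newer ncdf4 package (which superseded ncdf) the sequence is roughly the sketch below; the second dimension name, the variable name, and the data object are assumptions based on the snippet above:
library(ncdf4)
nc   <- nc_open(ncname, write = TRUE)          # ncname as defined above
xdim <- nc$dim[["west_east"]]
ydim <- nc$dim[["south_north"]]                # assumed dimension name
newv <- ncvar_def("myvar", units = "", dim = list(xdim, ydim), missval = 1.e30)
nc   <- ncvar_add(nc, newv)                    # add the new variable to the file
ncvar_put(nc, newv, mydata)                    # mydata: your values (assumed)
nc_close(nc)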
ably get
> around 220 (depending on the overhead of the program, full screen,
> etc.). I imagine it would also be possible to run into limitations
> from the terminal R is running in, though I do not know that for a
> fact.
>
> Cheers,
>
> Josh
>
> On Fri, Sep 16, 2011
Hello,
I have a question on how to get bioconductor running properly on
Ubuntu 11.10, as it seems like I have tried everything and I keep on
getting this message. But before I go further, can you please email and
say that you will help me with this because nobody seems to want to
help or know how to
Hello,
So I need some help. I have been trying to get bioconductor to work on
my computer that has Ubuntu 11.10 running on it and I found out that I
needed to install something called TkTable before I install R. So I
did that but now I have no idea how to properly uninstall/reinstall R,
and I canno
Bert: inline
On 2012-05-13 7:43, Bert Gunter wrote:
Peter/David:
1. For some reason, I didn't see Peter's reply on r-help.
2. To Peter: Aha!!
Let me play this back to you. In
text(1,1,labels=expression(atop(atop(sigma,"some text"),"another
level")),cex = 2)
The (outer) whole atop() specifi
Hello,
I used "correlogram" from "spatial" package to determine correlation scale
for my data but just looking with bare eye it seems that the correlation
scale varies over the domain.
Can someone suggest what would the best way to handle that problem?
Thanks,
Mark
If you read ?mad you will find this phrase:
"median of the absolute deviations from the median"
Note the first word. I think you're too focused on
the last word.
Peter Ehlers
Nair, Murlidharan T wrote:
>
> -Original Message-
> From: Deepayan Sarkar [mailto:[EMAIL PROTECTED]
> Sent: Mo
If you need a subscript as well, I like
plot(0, main=quote({NO^'\x96'}[3]))
Peter Ehlers
Peter Dalgaard wrote:
> Gavin Simpson wrote:
>> Dear List,
>>
>> I'm trying to typeset some chemical ions in axis labels. These have both
>> super and subscript components, and for some, I need a superscri
Yes, sorry, I should have said that I was on Windows.
In a UTF-8 locale, you could try \u2013 in place of \x96.
The character is an endash.
Peter Ehlers
Scionforbai wrote:
> Hallo,
>
>> If you need a subscript as well, I like
>>
>> plot(0, main=quote({NO^'\x96'}[3]))
>
>
> I tried this but I
I find sprintf() useful for this.
Compare
lab <- rnorm(8)
plot(1:10)
text(2:9, 2:9, lab)
with
lab2 <- sprintf("%4.2f", lab)
plot(1:10)
text(2:9, 2:9, lab2)
- Peter Ehlers
Matthew Dubins wrote:
> Hi there,
>
> I want to figure out how to plot means, with 2 decimal places, of any Y
> variable
How about
a <- .33
b <- .55
legend("bottom", fill=c("red","blue"),
legend=c(bquote(p == .(a)), bquote(p == .(b))), bty="n")
or look at ?substitute
- Peter Ehlers
stat stat wrote:
> I have following syntax for putting a legend :
>
> l
You might also look at ?Arrows in package IDPmisc
and ?p.arrows in package sfsmisc.
- Peter Ehlers
> --- Lorenzo Isella <[EMAIL PROTECTED]> wrote:
>
>> Dear All,
>> I hope this is not a FAQ, but my online research was
>> not fruitful.
>> Consider a standard 2D plot generated with the
>> "plot
Omar Baqueiro wrote:
> Hello,
>
> I have tested a distribution for normality using the Shapiro-Wilk
> statistic. The result of this is the following:
>
>
> Shapiro-Wilk normality test
>
> data: mydata
> W = 0.9989, p-value = 0.8791
>
>
>
John,
Jim has shown how to accomplish what you want.
Here's a slight variation (for a single model):
y <- rnorm(20)
x <- runif(20)
z <- runif(20)
fm <- lm(y ~ x + z)
m <- cbind(NA, coef(summary(fm)))
colnames(m)[1] <- deparse(formula(fm))
print(m, na.print = "")
- Peter Ehlers
jim holtman wr
Hello,
I'd like to check if my data can be well approximated with a function
(1+x/L) exp(-x/L)
and calculate the best value for L. Is there some package in R that would
simplify that task?
Thanks,
Mark
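A minimal sketch with nls(), using made-up data so it runs on its own (substitute your own x and y):
set.seed(1)
x <- seq(0, 10, by = 0.5)
y <- (1 + x/2) * exp(-x/2) + rnorm(length(x), sd = 0.02)   # true L = 2, plus noise
fit <- nls(y ~ (1 + x/L) * exp(-x/L), start = list(L = 1))
coef(fit)   # estimated L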
No, that's not my homework. Does that seem so easy?
Mark
Rolf Turner <[EMAIL PROTECTED]> wrote:
On 26/10/2007, at 10:14 AM, m p wrote:
> Hello,
> I'd like to check if my data can be well approximated with a function
> (1+x/L) exp(-x/L)
> and calculate the bes
hat if there is more than one column starting with the
same letter(s), more than one letter has to be given to call the column.
However, I would appreciate it if I could choose an option in my workspace,
whether this type of shortcut is allowed or not.
Is there such an option?
Thanks for any pote
Hi everybody,
Question: why are my dataframe and numeric variables a character?
I read an Excel file via readxl but my dataframe is a character, and
numeric variables, e.g. "yi", are also a character.
My Excel file is in English numeric
Sometimes the dataframe was indeed a dataframe, but I do not kn
Dear useRs,
I'm pleased to announce the v1.0.0 major release of the "elo" package on CRAN
(https://cran.r-project.org/package=elo).
This package implements a flexible framework for calculating Elo ratings of any
two-team-per-matchup system (chess, sports leagues, 'Go', etc.). It is capable
of
Hi everybody,
How can I get text from an R script (e.g. syntax, reminders) into the result
text?
sink() does not do that - I only read the results and therefore I have to
'guess' which syntax was used where - reminders I wrote are lost.
Bw and thank you in advance,
Roberto
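One way to get the commands themselves (not just their output) into the captured text is to source the script with echo = TRUE while the sink is active; file names below are placeholders:
sink("results.txt")
source("myscript.R", echo = TRUE, keep.source = TRUE, max.deparse.length = Inf)
sink()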
re that a reasonable answer can be extracted from a given body
> of data. ~ John Tukey
>
> 2018-04-24 11:23 GMT+02:00 P. Roberto Bakker :
> > Hi everybody,
> >
> > How can I get text from RScript (e.g. syntax, reminder) into