06601):6.360692368,5:22.69076035):5.725388419,(6:1.611149584,7:1.611149848):1.556474893,8:3.167624477):4.130280196,9:7.297904013):1.497063399,10:8.794967413):7.19682079,(11:2.539095678,12:2.539096008):13.45269085):12.42436025);
Dr. Ted Stankowich
Associate Professor
Department of Biological Sciences
Thanks - a previous response resolved the issue and I'm off and running with
the analyses.
-Original Message-
From: David Winsemius [mailto:dwinsem...@comcast.net]
Sent: Thursday, June 4, 2020 5:02 PM
To: Ted Stankowich
Cc: Rui Barradas ; William Dunlap ;
r-help@r-project.org
Su
This worked! Thank you!
-Original Message-
From: Rui Barradas [mailto:ruipbarra...@sapo.pt]
Sent: Thursday, June 4, 2020 2:49 PM
To: Ted Stankowich ; William Dunlap
Cc: r-help@r-project.org
Subject: Re: [R] na.omit not omitting rows
Sent: Thursday, June 4, 2020 12:39 PM
To: Ted Stankowich
Cc: r-help@r-project.org
Subject: Re: [R] na.omit not omitting rows
Does droplevels() help?
> d <- data.frame(size = factor(c("
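As a minimal sketch (with made-up data, since Bill's example is cut off
above): after na.omit() a factor can still carry levels that no longer
occur in the data, and droplevels() removes them.
d <- data.frame(size = factor(c("small", "medium", "large")),
                mass = c(1.2, NA, 3.4))
d2 <- na.omit(d)   # the "medium" row is gone ...
levels(d2$size)    # ... but "medium" is still listed as a level
levels(droplevels(d2)$size)  # now only "large" and "small" remain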
, "names")= chr "Alouatta_macconnelli_ATELIDAE_PRIMATES"
"Alouatta_nigerrima_ATELIDAE_PRIMATES" "Ateles_fusciceps_ATELIDAE_PRIMATES"
"Callicebus_baptista_PITHECIIDAE_PRIMATES" ...
Dr. Ted Stankowich
Associate Professor
Department of Biological Sciences
California State University Long Beach
hy they
may not be independent of each other, the test is not valid.
You say "I'm trying to use ks.test in order to compare two curves".
When I execute
plot(a)
plot(b)
on your data, I see (approximately) in each case a rise from a
medium value (~2 or
hen Prob[X > x1] = 0.
Hence if x0 is the minimum value such that Prob[X <= x0] = 1,
then X "can reach" x0. But for any x1 > x0, Prob[x0 < X <= x1] = 0.
Therefore, since X cannot be greater than x0, X *cannot reach* x1!
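As a quick numerical illustration (using a Uniform(0,1) variable, which is
not from the original thread), with x0 = 1:
punif(1)               # Prob[X <= 1] = 1, so X "can reach" 1
punif(1.5) - punif(1)  # Prob[1 < X <= 1.5] = 0, so X cannot reach 1.5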
Best wishes,
Ted.
On Tue, 2018-10-23 at 12:06 +0100, Hame
Sorry -- stupid typos in my definition below!
See at ===*** below.
On Tue, 2018-10-23 at 11:41 +0100, Ted Harding wrote:
Before the ticket finally enters the waste bin, I think it is
necessary to explicitly explain what is meant by the "domain"
of a random variable. This is not (though
,1], the domain of X is Q.
Then for x <= 0, Prob[X <= x] = 0; for 0 <= x <= 1, Prob[X <= x] = x;
for x >= 1, Prob[X <= x] = 1. These define the CDF. The set of possible
values of X is 1-dimensional, and is not the same as the domain of X,
which is 3-dimensional.
Hoping this helps,
es into two halves, the median
is not available, hence NA.
Best wishes to all,
Ted.
On Wed, 2018-08-22 at 11:24 -0400, Marc Schwartz via R-help wrote:
> Hi,
>
> It might even be worthwhile to review this recent thread on R-Devel:
>
> https://stat.ethz.ch/pipermail/r-devel/2018-July
Pietro,
Please post this to r-help@r-project.org
not to r-help-ow...@r-project.org
which is a mailing list concerned with list management, and
does not deal with questions regarding the use of R.
Best wishes,
Ted.
On Sat, 2018-07-14 at 13:04 +, Pietro Fabbro via R-help wrote:
> I will try
biniks Pedersen's
inconsistency:
sum(c(NaN,NA))
[1] NaN
sum(NaN,NA)
[1] NA
is not consistent with the above reasoning.
However, in my R version 2.14.0 (2011-10-31):
sum(NaN,NA)
[1] NA
sum(NA,NaN)
[1] NA
which **is** consistent! Hmmm...
Best wishes to all,
Ted.
On Wed, 2018-07-
On Tue, Jul 3, 2018 at 9:25 AM, J C Nash wrote:
>
> > . . . Now, to add to the controversy, how do you set a computer on fire?
> >
> > JN
Perhaps by exploring the context of this thread,
where new values strike a match with old values???
Ted
edom' is 19.
This is not the same issue as (one of my prime hates) saying
"the data is srored in the dataframe ... ". "Data" is a
plural noun (ainguler "datum"), and I would insist on
"the data are stored ... ". The French use "une donnee" and
"Rounding of Numbers", covering the
functions ceiling(), floor(), trunc(), round(), signif().
Well worth reading!
Best wishes,
Ted.
On Thu, 2018-05-31 at 08:58 +0200, Martin Maechler wrote:
> >>>>> Ted Harding
> >>>>> on Thu, 31 May 2018 07:10:32 +0100 writes:
>
.382 0.540
present   0.428  1.236  0.215  1.804  2.194
reward    0.402  1.101  0.288  1.208  0.890
feedback  0.283  0.662     NA     NA     NA
goal      0.237  0.474     NA     NA     NA
Best wishes to all,
Ted.
On Thu, 2018-05-31 at 15:30 +1000, Jim Lemon wrote:
> Hi Joshua,
> Because there are no val
Apologies for disturbance! Just checking that I can
get through to r-help.
Ted.
"@".
Once they have the address then anything can happen!
Best wishes,
Ted (eagerly awaiting attempted seduction ... ).
On Wed, 2018-04-18 at 10:36 +, Fowler, Mark wrote:
> Seems it must be the R-list. A horde of ‘solicitation’ emails began arriving
> about 27 minutes after
= i+1 ; print(i)
}
# [1] 3
# [1] 4
# [1] 5
# [1] 6
# Error in while (x[i] <= 5) { : missing value where TRUE/FALSE needed
So everything is fine so long as i <= 5 (i.e. x[i] <= 5),
but then the loop sets i = 6, and then:
i
# [1] 6
x[i]
# [1] NA
x[i] <= 5
# [1] NA
Helpful?
Best
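A self-contained version of that loop (with made-up x, since the original
definition is cut off above), guarding the condition so it stops cleanly
before x[i] becomes NA:
x <- c(1, 2, 3, 4, 5)
i <- 1
while (i <= length(x) && x[i] <= 5) {
  i <- i + 1
}
i
# [1] 6   (no error: i <= length(x) fails first, so x[6] is never tested)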
when i > 1 then stop < start, so you get nothing. Compare with:
x <- "testing"
k <- nchar(x)
for (i in 1:k) {
y <- substr(x, i, i) ### was: substr(x, i, 1)
print(y)
}
[1] "t"
[1] "e"
[1] "s"
[1] "t"
[1] "i"
Suzen, thank you very much for your so useful information (I will try to
understand it)!
And my sincere gratitude to the moderator!
>"Suzen, Mehmet" < msu...@gmail.com >:
>I also suggest you Hadley's optimized package for interoperating xls
>files with R:
>https://github.com/tidyverse/readxl
>htt
"Data set flchain available in the survival package". How can I get it (from
R) as Excel file? Thanks!
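One way to do this (a sketch, not taken from the thread itself): load the
data set and write it to a file that Excel can open. write.csv() is base R;
the writexl package (if installed) writes a native .xlsx file.
library(survival)
data(flchain)
write.csv(flchain, "flchain.csv", row.names = FALSE)
## or, for a real Excel file (assumes the writexl package is available):
## writexl::write_xlsx(flchain, "flchain.xlsx")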
Many thanks, Jim!!!
>Jim Lemon < drjimle...@gmail.com >:
>Have a look at axis.mult in the plotrix package.
>Jim
>>iPad via R-help < r-help@r-project.org > wrote:
>> How to multiplying y-axis ticks value by 100 (without put the % symbol next
>> to the number) here:
>> plot (CI.overall,
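In base R (a sketch with made-up data, since CI.overall is not shown) the
usual approach is to suppress the default y axis and redraw it with the
tick labels multiplied by 100:
y <- c(0.12, 0.34, 0.29, 0.41)
plot(y, yaxt = "n")
ticks <- pretty(y)
axis(2, at = ticks, labels = ticks * 100)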
ndard 32-bit double precision.
>
>
> Well, for large values of 32... such as 64.
Hmmm ... Peter, as one of your compatriots (guess who) once solemnly
said to me:
2 plus 2 is never equal to 5 -- not even for large values of 2.
Best wishes,
Ted.
that FALSE & NA = FALSE.
On the other hand, if with the "missing" interpretation of "NA"
we don't even know that it is a logical, then it might be fair
enough to say FALSE & NA = NA.
Ted.
[Additional thought]:
Testing to see what would happen if the NA were not logical
gits = 53 binary places.
So this normally "almost" trivial feature can, for such a simple
calculation, lead to chaos or catastrophe (in the literal technical
sense).
For more detail, including an extension of the above, look at the
original posting in the R-help archives for Dec 22, 2
and may then either have
a leading 0 or not. In that case, I think Jim's solution is safer!
Best wishes,
Ted.
On 07-Feb-2017 16:02:18 Bert Gunter wrote:
> No need for sprintf(). Simply:
>
>> paste0("DQ0",seq.int(60054,60060))
>
> [1] "DQ060054" "D
hist(y, freq=TRUE, col='red', breaks=0.5+(0:6))
or
hist(y,freq=TRUE, col='red', breaks=0.25+(0:12)/2)
Hoping this helps!
Best wishes,
Ted.
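A self-contained version (with made-up die-roll data y, not the poster's
data), showing how breaks centred between the integers give one bar per
value:
set.seed(1)
y <- sample(1:6, 100, replace = TRUE)
hist(y, freq = TRUE, col = "red", breaks = 0.5 + (0:6))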
On 22-Dec-2016 16:36:34 William Dunlap via R-help wrote:
> Looking at the return value of hist will show you what is happening:
>
>
that X[r] <= y, which
would then be O(log2(n)).
Perhaps not altogether straightforward to program, but straightforward
in concept!
Apologies for misunderstanding.
Ted.
On 05-Jun-2016 18:13:15 Bert Gunter wrote:
> Nope, Ted. I asked for a O(log(n)) solution, not an O(n) one.
>
> I will
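For the record, if X is already sorted in increasing order, base R's
findInterval() does this search for the largest r with X[r] <= y, and it is
effectively the binary search Bert asked for:
X <- c(1, 3, 3, 7, 10)
findInterval(5, X)    # [1] 3   (X[3] = 3 is the last value <= 5)
findInterval(0.5, X)  # [1] 0   (no value of X is <= 0.5)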
re it is at Y[2]
Easy to make such a function!
Best wishes to all,
Ted.
On 05-Jun-2016 17:44:29 Neal H. Walfield wrote:
> On Sun, 05 Jun 2016 19:34:38 +0200,
> Bert Gunter wrote:
>> This help thread suggested a question to me:
>>
>> Is there a function in some package that
, and the package installed
> seamlessly. It also loaded seamlessly.
>
> So I don't know why the computer gods are picking on you.
>
[***]
> Note that I am not working on a Mac, but rather running Linux (as do all
> civilized human beings! :-) )
Might this be y
Greetings, José!
Could you please give a summary of the relevant parts of TPP
that might affect the use of R? I have looked up TPP on Wikipedia
without beginning to understand what it might imply for the use of R.
Best wishes,
Ted.
On 04-Feb-2016 14:43:29 José Bustos wrote:
> Hi everyone,
>
has been! So no change that *I* can perceive at the
R-help end.
Hoping this is useful,
Ted.
On 04-Feb-2016 16:33:29 S Ellison wrote:
> Apologies if I've missed a post, but have the default treatment of posts and
> reply-to changed on R-Help of late?
>
> I ask because as of today, my
My feelings exactly! (And since quite some time ago).
Ted.
On 25-Jan-2016 12:23:16 Fowler, Mark wrote:
> I'm glad to see the issue of negative feedback addressed. I can especially
> relate to the 'cringe' feeling when reading some authoritarian backhand to a
> new use
" would be ignored
(at least by R).
And then one has a variable which is a factor with 3 levels, all
of which can (as above) be meaningful, and "NA" would not be
ignored.
Hoping this helps to clarify! (And, Val, does the above somehow
correspond to your objectives?)
Best wishes
p
Towards the bottom of this page is a section "Subscribing to R-help".
Follow the instructions in this section, and it should work!
Best wishes,
Ted.
-----
E-Mail: (Ted Harding)
Date: 14-Oct-2015 Time: 19:34:55
to generate 1000 numbers from N(u, a^2), however I don't
> want to include 0 and negative values. How can I use beta distribution
> approximate to N(u, a^2) in R.
>
> Thx for help
-
E-Mail: (Ted Harding)
Date: 15-Sep-2015 Time: 16:12
ewhere
in the spreadsheet? (Excel is notorious for planting things invisibly
in its spreadsheets which lead to messed-up results for no apparent
reason ... ).
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding)
Date: 09-Feb-2015 Time: 22:15:44
This me
people object to code "clutter" from parentheses that could
be more simply replaced (e.g. "var< -4" instead of "var<(-4)"),
but parentheses ensure that it's right and also make it clear
when one reads it.
Best wishes to all,
Ted.
---
Sorry, a typo in my reply below. See at "###".
On 12-Jan-2015 11:12:43 Ted Harding wrote:
> On 12-Jan-2015 10:32:41 Erik B Svensson wrote:
>> Hello
>> I've got a problem I don't know how to solve. I have got a dataset that
>> contains age intervals
".
Implementing the above as a procedure:
agegrp[max(which(cumsum(y1994)/sum(y1994)<0.5)+1)]
# [1] "55-64"
Note that the "obvious solution":
agegrp[max(which(cumsum(y1994)/sum(y1994) <= 0.5))]
# [1] "45-54"
give
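A self-contained sketch of the same procedure, with made-up counts (the
original agegrp and y1994 data are not shown above):
agegrp <- c("15-24", "25-34", "35-44", "45-54", "55-64", "65+")
y1994  <- c(10, 20, 25, 30, 40, 15)
## first group in which the cumulative share reaches 50%:
agegrp[max(which(cumsum(y1994)/sum(y1994) < 0.5)) + 1]
# [1] "45-54"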
I should have added an extra line to the code below, to complete
the picture. Here it is (see below line "##").
Ted.
On 11-Jan-2015 08:48:06 Ted Harding wrote:
> Troels, this is due to the usual tiny difference between numbers
> as computed by R and the numbers that yo
69447e-18 1.040834e-17
# [5] -1.734723e-17 3.816392e-17 9.367507e-17 2.046974e-16
# [9] 4.267420e-16 -4.614364e-16 -1.349615e-15 -3.125972e-15
Hoping this helps!
Ted.
On 11-Jan-2015 08:29:26 Troels Ring wrote:
> R version 3.1.1 (2014-07-10) -- "Sock it to Me"
> Copyright (C)
00,1003)
x1 - n1
## [1] 0 0 0 0 0 0 0
## But, of course:
1000*x0 - n1
## [1] 0.00e+00 0.00e+00 0.00e+00 0.00e+00
## [5] 0.00e+00 0.00e+00 -1.136868e-13
Or am I missing something else in what Mike Miller is seeking to do?
Ted.
On 01-Jan-2015 19:58:02 Mik
t you in the right direction.
With best wishes,
Ted.
On 19-Dec-2014 11:17:27 aoife doherty wrote:
> Many thanks, I appreciate the response.
>
> When I convert the missing values to NA and run the cox model as described
> in previous post, the cox model seems to remove all of the rows
sing values is another question (or many questions ... ).
So your data should look like:
V1   V2  V3       Survival  Event
ann  13  WTHomo          4      1
ben  20  NA              5      1
tom  40  Variant         6
automatically updates what it is displaying).
And of course many linux users install 'acroread' (Acrobat
Reader), though some object!
Hoping this helps,
Ted.
On 09-Dec-2014 20:47:06 Richard M. Heiberger wrote:
> the last one is wrong. That is the one for which I don't know t
value 4.102431).
Ted.
On 30-Sep-2014 18:20:39 Duncan Murdoch wrote:
> On 30/09/2014 2:11 PM, Andre wrote:
>> Hi Duncan,
>>
>> No, that's correct. Actually, I have data set below;
>
> Then it seems Excel is worse than I would have expected. I confirmed
> R
     17  mother
107  09  sibling
107  18  father
107  19  mother
108  16  sibling
108  NA  father
108  NA  mother
109  17  sibling
109  NA  father
109  NA  mother
That's the data. Now a litt
On 12-Aug-2014 22:22:13 Ted Harding wrote:
> On 12-Aug-2014 21:41:52 Rolf Turner wrote:
>> On 13/08/14 07:57, Ron Michael wrote:
>>> Hi,
>>>
>>> I would need to get a clarification on a quite fundamental statistics
>>> property, hope expeRts here would
cation).
The important thing when using pre-programmed functions is to know
which is being used. R uses (n-1), and this can be found from
looking at
?sd
or (with more detail) at
?cor
Ron had assumed that the denominator was n, apparently not being aware
that R
>
> Point is that I am not getting exact CORR matrix. Can somebody point me
> what I am missing here?
>
> Thanks for your pointer.
Try:
Data_Normalized <- apply(Data, 2, function(x) return((x - mean(x))/sd(x)))
(t(Data_Normalize
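Completing the idea as a sketch (with made-up Data): the correlation matrix
is t(Z) %*% Z / (n - 1), where Z holds the standardised columns, so it
should agree with cor() up to rounding error:
set.seed(1)
Data <- matrix(rnorm(50), nrow = 10)
Z <- apply(Data, 2, function(x) (x - mean(x)) / sd(x))
max(abs(crossprod(Z) / (nrow(Data) - 1) - cor(Data)))
# effectively zero (floating-point rounding only)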
4*a*b
MEAN^2 - 3*SD^2 = a*b
Hence for a >= 0 and b > a you must have MEAN^2 >= 3*SD^2.
Once you have MEAN and SD satisfying this constraint, you should
be able to solve the equations for a and b.
Hoping this helps,
Ted.
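A sketch of that last step, assuming (as the equations above suggest) a
uniform distribution on [a, b]: from a + b = 2*MEAN and
(b - a)^2 = 12*SD^2 one gets the following (the function name is mine, not
from the thread):
uniform_ab <- function(MEAN, SD) {
  c(a = MEAN - sqrt(3) * SD, b = MEAN + sqrt(3) * SD)
}
uniform_ab(MEAN = 5, SD = 1)
#        a        b
# 3.267949 6.732051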
-
E-Mail: (Ted Hardi
ments:
n: Number of element to permute.
so, starting with
x <- c("A","B","C","D","E")
library(e1071)
P <- permutations(length(x))
then, for say the 27th of these 120 permutations of x,
x[P[27,]]
will return it.
Ted.
On 25-Jun
ials/394-hidden-files-folders-show-hide.html
[NB: These are the results of a google search. I am no expert on
Windows myself ... ]
Hoping this helps,
Ted.
On 17-Jun-2014 12:48:54 Hiyoshi, Ayako wrote:
> Dear Martyn and Professor Ripley,
>
> Thank you so much for your help. I used Window
Maybe I am missing the point -- but what is wrong with line 3 of:
m=rbind(c(6,4,2),c(3,2,1))
v= c(3,2,1)
m%*%diag(1/v)
#      [,1] [,2] [,3]
# [1,]    2    2    2
# [2,]    1    1    1
Ted.
On 14-May-2014 15:03:36 Frede Aakmann Tøgersen wrote:
> Have a look at ?sweep
>
>
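The sweep() form Frede points to would be something like this; it divides
each column of m by the corresponding element of v and agrees with
m %*% diag(1/v):
m <- rbind(c(6, 4, 2), c(3, 2, 1))
v <- c(3, 2, 1)
sweep(m, 2, v, "/")
#      [,1] [,2] [,3]
# [1,]    2    2    2
# [2,]    1    1    1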
# user  system elapsed
# 0.028   0.000   0.029
system.time( for(i in (1:1)) ((((1)*2)*3)*4)*5 )
# user  system elapsed
# 0.052   0.000   0.081
(though in fact the times are somewhat variable in both cases,
so I'm not sure of the value of the relationship).
Best wishes,
Ted.
---
l
coma now.
Best wishes,
Ted.
On 04-May-2014 17:10:00 Gabor Grothendieck wrote:
> Checking this with the bc R package (https://code.google.com/p/r-bc/),
> the Ryacas package (CRAN), the gmp package (CRAN) and the Windows 8.1
> calculator all four give the same result:
>
>> lib
',
into which one can enter a 'bc' command and get the result
returned as a string, but I can't seem to find it on CRAN now.
In any case, the raw UNIX command line for this calculation
with 'bc' (with result) is:
$ bc -l
[...]
168988580159 * 36662978
6195624596620653502
qu
ven simpler (if it is only one particular month you want,
as in your example) is:
$ cal April 2014
which yields:
     April 2014
Mo Tu We Th Fr Sa Su
    1  2  3  4  5  6
 7  8  9 10 11 12 13
14 15 16 17 18 19 20
21 22 23 24 25 26 27
28 29 30
and now just count down the 3rd col
"(factors)
[...]
If, as he implies, the "acc" variable in "data" is a factor,
then lm() will not enjoy fitting an lm where the dependent
variable (response) is a factor!
Just a shot in the dark ...
Ted.
On 30-Mar-2014 18:46:27 Bert Gunter wrote:
> 1. Post in plain text, not
hat the
boundary is drawn as a set of separate partial boundaries which
are in no particular order as a whole; and in some datasets the
different separate parts of the boundary do not exactly match up
at the points where they should exactly join.
Hoping this helps,
Ted.
----
t integer when not exactly halfway between, and rounds
either always up or always down when the fractional part is exactly 1/2,
then I think (but others will probably correct me) that you may have
to write your own -- say roundup() or rounddown():
roundup <- function(x){
if((x-floor(x))==
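One possible completion of that truncated function (my guess at the intent,
not the original code): round to nearest, but push exact .5 cases upward;
rounddown() would use floor() instead of ceiling().
roundup <- function(x) {   # scalar version
  if ((x - floor(x)) == 0.5) ceiling(x) else round(x)
}
roundup(2.5)  # [1] 3   (round(2.5) would give 2)
roundup(2.4)  # [1] 2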
aa0<-gsub("^[0-9]+ ","",aa)
aa0
# [1] "(472)" "(445)" "(431)" "(431)" "(415)" "(405)" "(1)"
as.numeric(gsub("[()]","",aa0))
# [1] 472 445 431 431 415 405 1
Ted.
x); so the actual byte content
of newseed is:
4b e9 76 34 41 cf 5e 17 b0 68 78 98 87 9e 8b 5f
fb 4f 52 e6 59 ef 0b 58 52 58 4a 3a df 04 c1 8d
This could be achieved via a system() call from R; and the contents
of newseed would then need to be converted into a format suitable
for use as ar
1 - (1-p1)*(1-p2)* ... *(1-pk)
where pj is P(Aj). Hence
punion <- function(p){1 - prod(1-p)}
should do it!
Ted.
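A quick check against the two-event formula P(A or B) = p1 + p2 - p1*p2
(independence assumed throughout):
punion <- function(p){1 - prod(1-p)}
punion(c(0.3, 0.5))    # [1] 0.65
0.3 + 0.5 - 0.3*0.5    # [1] 0.65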
-----
E-Mail: (Ted Harding)
Date: 18-Feb-2014 Time: 23:51:31
ondering what context this could
arise in), then the commands
x <- seq(from=0, to=100, length.out=100)
x0 <- 65.44945
plot(x+x0, dgamma(x, shape=2, scale=5.390275),
main="Gamma",type='l')
will produce such a plot. However, I wonder if you have correctly
expres
thing interesting is sitting in my disk, I can edit it if
I wish, I can make local copies, etc. etc. etc. etc. Anything which is
not interesting gets deleted (though I can always dig into R-help
archives if need be).
Best wishes,
Ted.
On 03-Feb-2014 21:36:21 Rolf Turner wrote:
>
> For what
that it has been dealt with).
The best address for enquiries about subscribing to/using/posting
to R-help is
r-help-ow...@r-project.org
Ted.
>> thx
>> [[alternative HTML version deleted]]
>
> Don't post in html, please.
>
> Rui Barradas
>>
>>
e interval (1/50 sec is the default, I think), and record all
> functions that are currently active on the execution stack. So tiny
> little functions could be missed, but bigger ones probably won't be.
>
> There are also options to Rprof
ere)!
But, before anyone takes my posting *too* seriously, let me say that
it was written tongue-in-cheek (or whatever the keyboard analogue of
that may be). I'm certainly not "blaming R".
Have fun anyway!
Ted.
On 22-Dec-2013 17:35:56 Bert Gunter wrote:
> Yes.
>
> See also Feig
For S<-11, x[52]=8 then 6 then 10 then 2 then 4 then 8 6 10 2 4 ...
so period = 5.
For S<-13, x[51]=4 then 8 10 6 12 2 4 8 10 6 12 2 4 8 ...
so period = 6.
For S<-19, x[51]=12 then 14 10 18 2 4 8 16 6 12 ...
so period = 9.
And so on ...
So, one sniff of something like S<-19, and
x[1:(N-1)] + x[2:N]
# [1] 5 13 29 23 10
Best wishes,
Ted.
---------
E-Mail: (Ted Harding)
Date: 14-Dec-2013 Time: 10:54:00
4: "+XZZZU.C5BF89ZZZUBP+"
5: "+XZZZU.CZUZUBF89ZZZUBP+"
6: "+XZZZU.CZUZUBF89ZZZUBP+"
7: "+XZZZU.CZUZUBF89ZZZUBP+"
8: "+XZZZU.CZUZUBFUZZZ9ZZZUBP+"
9: "+XZZZU.CZUZUBFUZZZUZZUZZZUBP+"
A: "
#       [,1] [,2] [,3]
#  [1,]    1    1    1
#  [2,]    1    1    2
#  [3,]    1    1    3
#  [4,]    1    2    2
#  [5,]    1    2    3
#  [6,]    1    3    3
#  [7,]    2    2    2
#  [8,]    2    2    3
#  [9,]    2    3    3
# [10,]    3    3    3
There may be a simpler way!
Ted.
-
E-Mail:
qnorm(0.05155075)
[1] -1.63
so maybe you mistyped "1.63" instead of "1.53"?
-
E-Mail: (Ted Harding)
Date: 16-Oct-2013 Time: 16:12:56
and.
Therefore, with the above exemplar, if there were, say, 75 settings,
then that loop would complete in a very short time, after which
you would have 75 copies of R executing simulations, and your
original R command-line would be available.
Just a suggestion (which may h
cheers,
> Rolf Turner
Though, mind you, FAQ 3.71 does also offer some consolation to R:
all.equal(0,sin(pi))
# [1] TRUE
So it depends on what you mean by "different from". Computers
have their own fuzzy concept of this ... Babak has too fussy
a concept.
Ted.
-
On 21-Aug-2013 19:08:29 David Winsemius wrote:
>
> On Aug 21, 2013, at 10:30 AM, (Ted Harding) wrote:
>
>> Greetings all.
>>
>> I suspect this question has already been asked. Apologies
>> for not having traced it ...
>>
>> In the default pairs
lurking somewhere in the depths of this
function which can be set so that the scales for all the variables
X1,X2,X3,X4,X5 appear both above and below columns 1,2,3,4,5;
and both to the left and to the right of rows 1,2,3,4,5?
With thanks,
Ted.
-
E
proportions of the population:
50th to 85th = 35%; 31st to 69th = 38%; 69th to 93rd = 24%. So you are
still facing issues of what you mean, or what you want to mean.
Simpler to stick to the original "odds per unit of x" and then apply
it to whatever multiple of the unit you happen to be int
Don't worry about it. As I say, it can happen to anyone (though more
often to some than to others). If it is a proper message to R-help,
one of the moderators will approve it (though quite possibly not
immediately).
Hoping this helps,
Ted (one of the mode
#      [,1] [,2] [,3] [,4] [,5]
# [1,]    1    8   15   22   29
# [2,]    2    9   16   23   30
# [3,]    3   10   17   24   31
# [4,]    4   11   18   25   32
# [5,]    5   12   19   26   33
# [6,]    6   13   20   27   34
# [7,]    7   14   21   28   35
# To permute the rows:
t(app
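The t(apply(...)) call is cut off above, so here is one self-contained way
(not necessarily the original one) to build the matrix and permute its rows:
m <- matrix(1:35, nrow = 7)
m[sample(nrow(m)), ]   # the 7 rows in a random order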
eak)
choose(37,7)/choose(40,10)
# [1] 0.01214575
so the chance of all 3 being in some one of the 4 groups is
4*choose(37,7)/choose(40,10)
# [1] 0.048583
which, if you are addicted to P-values, is just significant
at the 5% (P <= 0.05) level. So this gives some indication
that the "
Thanks, Jorge, that seems to work beautifully!
(Now to try to understand why ... but that's for later).
Ted.
On 25-Apr-2013 10:21:29 Jorge I Velez wrote:
> Dear Dr. Harding,
>
> Try
>
> sapply(L, "[", 1)
> sapply(L, "[", 2)
>
> HTH,
> Jor
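A self-contained sketch of Jorge's suggestion: "[" is the subscripting
function, so sapply(L, "[", k) extracts the k-th element from every
component of the list (made-up L shown here):
L <- list(c("A1", "B1"), c("A2", "B2"), c("A3", "B3"))
sapply(L, "[", 1)
# [1] "A1" "A2" "A3"
sapply(L, "[", 2)
# [1] "B1" "B2" "B3"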
strings
which are second, i.e. from L (as above) I would want to extract:
V1 = c("A1","A2","A3",...)
V2 = c("B1","B2","B3",...)
Suggestions?
With thanks,
Ted.
-
E-Mail: (Ted Harding)
ng from which you can extract the
individual digits. And then on to whatever you want to do ...
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding)
Date: 18-Apr-2013 Time: 10:06:43
(S), you know which element of S to put in
each position of the sorted order:
S[order(S)]
[1] 210 210 505 920 1045 1210 1335 1545 2085 2255 2465
Does this help to explain it?
Ted.
> Please help me to understand all this!
>
> Thanks,
>
> -Sergio.
>
>
is a vector, it will be promoted to either a row or column matrix
to make the two arguments conformable. If both are vectors it
will return the inner product (as a matrix).
Usage:
x %*% y
[etc.]
Ted.
-----
E-Mail: (Ted Harding)
Date: 11
es"
will be the result of a calculation) then one useful precaution
could be to round the result:
round(0.29*100)
# [1] 29
29-round(0.29*100)
# [1] 0
length(rep(TRUE,0.29*100))
# [1] 28
length(rep(TRUE,round(0.29*100)))
# [1] 29
(The default for round() is 0 decimal places, i.e.
On 01-Apr-2013 21:26:07 Robert Baer wrote:
> On 4/1/2013 4:08 PM, Peter Ehlers wrote:
>> On 2013-04-01 13:37, Ted Harding wrote:
>>> Greetings All.
>>> This is a somewhat generic query (I'm really asking on behalf
>>> of a friend who uses R on Window
Chunk 1? (The size-change may perhaps
have to be determined empirically).
With thanks,
Ted.
---------
E-Mail: (Ted Harding)
Date: 01-Apr-2013 Time: 21:37:17
p
# 0.4957627
So it doesn't do the requested continuity correction in [A] because
there is no need to. But in [B1] it makes a difference (compare
with [B2]), so it does it.
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding)
Date: 27-Mar-2013
Thanks! ?View does indeed state "The object is then viewed
in a spreadsheet-like data viewer, a read-only version of
'data.entry'", which is what I was looking for!
Ted.
On 26-Mar-2013 10:23:59 Blaser Nello wrote:
> Try ?View()
>
> -Original Message-
> From: r
Sorry, I meant "data.entry()", not "edit.data()" (the latter due
to mental cross-wiring with "edit.data.frame()").
I think that Nello Blaser's suggestion of "View" may be what I
seek (when I can persuade it to find the font it seeks ... )!
With thank
Or some other
function which could offer similar viewing capability without
the risk of data change?
With thanks,
Ted.
-----
E-Mail: (Ted Harding)
Date: 26-Mar-2013 Time: 10:08:58
."
This could mean that the vector (X1,...,X10) has a multivariate
normal distribution with 10 dimensions, and, for a single vector
(X1,...,X10) drawn from this distribution, (X(1), ..., X(10))
is a vector consisting of these same values (X1,...,X10), but
in increa
imilar for other arbitrary choices of first and second distribution
(so long as each has at least a second moment, hence excluding, for
example, the Cauchy distribution).
That's about as far as one can go with your question!
Hoping it helps, however.
Ted.
---
# [1] FALSE
(0.1 + 0.05) < (0.15 - .Machine$double.eps^0.5)
# [1] FALSE
(or similar). Observe that
.Machine$double.eps^0.5
# [1] 1.490116e-08
.Machine$double.eps
# [1] 2.220446e-16
(0.1 + 0.05) - 0.15
# [1] 2.775558e-17
Hoping this helps,
Ted.
---