Jim Porzak gmail.com> writes:
>
> I just noticed this CRAN package check error for Windows & Mac:
> http://www.r-project.org/nosvn/R.check/r-patched-windows-x86_64/odfWeave-00check.html
>
> >
> > I've run into a problem with odfWeave 0.7.5 running under R 2.7.1
> > (Windows XP, SP2) doing some
round(0.405852702657738,6)
[1] 0.405853
Super
Chunhao Tu
Quoting Daren Tan <[EMAIL PROTECTED]>:
I am confused by options("digits") and options("scipen"); which of the two
should be used to output 0.405852702657738 as 0.405853 in
write.table?
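Neither option actually controls what write.table writes: options("digits") only affects console printing and options("scipen") only biases fixed versus scientific notation. A minimal sketch (the data frame below is made up) is to round, or format, the values before writing:
x <- data.frame(v = 0.405852702657738)
write.table(round(x, 6), file = "out.txt", row.names = FALSE)
# or keep full precision in R and only format the written text:
write.table(format(x, digits = 6), file = "out.txt", row.names = FALSE, quote = FALSE)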
On Thu, 10 Jul 2008, Georg Ehret wrote:
Dear R community,
I am using 6 variables to test for an effect (by linear regression).
These 6 variables are strongly correlated among each other and I would like
to find out the number of independent tests that I perform in this
calculation.
For wha
Hi, what do you mean by effective number of tests? How you approach it also
depends on the research tradition in your field. Some fields just include
the variables in alternative regressions and then include them jointly.
However, since your variables are so highly correlated (i.e. they convey
alm
It looks like SR, SU and ST are strongly correlated to each other, as well as
DR, DU and DT.
You can try to do PCA on your 6 variables, pick the first 2 principal
components as your new variables and use them for regression.
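A minimal sketch of that idea (the data frame and outcome names below are made up; the variable names are taken from the thread):
pc <- prcomp(mydata[, c("SR", "SU", "ST", "DR", "DU", "DT")], scale. = TRUE)
summary(pc)                  # how much variance each component explains
mydata$pc1 <- pc$x[, 1]
mydata$pc2 <- pc$x[, 2]
fit <- lm(outcome ~ pc1 + pc2, data = mydata)
summary(fit)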
--- On Fri, 11/7/08, Georg Ehret <[EMAIL PROTECTED]> wrote:
> From: G
Here is one way:
> v <- c(20, 134, 45, 20, 24, 500, 20, 20, 20)
> tail(v[v>20],1)
[1] 500
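Note that this returns the value (500); if the position in the vector is wanted instead, a small variation works:
max(which(v > 20))      # 6, the index of the last element greater than 20
v[max(which(v > 20))]   # 500, the value at that position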
On Thu, Jul 10, 2008 at 8:41 AM, Thaden, John J <[EMAIL PROTECTED]> wrote:
> This shouldn't be hard, but it's just not
> coming to me:
> Given a vector, e.g.,
> v <- c(20, 134, 45, 20, 24, 500, 20, 20, 20)
Here is what I get on Windows with R 2.7.1:
> x <- 0.55
> 100*x - 55
[1] 7.105427e-15
> y <- 2.55
> 100*y - 255
[1] -2.842171e-14
> round(x,1)
[1] 0.6
> round(y,1)
[1] 2.5
>
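To see the stored values behind those results, print with more digits (this is not Windows-specific):
print(0.55, digits = 17)        # 0.55000000000000004
sprintf("%.17g", 100 * 0.55)    # "55.000000000000007"  (just above 55)
sprintf("%.17g", 100 * 2.55)    # "254.99999999999997"  (just below 255)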
On Thu, Jul 10, 2008 at 9:10 PM, Moshe Olshansky <[EMAIL PROTECTED]> wrote:
> Actually, your result is strange, since if x
Hi Chris --
"Chris Gaiteri" <[EMAIL PROTECTED]> writes:
> I have an "embarrassingly parallel" routine that I need to run 24000^2/2
> times (based on some microarray data). All I really need to do is
> parallelize a nested for-loop. But I haven't found a clear list of what
> packages/commands I'
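One possible sketch for this kind of job uses the snow package; the worker count and the per-pair function f below are placeholders, not part of the original post:
library(snow)
f <- function(i, j) i * j               # stands in for the real pairwise computation
cl <- makeCluster(8, type = "SOCK")     # e.g. one worker per core
clusterExport(cl, "f")                  # make f available on the workers
res <- parLapply(cl, seq_len(24000), function(i)
  sapply(seq_len(i), function(j) f(i, j)))
stopCluster(cl)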
Dear R community,
I am using 6 variables to test for an effect (by linear regression).
These 6 variables are strongly correlated among each other and I would like
to find out the number of independent tests that I perform in this
calculation. For this I calculated a matrix of correlation coeff
On 11/07/2008, at 12:47 PM, hippie dream wrote:
I am having difficulty installing packages. For example, here I
have tried to
install "gplots" but to no avail.
install.packages()
** building package indices ...
* DONE (gplots)
The downloaded packages are in
You must load the package with the library function after installing it.
so:
library(gplots)
and then
?plotCI
hippie dream wrote:
I am having difficulty installing packages. For example, here I have tried to
install "gplots" but to no avail. I have tried multiple mirrors and
different packag
Actually, your result is strange, since if x = 0.55 then a hexadecimal
representation of x*100 is 404B800000000001, which is above 55, meaning that x
is above 0.55 and should have been rounded to 0.6, while if x = 2.55 then the
hexadecimal representation of x*100 is 406FDFFF which is be
I am having difficulty installing packages. For example, here I have tried to
install "gplots" but to no avail. I have tried multiple mirrors and
different packages (errbar) but after installation I seem to be unable to
access any of the commands or help pages. I am a little concerned about the
'l
The problem is that neither 0.55 nor 2.55 are exact machine numbers (the
computer uses binary representation), so it may happen that the machine
representation of 0.55 is slightly less than 0.55 while the machine
representation of 2.55 is slightly above 2.55.
--- On Fri, 11/7/08, Korn, Ed (NIH
on 07/10/2008 01:39 PM Carlisle Thacker wrote:
Search is not working after starting the help facility using the
help.start function. The firefox browser says the applet has started,
but nothing happens. What can I do to get it working? Versions of R, java,
and firefox are shown below.
Thanks,
I got
round(0.55,1)
[1] 0.6
round(2.55,1)
[1] 2.5
Your answers are also correct. Please check ?round
Quoting "Korn, Ed (NIH/NCI) [E]" <[EMAIL PROTECTED]>:
Hi,
Round(0.55,1)=0.5
Round(2.55,1)=2.6
Can this be right?
Thanks,
Ed
u... ?princomp ** does give you references **. Did you not first check
this before posting?
-- Bert Gunter
Genentech, Inc.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Wu, Hong-Sheng
Sent: Thursday, July 10, 2008 4:01 PM
To: r-help@r-project.org
Su
Please provide a self-contained example with a post like this. I guess you
mean the output of the last line below? It looks like those are just the
theoretical proportions you would get if your data were perfectly orthogonal. I found those
values invariant to changing the distributional parameters of the variables,
but
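A small reproducible illustration using the built-in USArrests data:
pc <- princomp(USArrests, cor = TRUE)
print(loadings(pc))
# The block under the loadings matrix shows, per component:
#   SS loadings    - sum of squared loadings (1 for each component here)
#   Proportion Var - SS loadings divided by the number of variables
#   Cumulative Var - running total of Proportion Var
summary(pc)   # the actual proportion of variance explained is reported here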
Dear all,
When I print out princomp's loading outputs, there is always a section for "SS
loadings", "Proportion Var" and "Cumulative Var". Can anybody tell me what they
are for, or direct me to some reference to read about them?
Any help will be highly appreciated.
Hongsheng
I think it's rather presumptuous of you to ask whether a fundamental
behavior that you don't understand in a mature software product that has
been used by literally thousands of people for around 10 years (>20 for S)
is "a bug", don't you?
So the answer is, no, this is how R's evaluation mechanis
On Thu, Jul 10, 2008 at 3:28 PM, Jim Price <[EMAIL PROTECTED]> wrote:
>
> Playing with ggplot, something I'd promised myself I'd get around to. I've
> the following scenario:
>
> library(lattice)
> library(ggplot2)
>
> myData <- data.frame(
>x = rnorm(100),
>y = rnorm(100),
>
I just noticed this CRAN package check error for Windows & Mac:
http://www.r-project.org/nosvn/R.check/r-patched-windows-x86_64/odfWeave-00check.html
-Jim
On Thu, Jul 10, 2008 at 3:03 PM, Jim Porzak <[EMAIL PROTECTED]> wrote:
> Max & Friends,
>
> I've run into a problem with odfWeave 0.7.5 runni
Actually my last reply will drop one row since it's pushed off
beyond the data range. You can avoid that with zooreg:
# from before for comparison
lag(z, -1, na.pad = TRUE)
# pure shift - note use of zooreg here
# Unlike lag.zoo, lag.zooreg can shift beyond data range
zr <- as.zooreg(z)
lag(zr
Max & Friends,
I've run into a problem with odfWeave 0.7.5 running under R 2.7.1
(Windows XP, SP2) doing some monthly production reports.
Under 2.7.1 I'm getting various parsing errors after Sweaving starts, e.g.:
Sweaving content.Rnw
Writing to file content_1.xml
Processing code chunks
On 10-Jul-08 20:43:12, DaveFrisch wrote:
>
> Okay, so I'm fairly retarded, and asked a question about finding
> the T-Value in the Fisher Exact method. I suppose what I'm truly
> after can best be explained by the Biddle Consulting site that has
> a program setup to deal with this kind of thing.
See ?lag and in zoo ?lag.zoo. Both pages have
examples. Using lag.zoo here it is with your data:
Lines <- "Date Apples Oranges Pears
1/7 2 3 5
2/7 1 4 7
3/7 3 8 10
4/7 5 7 2
5/7 6 3
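A minimal self-contained version of the same idea (a toy series, not the data above):
library(zoo)
z <- zoo(1:5)
lag(z, -1, na.pad = TRUE)   # previous value, padded with NA so the length is kept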
On 11/07/2008, at 8:07 AM, Korn, Ed (NIH/NCI) [E] wrote:
Hi,
Round(0.55,1)=0.5
Round(2.55,1)=2.6
Can this be right?
Yes.
Rolf Turner
Hi,
I'm trying to run a garch model. and I want to use the predict
function to predict ahead the conditional standard deviation.
but when I run the predict function I get the same number of values back as in my data.
How should I define the number of days looking ahead in garch predict?
I really apprecia
Playing with ggplot, something I'd promised myself I'd get around to. I've
the following scenario:
library(lattice)
library(ggplot2)
myData <- data.frame(
x = rnorm(100),
y = rnorm(100),
group = 1:4
)
xyplot(y ~ x | factor(group), data = myData, layout = c(2, 2))
qplot(
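For comparison, a sketch of the ggplot2 counterpart of the lattice call (assuming a ggplot2 version that provides facet_wrap):
qplot(x, y, data = myData) + facet_wrap(~ group, ncol = 2)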
Okay, so I'm fairly retarded, and asked a question about finding the T-Value
in the Fisher Exact method. I suppose what I'm truly after can best be
explained by the Biddle Consulting site that has a program setup to deal
with this kind of thing. Unfortunately, it is not currently functioning,
an
Hi, everybody.
I have visited the REvolution Computing web page, but it's not clear when the
new release will be or what it will contain. Does anybody have information about that?
Thank you
Mariana
John Maindonald wrote:
>
> Does anyone know any more than is in the following press release
> about REvolution
Hi everyone,
Thanks very much for all your replies.
I'm interested in hearing more about the lag function. I remember coming
across this in the R intro manual, but I couldn't figure out how to apply it
to my case. Does anyone know how it is applied, assuming a time series data
frame?
Thanks,
r
Hi,
Round(0.55,1)=0.5
Round(2.55,1)=2.6
Can this be right?
Thanks,
Ed
You need a gcc version that supports OpenMP; I believe that means gcc
4.2 or later.
pnmath0 should work with older gcc versions.
luke
On Thu, 10 Jul 2008, Mike Lawrence wrote:
Has anyone successfully compiled pnmath
(http://www.stat.uiowa.edu/~luke/R/experimental) for an intel processor
runn
The 'polynom' package provides one example of how to do this.
However, getAnywhere("+.polynomial") just told me "no object named
‘+.polynomial’ was found"; I don't know why. The documentation says it
uses S3 classes. The 'Matrix' package should provide other examples,
using S4 classes.
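For the S3 case the pattern is simply a function named after the operator and the class; a minimal sketch with a made-up class:
"+.myclass" <- function(e1, e2) {
  structure(unclass(e1) + unclass(e2), class = "myclass")
}
a <- structure(1:3, class = "myclass")
b <- structure(4:6, class = "myclass")
a + b   # dispatches to "+.myclass"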
Hi:
I have a function as follows:
my.plot<- function( x, y = NULL, ... )
{
plot( x, y, cex.axis=0.5, ...)
}
Set up the variables:
x <- 1:10; y <- x; tdf <- data.frame( x, y ); main.str <- "test"
I will exercise the function in two ways:
> my.plot( y ~ x, tdf, main = "test" )
This works fi
1. This is a data frame, not a matrix ! -- ?data.frame
2. help.search("sort") is the first thing you should have done. You probably
wouldn't have needed to post if you had.
3. The canonical answer is ?order and indexing -- as in:
yourdf[order(yourdf$Age),]
Note that order() can be used to sort
Search is not working after starting the help facility using the
help.start function. The firefox browser says the applet has started,
but nothing happens. What can I do to get it working? Versions of R, java,
and firefox are shown below.
Thanks,
Carlisle
> version
_
platform
On 10-Jul-08 16:49:57, DaveFrisch wrote:
> I do not understand how to interpret this to find the T Value
> for the data.
> Is there a way to figure this out, or another function that will
> provide this for me using Fisher's Exact Test?
If I interpret your query correctly, you are expecting to fi
Dear Angelo,
What about this?
mydata=read.table(textConnection("Name Sex Age
Fred M 24
John M 18
Mary F 21"), header=TRUE,sep="")
mydata[order(mydata$Age),]
HTH,
Jorge
On Thu, Jul 10, 2008 at 2:21 PM, Angelo Scozzarella <
[EMA
Has anyone successfully compiled pnmath (http://www.stat.uiowa.edu/~luke/R/experimental
) for an intel processor running mac OS 10.5? When I attempt to do so
via the R package installer (choosing "Local Source Package" and
pointing to the pnmath_0.0-2.tar.gz file), I get the following errors:
Do you really want to sort a *matrix*, or do you mean a data.frame? See
?order.
Angelo Scozzarella wrote:
Hi,
I want to sort a matrix by a specific variable without changing the row
binding between variables.
Ex.
Name Sex Age
Fred M 24
John M 18
Mary F
Christoph Meyer uni-ulm.de> writes:
>
> Hi Henning,
>
> have a look at the scales argument in the xyplot documentation.
> You could try adding the following to your code:
>
> scales=list(y=list(relation="free"))
>
> Cheers,
>
> Christoph
>
Yes, but this apparently doesn't work for xYplots
Hi,
I want to sort a matrix by a specific variable without changing the
row binding between variables.
Ex.
Name Sex Age
Fred M 24
John M 18
Mary F 21
ordered by Age
Name Sex Age
John M 18
Ma
Henning Wildhagen gmx.de> writes:
>
> Dear list,
>
> using the packages Hmisc and lattice I produced some nice xYplots. However,
> since the data range of the conditioning variable is very large, I need to
> define more than one y-scale for the plot (in some panels you just see a
> flat line o
I did a search on my system and it seems like R.h and the other files are not
on the system. Also, neither CRAN nor a Google search seems to have any relevant
info about these files.
Here's the most relevant one in regards to Rversion.h:
[Rd] Rversion.h [EMAIL PROTECTED] [EMAIL PROTECTED]
Thu,
pnmath currently uses up to 8 threads (i.e. 1, 2, 4, or 8).
getNumPnmathThreads() should tell you the maximum number used on your
system, which should be 8 if the number of processors is being
identified correctly. With the size of m this calculation should be
using 8 threads, but the exp calcula
Hi Henning,
have a look at the scales argument in the xyplot documentation.
You could try adding the following to your code:
scales=list(y=list(relation="free"))
Cheers,
Christoph
Thursday, July 10, 2008, 4:35:28 PM, you wrote:
> Dear list,
> using the packages Hmisc and lattice i produced
Hi Dave,
As far as I know there is no T value for Fisher's exact test (no matter whether
you use SAS, R or other packages).
Fisher's exact test is for categorical data analysis.
Fisher's exact test tests the null of independence of rows and
columns in a contingency table with fixed marginals,
so y
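A small illustration of what fisher.test() does report (the 2 x 2 table is made up):
m <- matrix(c(8, 2, 1, 5), nrow = 2)
fisher.test(m)
# the output contains a p-value, an odds-ratio estimate and its confidence
# interval, but no t statistic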
DaveFrisch wrote:
I do not understand how to interpret this to find the T Value for the data.
Is there a way to figure this out, or another function that will provide
this for me using Fisher's Exact Test?
What is a T value for Fisher's test _supposed_ to mean?
The outcome of my data
I have just installed Rmpi on a Suse 9.1 linux cluster with
openmpi-1.0.1. I am trying the example included below from the tutorial
website. However, I keep getting the following error:
> # Load the R MPI package if it is not already loaded.
> if (!is.loaded("mpi_initialize")) {
+ library
I do not understand how to interpret this to find the T Value for the data.
Is there a way to figure this out, or another function that will provide
this for me using Fisher's Exact Test?
The outcome of my data is listed below.
data: DATA
p-value = 0.1698
alternative hypothesis: true odds ra
Hi All,
Title: Non-normal data issues in PhD software engineering experiment
I hope I am not breaching any terms of this forum with this rather general
post. There are very R-specific elements to this rather long posting.
I will do my best to clearly explain my experiment, goals and problems here
On Thu, Jul 10, 2008 at 11:06 AM, Daniel Malter <[EMAIL PROTECTED]> wrote:
>
> I hope you don't really want our patients :)
>
> It looks like you have an experiment with two groups. You have several
> trials for each group. And within each trial you observe your units at
> distinct points in time.
>
I have an "embarrassingly parallel" routine that I need to run 24000^2/2
times (based on some microarray data). All I really need to do is
parallelize a nested for-loop. But I haven't found a clear list of what
packages/commands I'd need to do this. I've got a dual quad core xeon
system running
On Thu, Jul 10, 2008 at 5:15 PM, Andrew Jackson <[EMAIL PROTECTED]> wrote:
> Hi All,
>
Hi Andrew,
The main questions here are not R-related, but statistical modelling
questions, and much too broad for the R list. They are things you'd
ask a (paid) statistical consultant. I would suggest taking co
"Li, Xuejun" <[EMAIL PROTECTED]> writes:
>
>
> Hi, I'd like to create an S4 class that contains only one type of data.
> However, the number of slots varies.
>
> For example, I want to create a class "a"; each slot in "a" contains
> numeric values only.
>
>> setClass("a", contains = "numeric")
>
> I
I have a longitudinal data set in long format and I want to run
individual regressions. I do this by using the by() function as
follows:
temp <- by(tolerance.pp, tolerance.pp$id, function(x)
summary(lm(tolerance ~ time, data=x)))
This works fine. Coefficients for the first two subjects ar
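To pull the estimates out of that by() result afterwards, one sketch (assuming the model is tolerance ~ time as above):
slopes     <- sapply(temp, function(s) coef(s)["time", "Estimate"])
intercepts <- sapply(temp, function(s) coef(s)["(Intercept)", "Estimate"])
head(cbind(intercepts, slopes))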
"Juan Pablo Romero Méndez" <[EMAIL PROTECTED]> writes:
> Just out of curiosity, what system do you have?
>
> These are the results in my machine:
>
>> system.time(exp(m), gcFirst=TRUE)
>    user  system elapsed
>    0.52    0.04    0.56
>> library(pnmath)
>> system.time(exp(m), gcFirst=TRUE)
>
I hope you don't really want our patients :)
It looks like you have an experiment with two groups. You have several
trials for each group. And within each trial you observe your units at
distinct points in time.
The first piece of advice is to graphically display your data. Before you
start modeli
Hi, I'd like to create an S4 class that contains only one type of data.
However, the number of slots varies.
For example, I want to create a class "a"; each slot in "a" contains
numeric values only.
> setClass("a", contains = "numeric")
If I want to create an object "a" with only one slo
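Because the class extends "numeric", the object itself is the numeric vector, so the same class holds data of any length; a quick sketch:
setClass("a", contains = "numeric")
x1 <- new("a", 1)            # length one
x3 <- new("a", c(1, 2, 3))   # same class, length three
length(x3)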
Hi All,
This is a rather general post to begin with. This is because I need to
provide you some important context. There are very R specific elements to
this further along in this rather long posting so I thank you in advance
for your patients.
I will do my best to clearly explain my experiment,
Hello RUser!
I tried to use ace for an ancestral state reconstruction but got back an error
message.
ace(FacVar,Tree, type="discrete")
Warning messages:
1: In nlm(function(p) dev(p), p = rep(ip, length.out = np), hessian = TRUE)
:
NA/Inf durch größte positive Zahl ersetzt (NA/Inf replaced by maximum positive value)
As I understand it, Duncan MacKay's solution involves simply pasting
the factors together, as in:
|_AX_|_AY_|_BX_|_BY_|
Which isn't quite as aesthetically pleasing as what I'm looking for:
|___A___|___B___|
|_X_|_Y_|_X_|_Y_|
Any further suggestions?
Mike
On 10-Jul-08, at 2:43 AM, Duncan
Creating variables for small datasets is very mundane, but lately I am dealing
with 10^7 by 10^3 datasets, which eat up a lot of memory. How can I monitor how
much has been used or reserved?
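Some of the built-in tools for this (x stands for whichever object you want to measure; memory.size() is Windows-only):
object.size(x)                        # size of a single object
print(object.size(x), units = "Mb")
gc()                                  # run garbage collection and report memory in use
memory.size()                         # Windows: Mb currently allocated to R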
Hi
Are all your input p values the same? If so your output FDR values would
be the same.
Or are all your p-values relatively large? Then (nearly) all your FDR
values might be 1.
Why don't you put a small example up of what you did? Then we could see
what method you used etc.
Regards
JS
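A tiny example of both situations (none of these numbers are from the original data):
p.adjust(c(0.001, 0.01, 0.04, 0.30, 0.50), method = "BH")   # distinct FDR values
p.adjust(rep(0.6, 5), method = "BH")                        # identical inputs give identical output
p.adjust(c(0.4, 0.6, 0.8, 0.9, 0.95), method = "BH")        # large p-values collapse to one value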
Dear All,
It is not a typical R question (though I use R for this) but I thought someone
would help me. For the list of P values, I have calculated FDR using p.adjust()
in R (bioconductor). But my FDR values are the same for all the P values. When do
we get the same FDR values? Does the smallest P value
Dear list,
using the packages Hmisc and lattice I produced some nice xYplots. However,
since the data range of the conditioning variable is very large, I need to
define more than one y-scale for the plot (in some panels you just see a
flat line of data points very close to the x-axis), e.g. diffe
Does this thread solve your problem? ->
https://stat.ethz.ch/pipermail/r-help/2007-July/136814.html
On 10-Jul-08, at 3:15 AM, [EMAIL PROTECTED] wrote:
Hello,
I have data which are not balanced (several missing observations),
and one possibility is to use an interpolation method
to get the inf
Try
help.search('interpolate')
and
help.search('impute')
(most of the responses to the latter come from packages that you may
not have installed, such as Hmisc)
-Don
At 8:15 AM +0200 7/10/08, [EMAIL PROTECTED] wrote:
Hello,
I have data which are not balanced (several missing observ
ACroske yahoo.com> writes:
>
>
> Ben:
> Thanks for the reply. One further question, and this is where my novice
> status at R shows through. The code makes sense, but what would I put in for
> "m"? Is it the same number for all three (that was my first thought since it
> was the same placeholde
Try this:
Lines <- "Q1-60 26.528 1.268
Q2-60 27.535 1.087
Q3-60 27.737 1.346
Q4-60 28.243 3
Q1-61 26.462 3.272
Q2-61 27.769 3.863
Q3-61 27.903 4.606
Q4-61 31.673 4.1
Q1-62 28.211 5.395
Q2-62 29.469 5.554
Q3-62 30.249 4.903"
library(zoo)
# z <- read.zoo("myfile.dat", FUN = as.yearqtr, format = "Q
if it is a time series the interpolation methods in zoo are an option.
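For instance (a toy zoo series; the real index would be your dates):
library(zoo)
z <- zoo(c(1, NA, NA, 4, 5))
na.approx(z)   # linear interpolation: 1 2 3 4 5
na.spline(z)   # spline interpolation
na.locf(z)     # last observation carried forward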
On Thu, Jul 10, 2008 at 6:41 AM, Daniel Malter <[EMAIL PROTECTED]> wrote:
>
> Please do read the posting guide. Please provide self-contained code (e.g.
> to
> randomly generate data) and illustrate (e.g. in a small table) wha
Hi,
Just so this doesn't end up being a dead-end post, here's what I figured out:
It's a GSview issue: for whatever reason, the version of GSview (4.9)
that I am using doesn't recognize WinAnsi encoding. I imagine there is
a way to fix this, but I don't care, because specifying an alternative
enc
I think the problem is that you are trying to plot non-numeric values
on your x. I built some test code around your example; I don't think
you can plot non-numeric characters. If the plot is working on your
end, it's likely because it's recognizing your X variable as a factor
and it's plotting the
Try this:
v[max(which(v > 20))]
On Thu, Jul 10, 2008 at 9:41 AM, Thaden, John J <[EMAIL PROTECTED]> wrote:
> This shouldn't be hard, but it's just not
> coming to me:
> Given a vector, e.g.,
> v <- c(20, 134, 45, 20, 24, 500, 20, 20, 20)
> how can I get the index of the last value in
> the vector
This shouldn't be hard, but it's just not
coming to me:
Given a vector, e.g.,
v <- c(20, 134, 45, 20, 24, 500, 20, 20, 20)
how can I get the index of the last value in
the vector having a value greater than n, in
this case, greater than 20? I'm looking for
an efficient function I can use on very
So the solution is: tapply(content, list(factor1, factor2), mean)
An example of what it does :
> my.data
name item vote
1 Ricardo Coke 20
2 Ricardo Fanta 60
3 Ricardo Pepsi 100
4 Marie Pepsi 40
5 Marie Coke 60
6 Julia
The canonical answer is: It is R, so everything is possible.
Sounds like you need to read what is produced by
?summary.rq
carefully.
url: www.econ.uiuc.edu/~roger        Roger Koenker
email: [EMAIL PROTECTED]             Department of Economics
vox: 217-333-45
Yeah,
It is an IF-THEN-ELSE.
Possible combinations are c4...c7. I know how to combine them in a plot.
But what I don't know is how to build a function which returns a plot depending
on those conditions!
Any help?
Thanks,
Adel Tekari
jholtman wrote:
>
> It sounds like an IF-THEN-ELSE should do th
hadley wickham wrote:
> On Thu, Jul 10, 2008 at 5:09 AM, Peter Dalgaard
> <[EMAIL PROTECTED]> wrote:
>
>> Tine wrote:
>>
>>> Hi!
>>>
>>> I was just wondering is there anyway to overload operator for custom
>>> class.
>>> For example:
>>> I have two matrices A and B with same dimensions. I w
On Thu, Jul 10, 2008 at 5:09 AM, Peter Dalgaard
<[EMAIL PROTECTED]> wrote:
> Tine wrote:
>> Hi!
>>
>> I was just wondering is there anyway to overload operator for custom
>> class.
>> For example:
>> I have two matrices A and B with same dimensions. I want to overload
>> operator '+' with my own fu
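One way to do this for an S4 class (a made-up class wrapping a matrix; just a sketch):
setClass("mymat", representation(m = "matrix"))
setMethod("+", signature("mymat", "mymat"),
          function(e1, e2) new("mymat", m = e1@m + e2@m))
A <- new("mymat", m = matrix(1:4, 2))
B <- new("mymat", m = matrix(4:1, 2))
(A + B)@m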
Thanks all of you. It's a really catchy name for the wanted function
("reverse"); somehow I could not find it, unfortunately :( Z
On Thu, Jul 10, 2008 at 2:04 PM, Gabor Csardi <[EMAIL PROTECTED]> wrote:
> It is called 'rev', see ?rev.
>
> > rev(1:10)
> [1] 10 9 8 7 6 5 4 3 2 1
>
> G.
>
> O
Is this what you want:
> x <- 1:10
> x
[1] 1 2 3 4 5 6 7 8 9 10
> rev(x)
[1] 10 9 8 7 6 5 4 3 2 1
>
On Thu, Jul 10, 2008 at 7:56 AM, Zroutik Zroutik <[EMAIL PROTECTED]> wrote:
> Dear R-users,
>
> I'd like to turn a vector so it starts with it's end. For better
> understanding
> rev(1:10)
[1] 10 9 8 7 6 5 4 3 2 1
>
Javier
> Dear R-users,
>
> I'd like to turn a vector so it starts with it's end. For better
> understanding, this set of commands will do what I need:
>
> i <- seq(1:10)
> i_turned <- i
> for (j in 1:length(i)) i_turned[j] <- i[length(i)-j+1]
>
It is called 'rev', see ?rev.
> rev(1:10)
[1] 10 9 8 7 6 5 4 3 2 1
G.
On Thu, Jul 10, 2008 at 01:56:58PM +0200, Zroutik Zroutik wrote:
> Dear R-users,
>
> I'd like to turn a vector so it starts with it's end. For better
> understanding, this set of commands will do what I need:
>
> i
See ?rev
On Thu, Jul 10, 2008 at 8:56 AM, Zroutik Zroutik <[EMAIL PROTECTED]> wrote:
> Dear R-users,
>
> I'd like to turn a vector so it starts with it's end. For better
> understanding, this set of commands will do what I need:
>
> i <- seq(1:10)
> i_turned <- i
> for (j in 1:length(i)) i_turned[
Dear R-users,
I'd like to turn a vector so it starts with its end. For better
understanding, this set of commands will do what I need:
i <- seq(1:10)
i_turned <- i
for (j in 1:length(i)) i_turned[j] <- i[length(i)-j+1]
now, i_turned is what I call turned. Is there a function which would make a
Hi list,
I have a database (30,000 entries) that my R cannot model with betareg(),
so I decided to take a sample and use a bootstrap to give me the correct
estimates of the variance. Does anybody know what library I can use in this
case?
Regards,
Atenciosamente,
Leandro Lins Marino
Centro de Av
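One option is the boot package; a hedged sketch (the model formula and data names below are placeholders):
library(boot)
library(betareg)
bstat <- function(d, idx) coef(betareg(y ~ x1 + x2, data = d[idx, ]))
b <- boot(mysample, bstat, R = 999)
boot.ci(b, type = "perc", index = 2)   # percentile CI for the second coefficient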
Dear all,
I am running a quantile estimation with a sparse matrix, and when I run
the procedure rq.fit.sfn I receive the following warning: tiny
diagonals replaced with Inf when calling blkfct.
Does anyone know exactly what it means? What kind of error is this?
Should I be worried about this mes
On Thu, Jul 10, 2008 at 6:26 AM, Vincent Goulet
<[EMAIL PROTECTED]> wrote:
> On Wed. 09 Jul. at 06:20, Rainer M Krug wrote:
>
>> Hi
>>
>> I tried to install rkward under Ubuntu Hardy Heron, but it tried to
>> use the one from the CRAN repository, which was newer, and it did not
>> install. To be
On Wed, 2008-07-09 at 17:12 -0500, hadley wickham wrote:
> What do you mean by equidistant? You can have three points that are
> equidistant on the plane, but there's no way to add another point and
> have it be the same distance from all of the existing points. (Unless
> all the points are in th
[EMAIL PROTECTED] napsal dne 10.07.2008 08:10:36:
> Hello,
> I have to merge several series by "date". I used:
>
> cb<-merge(cbds,cbbond,by=c("date"),all=T).
>
> I have the daily and the high frequency data.
> Unfortunately, the program did not sort by "sort"
Maybe ?sort or ?order can help you.
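For example (assuming cb$date is already a Date or POSIXct; otherwise convert it first, e.g. with as.Date):
cb <- cb[order(cb$date), ]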
Dear list,
I'm using the quantreg package for quantile regression. Although it's
fine, there is some weird behavior that is a little bit difficult to
understand. On some occasions, the regression results table shows
coefficients, t-statistics, standard errors and p-values. However, in
other occasi
Please do read the posting guide. Please provide self-contained code (calls
to randomly generated data) and illustrate (e.g. in a small table) what you
want to do and also illustrate (with the self-contained code) where your
current approach (if any) fails. After reading your message, I have only
http://www.ats.ucla.edu/stat/r/faq/sort.htm
Best,
daniel
sprohl wrote:
>
> Hello,
> I have to merge several series by "date". I used:
>
> cb<-merge(cbds,cbbond,by=c("date"),all=T).
>
> I have the daily and the high frequency data.
> Unfortunately, the program did not sort by "sort"
> my dat
Type
?ifelse
at the R prompt and hit enter.
You will have to put the yes (or the no) value in quotes in your ifelse command.
Best,
Daniel
Jörg Groß wrote:
>
> Hi,
>
> I have a problem sorting a table;
>
> When I read a table into R by x <- read.table() I get something like
> this:
>
> V1V2
Jörg Groß wrote:
I have a problem sorting a table;
When I read a table into R by x <- read.table() I get something like this:
V1  V2  V3
yes  1   3
no   2   6
yes  3   9
no   4  12
Now I want to generate a vector of V2.
But R should only put in the numbers of V2 into the new ve
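Assuming the new vector should contain V2 only for the rows where V1 is "yes", either of these sketches works:
v2.yes <- x$V2[x$V1 == "yes"]              # drop the other rows entirely
v2.na  <- ifelse(x$V1 == "yes", x$V2, NA)  # keep the length, fill the rest with NA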
Jörg Groß wrote:
> Hi,
>
> I have a problem sorting a table;
>
> When I read a table into R by x <- read.table() I get something like
> this:
>
> V1  V2  V3
> yes  1   3
> no   2   6
> yes  3   9
> no   4  12
>
> Now I want to generate a vector of V2.
> But R should only put in the