Hi
>
>
> I might be silly but if I was going to type in dput() then how should I
> send the data over here?
> dput(zdrz20)
outputs to your console
structure(list(sklon = c(95, 95, 40, 40, 40, 40, 20, 20, 20,
20, 20, 20, 20), ot = c(15, 4, 10, 15, 4, 1.5, 1.5, 4, 10, 15,
4, 10, 15), doba
On Tue, May 8, 2012 at 3:38 PM, R. Michael Weylandt
wrote:
> So this actually looks like something of a tricky one: if you wouldn't
> mind sending the result of dput(head(agg)) I can confirm, but here's
> my hunch:
Hi Michael,
while I'm trying to get my head around the rest of your post, here's
I am totally ignorant on these matters, but ..
R is open source statistical software written largely for (and used a
lot by) academics for research. So I would not be surprised if it has
"security vulnerabilities". As usual, the GPL explicitly exempts the R
organization from any responsibility on
"It gets curiouser and curiouser," said Alice.
-- Bert
On Tue, May 8, 2012 at 9:07 PM, array chip wrote:
> Paul, thanks for your thoughts. blunt, not at all
>
> If I understand correctly, it doesn't help anything to speculate whether
> there might be additional variables existing or not. Gi
Your query is far too vague to answer -- probably 90% of R packages qualify.
As you are an economist, the obvious question is: have you looked at the CRAN
econometrics task view?
-- Bert
On Tue, May 8, 2012 at 8:30 PM, ivo welch wrote:
> dear R experts---now I have a case where I want to estimate very
Hi Ivo,
You might check out biglm. It is not clear to me how to parallelize a single
model, but if you are running several, of course you can (but you already know
that). The one thing that may help is to link R against an optimized,
multithreaded BLAS such as ATLAS (I think you have to do th
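For reference, a minimal sketch of the chunked biglm workflow (the file and
variable names below are made up; the real formula would of course include the
fixed-effect factors):

library(biglm)

# Fit on a first chunk, then update with later chunks so the full design
# matrix never has to be held in memory at once.
chunk1 <- read.csv("data_part1.csv")
fit <- biglm(y ~ x1 + x2 + factor(year), data = chunk1)

chunk2 <- read.csv("data_part2.csv")
fit <- update(fit, chunk2)

summary(fit)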
Paul, thanks for your thoughts. Blunt? Not at all.
If I understand correctly, it doesn't help to speculate about whether additional
variables exist or not. Given the current variables in the
model, it's perfectly fine to draw conclusions based on significant
coefficients reg
dear R experts---now I have a case where I want to estimate very large
regression models with many fixed effects---not just the mean type, but
cross-fixed effects---years, months, locations, firms. Many millions of
observations, a few thousand variables (most of these variables are
interaction fix
On Tue, May 8, 2012 at 3:45 PM, array chip wrote:
> Thanks again Peter. What about the argument that because low R square (e.g.
> R^2=0.2) indicated the model variance was not sufficiently explained by the
> factors in the model, there might be additional factors that should be
> identified and
On Tue, May 8, 2012 at 5:16 PM, rbuxton wrote:
> http://r.789695.n4.nabble.com/file/n4618871/Data_for_list_serve.csv
> Data_for_list_serve.csv
>
> Here is my data, hope this helps.
>
> The "LESP CHUCKLE" , "FTSP FLIGHT", and "ANMU CHIRRUP" are the dependent
> variables, I want to run one model fo
Hi: I sent you an email earlier privately. Why you keep sending the same
email over
and over is not clear to me. The package by Rossi et al., called "bayesm",
has a function in it that supposedly does what you want. I don't know the
details of the function because I was using
their package for som
I used only these three command lines in R:
arq <- read.table("file")
arq_matrix <- data.matrix(arq)
arq_heatmap <- heatmap(arq_matrix, Rowv = NA, Colv = NA, col = cm.colors(256),
                       scale = "column", margins = c(5, 10))
Excuse me if poorly written, because I'm Brazilian
Hello,
Please read the posting guide; as it is, it's difficult for us to give a
sensible answer. In particular,
1. Use R syntax; your "table" seems to be a "matrix" or "data.frame". Which
is it?
2. Post your data using dput(). Just copy its output and paste it in here,
and then
we'll be able t
Hi group,
I'm running a two-level model using the nlme package (lme function). The output
includes unstandardized coefficient values. Does anyone know how to generate
standardized coefficients?
Thanks,
Yuan Jiang
Assistant professor
Organizational Leadership
Indiana University-Purdue University For
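One common way to get standardized coefficients is to z-score the outcome and
the continuous predictors before fitting, so the fixed-effect estimates of the
refitted model are already standardized. A minimal sketch, assuming a data
frame dat with outcome y, predictor x and grouping factor group (all
placeholder names):

library(nlme)

# Standardize outcome and predictor, then refit; the t-table of the new fit
# holds the standardized coefficients.
dat$y.z <- as.numeric(scale(dat$y))
dat$x.z <- as.numeric(scale(dat$x))
fit.z <- lme(y.z ~ x.z, random = ~ 1 | group, data = dat)
summary(fit.z)$tTable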
I would like to organize my data as follows:
I have a table that contains various data, and the numbers represent a level
of similarity between these data,
e.g. RF00013 has 100% similarity with RF00014.
I would like to display my table as a heatmap where darker colors represent higher
similarity, a
Dear all,
Several days ago, I posted How to write a bmp file pixel by pixel.
Instead of bmp, I succeeded in writing a PPM file by using the pixmap package.
Thanks for the hint generously provided by Uwe Ligges.
Now I have a new question: how to convert a PPM file to a BMP file in R
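One possible route (not necessarily byte-exact) is to read the PPM back with
pixmap and replay it into a bmp() graphics device; the file names here are
made up:

library(pixmap)

img <- read.pnm("myimage.ppm")                          # read the PPM written earlier
bmp("myimage.bmp", width = img@size[2], height = img@size[1])
par(mar = c(0, 0, 0, 0))                                # no margins, image only
plot(img)
dev.off()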
Hi Peter, I searched the old mail archive and found this topic had been discussed
before. The previous discussion was around a situation where there was a very
large sample size involved so even a small effect still showed up as
significant even with low R square of the model.
In my case, the sample
http://r.789695.n4.nabble.com/file/n4618871/Data_for_list_serve.csv
Data_for_list_serve.csv
Here is my data, hope this helps.
The "LESP CHUCKLE" , "FTSP FLIGHT", and "ANMU CHIRRUP" are the dependent
variables, I want to run one model for each.
So, again the desired model is:
mod <- glmmadmb(LE
I just fixed the issue. I had to do a full re-install of R to get it working.
I had initially re-installed Rtools and Perl which did not work. However,
upon re-installing R, the error seemed to go away. So I do not really know
what corrupted the R install from earlier, but at the moment, I seemed t
Hi Vihan,
The link below might be helpful.
(http://stackoverflow.com/questions/3415097/controlling-number-formatting-at-axis-of-r-plots)
A.K.
- Original Message -
From: Vihan Pandey
To: r-help
Cc:
Sent: Tuesday, May 8, 2012 1:29 PM
Subject: [R] Axes value format
Hi all,
I ha
Hello,
I'm not at all sure if I understand your problem. Does this describe it?
test first model for months 1 and 2
if test statistic less than critical value{
test second model for months 1 and 2
print results of the first and second tests? just one of them?
}
move on to months
Hello,
I used optim to find the MLE estimates of some parameters. See the code
below. It works for data1 (x), but it did not work for data2, and the error
says "L-BFGS-B needs finite values of 'fn'".
data2: c(x, 32) that is, if I added the number 32 at the end of data1.
The error appears "non
Thanks again Peter. What about the argument that because low R square (e.g.
R^2=0.2) indicated the model variance was not sufficiently explained by the
factors in the model, there might be additional factors that should be
identified and included in the model. And If these additional factors wer
Dear R users,
I'm plotting housing prices in City A over past 30 years in ggplot2. The Xs
are years since 1980. I have two housing price variables: new home prices
and old home prices, both of them measured by $/sqft. I have searched
related threads on multiple Y axes in ggplot2 and I understand t
Hi All,
Sorry for posting the same question again. I was not sure if the message was
sent initially since it was my first post to the forum.
Can the MNP package available in R be used to analyze panel data as well?
i.e., if there are 3 observed discrete choices for three time periods for
the same
Hi Sarah,
I ran the same code from your reply email. For makegroup2, the results are
0 in place of NA.
> makegroup1 <- function(x,y) {
+ group <- numeric(length(x))
+ group[x <= 1990 & y > 1990] <- 1
+ group[x <= 1991 & y > 1991] <- 2
+ group[x <= 1992 & y > 1992] <- 3
+ group
+ }
> makegr
Sorry, my mistake.
It works very well!
thanks,
Rui Barradas wrote
>
> Hello,
>
>
> york8866 wrote
>>
>> Hi, John,
>>
>> the code ran well.
>>
>> however, somehow, the means were not calculated correctly using the
>> following code.
>>
>> test <- read.csv("Rtestdataset.csv", as.is=T,he
I have not done this myself, but reading through your book I see no reference
to actual sample file names. I mention this because UNIX-ish operating systems
download the tar.gz source archives while Windows works with the zip binary
packages, and I can't tell what files you are putting in the re
I set up a local repo for testing packages. My packages are not
showing up from the repository when viewed by Linux clients. I suspect
this is a web administrator/firewall issue, but it could be I created
the repo wrongly. I am supposed to run write_PACKAGES separately in
each R-version folder. Ri
You did not specify any objects in the function. Thus R is building the
package "test" with all the objects present in your session when you
call the package.skeleton function. I suspect that one of these
objects is causing the problem. I suggest you list all the
variables/functions necessary
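A minimal sketch of what that might look like ("myfun" and "mydata" are
placeholder names for your own objects):

# Put only the named objects into the skeleton instead of the whole workspace.
package.skeleton(name = "test", list = c("myfun", "mydata"))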
Can you show us the file that's throwing an error? This suggests
there's something syntactically invalid in your code, but it's
impossible to say what without seeing it.
Best,
Michael
On Tue, May 8, 2012 at 1:00 PM, abhisarihan wrote:
> I am a newbie in R, and I am trying to build an R package
R Users-
I have been trying to automate a manual procedure that I have developed for
reading in a .csv file and isolating certain rows and columns that correspond
to specified months:
something to the effect of
dat <- read.csv("name.csv")
N <- length(dat$month)
iphos1 <- 0
iphos2 <- 0
iphos3 <- 0
for (i in 1:N) {
  if (dat$month[i] == 1) {
    iphos1 <- iphos1 + 1
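For what it's worth, if the goal is simply to count how many rows fall in each
month, a one-liner can replace the whole loop (assuming the data frame is
called dat as above):

table(dat$month)   # counts of rows for each value of month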
On Tue, May 08, 2012 at 02:50:47PM -0500, Jeff wrote:
>
> ...still new to R and trying to figure this one out.
>
> I have a number of variables x, y, z, etc. in a data frame.
>
> Each contains a 2 digit year (e.g., 80, 81, 82) representing the
> first year that something occurred. Each variable
It's neater if you use dput() to give your data rather than just
copying it into the email, but anyway:
> testdata <- read.table("clipboard", header=TRUE)
> apply(testdata, 1, function(x)if(all(x == 0)) {0} else {min(x[x > 0])})
[1] 80 76 86 0
Sarah
On Tue, May 8, 2012 at 3:50 PM, Jeff wrote:
...still new to R and trying to figure this one out.
I have a number of variables x, y, z, etc. in a data frame.
Each contains a 2 digit year (e.g., 80, 81, 82) representing the
first year that something occurred. Each variable represents a
different type of event.
If the event did not occu
On Tue, May 08, 2012 at 10:21:59AM -0700, Haio wrote:
> Hi everyone, I'm a new user of R and I'm trying to translate a linear
> optimization problem from Matlab into R.
>
> The matlab code is as follow:
> options = optimset('Diagnostics','on');
>
> [x fval exitflag] = linprog(f,A,b,Aeq,beq,lb,
Dear all,
For the following code, I have the error message
"Error in uniroot(f1star, lower = -10, upper = 0, tol = 1e-10, lambda =
lam[i], :
f() values at end points not of opposite sign".
It seems the problem occurs when lambda is equal to 0.99.
However, there should be a solution for "f1
Actually I meant a working example and some data (see ?dput for a handy way to
supply data).
It is also a good idea to include the information from sessionInfo().
I think David W has a good approach.
Otherwise you might just want to write the axis yourself.
Kristi,
It's a little unclear what exactly you're trying to do. However, I
recently wanted to run a series of ANOVAs in a for loop and found this R
Help thread useful:
http://tolstoy.newcastle.edu.au/R/e6/help/09/01/2679.html
I also found Chapter 6 of the following book helpful:
Zuur, A. F., E
On May 8, 2012, at 2:23 PM, Vihan Pandey wrote:
On 8 May 2012 19:47, John Kane wrote:
Quite likely, but we need to know what you are doing and what
graphics package you are using.
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal,
Sorry, yes: I changed it before posting to more closely match
the default value in the pseudocode. That's a very minor issue: the
very last value in the nested ifelse() statements is what's used by
default.
Sarah
On Tue, May 8, 2012 at 2:46 PM, arun wrote:
> HI Sarah,
>
> I run the same
Hi,
On Tue, May 8, 2012 at 2:17 PM, Geoffrey Smith wrote:
> Hello, I would like to write a function that makes a grouping variable for
> some panel data . The grouping variable is made conditional on the begin
> year and the end year. Here is the code I have written so far.
>
> name <- c(rep('F
On Tue, May 8, 2012 at 9:32 AM, maxbre wrote:
> and then with the superposition of relative average values to the boxplots,
> i.e. something like:
>
> panel.points(…, mean.values, ..., pch = 17)
Almost. You need to give panel.points the new x, and make sure the
right mean.values go to the right
Hello,
york8866 wrote
>
> Hi, John,
>
> the code ran well.
>
> however, somehow, the means were not calculated correctly using the
> following code.
>
> test <- read.csv("Rtestdataset.csv", as.is=T,header=T)
> test <- data.frame(test)
> test
> rowMeans(test)
> apply(test,1,function(y)mean(y
On 8 May 2012 19:47, John Kane wrote:
> Quite likely, but we need to know what you are doing and what graphics
> package you are using.
>
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
Frightf
Hi, John,
The code ran well.
However, somehow, the means were not calculated correctly using the
following code.
test <- read.csv("Rtestdataset.csv", as.is=T,header=T)
test <- data.frame(test)
test
rowMeans(test)
apply(test,1,function(y)mean(y>=0))
Is there anything wrong?
thanks,
Hello, I would like to write a function that makes a grouping variable for
some panel data. The grouping variable is made conditional on the begin
year and the end year. Here is the code I have written so far.
name <- c(rep('Frank',5), rep('Tony',5), rep('Edward',5));
begin <- c(seq(1990,1994),
Assuming the 400 numeric variables are integers, this will be simpler if you
can identify the columns to be converted to factors as a block of column
numbers (e.g. 1:400, or 401:800).
# Create some data
X <- data.frame(matrix(nrow=20, ncol=20))
for (i in 1:10) X[,i] <- round(runif(20, .5, 5.5), 0)
I made this rather cool plot which I am quite pleased with:
http://brainimaging.waisman.wisc.edu/~perlman/data/BeeswarmLinesDemo.pdf
However, I feel there must be a better way to do it than what I did. I'm
attaching the code to create it, which downloads the data by http so it should
run for yo
It looks fine to me. Why do you say it does not work?
Any error messages?
John Kane
Kingston ON Canada
> -Original Message-
> From: yu_y...@hotmail.com
> Sent: Tue, 8 May 2012 10:06:07 -0700 (PDT)
> To: r-help@r-project.org
> Subject: Re: [R] Help "deleting negative values in a matrix,
On Tue, 8 May 2012, Uwe Ligges wrote:
If it is a 64-bit R, it will take as much memory as it needs unless your
admin applied some restrictions.
Some BIOS versions limit the memory the system sees. When I bought my Dell
Latitude E5410 in June 2010 it came with BIOS version A03 and supported n
Quite likely, but we need to know what you are doing and what graphics package
you are using.
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
John Kane
Kingston ON Canada
> -Original Message---
Here is an example that may help. I found the idea somewhere in the R-help
archives but don't have a reference any more.
mydata <- data.frame(a1 = 1:5, a2 = 2:6, a3 = 3:7)
str(mydata)
mydata[, 1:2] <- lapply(mydata[,1:2], factor)
str(mydata)
so basically all you need to do is specify what c
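Applied to the original question, if for example the categorical variables sit
in the first 400 columns, the same idea converts them all in one step (the
1:400 index is only an illustration):

mydata[, 1:400] <- lapply(mydata[, 1:400], factor)
str(mydata)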
Hi everyone, I'm a new user of R and I'm trying to translate a linear
optimization problem from Matlab into R.
The matlab code is as follow:
options = optimset('Diagnostics','on');
[x fval exitflag] = linprog(f,A,b,Aeq,beq,lb,ub,[],options);
exitflag
fval
x=round(x);
Where:
f = Linear obj
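For what it's worth, a rough sketch of the same call with the lpSolve package;
f, A, b, Aeq and beq below are placeholders for the real problem data, and
lp() only assumes x >= 0 by default, so explicit lb/ub would have to be added
as extra constraint rows:

library(lpSolve)

res <- lp(direction    = "min",
          objective.in = f,
          const.mat    = rbind(A, Aeq),
          const.dir    = c(rep("<=", nrow(A)), rep("==", nrow(Aeq))),
          const.rhs    = c(b, beq))

res$status            # rough analogue of exitflag (0 means success)
res$objval            # analogue of fval
round(res$solution)   # analogue of round(x)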
I am a newbie in R, and I am trying to build an R package but I keep getting
an unexpected input error when I try using the build, check or install
commands. I used the following command to generate the skeleton:
package.skeleton("test")
After this I went to the command prompt and to the directory
Given the following example
library(lattice)
attach(barley)
After a long meander around the web I managed to get side-by-side
boxplots through:
bwplot(yield ~ site, data = barley, groups=year,
pch = "|", box.width = 1/3,
auto.key = list(points = FALSE, rectangles = TRUE, spac
Dear community,
First of all, apologies: I'm pretty new to this, and maybe have not truly
understood multiple correspondence analysis.
I have 9 categorical variables with 15, 12, 12, 7, 9, 11, 8, 4, 31 levels
respectively; that is 109 levels.
(By the way, is there any problem because of having diff
Hi, Rui,
I tried your code. It did not work.
thanks,
Dear all,
I have a database of 93 variables.
I have created a few subsets (10) by putting different numbers of
variables in each one of them (the maximum is 6 anyway), to represent
different phenomena. Hence, this is the logic:
one subset = contains a few variables = expresses one phenomenon.
Now I
Hello,
Try
(x is your matrix)
rowMeans(x)
apply(x, 1, function(y) mean( y[y >= 0] ))
Hope this helps,
Rui Barradas
york8866 wrote
>
> Dear all,
>
> I have encountered a problem with such a dataset:
>
> 1 52 2 5 2 6
> 1523 2 1 3 3
> 2
So, I'm maintaining someone else's code, which is, as always, a fun thing. One
feature of this code is the use of the 'seek' command.
In ?seek:
We have found so many errors in the Windows implementation of file
positioning that users are advised to use it only at their own
risk, and
Hi all,
I have some graphs where the values on the X and Y axes are by default
in exponential form, like 2e+05 or 1.0e+07. Is it possible to show them in
a more readable form, like 10M for 1.0e+07 or 200K for 2e+05?
Thanks and Regards,
- vihan
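In base graphics, one way is to suppress the default axis and draw it with a
small formatting helper; a minimal sketch with made-up data:

# Print axis values as 200K / 10M instead of 2e+05 / 1e+07.
fmt_si <- function(x) {
  ifelse(x >= 1e6, paste0(x / 1e6, "M"),
         ifelse(x >= 1e3, paste0(x / 1e3, "K"), x))
}

x <- c(2e5, 5e5, 1e6, 5e6, 1e7)
y <- seq_along(x)
plot(x, y, xaxt = "n")               # draw the plot without the default x axis
axis(1, at = x, labels = fmt_si(x))  # add the axis with readable labels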
On 08.05.2012 14:34, Stephen Sefick wrote:
Can you parallelize the code? It really depends on where the bottleneck
is.
HTH,
Stephen
On 05/07/2012 10:37 PM, arunkumar wrote:
Hi,
I have a Unix machine with 16 GB. When I run any R process it takes
only 2 GB of memory. How to increase Me
On 05/08/2012 06:02 PM, Rich Shepard wrote:
On Tue, 8 May 2012, Hugh Morgan wrote:
Perhaps I have confused the issue. When I initially said "data points" I
meant one stand alone analysis, not one piece of data. Each analysis
point
takes 1.5 seconds. I have not implemented running this over th
Hi Jim and Michael,
Thank you very much for replying.
Here is the information about my data. I have a data frame including more than
800 variables (columns) and 3 cases (rows). 400 of those variables are
categorical variables. I used to use Rcmdr to convert variables; however, when
the
On Tue, 8 May 2012, Hugh Morgan wrote:
Perhaps I have confused the issue. When I initially said "data points" I
meant one stand alone analysis, not one piece of data. Each analysis point
takes 1.5 seconds. I have not implemented running this over the whole
dataset yet, but I would expect it to t
I think this may be an R 2.14 vs R 2.13 difference: like you, I get
different results for each run in the beta of Revolution R Enterprise 6.0,
which has the R 2.14.2 engine (see below). In earlier versions of R, you
can manage parallel random number streams with the rsprng library.
By the way, you can
Put a number on it.
"really slow" is not quantitative. What are we specially talking
about with respect to the size of the object you are converting? What
have you experienced so far? Exactly what is the code you are doing?
"simultaneously" would only happen if you parallelized the code and
dep
Perhaps I have confused the issue. When I initially said "data points" I
meant one stand-alone analysis, not one piece of data. Each analysis
point takes 1.5 seconds. I have not implemented running this over the
whole dataset yet, but I would expect it to take about 5 to 10 hours.
This is ju
How are they arranged currently? And should they all be one set of
levels, or different factor sets?
Michael
On Tue, May 8, 2012 at 12:32 PM, ya wrote:
> Hi everyone,
>
> Is there anyway I can convert more than 400 numeric variables to categorical
> variables simultaneously?
>
> as.factor() is r
Hi everyone,
Is there any way I can convert more than 400 numeric variables to categorical
variables simultaneously?
as.factor() is really slow, and converts only one variable at a time.
Thank you very much.
ya
Dear Szymon,
What do you mean by
"it does not work for others.. that fit within similar range"?
Each dataset has its own features, and breakpoint estimation is not as
simple as estimation of linear models, even if your data "fit within
similar range".
I will contact you off the list for detai
Probably just pointing out the obvious, but:
200,000 data points may not be that many these days, depending on the
dimensionality of the data. Nor is 10 times that number, neither now
nor in 5 years, again depending on data dimensionality. So my question
is, have you actually tried running your si
You should think about the cloud as a serious alternative.
I completely agree with Barry. Unless you will utilize your machines
(and by utilize, I mean 100% CPU usage) all the time (including
weekends), you will probably make better use of your funds by purchasing blocks
of machines when you need to run you
On 05/08/2012 12:14 PM, Zhou Fang wrote:
How many data points do you have?
Currently 200,000. We are likely to have 10 times that in 5 years.
Why buy when you can rent? Unless your hardware is going to be
running 24/7 doing these analyses then you are paying for it to sit
idle. You might
Hi everyone,
while trying to use 'segmented' (R i386 2.15.0 for Windows, 32-bit OS) to
determine the breakpoint, I got stuck with an error message and I can't find a
solution. It is connected with the psi value, and the error says:
Error in seg.glm.fit(y, XREG, Z, PSI, weights, offs, opz) :
(Some)
You have received no answer yet. I think this is largely because there
is no simple answer.
1. You don't need to mess with dummy variables. R takes care of this
itself. Please read up on how to do regression in R (there is a small
illustration below).
2. However, it may not work anyway: too many variables/categories for
your data. Or
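A tiny illustration of point 1, with made-up data:

# R expands a factor into dummy (treatment-contrast) columns automatically.
d <- data.frame(y = rnorm(9), g = factor(rep(c("a", "b", "c"), 3)))
coef(lm(y ~ g, data = d))   # intercept plus dummies gb and gc, no manual coding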
Dear useRs,
I am using mgcv version 1.7-16. When I create a model with a few
non-linear terms and a random intercept for (in my case) country using
s(Country,bs="re"), the representative line in my model (i.e.
approximate significance of smooth terms) for the random intercept
reads:
rbuxton hotmail.com> writes:
> I am new to the package glmmadmb, but need it to perform a
> zero-inflated gzlmm with a binomial error structure. I can't seem
> to get it to work without getting some strange error messages.
# I am trying to find out what is affecting the number of seabird
# cal
So this actually looks like something of a tricky one: if you wouldn't
mind sending the result of dput(head(agg)) I can confirm, but here's
my hunch:
Try this:
agg2 <- aggregate(len ~ ., data = ToothGrowth, function(x) c(min(x), max(x)))
print(agg2)
str(agg2)
You'll see that the third "column" i
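Continuing from the agg2 example above, the len entry can be indexed like the
matrix it really is, or the whole result can be flattened into ordinary
columns:

agg2$len[, 1]                      # the minima
agg2$len[, 2]                      # the maxima
agg3 <- do.call(data.frame, agg2)  # flatten: len.1 and len.2 become plain columns
str(agg3)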
ramakanth reddy gmail.com> writes:
> I want to perform a Nagelkerke pseudo-R2 test ...
> Can someone tell me if there is any R function or package available for doing
> it,
> and also how the sample input data should be structured.
How about
library(sos)
findFn("nagelkerke")
?
Hello all,
I am doing an aggregation where the aggregating function returns not a
single numeric value but a vector of two elements using return(c(val1,
val2)). I don't know how to access the individual columns of that
vector in the resulting dataframe though. How is this done correctly?
Thanks, r
I'd imagine there are better tricks, but I know you can use
as.numeric() if you signal to R that you've got a hex value. See,
e.g.,
http://tolstoy.newcastle.edu.au/R/help/06/08/33758.html
Best,
Michael
On Tue, May 8, 2012 at 5:44 AM, Fang wrote:
> Hi all,
>
> Basically, I have data in the forma
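A small illustration of that trick:

as.numeric("0xff")         # 255: the 0x prefix tells R the string is hexadecimal
strtoi("ff", base = 16L)   # 255: strtoi() does the same without the prefix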
Are you the oswi who just asked a very similar question?
Regardless, as Josh said, the high-performance way to do this is to
use the specialty C code available through the xts package and the
to.period() functions, specifically to.minutes5
Michael
On Tue, May 8, 2012 at 8:48 AM, Milan Bouchet-Va
On Tue, May 8, 2012 at 11:49 AM, Hugh Morgan wrote:
> Has anyone got any advice about what hardware to buy to run lots of R
> analysis? Links to studies or other documents would be great as would be
> personal opinion.
>
> We are not currently certain what analysis we shall be running, but our
>
> [...]
> But having indicated that I don't see a biplot's multiple scales as
> particularly likely to confuse or mislead, I'm always interested in
> alternatives. The interesting question is 'given the same objective - a
> qualitative indication of which variables have most influenced the loca
I think the general experience is that R is going to be hungrier for
memory than for other resources, so you'll get the best bang for
your buck on that end. R also has good parallelization support: that
and other high-performance concerns are addressed here:
http://cran.r-project.org/web/views/HighPerfo
On Tue, May 8, 2012 at 12:14 PM, Apoorva Gupta wrote:
> I have checked that. It allows me to get the t-1, t-2 value but not the t+1
> value.
> Is there any other way of achieving this other than using the plm package?
>
It would be easier to help if you provided a minimal reproducible
example, as
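In case it helps, one base-R way to build a t+1 (lead) value within each panel
unit; df, id, t and x are placeholder names:

df <- data.frame(id = rep(1:2, each = 3), t = rep(1:3, 2), x = 1:6)
df$x_lead <- ave(df$x, df$id, FUN = function(v) c(v[-1], NA))  # shift within id
df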
On Mon, May 7, 2012 at 8:54 PM, Santosh wrote:
> Hello experts!!
> I apologize for posting an S-PLUS related query here.. badly in need of relevant
> info..
>
> I usually use R (and your advice/tips) for my daily work. Was wondering if
> there is an equivalent of "sheetCount" of the package "gdata" ava
On Tuesday, 08 May 2012 at 10:44 +0200, osvald wiklander wrote:
>
>
>
> Hi everybody, I am sorry that I am kind of spamming this forum, but I
> have searched for some input everywhere and can't really find a nice
> solution for my problem.
>
> Data looks like:
>
>
You can use the to.period family of functions in the xts package for
this. For example,
Lines <-
"2011-11-01 08:00:00 0.0
2011-11-01 08:00:00 0.0
2011-11-01 08:02:00 0.0
2011-11-01 08:03:00 -0.01709
2011-11-01 08:24:00 0.0
2011-11-01 08:24:00 0.0
2011-1
On May 8, 2012, at 4:35 AM, Suhaila Haji Mohd Hussin wrote:
Hello.
Sorry if that's considered laziness as I've just learnt R and didn't
know how important it is to do dput for all problems.
If I was truly lazy then I wouldn't even bother to sign up here and
ask questions.
I didn't say
Can you parallelize the code? It really depends on where the
bottleneck is.
HTH,
Stephen
On 05/07/2012 10:37 PM, arunkumar wrote:
Hi,
I have a Unix machine with 16 GB. When I run any R process it takes
only 2 GB of memory. How do I increase the memory limit? It takes a lot of time to
run th
I don't know if we can figure that out... I would figure out what these
data are, and then read the relevant help files, ?glm, and literature
associated with linear modeling.
HTH,
Stephen
On 05/08/2012 01:15 AM, T Bal wrote:
Hi,
I have data of the form
a b c
8.9 0
On May 8, 2012, at 4:35 AM, Suhaila Haji Mohd Hussin wrote:
> Hello.
>
> Sorry if that's considered laziness as I've just learnt R and didn't
> know how important it is to do dput for all problems.
>
> If I was truly lazy then I wouldn't even bother to sign up here and
> ask questions.
>
> Pl
I think the question on your mind should be: 'what do I want to do with this
plot'? Just producing output from the PCA is easy - plotting the output$sd
is probably quite informative. From the sounds of it, though, you want to do
clustering with the PCA component loadings? (Since that's mostly what
Hi there,
I'm sorry if I am sending this for a second time; I've just subscribed to the list.
I am trying to interface C++ code in R and make a package. With R CMD SHLIB
the DLL was created, but when I try R CMD check, I am getting 'undefined
reference to..' linkage error messages.
The relevant C++ sou