Dear R-user:
I am using survreg(Surv()) for fitting a Tobit model of left-censored
longitudinal data. For a logarithmic transformation of the y data, I am trying to
use survreg.distributions in the following way:
tfit=survreg(Surv(y, y>=-5, type="left")~x + cluster(id), dist="gaussian",
data=y.data,
Dear list,
Hello! I had a problem of reading in a txt file and need your help.
The txt file, called A, comprises 592 columns and 34179 rows.
I should note that some cells of A, A[i,j], are blank.
I used read.table() and got the warning message:
> A<-read.table(file="A.txt",sep="\t")
War
Note that the problem is not that you got the message, but an apparent
whitespace issue in the output. That's clearer in this version: the
output for reg-IO.Rout.save is shifted left one column. But I don't
trust email to transmit these things faithfully (especially after the last
t
On Fri, 25 Apr 2008, Michele Christina Itten wrote:
> Is there a way to force certain formula parameters to be nonnegative?
>
> What I want to do is to estimate student capacity over time, namely by
>
>> capacity ~ Student + Student:Day
>
> I add this formula to a glm call and obtain negative lea
You don't want shQuote - that makes it a single argument.
Shells strip quotes, but system() does not use a shell on Windows -- you
could use shell() not system(), but that would be overkill here.
On Thu, 24 Apr 2008, Jeff Breiwick wrote:
> Hi,
>
> I am trying to run the command: R CMD INSTALL -
Is there a way to force certain formula parameters to be nonnegative?
What I want to do is to estimate student capacity over time, namely by
> capacity ~ Student + Student:Day
I add this formula to a glm call and obtain negative learning slope estimates
(Student:Day) in some cases.
However, I
Hi,
I'm sure it could be better but try this:
# F statistics based on lm
FSTAT=function(y,x) summary(lm(y~x))$f[1]
# Correlation and p-value
CORR = function(y, x) {
  tc <- cor.test(x, y, method = "spearman", alternative = "two.sided")
  temp <- matrix(c(tc$estimate, tc$p.value), ncol = 2)
  colnames(temp) <- c('rho', 'pvalue')
  temp
}
Dear Colleagues,
It seems to me that the issue is not whether the information we seek
is in the documentation. I, for one, am amazed at the quality of the
documentation of R and of contributed materials. For me the issue is
one of finding the information in an efficient way out of the
mountains
I installed R-2.7.0 from the tar.gz file. It's the end of make check
that my question is about.
make[3]: Entering directory `/usr/local/R-2.7.0/tests'
make[3]: `reg-tests-1.Rout' is up to date.
make[3]: `reg-tests-2.Rout' is up to date.
running code in 'reg-IO.R' ... OK
comparing 'reg-IO.Rout' to
I just read through this thread and I didn't see the R Language
Definition mentioned. As with An Introduction to R it can be accessed
-- at least in my Windows GUI -- via the menu bar: Help -> Manuals (in
PDF). If An Introduction to R is too basic, then the Language
Definition should be a good pl
This has the disadvantage of producing a warning when it finds
non-numerics, and there are situations like 1E1 which it will regard as
numeric, so using a regexp is probably preferable; but it is simple:
!is.na(as.numeric(x))
On Thu, Apr 24, 2008 at 10:39 PM, Farrel Buchinsky <[EMAIL PROTECTED]
The following will return the indices or the values of character
strings that are all numeric:
> x <- c("12345", "123AS23", "A123", "398457")
> grep("^[[:digit:]]*$", x) # index
[1] 1 4
> grep("^[[:digit:]]*$", x, value=TRUE) # values
[1] "12345" "398457"
>
On Thu, Apr 24, 2008 at 10:39 PM
I have a bunch of tables in a Microsoft Access database. An updated database
is sent to me every week containing a new table. I know that is inefficient
and weird but welcome to my life. I want to read the tables whose names are
something such as "040207" but not the ones that have alphanumeric nam
You can read the first line in and determine which of the columns
contain the names you want to use and then create the parameters for
'colClasses' to control which columns you want to read. Or you can
just read in all the data and then delete the columns you don't need.
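A minimal sketch of the colClasses route, assuming the file has a header row; the file name and the column names to keep are made up here:
hdr  <- read.table("yourfile.txt", sep = "\t", nrows = 1, header = TRUE)
keep <- names(hdr) %in% c("wanted1", "wanted2")   # hypothetical column names
cc   <- ifelse(keep, NA, "NULL")                  # "NULL" tells read.table to drop a column
dat  <- read.table("yourfile.txt", sep = "\t", header = TRUE, colClasses = cc)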
On Thu, Apr 24, 2008 at 1
Thanks Jim.
I got this:
> A<-read.table("a.txt", sep="\t", fill=TRUE)
> dim(A)
[1] 33623 592
> x <- count.fields("a.txt", sep="\t")
which(x != 592) # print out the lines that are not correct
> which(x != 592) # print out the lines that are not correct
[1] 31279 31281 33625
>
Actually, I jus
It seems to indicate that you don't have 592 columns on all lines.
Try the following to see how many columns are in each line:
x <- count.fields("A.txt", sep="\t")
which(x != 592) # print out the lines that are not correct
You might also try:
read.table("a.txt", sep="\t", fill=TRUE)
On Thu, Ap
Hi Katie,
There are many ways to do this. A simple one is to create a vector of
the same length as your 'x' vector, containing a group label.
> group=rep(c(1,2,3),times=nr[1,])
Then you can use tapply to apply a function (in this case mean and
variance) to the values of x within each group.
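A minimal sketch of that step ('x' and 'nr' stand for your own data):
group <- rep(c(1, 2, 3), times = nr[1, ])
tapply(x, group, mean)   # mean of x within each group
tapply(x, group, var)    # variance of x within each group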
Hello,
I have two random variables with their percentiles, which correspond to their
probability distribution functions. My objective is to sum these two random
variables. Does there exist any algorithm or procedure in R capable of converting
the percentiles to a probability density function? Is the
tzsmile wrote:
> i want to read a txt file "weekly". this file is in d:/my documents
> i tried the following
> a<-read.table("d:/my documents/weekly", sep=" ", header=TRUE)
> but get the following error:
> Error in open.connection(file, "r") : unable to open connection
> In addition: Warning mess
Hi there,
You need the extension "txt". Try this:
a<-read.table("d:/my documents/weekly.txt", sep=" ", header=TRUE)
HTH,
Jorge
On Thu, Apr 24, 2008 at 4:43 PM, tzsmile <[EMAIL PROTECTED]> wrote:
>
> i want to read a txt file "weekly". this file is in d:/my documents
> i tried the following
I want to read a txt file "weekly". This file is in d:/my documents.
I tried the following
a<-read.table("d:/my documents/weekly", sep=" ", header=TRUE)
but get the following error:
Error in open.connection(file, "r") : unable to open connection
In addition: Warning message:
cannot open file 'd:/
Yes, unlist is the magic wand I was looking for. Thanks a million!
Having said that, I find it rather arbitrary that you can write mat[1:4]
but not list[[1:2]]; IMO there should be no need for a "magic" operator
like unlist: list[[1:length(list)]] could do the job.
-- O.L.
Dear Olivier,
You can use is.null().
apropos("null") finds is.null(), and help.search("null") turns up ?NULL,
which documents is.null().
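For example:
x <- NULL
is.null(x)      # TRUE
is.null(1:3)    # FALSE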
I hope this helps,
John
--
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socser
chenxh007 wrote:
> is.null
Thanks! That one is mentioned in the LRM under §2.1.6 (NULL), so I should
have found it...
-- O.L.
Hi Olivier,
is this what you want?
x="col1 col2
1 0.1 1.1
2 0.2 1.2"
m=read.table(textConnection(x),header=TRUE)
m1=matrix(unlist(m),ncol=2)
m1
[,1] [,2]
[1,] 0.1 1.1
[2,] 0.2 1.2
HTH,
Jorge
On Thu, Apr 24, 2008 at 6:02 PM, Olivier Lefevre <[EMAIL PROTECTED]> wrote:
> Another possibly
Another possibly simple thing that I cannot get right is how to extract the
data part of a list as a matrix. The data were read from xls, with labels,
and thus are of list mode, e.g.,
col1 col2
1 0.1 1.1
2 0.2 1.2
I want to extract from that just the numeric data part, i.e., (in this
case
Hello. I am a newbie to R. If I should be reading some FAQ or manual
that could help answer my question please tell me and I will go there.
Problem:
I have a spreadsheet that contains a character code in each cell. The
columns in the spreadsheet represent time and the rows represent people.
I wa
is.null
Olivier Lefevre wrote:
> x == NULL returns logical(0) instead of FALSE or TRUE as you might expect
> and I cannot find the right way to write this test in R.
>
> Thanks in advance for any hint,
>
> -- O.L.
>
x == NULL returns logical(0) instead of FALSE or TRUE as you might expect
and I cannot find the right way to write this test in R.
Thanks in advance for any hint,
-- O.L.
Hi,
I am trying to run the command: R CMD INSTALL -l mypath mypackagename
from within R (Windows XP) using system() and get the following error:
ARGUMENT 'CMD INSTALL -l D:/R/JMB.LIBS jmb.test' __ignored__
Fatal error: you must specify '--save', '--no-save' or '--vanilla'
My function contains t
On Thu, 24 Apr 2008, Dieter Menne wrote:
> Achim Zeileis wu-wien.ac.at> writes:
> > However, I guess that it will be hard to select a qualitative
> > palette with 18 distinct colors...I couldn't imagine a plot where it would
> > be sufficiently easy for humans to decode that. But maybe you can co
Hi,
I would like to obtain correlation parameters (e.g., coefficients, p-values)
for multiple samples with regard to a reference. I have my data in a table
with the reference as the second row (the first row contains headers) and then
each sample in a row. What I do so far is load up the data, get the refere
Or you could use the read.xls program in the gdata library that uses a
perl script underneath.
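A sketch of that route (the file name is made up, and read.xls needs Perl available on the system):
library(gdata)
dat <- read.xls("weekly.xls", sheet = 1, header = TRUE)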
Charles Danko wrote:
> try:
> var <- read.table("weekly.txt", sep="\t", header=TRUE)
>
> Charles
>
> On Thu, Apr 24, 2008 at 3:29 PM, tzsmile <[EMAIL PROTECTED]> wrote:
>
>> i just want to read data
Try also,
setwd("C:\\")
yourdata=read.table("tzmile.txt",header=TRUE)
attach(yourdata)
yourdata[1:10,]
  warcode      date weeklyrt
1   30001 16-Dec-05  -0.0043
2   30001 23-Dec-05   0.1313
3   30001 30-Dec-05  -0.0844
4   30001  6-Jan-06   0.0097
5   30001 13-Jan-06  -0.1009
6   30001 20-Jan
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of tzsmile
> Sent: Thursday, April 24, 2008 12:30 PM
> To: r-help@r-project.org
> Subject: [R] a simple question of importing data
>
>
> i just want to read data from Excel and i copied it and
> pasted
First off, there are multiple definitions of "best"; you need to decide
which "best" is best for you.
Second, for reasonable definitions of "best", deciding between linear,
polynomial, and ... requires background knowledge and real thought.
R can fit many different models and give you numerical an
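There is no push-button answer, but here is a sketch of comparing a handful of hand-chosen candidates by AIC; the data frame 'dat' and its columns y and x are hypothetical:
fit.lin  <- lm(y ~ x, data = dat)
fit.quad <- lm(y ~ poly(x, 2), data = dat)
fit.log  <- lm(y ~ log(x), data = dat)
AIC(fit.lin, fit.quad, fit.log)   # smaller is better, all else being equal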
try:
var <- read.table("weekly.txt", sep="\t", header=TRUE)
Charles
On Thu, Apr 24, 2008 at 3:29 PM, tzsmile <[EMAIL PROTECTED]> wrote:
>
> i just want to read data from Excel and i copied it and pasted into a txt
> file.
> then i want to use "read.table" to read it. but however i tried, it do
I just want to read data from Excel; I copied it and pasted it into a txt
file.
Then I want to use "read.table" to read it, but however I tried, it doesn't
work.
Can someone help me?
The data is attached.
thanks http://www.nabble.com/file/p16851853/weekly.txt weekly.txt
If DF is your data frame then create a time column like this:
DF$time <- ave(DF$DaysAgo, DF$PID, FUN = seq_along)
and now use the reshape command on DF (or melt/cast
from the reshape package).
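A minimal sketch of the reshape step, assuming DF has the columns named in your post (PID, OBSDATE, DaysAgo, CleanValue, NAME) plus the time column created above:
wide <- reshape(DF, idvar = "PID", timevar = "time", v.names = "CleanValue",
                drop = c("OBSDATE", "DaysAgo", "NAME"), direction = "wide")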
On Thu, Apr 24, 2008 at 2:13 PM, Tubin <[EMAIL PROTECTED]> wrote:
>
> hi, I'm a total noob who is havin
Achim Zeileis wu-wien.ac.at> writes:
> However, I guess that it will be hard to select a qualitative
> palette with 18 distinct colors...I couldn't imagine a plot where it would
> be sufficiently easy for humans to decode that. But maybe you can combine
> that with some sequential or diverging pal
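One hedged possibility along those lines (a sketch, not necessarily what Achim had in mind): stitch two RColorBrewer palettes together to get 18 colours.
library(RColorBrewer)
pal18 <- c(brewer.pal(12, "Set3"), brewer.pal(6, "Dark2"))
length(pal18)   # 18 colours, though decoding that many remains hard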
Great, thanks, that was helpful.
Andrew
On Thu, Apr 24, 2008 at 2:15 PM, Achim Zeileis <[EMAIL PROTECTED]>
wrote:
> On Thu, 24 Apr 2008, Andrew Yee wrote:
>
> > I've found RColorBrewer useful for its qualitative palettes, but wished
> that
> > it could generate more than 12 qualitative palettes
Hi all,
I was wondering if anyone in here is familiar with
the command "image.smooth".
How can I set the arguments dx, dy and theta so my
resulting matrix is not smoothed out too much?
Thank you.
I have used smooth.ppp in spatstat to create a smoothed surface plot based
on randomly selected depths across a lake (as marks).
I wonder whether, based on the smoothed surface plot, I can calculate the
average depth for each 10x10 grid square across the lake.
I can't see any obvious way of doing thi
hi, I'm a total noob who is having to ramp up to full speed very quickly due
to an unfortunate abrupt staffing change at my job :)
I have longitudinal data that looks like this:
PID               OBSDATE              DaysAgo   CleanValue   NAME
1  1410164934000610  8/17/2004 13:03:38
On Thu, 24 Apr 2008, Andrew Yee wrote:
> I've found RColorBrewer useful for its qualitative palettes, but wished that
> it could generate more than 12 qualitative palettes (e.g. with Set3). Any
> suggestions for alternative color palette generators that can handle e.g. 18
> distinctive colors? (
Andrew Yee gmail.com> writes:
>
> I've found RColorBrewer useful for its qualitative palettes, but wished that
> it could generate more than 12 qualitative palettes (e.g. with Set3). Any
> suggestions for alternative color palette generators that can handle e.g. 18
> distinctive colors? (I'm a
I've found RColorBrewer useful for its qualitative palettes, but wished that
it could generate more than 12 qualitative palettes (e.g. with Set3). Any
suggestions for alternative color palette generators that can handle e.g. 18
distinctive colors? (I'm aware of using rainbow(), but this doesn't
g
Given a data set and a set of predictors and a response in the data,
we would like to find a model that fits the data set best.
Suppose that we do not know what kind of model (linear, polynomial
regression, ...) might be good; we are wondering if there are R package(s)
that can automatically do this.
Ot
Try:
cof <- full.t.ag$cov$coeff
gsub("t.ag.X", "", names(cof))
On Thu, Apr 24, 2008 at 1:47 PM, Summer Nitely <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> I would like to truncate row names. Is that possible?
>
> I ran a regression with the covariates in a matrix, and in the results the
> coefficients
Hi,
I would like to truncate row names. Is that possible?
I ran a regression with the covariates in a matrix, and in the results the
coefficients have the matrix name concatenated with the variable name:
> full.t.ag <- Icens(lfirst_well, llast_well, lfirst_ill, formula=~ t.ag.X,
breaks=t.ag.in
The function is
f(s,t) = ( a1*t + a2*t^2 + ( a3*t + a4*t^2 )*d1*s^2*t^2 + (a5*t +
a6*t^2)*d1^2*b^4*t^4 )*exp(-0.5*d1*s^2*t^2) + a8
where a1,a2,...,a8,d1 are some constants (can be positive or negative),
which are computed from a given set of data.
The objective is to find the minimum value of s
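A rough sketch of one numerical attack (not the poster's method): for each candidate s, take the smallest value of f over a finite grid of t, then report the smallest s whose minimum stays positive. The constants, the grids, and treating 'b' as a known constant are all assumptions made for illustration.
f <- function(s, t, a, d1, b) {
  (a[1]*t + a[2]*t^2 +
   (a[3]*t + a[4]*t^2) * d1   * s^2 * t^2 +
   (a[5]*t + a[6]*t^2) * d1^2 * b^4 * t^4) * exp(-0.5 * d1 * s^2 * t^2) + a[8]
}
a  <- c(1, -0.5, 0.2, 0.1, 0.05, 0.01, 0, 2)   # made-up a1..a8 (a7 unused in f)
d1 <- 0.3; b <- 1                              # made-up constants
t.grid <- seq(-100, 100, by = 0.1)             # finite stand-in for -Inf < t < Inf
worst  <- function(s) min(f(s, t.grid, a, d1, b))   # smallest f over the t grid
s.grid <- seq(0.01, 10, by = 0.01)
ok <- vapply(s.grid, worst, numeric(1)) > 0
min(s.grid[ok])   # smallest s on the grid with f(s, t) > 0 for all scanned t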
On 4/24/2008 12:08 PM, Martin Maechler wrote:
> Hmm,
>
>> "KeBe" == Beck, Kenneth (STP) <[EMAIL PROTECTED]>
>> on Thu, 24 Apr 2008 10:12:19 -0500 writes:
>
> KeBe> OK I've spent a lot of time with the core
> KeBe> documentation, and I never found anything as simple as
> Ke
If you have a variable that is supposed to be numeric, but R thinks
it is a factor, then you may (probably) have something in your input
that is not a number. You changed the commas to decimal points at
Prof. Ripley's suggestion. You still have the same error message.
Therefore, you probably st
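A small sketch for tracking down the offending values (the factor is called 'f' here purely for illustration):
num <- suppressWarnings(as.numeric(as.character(f)))
unique(as.character(f)[is.na(num)])   # the entries R could not read as numbers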
Hmm,
> "KeBe" == Beck, Kenneth (STP) <[EMAIL PROTECTED]>
> on Thu, 24 Apr 2008 10:12:19 -0500 writes:
KeBe> OK I've spent a lot of time with the core
KeBe> documentation, and I never found anything as simple as
KeBe> their table 2.1, which elucidated the difference
KeB
Hi Jojje,
Try this:
ekob=data.frame(PAP=seq(0,4,by=0.5),CAP=seq(1,5,by=0.5),FAP=seq(4,8,by=0.5))
attach(ekob)
# Option 1
ekob[PAP<1.5 & PAP>0.5 & CAP>1 & CAP<3 & FAP>4,]
# Option 2
subset(ekob,PAP<1.5 & PAP>0.5 & CAP>1 & CAP<3 & FAP>4)
HTH,
Jorge
On Thu, Apr 24, 2008 at 11:31 AM, Jojje Ande
On Thu, 24 Apr 2008, Jojje Andersson wrote:
Hello!
Thanks!
I changed the "," to "." in both datafile and code but the problem remains
identical.
Then we will need a reproducible example, as requested in the message
footer.
BTW, your example is a perfect illustration of the problem. Un
Hi,
Thanks for help.
The ecdf has jumps of 1/n but I need jumps of 1. How do I do this? If I can
do this then I think I can plot the data properly. I am plotting epidemic
simulations and want to plot my 2 processes, Infection and Symptoms (after
incubation), on the same chart to compare them.
Atul.
On Thu, 24 Apr 2008, Atul Kulkarni wrote:
> Hi,
>
> Thanks for help.
>
> The ecdf has jumps of 1/n but I need jumps of 1. How do I do this? if I can
See
?stepfun
HTH,
Chuck
> do this then I think Ic an plot the data properly. I am plotting epidemic
> simulation and want to plot my 2 pro
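A sketch of a cumulative-count step function with jumps of 1 (the event times here are made up):
times <- sort(c(1.2, 2.5, 2.5, 4.0, 6.3))            # e.g. infection times
cc <- stepfun(unique(times), c(0, cumsum(table(times))))
plot(cc, do.points = FALSE, xlab = "time", ylab = "cumulative count")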
Hello!
Thanks!
I changed the "," to "." in both datafile and code but the problem remains
identical.
Cheers!
Jojje
> Date: Thu, 24 Apr 2008 09:16:38 +0100
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: r-help@r-project.org
> Subject: Re: [R] Factor to numeric
>
> The decimal poi
> Help files with alias or concept or title matching 'data type' using
> fuzzy matching:
> character-class(methods)
> Classes Corresponding to Basic Data Types
> sqlTypeInfo(RODBC) Request Information about DataTypes in an ODBC
> Database
>
just a comment to "fstat"...
Using package "distrEx" one also has "fstat" functionality
library(distrEx)
E(Fd(df1 = 3, df2 = 3))
E(Fd(df1 = 4, df2 = 4))
E(Fd(df1 = 5, df2 = 5))
var(Fd(df1 = 5, df2 = 5))
for more functionals (skewness, kurtosis, IQR, ...) see for instance
help("var", package = "d
OK I've spent a lot of time with the core documentation, and I never
found anything as simple as their table 2.1, which elucidated the
difference between a vector, matrix and array first, then the higher
level structures, frame and list. Maybe I'm not a good searcher, but
believe me for every init
Jennifer Balch wrote:
> Dear all,
>
> I'm looking for a function that calls the inverse F-distribution.
> Something equivalent to FINV in matlab or excel.
>
> Does anyone know if such a function already exists for R? (I haven't
> been able to find one.)
>
> Thanks for any leads.
>
>
I would
Hi Jennifer,
Try ?qf or help.search("F distribution")
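For example, qf() is the F quantile function (the FINV equivalent):
qf(0.95, df1 = 3, df2 = 12)   # 95th percentile of the F(3, 12) distribution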
HTH,
Jorge
On Thu, Apr 24, 2008 at 10:49 AM, Jennifer Balch <[EMAIL PROTECTED]>
wrote:
> Dear all,
>
> I'm looking for a function that calls the inverse F-distribution.
> Something equivalent to FINV in matlab or excel.
>
> Does anyone kno
Dear all,
I'm looking for a function that calls the inverse F-distribution.
Something equivalent to FINV in matlab or excel.
Does anyone know if such a function already exists for R? (I haven't
been able to find one.)
Thanks for any leads.
Best,
Jennifer
On 4/24/2008 10:22 AM, Beck, Kenneth (STP) wrote:
> Agree that terseness is good, but I also agree with other posters that
> better cross referencing or maybe an index of synonyms would be good.
>
> So far, the best suggestion is the pdf at this link
>
> (http://www.medepi.net/epir/epir_chap02.
Hi,
I'm not sure where to look for help with this problem. I don't even know the
right search terms for it.
First let me describe the analysis.
I have land use data from satellite imagery (individual pixels or cells) for
years 1985 and 1990. I am recoding development as 1 and non-development
(f
Hi useRs,
I am trying to compare several distance matrices obtained from subsets
of variables from the same experiment. I put all the subsets in a list
and then calculated the distance matrices with lapply. In order to do a
Mantel test between them I wrote a function that returns a list with the
> Great suggestions. I tested the code on an example and the run time was
> reduced from 1 min 12 sec to 3 sec. Also, I like the suggestion to look at
> the quantiles. I will see what insight it provides in terms of detecting
> masked interactions.
Well that's a decent speed up :)
> I have a c
Try this:
dat <- data.frame(x = gl(4, 5), y = gl(5, 4), z = rnorm(20))
res <- dat[sapply(dat, is.factor)]
On Thu, Apr 24, 2008 at 9:14 AM, Serge Merzliakov <[EMAIL PROTECTED]>
wrote:
> Hi All,
>I have attempted to extract only the factor columns from an
> existing data set inside a loop wi
Agree that terseness is good, but I also agree with other posters that
better cross referencing or maybe an index of synonyms would be good.
So far, the best suggestion is the pdf at this link
(http://www.medepi.net/epir/epir_chap02.pdf).
Is there a way to pop at least part of this into the R
---begin included message
Hi,
I need to use pspline. In this pspline function coxph.wtest was used. When I
try to make some change to this function by pulling out the pspline
function, it turns out R gave me an error message saying coxph.wtest cannot be
found. Even if I don't change anything in ps
Hi Zack.
I don't know how helpful this is but I use R on 64bit ubuntu for the analysis
of large microarray datasets. R may well be taking advantage of all your memory
but the objects it's creating are too big. You could run 'top' in another
terminal window to examine this behaviour. If R is usi
Hello,
I would be extremely grateful if anyone is able to provide any (rather obscure)
advice on using R with Condor. I think I'm following Xianhong Xie's
instructions (R News 5(2) 13-15) correctly, but my job just stays held in the
queue (for days / months). I've checked condor_status to m
Hi,
We just installed a HP Integrity Superdome (an Itanium-based SMP
machine with 64 cores + 128 GB memory, running Red Hat AS 4 update 6)
at our National Center for High Performance Computing.
I would like to run R on it and use the hardware optimally. What is
the best way? Is there a para
Hi!
I have 2 matrices of numbers m1 and m2 with the same number of columns
and rows. I would like to compute m2', the permutation of the rows of m2
such that the distance (e.g., sum(m1-m2') or sum((m1-m2')^2)) is
minimized. Do you know of any function/algorithm to obtain such a
permutation?
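One hedged possibility (a sketch, not a recommendation from this thread): treat it as a linear assignment problem and solve it with the Hungarian algorithm via solve_LSAP() in the 'clue' package; m1 and m2 here are small made-up matrices.
library(clue)
set.seed(1)
m1 <- matrix(rnorm(20), nrow = 5)
m2 <- matrix(rnorm(20), nrow = 5)
cost <- outer(seq_len(nrow(m1)), seq_len(nrow(m2)),
              Vectorize(function(i, j) sum((m1[i, ] - m2[j, ])^2)))
perm <- solve_LSAP(cost)          # perm[i] = row of m2 matched to row i of m1
m2.perm <- m2[as.integer(perm), ]
sum((m1 - m2.perm)^2)             # minimised squared distance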
Be
Hi, I'm new to the glm and logit world... and I'm reading some lecture notes
and examples. I would like to try and generate the same result in R, but I
don't seem to be able to find the proper way to specify the formula.
Let's say I have
Desire   Using Drugs   Not Using Drugs
Yes
Hi All,
I have attempted to extract only the factor columns from an
existing data set inside a loop without success. I tried the transform
function which worked, but not inside the loop (attempts with cbind did
not work either - inside a loop). Here is my function:
getcatcolumns<-function
On 4/24/2008 9:01 AM, Ardia David wrote:
> Dear all,
> How can I pass '-Inf' and 'Inf' values from R to C code using the
> function '.C(...)'. When running my code, I get an error since C does
> not recognize -Inf and Inf values. Of course, I could use instead a very
> low (or high) number, but
Great suggestions. I tested the code on an example and the run time was
reduced from 1 min 12 sec to 3 sec. Also, I like the suggestion to look at
the quantiles. I will see what insight it provides in terms of detecting
masked interactions.
I have a couple questions about your code.
First, why
Dirty solution: I switched off x-axis plotting via 'par' and added it back in
a personalized way with 'axis'.
Joh
Johannes Graumann wrote:
> Hm, now I have trouble using additional "vioplot" parameters.
>
> mu<-2
> si<-0.6
> bimodal<-c(rnorm(1000,-mu,si),rnorm(1000,mu,si))
> uniform<-runif(2000,-4,4)
>
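A sketch of that dirty solution, assuming the 'vioplot' package is installed; whether the package's own axis drawing is fully suppressed may depend on the vioplot version:
library(vioplot)
mu <- 2; si <- 0.6
bimodal <- c(rnorm(1000, -mu, si), rnorm(1000, mu, si))
uniform <- runif(2000, -4, 4)
par(xaxt = "n")                            # switch off x-axis plotting
vioplot(bimodal, uniform)
par(xaxt = "s")                            # switch it back on ...
axis(1, at = 1:2, labels = c("bimodal", "uniform"))   # ... and add our own axis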
Dear Dr. Varadhan,
From where can we download this package and its details?
Thanks.
Have a nice day.
On Wed, 23 Apr 2008 16:39:50 -0400 "Ravi Varadhan" wrote:
> Hi,
>
>
>
> We (Paul Gilbert and I) have just released a new R package on CRAN called
> "BB" (stands
Dear all,
How can I pass '-Inf' and 'Inf' values from R to C code using the
function '.C(...)'. When running my code, I get an error since C does
not recognize -Inf and Inf values. Of course, I could use instead a very
low (or high) number, but I was wondering whether a more elegant
solution ex
?sprintf
> x <- "1.085714e-01"
> sprintf("%.4f", as.numeric(x))
[1] "0.1086"
>
On Thu, Apr 24, 2008 at 6:28 AM, orkun <[EMAIL PROTECTED]> wrote:
> hello
>
> How can I change a vector's float value from ("1.085714e-01") format to
> a simpler format (like 0.1xxx).
>
> regards
>
>
> --
> Ahmet Tem
Hello,
I have aggregated a data.frame of 16MB (object size). After some
minutes I get the error message "cannot allocate vector of size 64.5MB".
My computer has 4GB of physical memory under Windows Vista.
I have tested the same command on another computer with the same OS and
2GB RAM. In nearly
Try this:
x <- "1.085714e-01"
format(as.numeric(x), scientific = F)
On Thu, Apr 24, 2008 at 7:28 AM, orkun <[EMAIL PROTECTED]> wrote:
> hello
>
> How can I change a vector's float value from ("1.085714e-01") format to
> a simpler format (like 0.1xxx).
>
> regards
>
>
> --
> Ahmet Temiz
> Jeo.
On Wed, Apr 23, 2008 at 8:51 PM, Kevin Lu <[EMAIL PROTECTED]> wrote:
> I need to find the minimum value of the parameter, s, such that the function
> f(s,t) > 0 (where -Inf < t < Inf)
>
> I've looked into optim, constrOptim and others but they don't seem to do
> this. Does anyone have some sugge
I can read that version, thanks.
The issue is probably that getwd() and filename are in different
encodings, and file.path() was not expecting that. You need R-patched >=
r45490 for that fix.
On Thu, 24 Apr 2008, Javier Muñoz wrote:
Sorry,
this is the same test with R-patched (same results
Sorry,
this is the same test with R-patched (same results as well).
> setwd("c:/jml/Valoración F1/04 BA/")
> filename <- "Valoración F1 PAG.mdb"
> file.path(getwd(), filename)
[1] "c:/jml/Valoración F1/04 BA/Valoración F1 PAG.mdb" <<- wrong
> file.path("c:/jml/Valoración F1/04 BA", filename)
[1
hello
How can I change a vector's float value from ("1.085714e-01") format to
a simpler format (like 0.1xxx).
regards
--
Ahmet Temiz
Jeo. Müh.
Afet İşleri Gen. Md.lüğü
Deprem Ar. D.
Ahmet Temiz
Geo. Eng.
General Dir. of
Disaster Affairs
See my reply to Erik Jørgensen yesterday.
Can you please send that again with a marked encoding, so we have a chance
of reading it? Or attach it as a plain text file and tell us what
encoding the file is in?
Yes, there probably is a bug here but we do need to be able to reproduce
it.
On T
Javier Muñoz wrote:
> Hello,
>
> This is what i get in R-2.7.0 (with the completely internal file.path()):
>
>
>> setwd("c:/jml/Valoraci�n F1/04 BA/")
>> filename <- "Valoraci�n F1 PAG.mdb"
>>
>
>
>> file.path(getwd(), filename)
>>
> [1] "c:/jml/Valoración F1/04 BA/Valoraci�n F1 PAG
Hello,
This is what i get in R-2.7.0 (with the completely internal file.path()):
> setwd("c:/jml/Valoración F1/04 BA/")
> filename <- "Valoración F1 PAG.mdb"
> file.path(getwd(), filename)
[1] "c:/jml/Valoración F1/04 BA/Valoración F1 PAG.mdb" <<- worng
> file.path("c:/jml/Valoración F1/04 BA"
There is little theory about significance and testing for PLSR (and, I
would guess, GPLSR). Many practitioners use jackknife variance
estimates as a basis for significance tests. Note, however, that these
variance estimates are known to be biased (in general), and their
distribution is (to my kno
or using the %in% operator...
?"%in%"
data[data$label %in% flist,]
regards,
Sean
Applejus wrote:
>
> Hi,
>
> You are right the == doesn't work, but there's a workaround using regular
> expressions:
>
> flist<-"fun|food"
> grep(flist, data$label)
>
> will give you the vector [2 4] which ar
Roslina Zakaria wrote:
> Dear r-expert,
> I would like to generate 30 sets of random numbers from uniform distribution
> (0,1). Each set of random numbers should have 90 data. Here is my code:
> rand.no <- function(n,itr)
> { for (i in 1:itr)
> {rand.1 <- runif(n,0,1)
> if (i ==1) rand.2 <-
Dear r-expert,
I would like to generate 30 sets of random numbers from the uniform distribution
(0,1). Each set of random numbers should have 90 values. Here is my code:
rand.no <- function(n,itr)
{ for (i in 1:itr)
{rand.1 <- runif(n,0,1)
if (i ==1) rand.2 <- rand.1
else rand.2 <- cbind(rand.2,r
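A hedged alternative to the loop above: draw all 30 x 90 values in one call and arrange them as columns.
rand <- matrix(runif(30 * 90), nrow = 90, ncol = 30)   # one set per column
dim(rand)   # 90 30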
The decimal point in R is always '.', never ','.
On Thu, 24 Apr 2008, Jojje Andersson wrote:
>
> Hello!
> I have a problem whith a data.frame. I want to make a subset where some of
> the variables have values within ceartain limits.
> The variables are proportions like 1,00, 0,54, 0,00 etc.
> I
Hello!
I have a problem with a data.frame. I want to make a subset where some of the
variables have values within certain limits.
The variables are proportions like 1,00, 0,54, 0,00 etc.
I don't get it right, as R takes the variables for factors.
> ekobsub1 <- subset(ekob, PAP>0,25 & PAP<0,6