Can anyone help me solve a problem in my code? I am trying to
find the stopping index N.
First I generate random numbers from normal distributions. There is no problem in
finding the first stopping index.
Now I want to find the second stopping index using observations starting
from the one after t
Hello,
in several places I have read about good interaction between
LaTeX and R.
Can you give me a starting point where I can find
information about it?
Are there special LaTeX packages for this,
or does R have packages that support LaTeX?
Or is an external code generator used?
TIA,
Oliver
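A minimal Sweave sketch as a starting point (Sweave ships with base R; the file below is a hypothetical example.Rnw, processed with R CMD Sweave example.Rnw and then pdflatex):
\documentclass{article}
\begin{document}
<<fig=TRUE>>=
x <- rnorm(100)
hist(x)
@
The mean of the sample is \Sexpr{round(mean(x), 2)}.
\end{document}
Other routes exist (for example the xtable package for turning R tables into LaTeX), but Sweave is the usual starting point.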
Dear R users
I have a 100x100 similarity matrix (chemical similarity). Now I want to
create a dendrogram from it and export this dendrogram as a GML-format
network. Can you please tell me how I can do this?
Thanks in advance.
Dinesh
--
Dinesh Kumar Barupal
Junior Specialist
Metabolomics Fie
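A hedged sketch of one possible route (assuming the similarity matrix sim has row names and values in [0, 1]; the igraph conversion below is just one way to turn the merge tree into a network):
sim.d <- as.dist(1 - sim)              # turn similarity into a distance
hc    <- hclust(sim.d, method = "average")
plot(as.dendrogram(hc))                # the dendrogram itself
library(igraph)                        # provides GML export
edges <- do.call(rbind, lapply(seq_len(nrow(hc$merge)), function(i) {
  node <- function(k) if (k < 0) hc$labels[-k] else paste0("cluster", k)
  cbind(paste0("cluster", i), c(node(hc$merge[i, 1]), node(hc$merge[i, 2])))
}))
g <- graph_from_edgelist(edges, directed = FALSE)
write_graph(g, "dendrogram.gml", format = "gml")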
Hi guys,
I've been struggling to find a solution to the following issue:
I need to change strings in .ini files that are given as input to a program
whose output is processed by R. The strings to be changed look like:
"instance = /home/TSPFiles/TSPLIB/berlin52.tsp"
I normally use Sed for this
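For completeness, a sketch of doing the same substitution from within R (the file name "settings.ini" is made up for illustration):
ini <- readLines("settings.ini")
ini <- sub("^instance *=.*$", "instance = /home/TSPFiles/TSPLIB/berlin52.tsp", ini)
writeLines(ini, "settings.ini")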
Thanks for the answer.
Surely, without popping windows it is a lot more convenient.
This piece of code seems to work for me.
def.par <- par(no.readonly = TRUE)
png(file="thisPlot.png", width = 1200, height = 800, units = "px",
pointsize = 12)
plot(blah-blah...
dev.off()
par(def.par)
best
g
My two matrices are roughly the sizes of m1 and m2. I tried using two apply calls and
cor.test to compute the correlation p-values. After more than an hour, the code
is still running. Please help me make it more efficient.
m1 <- matrix(rnorm(10), ncol=100)
m2 <- matrix(rnorm(1000), ncol=100)
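One possible vectorized alternative (a sketch, not a tested drop-in: it computes all Pearson correlations with cor() and converts them to two-sided p-values analytically instead of looping over cor.test(); the sizes below are toy values):
m1 <- matrix(rnorm(10 * 100),   ncol = 100)
m2 <- matrix(rnorm(1000 * 100), ncol = 100)
r  <- cor(t(m1), t(m2))                  # correlations of rows of m1 vs rows of m2
n  <- ncol(m1)                           # observations per correlation
t.stat <- r * sqrt((n - 2) / (1 - r^2))
p.val  <- 2 * pt(abs(t.stat), df = n - 2, lower.tail = FALSE)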
Try exporting to a large PDF:
pdf("filename.pdf", width=20, height=20)
You can adjust the width and height accordingly
On Wed, Nov 26, 2008 at 4:30 AM, David Winsemius <[EMAIL PROTECTED]>wrote:
> I could not tell from the help file whether rotation of the x labels is
> supported in heatmap.2. I di
Hi wizards
I have the following code for a Kolmogorov-Smirnov Test:
z<-c(1.6,10.3,3.5,13.5,18.4,7.7,24.3,10.7,8.4,4.9,7.9,12,16.2,6.8,14.7)
ks.test(z,"pexp",1/10)$statistic
The Kolmogorov-Smirnov statistic is:
D
0.293383
However, I have calculated the Kolmogorov-Smirnov statistic with t
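For comparison, a sketch of the hand calculation of the same two-sided statistic (using the z above):
zs <- sort(z)
n  <- length(zs)
Fz <- pexp(zs, rate = 1/10)
D  <- max(pmax((1:n)/n - Fz, Fz - (0:(n-1))/n))
D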
Dear PDXRugger,
If I understand correctly, try this:
# Data
x = c(1:5,1,4,7)
# Option 1
require(car)
y1=recode(x, "1=4; 4=2; 7=3")
# Option 2
y2=ifelse(x==1,4,
ifelse(x==4,2,
ifelse(x==7,3,x)))
# Are y1 and y2 equal?
all.equal(y1,y2)
# Ordering x by y1
x[order(y1)]
See ?order, ?ifelse and ?r
See
?is.element
?order
and perhaps
?ave
and
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
as it says on your own email!
What you provide is not reproducible.
HTH,
I have consulted the intro and nabble but have not found an answer to what
should be a simple question, here goes:
I am doing a crosscheck of a data frame and pulling out a single value based
on an input value, i.e. based on x I will select y; for example, if x = 2 then my code
returns 7.
x y
1 4
2
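A sketch of the kind of lookup I read this as (the data frame below is a made-up stand-in for yours):
lk <- data.frame(x = c(1, 2), y = c(4, 7))
lk$y[lk$x == 2]     # returns 7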
Hi All,
I am trying to copy portions of tables from one SQL database to another,
using sqlCopy in the RODBC package.
RemoteChannel = connection to remote database
LocalChannel = connection to local database
LocalTable = table in my local database to receive data from the remote
database
query <-
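A hedged sketch of the call I believe is intended (argument order from my reading of ?sqlCopy; the query and table names below are placeholders, so please check against the help page):
query <- "SELECT * FROM RemoteTable WHERE some_condition"
sqlCopy(RemoteChannel, query, LocalTable, destchannel = LocalChannel)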
Hi Roger,
I solved the problem by installing rgdal-dev and its dependencies.
I'm glad to be part of this list; in the future I hope to help other users
too.
thanks a lot!
Thybério
Roger Bivand wrote:
>
> Please do provide the output of sessionInfo(). Without this, any further
> advice will be
> g
Thanks!
really a simple solution.
regards
Jorge Ivan Velez wrote:
>
> Dear Chris82,
> Try this:
>
> input <- readLines(con,n=224)
> x <- "\004"
> input[225]<-x
> input
>
> HTH,
>
> Jorge
>
>
>
>
> On Tue, Nov 25, 2008 at 4:37 PM, Chris82 <[EMAIL PROTECTED]> wrote:
>
>>
>> hello R use
Mike Prager <[EMAIL PROTECTED]> wrote:
> The function is.nan() does not operate like is.na(). One could
> consider that a design deficiency in R.
I meant to write, "design inconsistency".
--
Mike Prager, NOAA, Beaufort, NC
* Opinions expressed are personal and not represented otherwise.
* Any
Dear Chris82,
Try this:
input <- readLines(con,n=224)
x <- "\004"
input[225]<-x
input
HTH,
Jorge
On Tue, Nov 25, 2008 at 4:37 PM, Chris82 <[EMAIL PROTECTED]> wrote:
>
> hello R users,
>
> I didn't find a solution for a simple problem I think.
>
> I read 224 lines from a file
>
> input <- re
hello R users,
I didn't find a solution for a simple problem I think.
I read 224 lines from a file
input <- readLines(con,n=224)
and now I create a string x <- "\004" which should be line 225 of input.
So I have input and x and want to add x to input, so that it looks like
this:
[1] "string"
One simple way is to use na.action=na.exclude rather than na.omit. This will
still fit the regression without the missing rows, but if you use the resid
function to extract the residuals, it will fill the deleted values with NA so
that the vector is the same length (and correctly matches) the o
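A minimal sketch of the difference (dat, x and y are hypothetical names):
fit <- lm(y ~ x, data = dat, na.action = na.exclude)
res <- resid(fit)        # same length as dat$x, with NA where rows were dropped
plot(dat$x, res)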
On Mon, 24 Nov 2008, Robert Wilkins wrote:
Hi,
Where can I find information (freely available on the Internet, and also
books or other sources) on how having sampling weights changes the
calculation of the standard error (of means and proportions)?
Alan Zaslavsky keeps a comprehensive lis
> -Original Message-
> From: William Dunlap
> Sent: Tuesday, November 25, 2008 9:16 AM
> To: '[EMAIL PROTECTED]'
> Subject: Re: [R] Efficient passing through big data.frame and
> modifying select fields
>
> > Johannes Graumann johannes_graumann at web.de
> > Tue Nov 25 15:16:01 CET 2008
On Mon, 24 Nov 2008, Andrew Choens wrote:
I need to do some fairly deep tables, and ftable() offers most of what I
need, except for the weighting. With smaller samples, I've just used
replicate to let me have a weighted data set, but with this data set,
I'm afraid replicate is going to make my d
The bootstrap that Greg Snow suggested is probably the best approach, but
it is possible to estimate the variance of the proportion.
The total number T of yes responses is the sum of twenty block totals,
and these are independent, so the variance of T is 20 times the variance
of these tw
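A sketch of that idea in R (block.totals is a hypothetical vector of the twenty block totals of yes responses):
var.T    <- 20 * var(block.totals)   # estimated variance of the total T
var.prop <- var.T / 100^2            # variance of the proportion T/100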
I've fit a linear model to my data set using the function. One of the
outputs of that function is a vector of the residuals. I would like to do a
residual plot of this data versus a predictor variable, but the length of
the residual vector is shorter than the length of the predictor variable
vecto
"Spilak,Jacqueline [Edm]" <[EMAIL PROTECTED]> wrote:
> I need help with replacing NaN with zero (the value '0') in my dataset.
> The reason is that I can't get it to graph because of the NaN in the
> dataset. I have tried:
> data[is.nan(data)] <- 0
> that others have suggested in the help archiv
I see. Thank you very much.
On Mon, Nov 24, 2008 at 10:12 AM, Stefan Evert <[EMAIL PROTECTED]> wrote:
>
>> I'm sorry but I don't quite understand what "not running solve() in
>> this process" means. I updated the code and it do show that the result
>> from clusterApply() are identical with the res
I could not tell from the help file whether rotation of the x labels
is supported in heatmap.2. I did read (perhaps in one of the linked to
help files) that rotation was only allowed with "text" and I was
left wondering if it might be possible to use text(x) and list(rot=90)
as arguments
The default link function for the glm poisson family is a log link, which means
that it is fitting the model:
log(mu) ~ b0 + b1 * x
But the data that you generate are based on a linear link. Therefore your glm
analysis does not match how the data were generated (and therefore should
not ne
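A sketch of the distinction (y and x are hypothetical names; identity-link Poisson fits can need sensible starting values):
fit.log      <- glm(y ~ x, family = poisson)                     # default: log link
fit.identity <- glm(y ~ x, family = poisson(link = "identity"))  # matches a linear mean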
FWIW
Indeed! And IMHO such a nice example of the power and beauty of the R
language.
-- Bert
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Greg Snow
Sent: Tuesday, November 25, 2008 10:00 AM
To: [EMAIL PROTECTED]; r-help@r-project.org
Subject: Re: [R]
Hi,
This page is now Ok.
Thanks, Faheem.
On Mon, 24 Nov 2008, Faheem Mitha wrote:
Hi,
I'm getting an error from
http://stat.ethz.ch/R-manual/
linked from http://www.r-project.org/
as follows
**
Forbidden
You
I reread the question, and I don't actually see much of a problem.
If you plot the transformed dataset from most of your models, the
problem is quite obvious: if you transform your predictor to the
log scale, the result of a linear regression on those outcomes will
naturally be an exponential
Thanks Deepayan,
The following code solves the problem until 'ylab.right' will be
implemented.
Robbie
library(lattice)
library(grid)
g1 <- textGrob("axis title at right", x = unit(0.5, "npc"), y = unit(0.5, "npc"),
               just = "centre", hjust = NULL, vjust = NULL, rot = 270,
               check
It would be safer to substitute "all.equal(x,v) == TRUE" for every
instance where "identical(x,v)" appears below.
Identical does not behave as I believed it did, and all.equal needs to
be tested in order to properly handle situations in which it
returns a character value.
> apply(t,2,f
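A minimal illustration of why the return value has to be tested:
all.equal(1, 2)                    # a character description, not FALSE
isTRUE(all.equal(1, 2))            # FALSE; isTRUE() is the usual idiom
isTRUE(all.equal(0.1 + 0.2, 0.3))  # TRUE despite floating-point error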
You could try
system.time( expr )
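For instance (a minimal illustration; "myscript.R" is a made-up file name):
system.time(source("myscript.R"))   # elapsed, user and system time for the script
pt <- proc.time()                   # or bracket a region yourself
# ... code to time ...
proc.time() - pt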
EC
On Tue, Nov 25, 2008 at 9:22 PM, Brigid Mooney <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I was wondering if there was a function in R that would output the total run
> time for various scripts.
>
> For now I have the following workaround:
>
> b
Dear R users,
I have a question regarding how to make row labels readable in a heat map.
I have successfully made a heat map using function "heatmap.2" in the package
"gplots". However, as there are many rows in the heat map, I have difficulties
labeling them (heatmap.2 provides a parameter "la
Hi All,
I was wondering if there was a function in R that would output the total run
time for various scripts.
For now I have the following workaround:
begTime <- Sys.time()
... the rest of the R script...
runTime <- Sys.time()-begTime
Is there another function that I don't know about that w
> I wish to have inward-pointing ticks on my contourplot graph, but the
> colored background produced by the "region=TRUE" statement covers the
> ticks up, is there any way around this? Sample code below. --Seth
>
> library(lattice)
>
> model <- function(a,b,c,d,e, f, X1,X2) # provide
Hi ronggui,
I tried to install your package under linux (Kubuntu Intrepid), but the
dependency RGtk2 does not install under Linux; I get some kind of error:
Warning message:
In install.packages(c("DBI", "RSQLite", "RGtk2", "gWidgets", "gWidgetsRGtk2"))
installation of package 'RGtk2' had n
dave fournier wrote:
Hi All,
Following Mike Praeger's posting on this list,
I'm happy to pass on that AD Model Builder is now freely available from
the ADMB Foundation.
http://admb-foundation.org/
Two areas where AD Model builder would be especially useful to R users
are multi-parameter sm
There have been several posts about optimizations where the parameters
for the objective function are bounds-constrained. Brian Ripley took my
1990 "Compact numerical methods for computers" codes and p2c'd them to
give the CG and BFGS and (possibly, I should check!) the Nelder Mead
code. Howeve
Greg's solution is most elegant (I think). This is more of an illustrative
approach:
set.seed(1) #to replicate identical sampling if you use this code
x=sample(1:100,replace=T)
x=rev(sort(x)) #reverse order
sum(x)/2 # half the total sum of x: 2613.5
cumsum(x) # the cumulative sums of the sorted values
al
On 11/25/08, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Dear R-users,
>
> After adding the secondary y-axis at the right side of a lattice xyplot
> (cfr. Lattice: Multivariate Data Visualization with R - figures 8.4 and 8.6,
> from http://lmdvr.r-forge.r-project.org/figures/figures.html), I
Dear Jon,
See FAQ 7.31 at
http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f
HTH,
Jorge
On Tue, Nov 25, 2008 at 1:05 PM, Jon Zadra <[EMAIL PROTECTED]> wrote:
> Hi,
>
> This is really strange. Can anyone help explain what's going on here (on 3
>
Hi All,
Following Mike Praeger's posting on this list,
I'm happy to pass on that AD Model Builder is now freely available from
the ADMB Foundation.
http://admb-foundation.org/
Two areas where AD Model builder would be especially useful to R users
are multi-parameter smooth optimization as in
Hi,
This is really strange. Can anyone help explain what's going on here
(on 3 and 7)?
> targets <- seq(from=.1, to=.9, by=.1)
> targets[1]==.1
[1] TRUE
> targets[2]==.2
[1] TRUE
> targets[3]==.3
[1] FALSE
> targets[4]==.4
[1] TRUE
> targets[5]==.5
[1] TRUE
> targets[6]==.6
[1] TRUE
>
sorry, you are completely right!
sps is not the extension for a portable file! Sorry for the time I made you
spend.
Let me try to make my problem clearer.
I am exporting a dataset from limesurvey (free software for internet surveys).
It works very well and it allows exporting in different formats such as c
Hi,
for example:
today=rnorm(100)
yesterday=rnorm(100)
up=today[today>yesterday]
down=today[today<yesterday]
index.up=which(today>yesterday)
index.down=which(today<yesterday)
[mailto:[EMAIL PROTECTED]] On behalf of [EMAIL PROTECTED]
Sent: Tuesday, November 25, 2008 12:35 PM
To: r-help@r-project.org
Subject: [R] Line color based on data
Try this:
tmp <- with(iris, seq( min(Petal.Length)-1, max(Petal.Length)+1, length.out=6))
with( iris, plot( Sepal.Width, Sepal.Length, col=topo.colors(5)[
cut(Petal.Length,tmp) ] ) )
hope this helps,
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECT
2008/11/25 Jeremy Leipzig <[EMAIL PROTECTED]>:
>> Given a set of integers of different values how do I calculate the
>> minimum number of the largest of integers that are required, when
>> summed, to equal 50% of the total sum of the set?
>>
> Actually I need the value of the smallest member su
Try this:
> tmp <- sample( 100, 50 )
>
> tmp2 <- rev( sort(tmp) )
>
> tmp3 <- cumsum(tmp2) <= sum(tmp)/2
>
> sum(tmp3) # number needed
[1] 14
>
> tmp2[ sum(tmp3) ] # the smallest value
[1] 78
>
> sum(tmp2[tmp3]) / sum(tmp) # check
[1] 0.4894614
> sum(tmp2[ 1:(sum(tmp3)+1) ]) / sum(tmp)
[1] 0.51951
Thanks very much! Indexing is exactly the problem. I think it's not
because somebody wrote a bad doc, it's rather that, being an amateur
PHP etc. scripter, I did not realize the power of this indexing yet.
Some time ago I surely would have used a lot more loops instead of
proper indexing.
> Given a set of integers of different values how do I calculate the
> minimum number of the largest of integers that are required, when
> summed, to equal 50% of the total sum of the set?
>
Actually I need the value of the smallest member such that the
sum of all members equal or greater to th
Given a set of integers of different values how do I calculate the
minimum number of the largest of integers that are required, when
summed, to equal 50% of the total sum of the set?
For example,
> length(myTable$lgth)
[1] 303403
> sum(myTable$lgth)
[1] 4735396
I know through brute force that
Hi all
Does anyone know if it is possible when plotting a line or scatter plot, to
selectively color the data points based on the data value? i.e. if plotting say
the percentage change in stock price movements, to color +ve points in green
and -ve points in red? And extending this to a user-def
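A minimal sketch of the two-colour case (made-up data):
chg <- rnorm(50)                                          # illustrative percentage changes
plot(chg, col = ifelse(chg >= 0, "green", "red"), pch = 16)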
How about something like:
censor_choose <- function(fr)
  do.call(rbind,
          lapply(split(fr, fr$id),
                 function(sub)
                   sub[which.max(if (max(sub$censor)) sub$censor else sub$time), ]))
Using your data,
itc <-
data.frame(id=c(1,1,1,2
Default kernel density estimation is poorly suited for this sort of
situation.
A better alternative is logspline -- see the eponymous package -- you
can
specify lower limits for the distribution as an option.
url: www.econ.uiuc.edu/~roger    Roger Koenker
email: [EMAIL PROTECTED
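A hedged sketch of what that looks like (x is the nonnegative variable; lbound sets the lower limit of the support):
library(logspline)
fit <- logspline(x, lbound = 0)
plot(fit)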
Try the 'portfolio' package; it has a basic treemap.
On Tue, Nov 25, 2008 at 10:18 AM, Jacques Wagnor
<[EMAIL PROTECTED]> wrote:
> Dear List,
>
> Does there exist a function that produces a heat map like this one
> (image 3 of 4):
>
> http://www.tdameritrade.com/tradingtools/options360.html?a=HDY&
I am using density() to plot density curves. However, one of my variables
is truncated at zero, but has most of its density around zero. I would like
to know how to plot this with the density function.
The problem is that if I do this the regular way density(), values near zero
automatically ge
what about ?sub and ?ifelse
Spilak,Jacqueline [Edm] wrote:
> I need help with replacing NaN with zero (the value '0') in my dataset.
> The reason is that I can't get it to graph because of the NaN in the
> dataset. I have tried:
> data[is.nan(data)] <- 0
> that others have suggested in the help
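One hedged way around the fact that is.nan() has no data-frame method: apply it column by column (a sketch; it assumes the columns of data are numeric):
data[] <- lapply(data, function(col) replace(col, is.nan(col), 0))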
On Tue, Nov 25, 2008 at 9:18 AM, Jacques Wagnor
<[EMAIL PROTECTED]> wrote:
> Dear List,
>
> Does there exist a function that produces a heat map like this one
> (image 3 of 4):
>
> http://www.tdameritrade.com/tradingtools/options360.html?a=HDY&referrer=http%3A%2F%2Fquery.nytimes.com%2Fsearch%2Fsite
If the reason you want to pause is so that you can save a copy of the graph, or
go back and look at earlier ones, then it may be easiest to just write all the
graphs to a file to start with. Use the pdf device, write all the graphs to
the pdf file, close the file using dev.off, then open the pd
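A minimal sketch of that workflow (file name made up):
pdf("all_plots.pdf")
for (i in 1:10) plot(rnorm(100), main = paste("plot", i))
dev.off()          # now page through the plots in any PDF viewer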
I don't have a good reference for you, but here are a couple of things that you
could try:
1. Do a bootstrap estimation of p by resampling the blocks of 5 (rather than
the individual observations) and see if the hypothesized p is in the confidence
interval.
2. Simulate data using the hypothe
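A sketch of option 1 (block.totals is a hypothetical vector of the twenty block totals out of 5):
B <- 10000
boot.p <- replicate(B, sum(sample(block.totals, replace = TRUE)) / 100)
quantile(boot.p, c(0.025, 0.975))   # is the hypothesized p inside this interval?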
Dear Peter,
But even if I change things as you suggested, the question still remains the same: why do the glm
models perform so poorly on this dataset? And what would your advice to the students be?
Best wishes
Christoph
# by the way, I disagree on taking logs on the explanatory; the explan
I think your problem is more with indexing than with function writing. The
main confusion is in how to use '$', this is a shortcut to make certain things
easier, but you are trying to use the shortcut like going from France to
Germany by way of New York City because you know a great shortcut th
On Tue, 25 Nov 2008, Uwe Ligges wrote:
See ?help and its argument "offline"
Uwe Ligges
Thanks, that's very helpful. This doesn't exactly print, but saves the
help page to a ps file, which is as good as.
help(write.table, offline=TRUE)
No latex file is available: shall I try to create i
Dear All,
Currently I'm using the "segmented" package. While using the package,
(i.e segmented - version 0.2-4) to investigate a possible change point
(around X = 2) in my data, I had the following error message:
dat00<-read.table("data.txt",header=T)
library(segmented)
glm.Y<-glm(Y~X,data=dat00)
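The usual next step would be something like this (a sketch; psi is the starting guess for the breakpoint near X = 2):
seg.Y <- segmented(glm.Y, seg.Z = ~ X, psi = 2)
summary(seg.Y)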
Why not just plot all your graphs directly to a file rather than sending them
to a windows device that you are not looking at, then copying from there. The
pdf and postscript devices will hold several plots in a single file, or the png
device (and others) will automatically create 1 file per gr
Christoph Scherber wrote:
> Dear all,
>
> For an introductory course on glm's I would like to create an example to
> show the difference between glm and transformation of the response. For
> this, I tried to create a dataset where the variance increases with the
> mean (as is the case in many ecol
I'm not sure what kind of dataset would be most appropriate, but the
following code is what I used to create a dataset with a linear response and
an increasing variance (the megaphone type, common in ecological
datasets if I'm right):
beta0 <- 10
beta1 <- 1
x <- c(1:40)
y <- beta0 + x*beta1 +rnorm(40,0,1
Your reading of the referenced page is completely different from
mine, ... but IANAL.
In particular:
Q3: Can an open source software project combine and distribute any of
Sun’s GPL-licensed MySQL software with other open source software
under the FOSS License Exception?
A: Open source
Dear List,
Does there exist a function that produces a heat map like this one
(image 3 of 4):
http://www.tdameritrade.com/tradingtools/options360.html?a=HDY&referrer=http%3A%2F%2Fquery.nytimes.com%2Fsearch%2Fsitesearch%3Fquery%3Dheatmaptype%3Dnyt
In addition to colors, two other main features I
Hi,
I have more questions about the fft. The application in Excel is very
limited.
In Excel I can adjust graphs and calibrate the x and y-axis. The input and
process, however, is limited compared to R.
With a Dataset table where one column is the hour difference and the second
are the values wi
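A minimal fft sketch under the assumption of an evenly spaced series vals (a hypothetical name):
spec <- fft(vals)
amp  <- Mod(spec)[1:(length(vals) %/% 2)]   # amplitude spectrum up to Nyquist
plot(amp, type = "h")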
Hi guys,
I have one problem when I use R+Weka+Xmeans. I set the parameters for Xmeans
as follows:
xc.control <- Weka_control(I=10,M=1000,J=1000,L=1000,H=2000,B=1.0,
C=0.5,D="weka.core.EuclideanDistance",S=20)
xc <- XMeans(data,control=xc.control)
Here I set L to 1000. According to Xmeans in
On examining non-linearity of Cox coefficients with penalized splines - I
have not been able to dig up a completely clear description of the test
performed in R or S-plus.
From the Therneau and Grambsch book (2000 - page 126) I gather that the test
reported for "linear" has as its null hypothesi
Here's one way
x <- sapply(seq(1,ncol(DF),by=2), function(i) DF[,i:(i+1)], simplify=F)
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Lauri Nikkinen
Sent: Tuesday, November 25, 2008 9:00 AM
To: [EMAIL PROTECTED]
Subject: [R] How to split DF into a list
Dear R-users,
After adding the secondary y-axis at the right side of a lattice xyplot (cfr.
Lattice: Multivariate Data Visualization with R - figures 8.4 and 8.6, from
http://lmdvr.r-forge.r-project.org/figures/figures.html), I'm trying to add a
title to that second y-axis (which has to be dif
Hello,
I'm trying to split my DF into a list using an incremental loop. How can
I avoid NULL elements in this list?
DF <- data.frame(var1 = 1:10, var2 = 11:20, var3 = 21:30, var4 = 31:40)
x <- list()
i <- 1
while (i <= ncol(DF)-1) {
x[[i]] <- DF[, i:c(i+1)]
i <- i + 2
}
x
Many
Similarly tis and chron have nearly the identical function:
> library(tis)
> 365 + isLeapYear(2000:2010)
[1] 366 365 365 365 366 365 365 365 366 365 365
> isLeapYear
function (y)
y%%4 == 0 & (y%%100 != 0 | y%%400 == 0)
> library(chron)
> 365 + leap.year(2000:2010)
[1] 366 365 365 365 366 365
Hi Mark,
similar questions came up at least two times during the last few weeks, see
http://tolstoy.newcastle.edu.au/R/e5/help/08/11/6722.html
http://tolstoy.newcastle.edu.au/R/e5/help/08/11/6736.html
or
http://tolstoy.newcastle.edu.au/R/e5/help/08/11/7790.html
or search the archives yourself:
htt
On Tue, Nov 25, 2008 at 4:16 PM, Rob Carnell <[EMAIL PROTECTED]> wrote:
> Rainer M Krug gmail.com> writes:
>
>>
>> Hi
>>
>> I want to du a sensitivity analysis using Latin Hypercubes. But my
>> parameters have to fulfill two conditions:
>>
>> 1) ranging from 0 to 1
>> 2) have to sum up to 1
>>
>>
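One hedged substitute people sometimes use when the draws must live on the simplex: a flat Dirichlet via normalized gammas (not a Latin hypercube, just a common alternative; dimensions below are made up):
k <- 4; n <- 100
g <- matrix(rgamma(n * k, shape = 1), ncol = k)
p <- g / rowSums(g)        # each row is in (0, 1) and sums to 1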
Dear list,
I hope the topic is of sufficient interest, even though it is not
strictly R-related. I have N=100 yes/no-responses from a psychophysics
paradigm (say Y Yes and 100-Y No-Responses). I want to see
whether these yes-no-responses are in line with a model
predicting a certain amount p of yes-responses.
Dear all,
For an introductory course on glm's I would like to create an example to show the difference between
glm and transformation of the response. For this, I tried to create a dataset where the variance
increases with the mean (as is the case in many ecological datasets):
poisson
Didn't this question get asked and answered within a week or two?
Daren Tan, meet John Baron's help search page:
http://search.r-project.org/
( and it apparently gets repeatedly asked and answered over the
years.)
--
David Winsemius
On Nov 25, 2008, at 7:14 AM, Daren Tan wrote:
Why not write it yourself?
days_in_year <- function(year) {
365 + (year %% 4 == 0) - (year %% 100 == 0) + (year %% 400 == 0)
}
This should work for any year in the Gregorian calendar.
Hadley
On Mon, Nov 24, 2008 at 1:25 PM, Felipe Carrillo
<[EMAIL PROTECTED]> wrote:
> Hi:
> Is there a functio
Hi all,
I have relatively big data frames (> 1 rows by 80 columns) that need to be
exposed to "merge". Works marvelously well in general, but some fields of the
data frames actually contain multiple ";"-separated values encoded as a
character string without defined order, which makes the fi
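One hedged normalization before merging: sort the ";"-separated tokens so that unordered sets compare equal (df1 and field are illustrative names):
norm.key <- function(s)
  sapply(strsplit(s, ";"), function(v) paste(sort(trimws(v)), collapse = ";"))
df1$key <- norm.key(df1$field)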
Rainer M Krug gmail.com> writes:
>
> Hi
>
> I want to du a sensitivity analysis using Latin Hypercubes. But my
> parameters have to fulfill two conditions:
>
> 1) ranging from 0 to 1
> 2) have to sum up to 1
>
> So far I am using the lhs package and am doing the following:
>
> library(lhs)
>
Not sure that solution properly focuses the unique function on the
first column, and even when I tried to do so, my code using it did not
produce what I expected. The unique function does not return a logical
vector.
Try:
ships[!duplicated(ships$type), ]
And Rajasekaramya, please include code
David Winsemius wrote:
> I am having difficulty thinking that you cannot find general material by
> doing a Google search, but can tell you from memory that the US National
> Center for Health Statistics publishes on the WWW quite a bit of
> information about their survey methods.
>
> For an R-cen
Jim,
I learned how to use textConnection today
thanks
On Tue, Nov 25, 2008 at 8:17 AM, jim holtman <[EMAIL PROTECTED]> wrote:
> Does this do it for you:
>
>> x <- read.table(textConnection("abc 123 345
> + abc 345 456
> + lmn 567 345
> + hkl 568 535
> + lmn 096 456
> + lmn 768 094"))
>>
>> x
>
sharon Wandia wrote:
> Does anyone know how to weight cases in a data frame using a frequency
> vector?
> I'm trying to run tabulations in R, on a data set that first needs to have
> weighted cases before I run the tabulations.
>
> In SPSS & SAS its quite simple, but i'm unable to do it in R.
x
I am having difficulty thinking that you cannot find general material
by doing a Google search, but can tell you from memory that the US
National Center for Health Statistics publishes on the WWW quite a bit
of information about their survey methods.
For an R-centric answer: Have you looked
Does anyone know how to weight cases in a data frame using a frequency
vector?
I'm trying to run tabulations in R, on a data set that first needs to have
weighted cases before I run the tabulations.
In SPSS & SAS its quite simple, but i'm unable to do it in R.
[[alternative HTML version
Given the fact that the mailserver appeared to be down for 12 hours
yesterday , I wouldn't be surprised if some major work needed to be
done at ETH. Try again. I do not get the same error message..
--
David Winsemius
Heritage Labs
On Nov 24, 2008, at 1:07 PM, Faheem Mitha wrote:
Hi,
I'm
Not tested since you did not provide any data:
results <- lapply(split(df, df$id), function(.data){
.which <- which(.data$censor == 1)[1]   # index of the first censored row (NA if none)
if (!is.na(.which)) return(.data[.which,])
else return(.data[nrow(.data),])
})
On Tue, Nov 25, 2008 at 2:45 AM, gallon li <[EM
As yearmon represents year/month as year + fraction of year
adding 1 gives next year. The first output below
gives an answer of class "difftime" whereas the second
solution is "numeric":
> library(zoo)
> year <- 2000:2010
> d <- as.Date(as.yearmon(year)+1) - as.Date(as.yearmon(year))
> d
Time
Does this do it for you:
> x <- read.table(textConnection("abc 123 345
+ abc 345 456
+ lmn 567 345
+ hkl 568 535
+ lmn 096 456
+ lmn 768 094"))
>
> x
V1 V2 V3
1 abc 123 345
2 abc 345 456
3 lmn 567 345
4 hkl 568 535
5 lmn 96 456
6 lmn 768 94
> x[!duplicated(x$V1),]
V1 V2 V3
1 abc 123
Daren Tan wrote:
>
>
> I forgot the reshape equivalent for converting from wide to long format.
> Can someone help as my matrix is very big. The following is just an
> example.
>
>> m <- matrix(1:20, nrow=4, dimnames=list(LETTERS[1:4], letters[1:5]))
>
Gabor's solution uses more basic f
Dear list:
Before posting this message, I read the posting guide, examined the manuals,
searched the R help files, and examined in detail the books that I have
available on R.
I am using R version 2.8.0 and WinXP.
I want to pause the R Graphics window to permit the use of the File,
Hist
#handy if you like ggplot2 as this is required
#I think this is what you want
library(reshape)
m <- matrix(1:20, nrow=4, dimnames=list(LETTERS[1:4], letters[1:5]))
melt(m)
On Tue, Nov 25, 2008 at 8:01 AM, Gabor Grothendieck
<[EMAIL PROTECTED]> wrote:
> Try this:
>
>> as.data.frame.table(m)
> Var
you will get more help if you provide code that can be copied and
pasted into an R session.
?dput
#untested to say the least
foo[unique(foo),]
On Mon, Nov 24, 2008 at 5:36 PM, Rajasekaramya <[EMAIL PROTECTED]> wrote:
>
> hi there
>
> I have a dataframe
>
> abc 123 345
> abc 345 456
> lmn 567 34
Try this:
> as.data.frame.table(m)
   Var1 Var2 Freq
1     A    a    1
2     B    a    2
3     C    a    3
4     D    a    4
5     A    b    5
6     B    b    6
7     C    b    7
8     D    b    8
9     A    c    9
10    B    c   10
11    C    c   11
12    D    c   12
13    A    d   13
14    B