Hi Doug,
I see what the problem is now. When your Excel file is read in with
read.xlsx2, the DateTimeStamp is read as days since Microsoft's time epoch
(see earlier posts on this). As these values are numeric, they cannot be
converted in the same way as a human readable date/time string. The easies
Hi Sergio,
I couldn't get your example data to read in, so I have used the example in
the help page:
fm1 <- aov(breaks ~ wool + tension, data = warpbreaks)
hsd.fit<-TukeyHSD(fm1, "tension", ordered = TRUE)
hsd.fit$tension[order(hsd.fit$tension[,4]),]
diff lwr upr p adj
L-
Hi Alnazar,
I looked at your question yesterday and was unable to find what a
"majority guessing" function is. I think it may be related to the
"Pandemonium" model of decision making, but that doesn't get me very
far. Could you give us a hint as to what this function is?
Jim
On Wed, Feb 24, 2016
sMPG <- sample(1:length(categories)-1, nrow(testingData), replace=TRUE)
> return(GuessMPG)
>
>
>
> On Wed, Feb 24, 2016 at 8:18 PM, Jim Lemon wrote:
>>
>> Hi Alnazar,
>> I looked at your question yesterday and was unable to find what a
>> "majority gues
Hi Fabio,
If you have more than a few dates on the X axis you may get
overlapping tick labels. As an example, take a plot of the winning
parties of by-elections held in Australia in the 21st century by the
dates of the elections:
be_dates<-as.Date(c("5/12/2015","19/09/2015","8/2/2014",
"5/12/2009
Hi Ashta,
This does not seem too difficult:
DF$flag<-"n"
for(thisname in unique(DF$Name)) {
if(any(DF$year[DF$Name == thisname] %in% c(2014,2015) &
DF$tag[DF$Name == thisname]))
DF$flag[DF$Name == thisname]<-"y"
}
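For anyone trying this without Ashta's data, a self-contained sketch of the same flagging logic on invented values (the column names follow the thread; the data are made up):

```r
# hypothetical example data, invented for illustration
DF <- data.frame(Name = c("a", "a", "b", "b", "c"),
                 year = c(2013, 2014, 2013, 2016, 2015),
                 tag  = c(TRUE, TRUE, FALSE, TRUE, FALSE))
DF$flag <- "n"
for (thisname in unique(DF$Name)) {
  # flag every row for a name that has a tagged 2014/2015 record
  if (any(DF$year[DF$Name == thisname] %in% c(2014, 2015) &
          DF$tag[DF$Name == thisname]))
    DF$flag[DF$Name == thisname] <- "y"
}
DF$flag
# "a" has a tagged 2014 row, so both "a" rows get "y"; "b" and "c" do not
```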
Jim
On Sun, Feb 28, 2016 at 1:23 PM, Ashta wrote:
> Hi all,
>
> I have a d
Works on linux
R version 3.2.3 (2015-12-10)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: Fedora 23 (Twenty Three)
Jim
On Mon, Feb 29, 2016 at 1:45 AM, Boris Steipe wrote:
> Works for me on Mac OS...
>
> R version 3.2.2 (2015-08-14)
> Platform: x86_64-apple-darwin13.4.0 (64-bit)
> R
Hi Adrienne,
I'm not sure if this will help, but lengthKey in the plotrix package
will display a scale showing the relationship of vector length to
whatever numeric value is being displayed. However, you do have to
sort out the scaling manually.
Jim
On Tue, Mar 1, 2016 at 7:30 AM, Adrienne Wootte
Hi Maurice,
If you have used "source" to run an R script, perhaps you could try
something like this:
get.running.file<-function(pattern="source",...) {
file1 <- tempfile("Rrawhist")
savehistory(file1)
rawhist <- readLines(file1)
unlink(file1)
last.source<-grep(pattern,rev(rawhist),value=TRUE)
Hi Jinggaofu,
Try this:
for(u in 1:9) {
pdffile<-paste(g[u],".pdf",sep="")
pdf(pdffile)
...
mtext(g[u],side=4)
If you index the vector g, it will return one or more character strings.
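The loop above is cut off in the archive; a complete minimal sketch, with an invented `g` and a placeholder plot standing in for the original plotting code:

```r
# one PDF per element of a hypothetical name vector g
g <- paste0("group", 1:3)      # invented stand-in for the original g
for (u in seq_along(g)) {
  pdffile <- paste(g[u], ".pdf", sep = "")
  pdf(pdffile)
  plot(1:10, main = g[u])      # placeholder for the original plot
  mtext(g[u], side = 4)
  dev.off()                    # close each device, or the files stay locked
}
```

Note the `dev.off()` inside the loop: each `pdf()` call opens a new device, and each file is only finished when its device is closed.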
Jim
On Thu, Mar 3, 2016 at 9:52 AM, Jinggaofu Shi wrote:
> Hi, there
> I am new to R, here is an urgent que
Hi Fabio,
It is possible that your remaining "numeric" variable is a factor. What does:
class(my_numeric_variable)
say? (where you substitute the name of your "numeric" variable)
Jim
On Fri, Mar 4, 2016 at 2:25 AM, Fabio Monteiro
wrote:
> Hello, my name is Fábio and I'm a Marine Ecology stude
led trait3 to my variable.
>
> Is this what i'm suppose to wright? class(trait3), or class
> (my_trait3_variable?
>
> both give error
>
> 2016-03-03 23:42 GMT+00:00 Jim Lemon :
>>
>> Hi Fabio,
>> It is possible that your remaining "numeric" variable i
] "factor
>>
>>
>> Yes is a factor. And now?
>>
>> Thank you
>>
>> Kind Regards
>> Fábio
>>
>> 2016-03-04 7:15 GMT+00:00 Jim Lemon :
>>>
>>> Hi Fabio,
>>> You should write:
>>>
>>> class(
Hi catalin,
I think what you are trying to do is to retrieve the original
observations from the cumulated values. In that case Olivier's
suggestion will do what you want:
c(x[1],diff(x))
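A quick check on invented numbers shows that `cumsum()` and the `c(x[1], diff(x))` idiom are inverses of one another:

```r
obs  <- c(3, 1, 4, 1, 5)        # invented original observations
cum  <- cumsum(obs)             # the cumulated series: 3 4 8 9 14
back <- c(cum[1], diff(cum))    # recover the original observations
all(back == obs)                # TRUE
```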
Jim
On Sat, Mar 5, 2016 at 1:59 AM, catalin roibu wrote:
> I mean the first row value
>
> În Vin, 4 mar. 20
The values in a$x do look numeric. What do you get from:
class(a$x)
If the result is "factor", as it was for your ft$trait3 variable (and
I hope that a$x is the same variable with a different name), then at
least one of those values must have been read in as non-numeric. The
possible reasons for
Hi Tanvir,
I think what you want is:
lapply(e,"[",1)
lapply(e,"[",2)
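A quick way to see why `"["` is the right extractor here, using the list from the quoted message (`"[["` would error on the length-1 elements instead of returning NA):

```r
e <- list(1:5, 1:3, 1, 5)     # the list from the quoted message
# "[" pads with NA where an element is too short
lapply(e, "[", 2)             # elements shorter than 2 yield NA
unlist(lapply(e, "[", 1))     # every first element: 1 1 1 5
```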
Jim
On Tue, Mar 8, 2016 at 11:47 AM, Mohammad Tanvir Ahamed via R-help
wrote:
> Hi,
>
> a <- c(1:5)
> b <- c(1:3)
> c <- 1
> d <- 5
> e <- list(a,b,c,d)
>
> # To extract every 1st element
> lapply(e,"[[",1)
>
> ## Out-put
> [[1]
Hi carol,
You could use the "I" function, which will just return what you pass to it.
Jim
On Thu, Mar 10, 2016 at 12:28 AM, carol white via R-help
wrote:
> What should be FUN in aggregate as no function like mean, sum etc will be
> applied
> Carol
>
> On Wednesday, March 9, 2016 1:59 PM, S
Hi Ragia,
If you have read in your data frame with read.table or similar and not
specified stringsAsFactors=FALSE, the two columns will already be
factors. However, unless they both contain the same number of unique
values, the numbers associated with those levels won't be the same.
Probably the ea
Hi Peter,
Have you tried:
a_fetchdata<-format(a_fetchdata,"%Y-%m-%d %H:%M:%S")
before writing the data?
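As a small illustration with a made-up timestamp: `format()` freezes the full date-time representation as character, so nothing is dropped when the data are written out:

```r
# invented timestamp for illustration
x <- as.POSIXct("2016-03-10 09:00:00", tz = "UTC")
format(x, "%Y-%m-%d %H:%M:%S")   # "2016-03-10 09:00:00"
```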
Jim
On Thu, Mar 10, 2016 at 10:14 PM, Peter Neumaier
wrote:
> Hi all, sorry for double/cross posting, I have sent an initial, similar
> question
> accidentally to r-sig-finance.
>
> I am wri
Hi Rainer,
You can use the text.width argument and override the calculated legend
text widths.
Jim
On Sat, Mar 12, 2016 at 1:01 AM, Rainer M Krug wrote:
> Hi
>
> assume the following code:
>
> --8<---cut here---start->8---
> plot(1,1)
> legend(x="topleft", le
Hi Axel,
It seems to me that cluster analysis could be what you are seeking.
Identify the clusters of different combinations of fatty acids in the
oils. Do they correspond to location? If so, is there a method to
predict the cluster membership of a new set of measurements? Have a
look at the cluste
Hi Ragia,
Improving the efficiency of a program usually requires detailed
analysis of what it is doing and how those operations can be performed
more rapidly. That is to say, without knowing what the program is
supposed to accomplish and how it is doing it now, very little help
can be provided. One
Hi Kwabena,
Try this:
kfa.df<-read.table(text="Date/Time,PR,SW,TP,SM,SHF,CO2
28.11.2011 17:39:49,978.4,13.15,30.5,20,NA,NA
28.11.2011 17:50:00,978.5,13.11,30.4,20,NA,NA
28.11.2011 18:00:00,978.8,13.14,30.3,20,NA,NA
28.11.2011 18:10:00,979,13.07,30.1,20,NA,NA
28.11.2011 18:20:00,979.2,13.1,30,20,NA
Hi Andre,
Try this:
plot(x= Index, y=Values, ylim= c(-16,16), pch= 19,
col = "blue",yaxt="n")
points (Log, pch = 19, col="green")
axis(2,at=seq(-16,16,by=4))
Jim
On Thu, Mar 17, 2016 at 8:32 AM, André Luis Neves wrote:
> Dear all:
>
>
> I was wondering how I modify the plot command below so th
Hi santib2002,
If you only have the XLS file, you could install the "xlsx" package
and read that into R.
install.packages("xlsx")
library(xlsx)
read.xlsx("table1.xls", sheetIndex = 1)
If you can load it into Excel and export it as CSV format, you can
read it with the read.csv function:
read.csv("table1.csv")
Jim
On Sun, Mar
Hi Eliza,
I think you only need to change the margins and the placement of the
right axis label:
colours <- c("black", "grey")
par(mar=c(5,4,4,4))
barplot(prop,ylab = "Numbers", cex.lab = 1.5, cex.main = 1.4,
beside=TRUE, col=colours,ylim=c(0,250))
axis(side=3,xlim=c(0,45), at=c(6,12,18,24,30,36,
Hi Christian,
This untested script might get you going (assuming you want a CSV format):
for(affdf in 1:length(out)) {
names(out[[affdf]])<-lapply(strsplit(names(out[[affdf]]),"[.]"),"[",2)
write.csv(out[[affdf]],file=paste("affymetrix",affdf,".csv",sep=""))
}
Jim
On Wed, Mar 23, 2016 at 6:32
euroscience Theme
> Department of Neurosurgery
> Department of Radiation Oncology
> UAB | The University of Alabama at Birmingham
> Hazelrig-Salter Radiation Oncology Center | 1700 6th Ave S | Birmingham, AL
> 35233
> M: 919.724.6890 | ctsta...@uab.edu | cstackho...@uabmc.edu |
> ct
Hi Burhan,
As all of your values seem to be character, perhaps:
country.df<-as.data.frame(matrix(temp.data,ncol=22,byrow=TRUE)[,2:21])
if there really are 2 country names and 20 values for each country. As
Boris has pointed out, there are different numbers of values following
the country names in
Hi Jennifer,
This is very hacky, but I think it does most of what you want. I can't
really work out what "Sample Size" is supposed to be:
MOERS<-data.frame(MOE[MOE$SEX_P=="Males_0.1",c("Meff","Proportion")],
Males_0.1=MOE[MOE$SEX_P=="Males_0.1","MOE"],
Females_0.1=MOE[MOE$SEX_P=="Females_0.1","M
Hi Hamid,
This looks a bit like a repeated measures analysis, but for a simple
introduction to ANCOVA using R see the latter part of the following:
http://www.stat.columbia.edu/~martin/W2024/R8.pdf
Jim
On Mon, Mar 28, 2016 at 3:52 AM, HAMID REZA ASHRAFI via R-help
wrote:
> HiI have a set of dat
Hi Jessy,
I had a look at:
http://www.rdocumentation.org/packages/PerformanceAnalytics/functions/Modigliani
and it doesn't include a "Value" section, so I don't know what the
return value should be. Have you tried running the examples to see
what they return?
Jim
On Sat, Mar 26, 2016 at 12:10 A
Hi Farnoosh,
Despite my deep suspicion that this answer will solve a useless
problem, try this:
last_subject<-0
keep_deps<-c("B","D","F")
keep_rows<-NULL
for(rowindex in 1:dim(df)[1]) {
if(df[rowindex,"Subject"] != last_subject) {
last_subject<-df[rowindex,"Subject"]
start_keeping<-0
}
if(d
Hi Norman,
To check whether all values of an object (say "x") fulfill a certain
condition (==0):
all(x==0)
If your object (X) is indeed a data frame, you can only do this by
column, so if you want to get the results:
X<-data.frame(A=c(0,1:10),B=c(0,2:10,9),
C=c(0,-1,3:11),D=rep(0,11))
all_z
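The reply is truncated at this point; a per-column sketch of the check, re-creating the data frame from the message, might look like this:

```r
# re-create the example data frame and test each column for all zeros
X <- data.frame(A = c(0, 1:10), B = c(0, 2:10, 9),
                C = c(0, -1, 3:11), D = rep(0, 11))
sapply(X, function(col) all(col == 0))
#     A     B     C     D
# FALSE FALSE FALSE  TRUE
```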
; work.
>
> I hope you understood.
>
> Thanks a lot
>
> Cheers
>
>
> On Thu, Mar 31, 2016 at 1:13 PM, Jim Lemon wrote:
>>
>> Hi Norman,
>> To check whether all values of an object (say "x") fulfill a certain
>> condition (==0):
>>
Perhaps if you go back to the example that I sent, you will notice
that those vectors of logical values (which_cols, which_rows) were
among the results. Have you tried:
names(X)[which_cols]
to see whether it is what you want?
Jim
On Thu, Mar 31, 2016 at 2:42 PM, Norman Pat wrote:
> Hi Jim,
>
>
Hi Pedro,
This may not be much of an improvement, but it was a challenge.
selvec<-as.vector(matrix(c(nsel,unlist(by(toy$diam,toy$group,length))-nsel),
ncol=2,byrow=TRUE))
TFvec<-rep(c(TRUE,FALSE),length.out=length(selvec))
toynsel<-rep(TFvec,selvec)
by(toy[toynsel,]$diam,toy[toynsel,]$group,mean)
Hi message,
What you can do is this:
barpos<-barplot(axes=FALSE, ann=FALSE, horiz=TRUE,
testbarplot[,2], ylab='group', xlab= '(x values)',
xlim=c(0,10),space=c(1,0,0,0, 1,0,0,0))
text(testbarplot[,2],barpos,
c('a', 'b', 'c', 'd', 'c', 'e','f', 'g'), pos=4)
as I think you want the values displa
Hi emeline,
I think there may be a minor language problem. If you mean the
"variation" rather than the "variance" in survival, you may simply
want a test of proportions.
Jim
On Mon, Apr 4, 2016 at 7:48 PM, emeline mourocq wrote:
> Hello,
>
>
>
> I investigate survival until the following year (0
Hi sst,
You could set up your sample sizes as a matrix if you don't want all
of the combinations:
sample_sizes<-matrix(c(10,10,10,25,25,25,...),nrow=2)
and then use one loop for the sample sizes:
for(ss in 1:dim(sample_sizes)[2]) {
ss1<-sample_sizes[1,ss]
ss2<-sample_sizes[2,ss]
then step thr
Okay, here is a more complete example:
sample_sizes<-
matrix(c(10,10,10,25,25,25,25,50,25,100,50,25,50,100,100,25,100,100),
nrow=2)
# see what it looks like
sample_sizes
ssds<-c(4,4.4,5,6,8)
nssds<-length(ssds)
results<-list()
# first loop steps through the sample
for(ss in 1:dim(sample_sizes)[2
Hi Nils,
I don't have the GMD library, but this looks like some axis labels are
being ignored to avoid overlapping. If heatmap.3 uses base graphics
you can probably get your labels by passing empty strings to heatmap.3
and then displaying the axis with staxlab (plotrix).
Jim
On Wed, Apr 6, 2016
RUE)$p.value
> t_unequal[i]<-t.test(x_norm1,y_norm2,var.equal=FALSE)$p.value
> mann[i] <-wilcox.test(x_norm1,y_norm2)$p.value
>
> ##store the result into matrix defined before
> matrix_Equal<-t_equal
> matrix_Unequal<-t_unequal
> matrix_mann<-mann
>
> ##print res
Hi Matthias,
It looks to me as though you could do this with a couple of loops:
temps<-rnorm(400,14,0.05)
ttind<-NULL
for(ti in 1:(length(temps)-9)) {
if(temps[ti]-temps[ti+9] >= 0.1 && max(temps[ti]-temps[ti+1:9]) > -0.05)
ttind<-c(ttind,ti)
}
cat("\t\t",paste("Year",1:10,sep=""),"\n")
for(ti
Hi John,
First, apply isn't guaranteed to work on data frames. There are two
easy ways to do something like this, but we had better have a data
frame:
guppy<-data.frame(taste=rnorm(10,5),
crunch=rnorm(10,5),satiety=rnorm(10,5))
If you just want to apply a function to all or a subset of columns o
Hi Miluji,
Try this:
arrows(-100,-140,100,-140,code=3)
Jim
On Fri, Apr 8, 2016 at 10:24 PM, Miluji Sb wrote:
> I am trying to draw maps for the world using:
>
> library(rworldmap)
> library(maptools)
> library(RColorBrewer)
>
>
> tmp2<- dput(head(pece,10))
> structure(list(iso3 = c("AUS", "AUT
Hi Fabien,
I was going to send this last night, but I thought it was too simple.
Runs in about one millisecond.
df<-data.frame(freq=runif(1000),
strings=apply(matrix(sample(LETTERS,10000,TRUE),ncol=10),
1,paste,collapse=""))
match.ind<-grep("DF",df$strings)
match.ind
[1] 2 11 91 133 169 444
Hi Stefano,
As the help page says:
"The default for the format methods is "%Y-%m-%d %H:%M:%S" if any
element has a time component which is not midnight, and "%Y-%m-%d"
otherwise. This is because when the result is printed, it uses the
default format. If you want a specified output representation:
Hi Milu,
I just realized that by "the bottom of the map" you may mean "beneath
the map", in which case you should use:
par(xpd=TRUE)
arrows(...)
par(xpd=FALSE)
Jim
On Mon, Apr 11, 2016 at 11:50 PM, Miluji Sb wrote:
> Dear David,
>
> Thank you very much for your replies! I didn't know about par(
Hi Milu,
There is a two-headed arrow on the image you sent, and it seems to be
where you specified. Did you want it beneath the map, as:
par(xpd=TRUE)
arrows(-22,54.75,-22,74,code=3)
par(xpd=FALSE)
Jim
On Tue, Apr 12, 2016 at 7:58 PM, Miluji Sb wrote:
> Dear Jim,
>
> Thanks again! I do want the
hanks again. I am getting the two-headed arrow but I cannot seem to get the
> coordinates right for the arrow to appear beneath the map. These coordinates
> puts the arrow on the left hand side. Thanks again!
>
> Sincerely,
>
> Milu
>
> On Tue, Apr 12, 2016 at 1:15 PM, Jim Le
Hi Tom,
What you want is a list rather than a data frame. So:
df<-read.table(text=" Dat1 Dat2 Dat3
1 1 5 4
2 7 7 9
3 3 3 5
4 2 NA 5
5 9 NA NA",
header=TRUE)
dflist<-as.list(df)
na.remove<-function(x) return(x[!is.na(x)])
sapply(dflist,na.remove)
Jim
O
Hi Dr Singh,
The object mtcars is a data frame and the mean is not defined for a
data frame. If you try it on a component of the data frame for which
mean is defined:
by(mtcars$mpg,mtcars$am,mean)
mtcars$am: 0
[1] 17.14737
mtcars$am: 1
Hi Elahe,
When you want to include a usable toy data frame, it's better to use
something like:
dput(mydata[1:100,])
So if we have a data frame like this:
mydata<-data.frame(RE=sample(5:50,100,TRUE),
LU=sample(1500:4500,100),
COUNTRY=factor(sample(c("DE","FR","JP","AU"),100,TRUE)),
Light=fac
Hi tan sj,
It is by no means easy to figure out what you want without the code,
but If I read your message correctly, you can run the loops either
way. When you have nested loops producing output, it is often a good
idea to include the parameters for each run in the output as well as
the result so
Hi John,
Both the "right" and "include.lowest" arguments are usually useful
when there are values equal to those in "breaks". A value equal to a
break can fall on either side of the break depending upon these
arguments:
> nums<-1:100
> table(cut(nums,breaks=seq(0,100,by=10)))
(0,10] (10,20] (2
Hi Jyoti,
From what you and others have written I am going to guess that you are
using Windows, that Windows has hidden the extension on the file you
are trying to read, and the filename is actually:
galenv.gal
I may be wrong, but this problem has beset others before you.
Jim
>>> I tried to
Hi Ansley,
Without your data file (or a meaningful subset) we can only guess, but
you may be trying to define groups on the columns rather than the rows
of the data set. Usually rows represent cases and each case must have
a value for the grouping variable.
Jim
On Tue, Apr 19, 2016 at 6:33 AM, A
Hi Jeem,
First, please send questions like this to the help list, not me.
I assume that you are in a similar position to sjtan who has been
sending almost exactly the same questions.
The problem is not in the loops (which look rather familiar to me) but
in your initial assignments at the top. For
Hi Si Jie,
Again, please send questions to the list, not me.
Okay, I may have worked out what you are doing. The program runs and
produces what I would expect in the rightmost columns of the result
"g".
You are storing the number of each test for which the p value is less
than 0.05. It looks to m
Hi Michael,
At a guess, try this:
iqr<-function(x) {
return(paste(round(quantile(x,0.25),0),round(quantile(x,0.75),0),sep="-"))
}
.col3_Range=iqr(datat$tenure)
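A self-contained version of the helper with a quick check on invented data (`datat$tenure` from the thread isn't available here):

```r
# paste the rounded 25th and 75th percentiles into an "lo-hi" string
iqr <- function(x) {
  return(paste(round(quantile(x, 0.25), 0),
               round(quantile(x, 0.75), 0), sep = "-"))
}
iqr(1:100)   # "26-75" (default type-7 quantiles, rounded)
```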
Jim
On Tue, Apr 19, 2016 at 11:15 AM, Michael Artz wrote:
> Hi,
> I am trying to show an interquartile range while grouping values
Hi Catalina,
The error message is pretty clear. min(diff(breaks)/100) evaluates to
a negative number. Perhaps the sort order for the values in "breaks"
has changed.
Jim
On Tue, Apr 19, 2016 at 9:35 AM, Catalina Aguilar Hurtado
wrote:
> Hi I am trying to understand what happen with the heatmap.2
Hi pele,
There are probably more elegant ways to do this using some function,
but this might help:
psdat<-read.table(text="ID DATE ITEM
1 1/1/2014 P1
1 1/15/2014 P2
1 1/20/2014 P3
1 1/22/2014 P4
1 3/10/2015 P5
2 1/13/2015 P1
2 1/20/2015 P2
2 1/28/2015 P3
2
Hi sri,
As your problem involves a few logical steps, I found it easier to
approach it in a stepwise way. Perhaps there are more elegant ways to
accomplish this.
svdat<-read.table(text="Count id name type
117 335 sally A
19 335 sally A
167 335 sally B
18 340 susan A
56 340 susan A
22 340 susan B
5
to achieve my
> expected output.
> Problems: 1) x, y are coming as logical rather than values as I mentioned in
> my post
>2) The values that I get for Max A and Max B not correct
>3) It looks like a pretty big data, but I just need to
> concatenate th
In R, square brackets [] are called "extraction operators" as they are
interpreted so as to "extract" the parts of an object specified by the
information within them. Your message contained only part of the line
below:
AltB<-svdatstr[row,indicesA][svdatstr[row,indicesA] wrote:
> Dear Jim,
>
> I ho
AM
> Subject: Fwd: clock24.plot
> To: "Ogbos Okike"
> Cc:
>
>
> -- Forwarded message --
> From: Ogbos Okike
> Date: Fri, Apr 22, 2016 at 5:28 AM
> Subject: Re: clock24.plot
> To: Jim Lemon
>
>
> Dear Jim,
> Thank you for your time. I am work
Hi Carolien,
There was a recent request involving a change in the functionality of
R that may be relevant to your problem. The usual trigger for the
"missing value where TRUE/FALSE needed" error is a conditional
expression that doesn't evaluate because of an NA value. As some
coercion or summary fu
Hi Edward,
I'm not really sure that this is what you want as I can't figure out
what the "earn" factor is, but:
epdat[order(epdat$Var2,epdat$Freq,decreasing=TRUE),]
Jim
On Sat, Apr 23, 2016 at 4:08 AM, Patzelt, Edward wrote:
> Hi R-Help,
>
> data at bottom
>
> I've been struggling with a probl
Hi Adrian,
This is probably taking a long time. I first tried with 7x10^6 times
and values and it took several minutes. The following code does what I
expected:
amdat<-data.frame(time=1:70,value=rnorm(70,-4))
amdat$value[amdat$value<0]<-0
sum(amdat$value)
[1] 5.07101
plot(amdat$time,amdat$
Hi Sunny,
Try this:
# notice that I have replaced the fancy hyphens with real hyphens
end<-c("2001-","1992-","2013-","2013-","2013-","2013-",
"1993-2007","2010-","2012-","1984-1992","1996-","2015-")
splitends<-sapply(end,strsplit,"-")
last_bit<-function(x) return(x[length(x)])
sapply(splitends,last_bit)
J
Hi Georg,
You could just use this:
Umsatz_2011<-c(1,2,3,4,5,NA,7,8,NA,10)
Kunde_2011<-rep(0:1,5)
Check_Kunde_2011<-
c("OK","Check")[as.numeric(is.na(Umsatz_2011) & Kunde_2011 == 1)+1]
Check_Kunde_2011 will be a vector of strings.
Jim
On Tue, Apr 26, 2016 at 6:09 PM, wrote:
> Hi All,
>
> I n
Hi jpm miao,
You can get CSV files that can be imported into Excel like this:
library(prettyR)
sink("excel_table1.csv")
delim.table(table(df[,c("y","z")]))
sink()
sink("excel_table2.csv")
delim.table(as.data.frame(table(df[,c("y","z")])),label="")
sink()
sink("excel_table3.csv")
delim.table(as.mat
Hi Atte,
I'm not sure that this actually works, and it's very much a quick hack:
sums_x<-function(x,addends=1,depth=1) {
if(depth==1) {
addends<-rep(addends,x)
addlist<-list(addends)
} else {
addlist<-list()
}
lenadd<-length(addends)
while(lenadd > 2) {
addends<-c(addends[depth]+1,add
b
>
> A B Total
> A 8 10 18
> B 7 5 12
> C 9 11 20
> Total 24 26 50
>
>> sink("temp_table3.csv")
>> delim.xtab(alphatab,pct=NA,interdigitate=TRUE)
>> sink()
>> sink("temp_table3.csv", append=TRUE)
>> delim.table(alphatab)
>>
Hi Lars,
A mystery, but for the bodgy characters in your error message. Perhaps
there is a problem with R trying to read a different character set
from that used in the package.
Jim
On Sat, Apr 30, 2016 at 8:22 PM, Lars Bishop wrote:
> Hello,
>
> I can’t seem to be able to install packages on a
I can do at the moment.
Jim
On Sun, May 1, 2016 at 11:19 AM, jpm miao wrote:
> Thanks.
> Could we print the row/column names, "alpha1" and "alpha2" to the csv file?
>
> 2016-04-30 17:06 GMT-07:00 Jim Lemon :
>>
>> Hi jpm miao,
>> I think you ca
Hi Yasil,
If you look at what happens to a[,3] after the "strsplit" it is easy:
> a[,3]
[1] "a,b" "c,d"
Here a[,3] is two strings
a$c <- strsplit(a$c, ",")
> a[,3]
[[1]]
[1] "a" "b"
[[2]]
[1] "c" "d"
Now a[,3] is a two element list. What R probably did was to take the
first component of a[,3]
Hi Steven,
If this is just a one-off, you could do this:
grepl("age",x) & nchar(x)<4
returning a logical vector containing TRUE for "age" but not "age2"
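On a small invented vector the combined test picks out exactly the short match (anchoring the pattern with `grepl("^age$", x)` would be a more general way to get an exact match):

```r
x <- c("age", "age2", "wage", "sex")
# "age" must appear AND the whole string must be shorter than 4 characters
grepl("age", x) & nchar(x) < 4
# TRUE FALSE FALSE FALSE
```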
Jim
On Wed, May 4, 2016 at 3:45 PM, Steven Yen wrote:
> Dear all
> In the grep command below, is there a way to identify only "age" and
> no
Hi Andreas,
Try installing plyr, arm, scales and mi separately. If you get an
error message about a version mismatch, that's where your problem is.
_Sometimes_ upgrading R will fix it, if the problem is that the
version you are downloading is too new for your R version.
Jim
On Thu, May 5, 2016 at
Hi Emily,
I haven't tested this exhaustively, but it seems to work:
df<-data.frame(id=2001:3300,yrssmoke=sample(1:40,1300,TRUE),
cigsdaytotal=sample(1:60,1300,TRUE),yrsquit=sample(1:20,1300,TRUE))
dfNA<-sapply(df$id,"%in%",c(2165,2534,2553,2611,2983,3233))
# create your NA values
df[dfNA,c("yrsqu
Hi Prasad,
You are probably looking for linear modelling of some sort. The first
thing to do is to read the data into R (if you haven't already done
so). You will almost invariably have a _data frame_ in which the
columns will contain values for at least year and profit.
Then plot the profits of A
Hi Luca,
The function readHTMLtable is in the XML package, not httr. Perhaps
that is the problem as I don't see a dependency in httr for XML
(although xml2 is suggested).
Jim
On Tue, May 10, 2016 at 2:58 PM, Luca Meyer wrote:
> Hello,
>
> I am trying to run a code I have been using for a few ye
Hi Georg,
I don't suppose that you have:
1) checked that the file "all.Rout" exists somewhere?
2) if so, looked at the file with Notepad, perhaps?
3) let us in on the secret by pasting the contents of "all.Rout" into
your message if it is not too big?
At a guess, trying:
close(zz)
might get
the error persists.
>
> To me it looks like R is still accessing the file and not releasing the
> connection for other programs. close(zz) should have solved the problem
> but unfortantely it doesn't.
>
> What else could I try?
>
> Kind regards
>
> Georg
>
>
Hi Shashi,
The assumption that anyone on the list apart from yourself knows what
"some calculation" involves is incorrect. I suspect that "what is
wrong" may be one of two things:
1) "some calculation" includes a very large number of operations,
perhaps leading to "disk-thrashing" when your 16GB o
; 0)
> {
> out <- sum / ((sqrt(sums1) * sqrt(sums2)))
> }else
> {
> out <-0
> }
> End Calculation
>
> vec1 <- append(vec1,out);
> vec1 <-append(vec1, "1")
> vec2
Hi Witold,
You could try Ben Bolker's "clean.args" function in the plotrix package.
Jim
On Wed, May 11, 2016 at 6:45 PM, Witold E Wolski wrote:
> Hi,
>
> I am looking for a documentation describing how to manipulate the
> "..." . Searching R-intro.html gives to many not relevant hits for
> "...
Hi Jan,
This might be helpful:
chop_string<-function(x,ends) {
starts<-c(1,ends[-length(ends)]-1)
return(substring(x,starts,ends))
}
Jim
On Thu, May 12, 2016 at 7:23 AM, Jan Kacaba wrote:
> Here is my attempt at function which computes margins from positions.
>
> require("stringr")
> require
Hi again,
Sorry, that should be:
chop_string<-function(x,ends) {
starts<-c(1,ends[-length(ends)]+1)
return(substring(x,starts,ends))
}
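A self-contained check of the corrected helper on an invented string:

```r
# split a string into pieces ending at the given character positions
chop_string <- function(x, ends) {
  starts <- c(1, ends[-length(ends)] + 1)
  return(substring(x, starts, ends))
}
chop_string("abcdefgh", c(3, 6, 8))
# "abc" "def" "gh"
```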
Jim
On Thu, May 12, 2016 at 10:05 AM, Jim Lemon wrote:
> Hi Jan,
> This might be helpful:
>
> chop_string<-function(x,ends) {
>
Hi Kristi,
Multiply the standard error by the square root of the sample size.
Jim
On Tue, May 17, 2016 at 8:09 PM, Kristi Glover
wrote:
> Dear R User,
>
> I have a data with a mean and Standard Error (SE) but no sample size, I am
> wondering whether I can compute the standard deviation (SD) wi
Hi again,
Sorry, didn't read that correctly. No.
Jim
On Tue, May 17, 2016 at 8:48 PM, Jim Lemon wrote:
> Hi Kristi,
> Multiply the standard error by the square root of the sample size.
>
> Jim
>
>
> On Tue, May 17, 2016 at 8:09 PM, Kristi Glover
> wrote:
>&g
Hi Shailaja,
If you just want a line of words, it's not too difficult if you have
the word frequencies:
# take a common sentence
sentence<-"The quick brown fox jumps over the lazy dog"
words<-unlist(strsplit(sentence," "))
# make up some word frequencies
wordfreq<-c(10,1,2,2,3,4,10,6,5)
library(pl
Hi John,
I may be misunderstanding what you want, but this seems to produce the
output you specify:
A<-sample(-10:100,100)
i<-rep(1:10,c(5:13,19))
# replace the last value of x with the maximum
max_last<-function(x) return(c(x[-length(x)],max(x)))
as.vector(unlist(by(A,i,max_last)))
and this is w
Hi Maryam,
Your labels have been "greeked" as the font is too small to be
displayed properly. If you must use PNG format, specify your image
file at least twice as high.
png("pheatmap.png",width=1254,height=5000)
PDF would be a better choice as you can just zoom in and scroll down.
Jim
On Mon,
Hi Syela,
Are the values in ASFR monotonically increasing with year?
Jim
On Tue, Oct 4, 2016 at 4:23 AM, Syela Mohd Noor wrote:
> Hi all, I had a problem with the parameter estimation of the Brass Gompertz
> model for my dissertation. I run the code for several times based on
> different years