Hi Luigi
Try this
library(lattice)
library(latticeExtra)
with(dflu,
     useOuterStrips(
         strip = strip.custom(par.strip.text = list(cex = 0.75)),
         strip.left = strip.custom(par.strip.text = list(cex = 0.75)),
         dotplot(
             average ~ type | target + cluster,
             my.data,
             horizontal = FALSE,
             gr
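For anyone who wants to run something self-contained, here is a sketch of the
same useOuterStrips()/dotplot() idea with made-up data (the 'dflu' and
'my.data' objects from the original post are not available here):

library(lattice)
library(latticeExtra)

# made-up data with the same structure as the formula above
my.data <- expand.grid(type = c("A", "B"),
                       target = c("t1", "t2"),
                       cluster = c("c1", "c2"))
my.data$average <- rnorm(nrow(my.data))

useOuterStrips(
    dotplot(average ~ type | target + cluster,
            data = my.data,
            horizontal = FALSE),
    strip = strip.custom(par.strip.text = list(cex = 0.75)),
    strip.left = strip.custom(par.strip.text = list(cex = 0.75))
)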
Hello!
I found the answer in a very fine book by Prof. Norm Matloff. There is a
section on Rmpi on this exact topic for Linux on pages 192-193. So I
followed those instructions, and I'm set.
Thanks though!
--
Erin Hodgess
Associate Professor
Department of Mathematics and Statistics
Universi
Thank you Rudi and Ulrik.
Rudi, your option worked for the small data set, but when I applied it to
the big data set it took too long and never finished, and I had to kill
it. I don't know why.
Ulrik's option worked fine for the big data set (> 1.5M records)
and took less than 2 minutes.
These two ar
Hello!
I'm trying to run a very simple test with Rmpi via the mpirun command
outside of R. Here is the script file:
Es-MacBook-Pro:~ emhodgess$ cat bb.in
library(Rmpi)
x <- 5
mpi.remote.exec(rnorm(x))
mpi.finalize()
And here is the output:
Es-MacBook-Pro:~ emhodgess$ mpirun -np 4 Rscr
> On Mar 17, 2017, at 6:30 AM, PIKAL Petr wrote:
>
> Is this what you want?
>
> http://opensourceconnections.com/blog/2016/09/17/expanding-data-frequency-table-r-stata/
Or perhaps (assuming the table's name is "tbl"):
dtbl <- as.data.frame(tbl)
xpd <- dtbl[ rep(row.names(dtbl), dtbl$Freq), ]
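A tiny made-up example of that expansion, for anyone following along:

tbl <- table(c("a", "a", "b", "c", "c", "c"))   # toy frequency table
dtbl <- as.data.frame(tbl)                      # columns Var1 and Freq
xpd <- dtbl[ rep(row.names(dtbl), dtbl$Freq), ]
xpd                                             # "a" twice, "b" once, "c" three times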
> On Mar 18, 2017, at 9:52 AM, David Winsemius wrote:
>
>
>> On Mar 17, 2017, at 11:33 AM, Alicia Ellis wrote:
>>
>> I am cleaning some very messy health record lab data. Several of the rows
>> in the VALUE column have text entries and they need to be converted to
>> numeric in the NUMERIC_VAL
> On Mar 17, 2017, at 11:33 AM, Alicia Ellis wrote:
>
> I am cleaning some very messy health record lab data. Several of the rows
> in the VALUE column have text entries and they need to be converted to
> numeric in the NUMERIC_VALUE column based on the values in VALUE and
> DESCRIPTION. For exa
If you are strict about your data formatting then the following is a fast way
of calculating the differences, based on reshaping the data column:
A = matrix(mydata$rslt, nrow=2)
data.frame(exp=1:ncol(A), diff=A[2,]-A[1,])
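A quick worked example of that reshaping, on made-up data with two rslt
values per exp (not the original poster's data):

mydata <- data.frame(exp = rep(1:3, each = 2),
                     rslt = c(10, 12, 20, 23, 30, 35))
A <- matrix(mydata$rslt, nrow = 2)               # one column per exp
data.frame(exp = 1:ncol(A), diff = A[2, ] - A[1, ])
#   exp diff
# 1   1    2
# 2   2    3
# 3   3    5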
alternatively, if the 'exp' values are not guaranteed to be sequential you
Using dplyr:
library(dplyr)
# Counting unique
DF4 %>%
  group_by(city) %>%
  filter(length(unique(var)) == 1)
# Counting not duplicated
DF4 %>%
  group_by(city) %>%
  filter(sum(!duplicated(var)) == 1)
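For illustration, a made-up DF4 (not the original data) run through the first
version:

DF4 <- data.frame(city = rep(c("city1", "city2", "city3"), each = 2),
                  wk = 1:6,
                  var = c(1, 2, 5, 5, 7, 7))
DF4 %>%
  group_by(city) %>%
  filter(length(unique(var)) == 1)
# keeps city2 and city3; city1 has two different 'var' values and is dropped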
HTH
Ulrik
On Sat, 18 Mar 2017 at 15:17 Rui Barradas wrote:
> Hello,
>
> I believe this do
Respected Sirs/Madam,
Good Morning
As part of a project I'm using the elasticnet package in R for sparse PCA. It
would be of great help if you could advise me on how to select the optimum
number of principal components and the number of observations in the
elasticnet package (K, para) in R.
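For reference, a minimal spca() call using the pitprops example data that
ships with elasticnet (a sketch only; how to choose K and para well is
exactly the open question):

library(elasticnet)
data(pitprops)                        # example correlation matrix in the package
fit <- spca(pitprops, K = 6, type = "Gram", sparse = "varnum",
            para = c(7, 4, 4, 1, 1, 1))
fit$pev                               # variance explained by each component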
--
*Thanks & Re
I am cleaning some very messy health record lab data. Several of the rows
in the VALUE column have text entries and they need to be converted to
numeric in the NUMERIC_VALUE column based on the values in VALUE and
DESCRIPTION. For example:
df <- data.frame(VALUE = c("<60", "Positive", "Negative",
Thanks very much. I suspect 50% of my time in R is spent translating
from what I know how to do in SAS (25+ years of heavy use) to what is
equivalent in R. So far, I haven't found anything I can do in SAS that
I can't do in R, with some help. ;-)
Cheers...
On 3/17/2017 1:51 PM, Bert Gunter
On 3/17/2017 1:19 PM, Bert Gunter wrote:
Evan:
You misunderstand the concept of a lagged variable.
Well, lag in R, perhaps (and by my own admission). In SAS, that's exactly
how it works:
data test;
input exp rslt;
cards;
*;
data test2; set test; by exp;
diff=rslt-lag(rslt);
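As an aside, a rough base-R analogue of that lag-within-BY-group step (a
sketch, assuming a data frame 'mydata' with columns exp and rslt sorted by
exp; not code from this thread):

mydata$diff <- ave(mydata$rslt, mydata$exp,
                   FUN = function(x) c(NA, diff(x)))  # NA for the first row of each exp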
On 3/17/2017 12:58 PM, Ulrik Stervbo wrote:
> Hi Evan
>
> you can easily do this by applying diff() to each exp group.
>
> Either using dplyr:
> library(dplyr)
> mydata %>%
> group_by(exp) %>%
> summarise(difference = diff(rslt))
>
> Or with base R
> aggregate(mydata, by = list(group = mydata
Hello,
I believe this does it.
sp <- split(DF4, DF4$city)
want <- do.call(rbind, lapply(sp, function(x)
    if(length(unique(x$var)) == 1) x else NULL))
rownames(want) <- NULL
want
Hope this helps,
Rui Barradas
On 18-03-2017 13:51, Ashta wrote:
Hi all,
I am trying to find a
Hi all,
I am trying to find cities that do not have the same "var" value.
Within a city the var should be the same; otherwise exclude the city from
the final data set.
Here is my sample data and my attempt. City1 and city4 should be excluded.
DF4 <- read.table(header=TRUE, text=' city wk var
city1
Hello,
Is this what you want?
fun <- function(x, a){
    y <- as.numeric(as.character(x))
    z <- paste(a, names(x))
    data.frame(z, y)
}
dat2 <- as.data.frame(TSmodelForecast)
tmp <- lapply(seq_along(dat2), function(i) fun(dat2[[i]], names(dat2)[i]))
result <- do.call(rbind, tmp)
Hi Paul,
A more educated guess is:
print(data.frame(Date = paste(rep(month.abb, 32),
                              rep(1986:2017, each = 12), sep = "-")[-(378:384)],
                 Forecast = mf5$x))
Someone else may be able to tell you how to extract the dates from the
time series object mf5$x
Jim
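One possible way to pull the dates straight out of the ts object, in case it
helps (a sketch; this assumes mf5$x is a monthly ts as in the thread):

tm <- time(mf5$x)
data.frame(Date = paste(month.abb[cycle(mf5$x)], floor(tm + 0.001), sep = "-"),
           Forecast = as.numeric(mf5$x))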
On Sat, Mar 18, 2017 at 8:23 AM, Paul Bernal w