Dear R users,
I am using the ltm and eRm packages to analyse my polytomous data, fitting an
item response theory model and a Rasch model, respectively.
I'm having a problem with the error below:
> library(eRm)
> library(ltm)
> HT <- read.csv("C:/Dropbox/Analysis R_2023/HT.csv")
> response_columns <- HT[
... and just for fun, here is a non-string version (more appropriate for
complex state labels??):
gvec <- function(ntimes, states, init, final, repeats = TRUE)
## ntimes: integer, number of unique times
## states: vector of unique states
## init: initial state
## final: final state
{
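(The body of gvec() is cut off in this preview. Purely as an editorial, hedged
sketch of what a non-string version might look like, my own guess rather than
the original gvec(): one column of state labels per time point instead of
pasted strings.)
## hypothetical sketch; assumes ntimes >= 3 and length-1 init/final
gvec_sketch <- function(ntimes, states, init, final, repeats = TRUE) {
  mid <- expand.grid(rep(list(states), ntimes - 2), stringsAsFactors = FALSE)
  out <- cbind(init, mid, final)        # one row per path, one column per time
  names(out) <- paste0("t", seq_len(ntimes) - 1)
  if (!repeats) {                       # drop paths with consecutive repeats
    keep <- apply(out, 1, function(p) all(p[-1] != p[-length(p)]))
    out <- out[keep, , drop = FALSE]
  }
  out
}
gvec_sketch(3, LETTERS[1:5], "B", "E", repeats = FALSE)  # rows B-A-E, B-C-E, B-D-E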
On Mon, 4 Sep 2023, Ivan Calandra wrote:
Thanks Rui for your help; that would be one possibility indeed.
But am I the only one who finds that behavior of aggregate() completely
unexpected and confusing? Especially considering that dplyr::summarise() and
doBy::summaryBy() deal with NAs differe
Well, if strings with repeats (as you defined them) are to be excluded, I
think it's simple just to use regular expressions to remove them.
e.g.
g <- function(ntimes, states, init, final, repeats = TRUE)
## ntimes: integer, number of unique times
## states: vector of unique states
## init
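(The rest of g() is cut off above. As a hedged editorial illustration of the
regular-expression idea, not Bert's actual code, assuming single-character
state labels:)
paths <- c("BAE", "BBE", "BCE", "BDE", "BEE")
paths[!grepl("(.)\\1", paths, perl = TRUE)]  # back-reference drops strings with a repeated consecutive state
## [1] "BAE" "BCE" "BDE"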
My initial response was buggy and also used a deprecated function.
Also, it seems possible that one may want to rule out any strings where the
same state appears consecutively.
I say that such a string has a repeat.
myExpand <- function(v, n) {
  do.call(tidyr::expand_grid, replicate(n, v, simplify = FALSE))
}
Sorry, my last line should have read:
If neither this nor any of the other suggestions is what is desired, I
think the OP will have to clarify his query.
Bert
On Mon, Sep 4, 2023 at 12:31 PM Bert Gunter wrote:
> I think there may be some uncertainty here about what the OP requested. My
> inter
I think there may be some uncertainty here about what the OP requested. My
interpretation is:
n different times
k different states
Any state can appear at any time in the vector of times and can be repeated
Initial and final states are given
So modifying Tim's expand.grid() solution a bit yields:
Does this work for you?
t0<-t1<-t2<-LETTERS[1:5]
al2<-expand.grid(t0, t1, t2)
al3<-paste(al2$Var1, al2$Var2, al2$Var3)
al4 <- gsub(" ", "", al3)
head(al4)
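(Editorial aside, not from the thread: a hedged sketch of the same
expand.grid() idea with the initial and final states held fixed, so only the
middle time point is enumerated.)
states <- LETTERS[1:5]
mid <- expand.grid(t1 = states)
paths <- paste0("B", mid$t1, "E")
paths
## [1] "BAE" "BBE" "BCE" "BDE" "BEE"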
Tim
-Original Message-
From: R-help On Behalf Of Eric Berger
Sent: Monday, September 4, 2023 10:17 AM
To: Christofer Bogaso
Cc: r-h
On Mon, 04 Sep 2023 12:05:38 +
Christophe Bousquet writes:
> I will try compiling R from source when I am back from holidays, and
> ask you if I need assistance.
Make sure to compile with DEBUG=1 so that the compiler flags needed to
emit debugging information will be enabled. Good luck!
--
B
This is a great find for those of us lurking on this thread. Thanks for
sharing Greg (and of course Paul).
On 8/30/2023 3:52 PM, Greg Snow wrote:
Stephen, I see lots of answers with packages and resources, but not
book recommendations. I have used Introduction to Data Technologies
by Paul Murrell
Ivan:
Just one perhaps extraneous comment.
You said that you were surprised that aggregate() and group_by() did not
have the same behavior. That is a misconception on your part. As you know,
the tidyverse recapitulates the functionality of many base R functions; but
it makes no claims to do so in
The function purrr::cross() can help you with this. For example:
f <- function(states, nsteps, first, last) {
  paste(first,
        unlist(lapply(purrr::cross(rep(list(states), nsteps - 2)),
                      \(x) paste(unlist(x), collapse = ""))),
        last, sep = "")
}
f(LETTERS[1:5], 3, "B", "E")
[1] "BAE" "BBE" "BCE" "BDE" "BEE"
HTH
Thank you very much for all the responses, especially Duncan's guidance.
I will add some further ideas on workflows below.
There were quite a few views on GitHub, but there is not much to see, as
there is absolutely no documentation. In the meantime I have added a
basic example:
https://gith
> siddharth sahasrabudhe via R-help
> on Sun, 3 Sep 2023 09:54:28 +0530 writes:
> I want to access the .csv file from my github
> repository. While connecting to the Github repository I am
> getting the following error:
> Error in curl::curl_fetch_memory(file) : Timeo
Let's say I have 3 time points, T0, T1, and T2 (the number of such time
points can be arbitrary). At each time point, an object can be in any of 5
states, A, B, C, D, E (the number of such states can be arbitrary).
I need to find all possible ways that the object, starting with state
B (say) at time T0, can be
Haha, got it now: there is an na.action argument to aggregate() (which
defaults to na.omit) that is applied before mean(na.rm = TRUE) is called.
Thank you, Rui, for pointing this out.
So running it with na.pass instead of na.omit gives the same results as
dplyr::group_by()+summarise():
aggregate(
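(The aggregate() call is cut off above. As a hedged editorial sketch of the
point, using made-up toy data rather than the original my_data:)
toy <- data.frame(ID = c("A", "A", "B", "B"),
                  x  = c(1, NA, 3, 5),
                  y  = c(2, 4, NA, 8))
## default na.action = na.omit first drops every row containing any NA
aggregate(cbind(x, y) ~ ID, data = toy, FUN = mean, na.rm = TRUE)
##   ID x y
## 1  A 1 2
## 2  B 5 8
## na.pass keeps those rows and lets mean(na.rm = TRUE) handle the NAs,
## matching dplyr::group_by() + summarise()
aggregate(cbind(x, y) ~ ID, data = toy, FUN = mean, na.rm = TRUE,
          na.action = na.pass)
##   ID x y
## 1  A 1 3
## 2  B 4 8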
> If you're up to compiling R from source [*] and using a symbolic
> debugger [**] to step through Rcmd.exe, we could try to do that.
> Murphy's law says that the copy of Rcmd.exe you'll build from source
> will work well and refuse to reproduce the problem for you to
> investigate. (Beyond that, th
At 12:51 on 04/09/2023, Ivan Calandra wrote:
Thanks Rui for your help; that would be one possibility indeed.
But am I the only one who finds that behavior of aggregate() completely
unexpected and confusing? Especially considering that dplyr::summarise()
and doBy::summaryBy() deal with NAs d
Thanks Rui for your help; that would be one possibility indeed.
But am I the only one who finds that behavior of aggregate() completely
unexpected and confusing? Especially considering that dplyr::summarise()
and doBy::summaryBy() deal with NAs differently, even though they all
use mean(na.rm
At 10:44 on 04/09/2023, Ivan Calandra wrote:
Dear useRs,
I have just stumbled across a behavior in aggregate() that I cannot
explain. Any help would be appreciated!
Sample data:
my_data <- structure(list(ID = c("FLINT-1", "FLINT-10", "FLINT-100",
"FLINT-101", "FLINT-102", "HORN-10", "HORN
> Duncan Murdoch
> on Mon, 4 Sep 2023 04:51:32 -0400 writes:
> On 03/09/2023 10:47 p.m., Jeff Newmiller wrote:
>> Leonard... the reason roxygen exists is to allow markup
>> in source files to be used to automatically generate the
>> numerous files required by standard
> Jeff Newmiller
> on Sun, 03 Sep 2023 19:47:32 -0700 writes:
> Leonard... the reason roxygen exists is to allow markup in
> source files to be used to automatically generate the
> numerous files required by standard R packages as
> documented in Writing R Extensions.
I want to access the .csv file from my GitHub repository. While connecting
to the GitHub repository, I am getting the following error:
Error in curl::curl_fetch_memory(file) :
Timeout was reached: [raw.githubusercontent.com] Failed to connect to
raw.githubusercontent.com port 443 after 5250 ms: Tim
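(Editorial aside with placeholder names: the usual pattern for reading a CSV
directly from a GitHub repository is the raw-content URL; the error above is a
connection timeout rather than a problem with the URL itself.)
## <user>, <repo> and data.csv are placeholders
url <- "https://raw.githubusercontent.com/<user>/<repo>/main/data.csv"
dat <- read.csv(url)
head(dat)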
Thanks, Iago, for the pointer.
It then means that na.rm = TRUE is not applied in the same way within
aggregate() as it is within dplyr::group_by() + summarise(), right? Within
aggregate(), it behaves like na.omit(), that is, it excludes the
incomplete cases (whole rows), whereas with group_by() + su
It seems that the issue is the missing values. If in #1 you use the dataset
na.omit(my_data) instead of my_data, you get the same output as in #2 and in
#4, where all observations with missing data are removed, since you are
including all the variables.
The second dataset has no issue since it ha
Dear useRs,
I have just stumbled across a behavior in aggregate() that I cannot
explain. Any help would be appreciated!
Sample data:
my_data <- structure(list(ID = c("FLINT-1", "FLINT-10", "FLINT-100",
"FLINT-101", "FLINT-102", "HORN-10", "HORN-100", "HORN-102", "HORN-103",
"HORN-104"), Edge
On 03/09/2023 10:47 p.m., Jeff Newmiller wrote:
Leonard... the reason roxygen exists is to allow markup in source files to be
used to automatically generate the numerous files required by standard R
packages as documented in Writing R Extensions.
If your goal is to not use source files this wa
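(Editorial aside: a hedged, made-up illustration of the roxygen markup Jeff
describes, not code from the thread.)
#' Add two numbers
#'
#' @param x,y Numeric vectors.
#' @return The elementwise sum of x and y.
#' @export
add <- function(x, y) x + y
Running roxygen2::roxygenise() (or devtools::document()) in the package source
directory then generates man/add.Rd and the export(add) entry in NAMESPACE.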