Here is another possibility:
library(stringr)
library(data.table)  # for fread

readterm <- function(term, text) {
  # split the text on each occurrence of term and read every resulting block
  lapply(str_split(text, fixed(term))[[1]][-1],
         fread, skip = 4, nrows = 5)
}
easymethod <- function(whalines) {
  whalines <- str_c(whalines, collapse = "\n")
  lapply(c(srchStr1, srchStr2), readterm, text = whalines)
}
Sorry, I was on my phone and did not see that you were already using
these functions, while completely missing their vectorized nature.
Consider the following:
#
# after executing your sample code
slowmethod <- function(whalines) {
  lines <- whalines
  mc_list <- NULL
  # completion sketch: test one line at a time, growing the result
  for (i in seq_along(lines))
    if (grepl(srchStr1, lines[i])) mc_list <- c(mc_list, i)
  mc_list
}
?readLines
?grep
?textConnection
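A minimal sketch of how those three pieces fit together, assuming a hypothetical input whose data blocks are each introduced by a "HEAD" marker line (the marker and block sizes here are made up for illustration):

```r
# Hypothetical input: two 2-row blocks, each preceded by a "HEAD" marker line
txt <- c("HEAD", "1 2", "3 4", "HEAD", "5 6", "7 8")

starts <- grep("HEAD", txt)  # vectorized search, no loop needed
block1 <- read.table(textConnection(txt[(starts[1] + 1):(starts[1] + 2)]))
block1
#   V1 V2
# 1  1  2
# 2  3  4
```

With a real file, `txt` would come from `readLines(filename)`; the rest is the same.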
On July 24, 2019 11:54:07 AM PDT, "Morway, Eric via R-help" wrote:
>The small reproducible example below works, but is way too slow on the
>real problem. The real problem is attempting to extract ~2920 repeated
>arrays from a 60 Mb file and takes ~80 minutes.
Hello,
Instead of read.table use
data.table::fread
It's an order of magnitude faster, and all you have to do is change
the function; all the arguments are the same (in this case).
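For instance, a sketch of the drop-in swap on toy data (note that fread returns a data.table, so wrap it in as.data.frame() if downstream code expects a plain data frame):

```r
library(data.table)

txt <- c("1 2 3", "4 5 6")
# same skip/nrows arguments that read.table would take
dt <- fread(text = txt, skip = 0, nrows = 2)
```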
Hope this helps,
Rui Barradas
On 24/07/19 at 20:18, Rui Barradas wrote:
Hello,
This is far from a complete answer.
A quick one: no loops.
mc_list2 <- grep(srchStr1, lines)
tmp_list2 <- grep(srchStr2, lines)
identical(mc_list, mc_list2)   # [1] TRUE
identical(tmp_list, tmp_list2) # [1] TRUE
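To see why dropping the loop matters, here is a small self-contained contrast between the element-at-a-time pattern and a single vectorized grep call (the sample lines are invented for illustration):

```r
lines <- c("mc start", "data", "tmp start", "data", "mc start")

# loop version: grepl on one element at a time, growing the result
mc_loop <- integer(0)
for (i in seq_along(lines)) {
  if (grepl("mc", lines[i])) mc_loop <- c(mc_loop, i)
}

# vectorized version: one call over the whole vector
mc_vec <- grep("mc", lines)

identical(mc_loop, mc_vec)  # TRUE
```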
Another one: don't extend lists or vectors inside loops; reserve the memory beforehand.
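A quick illustration of that point: growing a vector reallocates it on every iteration, while pre-allocating reserves the memory once.

```r
n <- 10000

# growing: reallocates the vector on every iteration
grow <- numeric(0)
for (i in 1:n) grow <- c(grow, i^2)

# pre-allocated: memory reserved once, elements filled in place
prealloc <- numeric(n)
for (i in 1:n) prealloc[i] <- i^2

identical(grow, prealloc)  # TRUE
```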