If anyone is interested, I found a solution for lazy lists. A simplified 
version of the construction and accessor functions looks like this:

# The empty list is a thunk that evaluates to NULL.
nil <- function() NULL

# A cell is a thunk that evaluates to a car/cdr pair; both arguments are
# forced so their values are fixed at construction time rather than left
# as unevaluated promises.
cons <- function(car, cdr) {
  force(car)
  force(cdr)
  function() list(car = car, cdr = cdr)
}

# Accessors evaluate the thunk and inspect the result.
is_nil <- function(lst) is.null(lst())
car <- function(lst) lst()$car
cdr <- function(lst) lst()$cdr

The invariant is that a list is always a thunk that evaluates either to NULL 
or to a structure that contains car and cdr, where cdr is another list (i.e. 
another thunk).
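
For example, building a small list and taking it apart again looks like this:

lst <- cons(1, cons(2, nil))
is_nil(lst)            # FALSE: the thunk evaluates to a car/cdr pair
car(lst)               # 1
car(cdr(lst))          # 2
is_nil(cdr(cdr(lst)))  # TRUE: the final cdr is the nil thunk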

Operations on lists can be made lazy by wrapping the expression that computes 
the result in a thunk. Because the expression is passed as a function argument 
it becomes a promise, so it is not evaluated until the thunk is called for the 
first time; after that, the thunk behaves exactly like the unwrapped list 
would.
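
To make the trick concrete, here is a minimal sketch of such a wrapper 
(delay_list is just an illustrative name for the pattern the lazy_thunk 
helpers below use): the expression passed as its argument is a promise that 
is only forced the first time the thunk is called, and R caches its value for 
later calls.

delay_list <- function(lst) {
  # lst is a promise; forcing it yields a list thunk, which we then call
  function() lst()
}

slow_lst <- delay_list({ Sys.sleep(1); cons(42, nil) })
system.time(car(slow_lst))  # about one second: the promise is forced here
system.time(car(slow_lst))  # near zero: the evaluated promise is reused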

So we can, for example, implement lazy reversal and concatenation like this:

reverse <- function(lst) {
  # Eager worker: walks the list and builds the reversed result.
  do_reverse <- function(lst) {
    result <- nil
    while (!is_nil(lst)) {
      result <- cons(car(lst), result)
      lst <- cdr(lst)
    }
    result
  }

  force(lst)
  # do_reverse(lst) is passed as an unevaluated promise, so the actual
  # reversal only happens the first time the returned thunk is called.
  lazy_thunk <- function(lst) {
    function() lst()
  }
  lazy_thunk(do_reverse(lst))
}

cat <- function(l1, l2) {  # note: this masks base::cat in the global environment
  # Eager worker: reverse l1 onto an accumulator, then push that onto l2.
  do_cat <- function(l1, l2) {
    rev_l1 <- nil
    while (!is_nil(l1)) {
      rev_l1 <- cons(car(l1), rev_l1)
      l1 <- cdr(l1)
    }
    result <- l2
    while (!is_nil(rev_l1)) {
      result <- cons(car(rev_l1), result)
      rev_l1 <- cdr(rev_l1)
    }
    result
  }

  force(l1)
  force(l2)
  # As in reverse(), do_cat(l1, l2) is only evaluated on the first access.
  lazy_thunk <- function(lst) {
    function() lst()
  }
  lazy_thunk(do_cat(l1, l2))
}


As an example of how this laziness works, we can test concatenation. 
Concatenating two lists is a fast operation because we don't actually evaluate 
the concatenation. The first time we access the result we pay for both the 
concatenation and the access; a second access is cheap again, because R caches 
the value of the promise once it has been forced.

vector_to_list <- function(v) {
  lst <- nil
  for (x in v) lst <- cons(x, lst)
  reverse(lst)
}

l1 <- vector_to_list(1:10000)
l2 <- vector_to_list(1:10000)

library(microbenchmark)
microbenchmark(lst <- cat(l1, l2), times = 1) # fast: the concatenation is not evaluated yet
microbenchmark(car(lst), times = 1)           # slow: forces the concatenation
microbenchmark(car(lst), times = 1)           # fast again: the promise is already evaluated


Of course, such a lazy list implementation is just a slow way of implementing 
lists, but it makes it possible to combine amortised analysis with persistent 
data structures to implement queues, as described in Okasaki's paper: 
http://www.westpoint.edu/eecs/SiteAssets/SitePages/Faculty%20Publication%20Documents/Okasaki/jfp95queue.pdf
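
To give an idea of where this is heading, here is a rough sketch, entirely my 
own and not the full construction from the paper, of a queue built on the lazy 
cat() and reverse() above: keep a front and a rear list, push onto the rear, 
pop from the front, and lazily append the reversed rear to the front whenever 
the rear grows longer than the front. The names (queue, rotate, enqueue, 
dequeue, front) are only illustrative.

queue <- function(front, rear, front_len, rear_len)
  list(front = front, rear = rear, front_len = front_len, rear_len = rear_len)

empty_queue <- queue(nil, nil, 0, 0)

rotate <- function(q) {
  if (q$rear_len <= q$front_len) q
  else queue(cat(q$front, reverse(q$rear)),  # lazy: nothing is evaluated here
             nil, q$front_len + q$rear_len, 0)
}

enqueue <- function(q, x)
  rotate(queue(q$front, cons(x, q$rear), q$front_len, q$rear_len + 1))

front <- function(q) car(q$front)

dequeue <- function(q)
  rotate(queue(cdr(q$front), q$rear, q$front_len - 1, q$rear_len))

With this, q <- enqueue(enqueue(empty_queue, 1), 2); front(q) gives 1, and why 
laziness is needed to keep this kind of queue efficient when old versions of 
the queue are reused is discussed in the paper.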


Cheers

On 24 Apr 2017, 16.35 +0200, Thomas Mailund <thomas.mail...@gmail.com>, wrote:
> Hi, I’m playing around with ways of implementing lazy evaluation of 
> expressions. In R, function arguments are evaluated as promises but 
> expressions are evaluated immediately, so I am trying to wrap expressions in 
> thunks—functions with no arguments that evaluate an expression—to get 
> something that resembles lazy evaluation of expressions.
>
> As an example, consider this:
>
> lazy <- function(value) {
>   function() value
> }
>
> f <- lazy((1:100000)[1])
>
> If we evaluate f we have to create the long vector and then get the first 
> element. We delay the evaluation until f is called, so the first time we call f we should 
> see a slow operation and if we evaluate it again we should see faster 
> evaluations. If you run this benchmark, you will see that this is indeed what 
> we get:
>
> library(microbenchmark)
> microbenchmark(f(), times = 1)
> microbenchmark(f(), times = 1)
> microbenchmark(f(), times = 1)
> microbenchmark(f(), times = 1)
>
> Now, I want to use this to implement lazy linked lists. It is not 
> particularly important why I want to do this, but if you are interested, it 
> is because you can implement persistent queues with amortised constant time 
> operations this way, which is what I am experimenting with.
>
> I have this implementation of linked lists:
>
> list_cons <- function(elem, lst)
>   structure(list(head = elem, tail = lst), class = "linked_list")
>
> list_nil <- list_cons(NA, NULL)
> empty_list <- function() list_nil
> is_empty.linked_list <- function(x) identical(x, list_nil)
>
>
> You can implement it more simply using NULL as an empty list, but this particular 
> implementation lets me use polymorphism to implement different versions of 
> data structures — the reasoning is explained in chapter 2 of a book I’m 
> working on: https://www.dropbox.com/s/qdnjc0bx4yivl8r/book.pdf?dl=0
>
> Anyway, that list implementation doesn’t evaluate the lists lazily, so I am 
> trying to wrap these lists in calls to lazy().
>
> A simple implementation looks like this:
>
>
> lazy_empty_list <- lazy(empty_list())
> lazy_cons <- function(elm, lst) {
>   lazy(list_cons(elm, lst()))
> }
>
> Now, this works fine for adding an element to an empty list:
>
> lst <- lazy_cons(2, lazy_empty_list)
> lst()
>
> It also works fine if I add another element to an expression for constructing 
> a list:
>
> lst <- lazy_cons(1, lazy_cons(2, lazy_empty_list))
> lst()
>
> I can construct lists as long as I want, as long as I explicitly give the 
> lazy_cons() function an expression for the list:
>
> lst <- lazy_cons(1, lazy_cons(2, lazy_cons(3, lazy_empty_list)))
> lst()
>
>
> However, if I save intermediate lists in a variable, it breaks down. This 
> code:
>
> lst <- lazy_cons(2, lazy_empty_list)
> lst <- lazy_cons(1, lst)
> lst()
>
> gives me this error:
>
>  Error in lst() :
>   promise already under evaluation: recursive default argument reference or 
> earlier problems?
>
> Now, I am particularly dense today, it being Monday and all, so there is 
> likely to be something very obvious I am missing, but I would think that the 
> “lst” variable, when passed to lazy_cons(), would be interpreted as a promise 
> to be evaluated in the parent environment, so I don’t see why it is 
> considered a circular definition of it.
>
> If I force the list to be evaluated, it all works, and the first evaluation 
> is more expensive than the following:
>
> lazy_cons <- function(elm, lst) {
>   force(lst)
>   lazy(list_cons(elm, lst()))
> }
> lst <- lazy_cons(1, lazy_empty_list)
> lst <- lazy_cons(2, lst)
> lst <- lazy_cons(3, lst)
> microbenchmark(lst(), times = 1)
> microbenchmark(lst(), times = 1)
> microbenchmark(lst(), times = 1)
>
> But if I do the exact same thing in a for-loop, it breaks again—this does not 
> work and I get the same error as earlier:
>
> lst <- lazy_empty_list()
> for (e in 1:3) {
>   lst <- lazy_cons(e, lst)
> }
> microbenchmark(lst(), times = 1)
> microbenchmark(lst(), times = 1)
> microbenchmark(lst(), times = 1)
>
> I really can’t see what the difference is between the loop version and the 
> explicitly unwrapping of the loop, but R certainly sees a difference…
>
> I would really love to hear if any of you guys have any insights to what is 
> going on here...
>
>
> Cheers
>
