> Simon Marlow wrote:
>> The biggest problem with doing this kind of thing is that in order to
>> revert a thunk to its unevaluated state,...
>
Shelby Moore wrote:
> I am suggesting the algorithm would apply only to unevaluated thunks...
>
> I understand you are implying hysteresis, because paging load may not
> feed back until too many thunk closures have already been discarded (an
> overshoot)...
>
>
>> ...you have to retain the thunk's
>> free variables, which itself may cause a space leak, though perhaps by
>> reverting thunks recursively in the free variables you could limit the
>> damage.  The case where this isn't an issue is top-level thunks (CAFs).
>
> Reverting top-level thunks could be an option...
> Also for sub-top-level thunks, and within some range of the algorithm's
> target decision metric, perhaps closures could be retained along with the
> evaluated value?

A research paper I haven't read yet:
http://lists.seas.upenn.edu/pipermail/types-list/2004/000346.html

"run eagerly until [...8<....] some resource bound is exceeded
(in which case create a thunk)."

I realize that was in the context of running eagerly, but there may be some
useful experience there in terms of resource bounds and creating thunks.
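
Purely as a toy illustration (not the paper's actual mechanism, and nothing
like the real RTS machinery), the shape of "run eagerly until some resource
bound is exceeded, in which case create a thunk" can be sketched over a
made-up expression type, with a hypothetical fuel budget standing in for
the resource bound:

-- Toy model only: 'fuel' stands in for whatever resource bound is used,
-- and 'Suspended' stands in for "create a thunk".
data Expr = Lit Int | Add Expr Expr

data Result = Done Int | Suspended Expr

evalBounded :: Int -> Expr -> (Result, Int)
evalBounded fuel e
  | fuel <= 0 = (Suspended e, 0)        -- bound exceeded: suspend
evalBounded fuel (Lit n)   = (Done n, fuel - 1)
evalBounded fuel (Add a b) =
  case evalBounded (fuel - 1) a of
    (Suspended a', f') -> (Suspended (Add a' b), f')
    (Done x, f')       ->
      case evalBounded f' b of
        (Suspended b', f'') -> (Suspended (Add (Lit x) b'), f'')
        (Done y, f'')       -> (Done (x + y), f'')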


>> Measuring the things you mention is quite problematic and might add
>> further overheads.
>
> Perhaps I am mistaken, but I thought operating systems count page faults.
> Wouldn't it be as simple as reading that value and dividing it by the time
> since the value was last read (and perhaps applying some smoothing filter)?
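
For what it's worth, the arithmetic behind that suggestion is just an
exponentially smoothed rate.  A minimal sketch, assuming the RTS can obtain
a cumulative major-fault count from the OS (how that counter is read is not
shown):

-- Minimal sketch of the suggested metric: smoothed page faults per second.
data FaultRate = FaultRate
  { lastCount :: !Integer   -- cumulative fault count at last sample
  , smoothed  :: !Double    -- smoothed faults per second
  }

-- alpha in (0,1]: larger reacts faster, smaller smooths more
sampleFaults :: Double -> Double -> Integer -> FaultRate -> FaultRate
sampleFaults alpha dtSeconds newCount fr =
  let raw = fromIntegral (newCount - lastCount fr) / dtSeconds
  in FaultRate { lastCount = newCount
               , smoothed  = alpha * raw + (1 - alpha) * smoothed fr
               }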

I did not address the computation of the duration and frequency of execution
under the thunk tree trunk, which my proposed algorithm requires.  The point
was not to keep re-thunking (discarding the evaluated value and
re-evaluating) fragments that run relatively much more slowly and/or much
more often.  I agree this would be a much finer-grained statistic (accessing
the system clock, etc.) and probably significant overhead.
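
To make the decision metric concrete (the numbers and the per-thunk
statistics below are hypothetical; the real figures would have to come from
the RTS), the kind of test I have in mind is roughly:

-- Revert (discard the value, keep the unevaluated closure) only when
-- memory pressure is high and the expected cost of re-evaluation is low.
data ThunkStats = ThunkStats
  { evalCostMs  :: Double   -- observed time to evaluate this thunk (ms)
  , entriesPerS :: Double   -- how often its value is demanded (per second)
  }

shouldRevert :: Double -> Double -> ThunkStats -> Bool
shouldRevert faultRate pressureThreshold st =
  faultRate > pressureThreshold
    && evalCostMs st * entriesPerS st < reEvalBudget
  where
    reEvalBudget = 1.0   -- ms of re-evaluation per second tolerated (arbitrary)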

One idea for reducing that computation overhead might be to temporarily
retain all closures together with their evaluated values in the GC
nursery[1] (a pointer will do), then set a timer with a period long enough
that servicing it adds no significant overhead, in order to get a "per
entry" average time for items added to the nursery since the last timeout.
In other words, handle it statistically and orthogonally at GC time; we are
striving for system determinism (i.e. graceful performance degradation
instead of discontinuous F.U.B.A.R. events).
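
A rough sketch of that bookkeeping (the timer and the nursery entry counter
are assumed plumbing, not anything that exists in the GHC RTS today):

-- On each timer tick, divide elapsed wall-clock time by the number of
-- closures retained in the nursery since the last tick, giving a cheap
-- statistical "per entry" average without touching the clock on every
-- allocation.
data NurseryStats = NurseryStats
  { entriesSinceTick :: !Int      -- closures retained since last timeout
  , avgTimePerEntry  :: !Double   -- seconds per entry, from the last tick
  }

onTimerTick :: Double -> NurseryStats -> NurseryStats
onTimerTick elapsedSeconds ns
  | entriesSinceTick ns == 0 = ns    -- nothing added; keep the old average
  | otherwise =
      NurseryStats
        { entriesSinceTick = 0
        , avgTimePerEntry  =
            elapsedSeconds / fromIntegral (entriesSinceTick ns)
        }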

[1]
http://haskell.org/haskellwiki/?title=GHC/Memory_Management&oldid=28772#Garbage_collection
