> On 31/10/2009 21:49, Shelby Moore wrote:
>> http://www.haskell.org/pipermail/cvs-ghc/2009-October/050928.html
>> Shelby Moore wrote:
>>> One possible runtime optimization to provide an upper bound for cache
>>> control, might be to not cache thunks when the runtime computation time
>>> times number of cache hits, is less than the round-trip paging time
>>> multiplied by recent system paging load.
>>>
>>> Is this novel?  Is it useful?
>>
>> Clarify and improve:
>>
>> One possible idea for runtime optimization to provide a more
>> deterministic
>> upper bound on paging load for cache control, might be to not cache
>> thunk-wise when for each thunk, the total tree of computation time under
>> the thunk multiplied by the number of cache hits on the thunk, is less
>> than the round-trip paging time of thunks under the tree multiplied by a
>> chosen factor of recent system paging load.  Or toggle the test on and
>> off
>> on a chosen threshold of recent system paging load.


Thanks for the reply. I had written more about the positive motivation here:

http://www.haskell.org/pipermail/haskell-cafe/2009-November/068436.html

Afaics and imho, unbounded space indeterminism gives too much power to
"user error" and thus makes untenable the proposition of HaskellScript as
a mainstream scripting language (e.g. for improved game-engine
composability and thus scripting productivity).  What I mean by "unbounded
space indeterminism" is that changing one argument can (perhaps rarely)
cause paging load to go F.U.B.A.R. in "mysterious ways", i.e. with very
little reasoned correlation unless you profile (and profiling is a
time-intensive optimization activity that many "simpleton" programmers
have never heard of).


Simon Marlow wrote:
> The biggest problem with doing this kind of thing is that in order to
> revert a thunk to its unevaluated state,...

I am suggesting the algorithm would apply only to unevaluated thunks, i.e.
non-discarded closures (not yet reduced to normal form or WHNF).

The goal is for paging load to degrade less discontinuously.
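To make the heuristic concrete, here is a minimal Haskell sketch of the
decision predicate I have in mind.  Every name in it (ThunkStats,
worthCaching, the timing fields, the load factor) is a hypothetical
placeholder rather than anything that exists in the GHC RTS:

-- Rough sketch only: all names are hypothetical placeholders,
-- not existing GHC RTS internals.
data ThunkStats = ThunkStats
  { computeTime  :: Double  -- estimated total computation time under the thunk, in seconds
  , cacheHits    :: Int     -- number of times the cached value has been demanded
  , pageTripTime :: Double  -- estimated round-trip paging time for the thunk's data, in seconds
  }

-- | Keep the evaluated value cached only when recomputing it is expected
-- to cost more than the paging pressure the cached value adds.
worthCaching :: Double       -- ^ chosen factor of recent system paging load
             -> ThunkStats
             -> Bool
worthCaching loadFactor s =
  computeTime s * fromIntegral (cacheHits s)
    >= pageTripTime s * loadFactor

Whether the runtime could estimate those timing fields cheaply is exactly
the open question; the sketch only pins down the shape of the comparison.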

I understand you are implying hysteresis, because paging load may not feed
back until too many thunk closures have already been discarded (an
overshoot).  In that case, my goal of "resolution independence" could be
relaxed, and the decision factor (on whether to discard a closure) could be
based on a fixed target size for the Haskell heap.  I am thinking of how
Flash in the browser lets the user set a maximum local cache size; that
seems more tenable than unbounded paging load, especially in scripting
applications where performance is less critical than
ease-of-use and determinism.  Thus, it minimizes the optimization burden
for the target audience.
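As a rough illustration of that relaxed variant (again with purely
hypothetical names, not an existing GHC option), the policy could use a
hysteresis band around the fixed heap target so it does not flap on and
off at a single threshold:

import Data.Word (Word64)

-- Hypothetical sketch: rather than reacting to paging load (which may feed
-- back too late), apply the discard policy based on a fixed heap target,
-- with a hysteresis band so the policy does not oscillate around one value.
data HeapPolicy = HeapPolicy
  { highWaterBytes :: Word64  -- start discarding cached thunk values above this
  , lowWaterBytes  :: Word64  -- stop discarding once the live heap falls below this
  }

-- | Given the previous on/off state and the current live heap size,
-- decide whether the discard policy should currently be active.
discardActive :: HeapPolicy -> Bool -> Word64 -> Bool
discardActive p wasActive liveBytes
  | liveBytes > highWaterBytes p = True
  | liveBytes < lowWaterBytes p  = False
  | otherwise                    = wasActive  -- inside the band: keep the previous state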


> ...you have to retain the thunk's
> free variables, which itself may cause a space leak, though perhaps by
> reverting thunks recursively in the free variables you could limit the
> damage.  The case where this isn't an issue is top-level thunks (CAFs).

Reverting top-level thunks could be an option for the algorithm above if it
is allowed to overshoot its target decision metric.  However, this could
result in cycling (repeatedly reverting and re-evaluating the same CAFs).

Also, for thunks below the top level, and within some range of the
algorithm's target decision metric, perhaps the unevaluated closure could
be retained along with the evaluated value?


> Measuring the things you mention is quite problematic and might add
> further overheads.

Perhaps I am mistaken, but I thought operating systems count page faults.
Wouldn't it be as simple as reading that counter and dividing it by the
time since it was last read (perhaps with some smoothing filter applied)?
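For instance, here is a minimal sketch of the measurement side, assuming
the RTS can periodically sample a cumulative major-page-fault count from
the OS (e.g. getrusage's ru_majflt on POSIX, or /proc/self/stat on Linux);
the sampling call itself is left abstract, and all names are hypothetical:

-- Hypothetical sketch of the measurement side.  'FaultSample' pairs a
-- cumulative major-page-fault count (however the RTS obtains it from the
-- OS) with the monotonic time at which it was read.
data FaultSample = FaultSample
  { faultCount :: Double  -- cumulative major page faults at sample time
  , sampleTime :: Double  -- monotonic time of the sample, in seconds
  }

-- | Exponentially smoothed page-fault rate, in faults per second.
-- 'alpha' in (0,1] controls how quickly the estimate tracks new samples.
smoothedFaultRate :: Double       -- ^ alpha, the smoothing factor
                  -> Double       -- ^ previously smoothed rate
                  -> FaultSample  -- ^ older sample
                  -> FaultSample  -- ^ newer sample
                  -> Double
smoothedFaultRate alpha prevRate old new =
  let dt      = max 1e-6 (sampleTime new - sampleTime old)  -- guard against a zero interval
      rawRate = (faultCount new - faultCount old) / dt
  in  alpha * rawRate + (1 - alpha) * prevRate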

Also, I am suggesting above that an alternative is to let the user set a
fixed heap (cache) target size (or some mix of paging-load and heap-size
targets).


> Running in a low-virtual-memory situation is not something that GHC does
> well at all, though I do wonder how useful it is in practice.

Perhaps I am misunderstanding some key aspect of how you are viewing the
problem, because afaics the problem set is not an upper bound on virtual
memory, but an upper bound on physical memory and thus on paging load.

If you replace the word "virtual" with "physical", then I agree with the
first part of your assertion, and I have explained above that I think the
utility is in not having paging load go F.U.B.A.R. for reasons mysterious
to simpleton programmers, so that Haskell has a chance of being a
mainstream programming language.  It is an opportunity-cost minimization
problem, because your average "mainstream" programmer will not do
profiling.  You basically have to "dummy-proof" the thing.  And actually
"dummy" is not the correct word, because even the most expert Haskell
programmers still run into mysterious space issues that have to be tracked
down with profiling.

So my suggestion is about allowing the programmer/user to choose (via a
compiler option) whether to trade off absolute peak performance, with its
risk of random F.U.B.A.R. performance, for a more deterministic and
continuous degradation of performance when space errors are committed.
Thus, I view any variant of my suggested algorithm as a smoothing function
on space performance.

KEY POINT: afaics, by definition of the problem set, the discontinuity can
be smoothed because thunk evaluation is very fine-grained (i.e. the number
of thunks is large) relative to _EACH_ offending error in the Haskell code.


>  I suspect
> that people won't be willing to pay a performance overhead for the
> possibility of better behaviour when the heap grows larger than real
> memory, so anything you do has to be essentially free in the common case.

If there is high paging load, there is no performance :).  With perfect
correlation to paging load, I need not say more.

That is why I was suggesting correlating to paging load: even if we
overshoot and get more paging load than the target decision metric, that
may be better than undershooting (not converting enough of the thunks to
normal form or WHNF).

To the degree that paging load cannot be well correlated to the algorithm
and we use some absolute upper heap-size target instead, does your logic
not refute a large chunk of the motivation for using Haskell?  We are (in
most cases) motivated to use Haskell not because it gives the best
performance (*see caveat), but rather because it improves productivity in
composability, concurrency, and algorithmic development.  (*caveat: those
advantages combine to improve performance.)

Afaics, until the programmer is ready to do profile-guided optimization,
my suggested algorithm is better than having the program's paging load go
F.U.B.A.R.  It should allow the programmer(s) to better separate the
development of the algorithms from the space-profile optimization.  I have
read that one of the big development benefits of lazily evaluated pure FP
is that you can write code in fragments bottom-up, top-down, or any mix of
the two styles.  Afaics, it would be nice if we didn't have unbounded
space indeterminism interrupting that development process at seemingly
random (hard-to-correlate) code edits.  I am trying to suggest/think of a
way to make space optimization orthogonal to the other stages of the
development process when programming in Haskell.

Apologies if I am too headstrong (I get passionate about things that I
think are potentially huge paradigm shifts); it is just that the first
thing I hear (so far) from major developers when I mention Haskell is
"what about the space leaks?".  In addition to the development-stage
orthogonality mentioned above, I envision that the core of a commercial
program could be written in Haskell and well space-optimized before
release, so that users of the program could gain the composability
benefits of pure FP by using HaskellScript without sending the paging load
F.U.B.A.R. due to the space errors they commit in casual scripting.

Corrections and flames welcome.


>
> Don't let me put you off though!

Thanks for reading.  To you too, my utmost cheers, Shelby.

>
> Cheers,
>       Simon
>
