On Sat, 3 May 2014 14:23:10 -0400 Peter Schaffter <pe...@schaffter.ca> wrote:
> > A straightforward way to pull this off would be to actualize the
> > notional copies of groff by forking.  There would be one copy going
> > forward from each line break.  That would evaluate the cost of
> > breaking at each word (or hyphenation point) on that line.  At each
> > line break the copies would rendezvous to see which process should
> > be cloned to continue.  Output of each process, both to standard
> > output and standard error, would be treasured up and only the
> > ultimate winner's output would finally be released.
> >
> > This model is somewhat formidable.
>
> No kidding.  And I fear it might break one of groff's greatest
> strengths, which is minimal demand on system resources.

Are you really concerned about minimal resource requirements, or speed
of processing?  Doug's description retains a one-pass algorithm, which
is key for speed.  A parallel approach (forking) will only work better
and better as Moore's Law continues to make our machines increasingly
parallel.  Meanwhile, memory requirements remain minimal because the
fundamental size of the input -- the paragraph -- will not change.

However formidable, fork-and-join is well suited to the problem and to
the foreseeable evolution of hardware.

> I suspect I'm a voice crying in the wilderness here

Nah, not as long as you stay subscribed!  :-)

> but we need to consider that a greedy algorithm is almost always
> faster than a dynamic programming solution;

I wouldn't assume that, or assume that it matters.

--jkl
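[A sketch of the greedy-vs-DP trade-off under discussion.  This is
not groff's code; it is a minimal illustration in Python, using a
simplified "badness" (squared leftover space per line, last line free)
in place of Knuth-Plass's full cost model.  The word list and width
below are invented for the example.]

```python
def badness(words, i, j, width):
    """Badness of a line holding words[i:j]: squared slack, or inf if too long."""
    length = sum(len(w) for w in words[i:j]) + (j - i - 1)  # words + single spaces
    if length > width:
        return float("inf")
    return float((width - length) ** 2)

def greedy_breaks(words, width):
    """Pack as many words as fit on each line; no lookahead."""
    breaks, start = [0], 0
    for k in range(2, len(words) + 1):
        if badness(words, start, k, width) == float("inf"):
            start = k - 1
            breaks.append(start)
    return breaks

def dp_breaks(words, width):
    """Choose breaks minimizing total badness over the whole paragraph."""
    n = len(words)
    best = [float("inf")] * (n + 1)  # best[j]: min cost to set words[:j]
    best[0] = 0.0
    prev = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            b = badness(words, i, j, width)
            cost = 0.0 if (j == n and b != float("inf")) else b  # last line free
            if best[i] + cost < best[j]:
                best[j], prev[j] = best[i] + cost, i
    breaks, j = [], n
    while j > 0:               # walk back through the chosen break points
        breaks.append(prev[j])
        j = prev[j]
    return breaks[::-1]

def total_cost(words, breaks, width):
    """Sum of badness over all lines except the (free) last line."""
    edges = breaks + [len(words)]
    return sum(badness(words, edges[i], edges[i + 1], width)
               for i in range(len(edges) - 2))
```

For example, with `words = "aaa bb cc ddddd".split()` and width 6, greedy
sets "aaa bb" / "cc" / "ddddd" (cost 16) while the DP sets "aaa" /
"bb cc" / "ddddd" (cost 10).  Note that the DP is still a single pass
over the paragraph -- each word is visited once, with bounded lookback --
which is one reason not to assume greedy is meaningfully faster.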