James --

On Sun, May 04, 2014, James K. Lowden wrote:
> Are you really concerned about minimal resource requirements, or
> speed of processing?
You're quite right, the issue is speed.

> Doug's description retains a one-pass algorithm, which is key for
> speed. A parallel approach (forking) will only work better and better
> as Moore's Law continues to make our machines increasingly parallel.
> Meanwhile, memory requirements remain minimal because the fundamental
> size of the input -- the paragraph -- will not change.

I suppose the only factor that risks incurring overhead is line
length.  The shorter the line length, the greater the number of
phantom copies that would be generated by Doug's proposal, assuming
I'm reading it correctly:

  "A straightforward way to pull this off would be to actualize the
  notional copies of groff by forking. There would be one copy going
  forward from each line break. That would evaluate the cost of
  breaking at each word (or hyphenation point) on that line. At each
  line break the copies would rendezvous to see which process should
  be cloned to continue. Output of each process, both to standard
  output and standard error, would be treasured up and only the
  ultimate winner's output would finally be released."

But, as you point out, in 2014 and beyond, that probably isn't an
issue.

--
Peter Schaffter
http://www.schaffter.ca
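P.S.  In case it helps make the proposal concrete, here is a toy
sketch of the fork-and-rendezvous skeleton as I read it.  To be
clear, this is not groff code and not Doug's actual algorithm: the
word list, the WIDTH target, the span_len() helper, and the
squared-slack cost are all stand-ins I invented, and the "treasuring
up" of each copy's output isn't modeled at all (only the parent ever
prints).

    /* Toy sketch (not groff!) of the fork-per-break-point idea.
       One child is forked per feasible break point on the current
       line; each scores its break and reports over a pipe; the
       parent rendezvouses, keeps the cheapest break, and continues
       from it.  The squared-slack cost is an invented stand-in. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WIDTH 28                    /* target line length, chars */

    static const char *words[] = {
        "A", "straightforward", "way", "to", "pull", "this", "off",
        "would", "be", "to", "actualize", "the", "notional", "copies",
        "of", "groff", "by", "forking.", NULL
    };

    /* Length of words[i..j] joined by single spaces. */
    static int span_len(int i, int j)
    {
        int len = 0;
        for (int k = i; k <= j; k++)
            len += (int)strlen(words[k]) + (k > i ? 1 : 0);
        return len;
    }

    int main(void)
    {
        int i = 0;
        while (words[i]) {
            int fd[2];
            if (pipe(fd) < 0) { perror("pipe"); return 1; }

            /* One notional copy per break point that fits. */
            int nkids = 0;
            for (int j = i; words[j] && span_len(i, j) <= WIDTH; j++) {
                pid_t pid = fork();
                if (pid < 0) { perror("fork"); return 1; }
                if (pid == 0) {         /* child: score this break */
                    long slack = WIDTH - span_len(i, j);
                    long msg[2] = { j, slack * slack };
                    write(fd[1], msg, sizeof msg);
                    _exit(0);
                }
                nkids++;
            }
            close(fd[1]);

            /* Rendezvous: collect (break, cost), keep the cheapest. */
            long best = i, best_cost = -1, msg[2];
            for (int k = 0; k < nkids; k++) {
                if (read(fd[0], msg, sizeof msg) != sizeof msg)
                    break;
                if (best_cost < 0 || msg[1] < best_cost) {
                    best = msg[0];
                    best_cost = msg[1];
                }
            }
            close(fd[0]);
            while (wait(NULL) > 0)      /* reap the losing copies */
                ;

            /* Print the winning line, then go forward from the break. */
            for (int k = i; k <= best; k++)
                printf("%s%s", words[k], k < best ? " " : "\n");
            i = (int)best + 1;
        }
        return 0;
    }

Each child scores one feasible break point and reports a
(break, cost) pair over a pipe; the parent plays the rendezvous,
keeps the cheapest break, and goes forward from it.  Incidentally,
it also shows where short lines would hurt: the number of forks per
line is roughly the number of words that fit on it.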