On 7/5/08, Samuel Thibault <[EMAIL PROTECTED]> wrote:
> Dani Doni, on Sat 05 Jul 2008 13:52:07 +0200, wrote:
>
> > On 7/5/08, Samuel Thibault <[EMAIL PROTECTED]> wrote:
> > > Dani Doni, on Sat 05 Jul 2008 13:21:29 +0200, wrote:
> > >
> > > > On 7/5/08, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > > > Hi folks,
> > > >
> > > > > On the other hand, the wiki seems to need a fast machine, so using
> > > > > it for the wiki exclusively would be a waste...
> > > > > Not sure what the best approach is. Ideally, they should run in two
> > > > > distinct VMs sharing the hardware :-)
> > > >
> > > > Maybe using web caching software like memcached[1] can help minimize
> > > > disk access, as wiki content could be served from memory and only
> > > > updates would hit the database, triggering a cache update.
> > >
> > > The problem is _not_ serving, it is updating.
> > >
> > > If the cpu is 100% busy during updates, then that's the cpu which is
> > > too slow, not the disk.
> >
> > Maybe I am wrong, but updates to wiki content should trigger little
> > bursts of activity, not sustained periods of 100% cpu load.
>
> The wiki engine regenerates all the pages; that's what takes time.

Oh... ok, I see...
--
Dani Doni
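
(For anyone curious how the caching idea in the quoted message might look in practice, here is a rough cache-aside sketch. It assumes the classic python-memcached client, a memcached daemon on localhost:11211, and a plain dict standing in for the wiki's real backend; none of this is the wiki's actual code. Reads are served from memory, and only an edit touches the backend and refreshes the cached copy.)

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    db = {'HomePage': 'Welcome to the wiki.'}   # stand-in for the real backend

    def render(source):
        # Stand-in for the wiki engine's (CPU-heavy) page rendering.
        return '<html><body><p>%s</p></body></html>' % source

    def get_page(name):
        # Read path: served from memory; the backend is hit only on a miss.
        key = 'wiki:%s' % name
        html = mc.get(key)
        if html is None:
            html = render(db[name])
            mc.set(key, html, time=3600)   # keep the rendered page for an hour
        return html

    def update_page(name, new_source):
        # Write path: an edit updates the backend and refreshes the cached
        # copy, so later reads are again served straight from memory.
        db[name] = new_source
        mc.set('wiki:%s' % name, render(new_source), time=3600)

As Samuel's reply points out, though, a cache like this only helps the serving side; it does nothing about the cost of regenerating all the pages after an update.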