> Perhaps it is a false problem, because usually only one translator at a
> time uses a storeio. I probably have to investigate further how
> storeio/libstore work... I'm wondering how I can share memory between the
> storeio process and the libstore process, or if there is a way to avoid it.

The on[…]

On Tue, Nov 06, 2001 at 10:04:34PM +0100, Niels Möller wrote:
> > I think that storeio is the only possibility to put a cache mechanism in
> > user space. But I see some drawbacks:
> > - memory used for caching can be paged out
>
> If that happens, you're doing it wrong, I think.

Of course. T[…]

On Tue, Nov 06, 2001 at 09:57:07PM +0100, Marcus Brinkmann wrote:
> > I see that storeio has an option "-e" that hides the device. I suppose
> > that using this option causes ext2fs to go through the storeio
> > translator. So in this case I can happily implement caching in storeio
> > (even if we use […]

Diego Roversi <[EMAIL PROTECTED]> writes:
> I think that storeio is the only possibility to put a cache mechanism in
> user space. But I see some drawbacks:
> - memory used for caching can be paged out

If that happens, you're doing it wrong, I think. You want to use a
special pager for the cache […]
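
To keep a user-space cache resident, one blunt alternative to the special
pager Niels suggests is to wire the cache pages down so they can never be
evicted. A minimal C sketch, assuming GNU Mach's privileged vm_wire call
and glibc's get_privileged_ports; the exact prototypes are assumptions,
and a real translator would need to run as root to obtain the host port:

/* Sketch: allocate a cache buffer and wire it so the kernel cannot
   page it out.  Assumes the caller can get the privileged host port
   (normally only root-run translators can); error handling is minimal. */
#include <mach.h>
#include <hurd.h>
#include <error.h>

static void *
allocate_wired_cache (vm_size_t size)
{
  mach_port_t host_priv;
  vm_address_t buf = 0;
  error_t err;

  err = get_privileged_ports (&host_priv, NULL);
  if (err)
    error (1, err, "get_privileged_ports");

  err = vm_allocate (mach_task_self (), &buf, size, 1 /* anywhere */);
  if (err)
    error (1, err, "vm_allocate");

  /* Wiring the region pins its pages in physical memory, so the cache
     itself can never be paged out behind the translator's back.  */
  err = vm_wire (host_priv, mach_task_self (), buf, size,
                 VM_PROT_READ | VM_PROT_WRITE);
  if (err)
    error (1, err, "vm_wire");

  return (void *) buf;
}
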
On Tue, Nov 06, 2001 at 09:21:19PM +0100, Diego Roversi wrote:
> I see that storeio has an option "-e" that hides the device. I suppose that
> using this option causes ext2fs to go through the storeio translator. So in
> this case I can happily implement caching in storeio (even if we use more
> CPU).

On Tue, Nov 06, 2001 at 10:26:59AM +0100, Marcus Brinkmann wrote:
> Data blocks don't go to storeio. The included libstore communicates with
> storeio about the storage type and does the actual reading/writing etc.
> itself (see file_get_storage_info, store_create, store_encode, and
> store_decode).
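
From the client side, the pattern Marcus describes looks roughly like
this: store_create asks the translator what the underlying storage is
(via file_get_storage_info) and returns a store object whose reads go
to the device directly, bypassing storeio for the data path. A sketch,
assuming the store_create/store_read prototypes from <hurd/store.h>;
the flags and block-addressing details are approximate:

#include <hurd/store.h>
#include <hurd.h>
#include <fcntl.h>
#include <errno.h>
#include <error.h>
#include <stdio.h>

int
main (int argc, char **argv)
{
  struct store *store;
  file_t node;
  void *buf = NULL;
  size_t len = 0;
  error_t err;

  if (argc != 2)
    error (1, 0, "usage: %s NODE", argv[0]);

  /* Open the node the storeio translator sits on.  */
  node = file_name_lookup (argv[1], O_READ, 0);
  if (node == MACH_PORT_NULL)
    error (1, errno, "%s", argv[1]);

  /* store_create queries the translator for its storage layout and
     hands back a store object that reads/writes the device itself.  */
  err = store_create (node, 0, NULL, &store);
  if (err)
    error (1, err, "store_create");

  /* Read the first block straight from the underlying storage.  */
  err = store_read (store, 0, store->block_size, &buf, &len);
  if (err)
    error (1, err, "store_read");

  printf ("read %zu bytes; block size %zu\n", len,
          (size_t) store->block_size);
  return 0;
}
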
[EMAIL PROTECTED] (Niels Möller) writes:
> Farid Hajji <[EMAIL PROTECTED]> writes:
>
> > The problem right now is that there is no memory sharing between normal
> > clients and the filesystem translators. Here, data is simply copied across
> > a costly IPC path, thus wasting a lot of CPU cycles.

On Tue, Nov 06, 2001 at 01:08:38AM +0100, Farid Hajji wrote:
> The main reason for the slowness of the Hurd's file I/O is that data is
> actually _copied_ more often than necessary between the clients and the
> file servers/translators. Just look at the sources of glibc and the Hurd,
> starting e[…]
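
The path being described is roughly the following: glibc's read() turns
into an io_read RPC to the translator, and a large reply arrives as
out-of-line pages that the stub must copy into the caller's buffer and
then unmap. A sketch of that pattern, assuming the MiG-generated io_read
prototype from <hurd/io.h> (the offset and count types have varied
between versions):

#include <hurd.h>
#include <hurd/io.h>
#include <mach.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

ssize_t
read_via_rpc (io_t port, void *buf, size_t amount, loff_t offset)
{
  char *data = buf;   /* if the reply fits inline, the stub may use our buffer */
  mach_msg_type_number_t nread = amount;
  error_t err;

  err = io_read (port, &data, &nread, offset, amount);
  if (err)
    {
      errno = err;
      return -1;
    }

  if (data != buf)
    {
      /* The data came back out-of-line in freshly mapped pages: copy it
         into the caller's buffer and unmap the pages the kernel handed
         us.  This copy is exactly the overhead discussed above.  */
      memcpy (buf, data, nread);
      vm_deallocate (mach_task_self (), (vm_address_t) data, nread);
    }
  return nread;
}
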
Farid Hajji <[EMAIL PROTECTED]> writes:
> The problem right now is that there is no memory sharing between normal
> clients and the filesystem translators. Here, data is simply copied across
> a costly IPC path, thus wasting a lot of CPU cycles.
I thought Mach had some mechanism that allowed IPC […]
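
The mechanism in question is presumably out-of-line data: a complex
message whose type descriptor has msgt_inline = FALSE makes the kernel
move the pages by copy-on-write mapping rather than copying the bytes
through the message buffer. A rough sketch in Mach 3.0's typed message
format; the long-form descriptor fields are taken from <mach/message.h>
as best remembered, and the message id is hypothetical:

#include <mach.h>
#include <string.h>

/* A complex message carrying one out-of-line data region.  */
struct ool_message
{
  mach_msg_header_t head;
  mach_msg_type_long_t type;   /* long-form descriptor for the region */
  vm_offset_t data;            /* a pointer is sent, not the bytes */
};

kern_return_t
send_pages (mach_port_t dest, vm_offset_t buf, vm_size_t len)
{
  struct ool_message msg;

  memset (&msg, 0, sizeof msg);
  msg.head.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND, 0)
    | MACH_MSGH_BITS_COMPLEX;
  msg.head.msgh_size = sizeof msg;
  msg.head.msgh_remote_port = dest;
  msg.head.msgh_local_port = MACH_PORT_NULL;
  msg.head.msgh_id = 4000;     /* hypothetical message id */

  /* msgt_inline = FALSE is the key: the kernel maps the pages into the
     receiver copy-on-write instead of copying len bytes through IPC.  */
  msg.type.msgtl_header.msgt_inline = FALSE;
  msg.type.msgtl_header.msgt_longform = TRUE;
  msg.type.msgtl_header.msgt_deallocate = FALSE;
  msg.type.msgtl_name = MACH_MSG_TYPE_BYTE;
  msg.type.msgtl_size = 8;     /* bits per element */
  msg.type.msgtl_number = len; /* number of elements */
  msg.data = buf;

  return mach_msg (&msg.head, MACH_SEND_MSG, sizeof msg, 0,
                   MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
}
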
Hi Diego,

> I've noticed that access to files on the Hurd is noticeably slower than on
> Linux. For example, compiling the Hurd requires a lot of disk access under
> the Hurd, while cross-compiling the Hurd on Linux uses few disk accesses.

I'm sure you're not comparing apples and oranges here, though it may
seem […]

Hi,

I've noticed that access to files on the Hurd is noticeably slower than on
Linux. For example, compiling the Hurd requires a lot of disk access under
the Hurd, while cross-compiling the Hurd on Linux uses few disk accesses.

The question is: is there some kind of caching on the Hurd or in GNU Mach?
Can be useful […]