On Mar 22, 2010, at 5:36 PM, Pawel Jakub Dawidek wrote:

> On Mon, Mar 22, 2010 at 08:23:43AM +0000, Poul-Henning Kamp wrote:
>> In message <4ba633a0.2090...@icyb.net.ua>, Andriy Gapon writes:
>>> on 21/03/2010 16:05 Alexander Motin said the following:
>>>> Ivan Voras wrote:
>>>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>>>> barring specific class behaviour, it has a fair chance of working out
>>>>> of the box), but the incoming queue will also need to be broken up for
>>>>> greater effect.
>>>> 
>>>> According to the "notes", it looks like there is a good chance of
>>>> hitting races, as some places expect only one up and one down thread.
>>> 
>>> I haven't given any deep thought to this issue, but I remember us
>>> discussing it over beer :-)
>> 
>> The easiest way to obtain more parallelism is to divide the mesh into
>> multiple independent meshes.
>> 
>> This will do you no good if you have five disks in a RAID-5 config, but
>> if you have two disks, each with its own filesystem, you can run a
>> g_up & g_down pair for each of them.
> 
> A class is supposed to interact with other classes only via GEOM, so I
> think it should be safe to choose g_up/g_down threads for each class
> individually, for example:
> 
>       /dev/ad0s1a (DEV)
>              |
>       g_up_0 + g_down_0
>              |
>            ad0s1a (BSD)
>              |
>       g_up_1 + g_down_1
>              |
>            ad0s1 (MBR)
>              |
>       g_up_2 + g_down_2
>              |
>            ad0 (DISK)
> 
> We could easily calculate the g_down thread based on bio_to->geom->class
> and the g_up thread based on bio_from->geom->class, so we would know that
> I/O requests for our class always come from the same threads.
> 
> If we could make the same assumption for geoms, it would allow for even
> better distribution.
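
To make the per-class queue choice concrete, here is a minimal userland
sketch; it is not actual GEOM code: g_down_queue_for() and G_NDOWN_QUEUES
are made-up names, and the structures below are pared-down stand-ins for
the real struct bio, struct g_provider, struct g_geom and struct g_class.

    #include <stdint.h>
    #include <stdio.h>

    #define G_NDOWN_QUEUES  4   /* hypothetical number of g_down threads */

    /* Pared-down stand-ins for the real GEOM structures. */
    struct g_class    { const char *name; };
    struct g_geom     { struct g_class *class; };
    struct g_provider { struct g_geom *geom; };
    struct bio        { struct g_provider *bio_to; };

    /*
     * Derive a g_down queue index from the class of the destination
     * geom, so every request entering a given class is served by the
     * same thread.
     */
    static unsigned
    g_down_queue_for(const struct bio *bp)
    {
            uintptr_t key = (uintptr_t)bp->bio_to->geom->class;

            key ^= key >> 7;    /* mix the pointer bits a little */
            return ((unsigned)(key % G_NDOWN_QUEUES));
    }

    int
    main(void)
    {
            struct g_class disk = { "DISK" }, mbr = { "MBR" };
            struct g_geom g_disk = { &disk }, g_mbr = { &mbr };
            struct g_provider p_disk = { &g_disk }, p_mbr = { &g_mbr };
            struct bio b1 = { &p_disk }, b2 = { &p_mbr };

            printf("%s -> g_down_%u\n", disk.name, g_down_queue_for(&b1));
            printf("%s -> g_down_%u\n", mbr.name, g_down_queue_for(&b2));
            return (0);
    }

Since the class pointer is fixed for the lifetime of a geom, every bio
headed into a given class hashes to the same queue, which preserves the
single-consumer assumption within a class while still letting different
classes be serviced in parallel.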

The whole point of the discussion, sans PHK's interlude, is to reduce
context switches and indirection, not to increase them.  But if you can
show decreased-latency/higher-IOPS benefits from increasing them, more
power to you.  I would think that the results of DFly's experiment with
parallelism-via-more-queues would serve as a good warning, though.

Scott
