On 7 August 2010 19:03, Joshua Boyd <[email protected]> wrote:
> On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras <[email protected]> wrote:

>> It's unlikely they will help, but try:
>>
>> vfs.read_max=32
>>
>> for read speeds (but test using the UFS file system, not as a raw device
>> like above), and:
>>
>> vfs.hirunningspace=8388608
>> vfs.lorunningspace=4194304
>>
>> for writes. Again, it's unlikely but I'm interested in results you
>> achieve.
>>
>
> This is interesting. Write speeds went up to 40MBish. Still slow, but 4x
> faster than before.
> [r...@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250
> 250+0 records in
> 250+0 records out
> 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec)
> [r...@git ~]# dd if=/var/testfile of=/dev/null
> 512000+0 records in
> 512000+0 records out
> 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec)
> So read speeds are up to what they should be, but write speeds are still
> significantly below what they should be.

Well, you *could* double the size of the "runningspace" tunables and try that :)

Basically, in tuning these two settings we are cheating: we increase
read-ahead (read_max) and in-flight write buffering (runningspace) in
order to hand off as much IO to the controller (in this case vmware) as
early as possible, so as to avoid the expensive IO-induced context
switches vmware incurs. This helps sequential performance, but nothing
can help random IOs.
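
For reference, the combined settings from this thread, with the doubled
write buffers suggested above (a guess to try, not tested values), would
look like this in /etc/sysctl.conf:

vfs.read_max=32
vfs.hirunningspace=16777216
vfs.lorunningspace=8388608

They can also be changed at runtime with sysctl(8), e.g.
"sysctl vfs.hirunningspace=16777216", and then re-run the same dd tests
to compare.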
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[email protected]"