On Feb 1, 2011 10:37 AM, "T o n g" <mlist4sunt...@yahoo.com> wrote:
>
> Thanks everyone who replied.
>
> On Mon, 31 Jan 2011 00:43:46 -0500, shawn wilson wrote:
>
> > benchmarks cost too much time and money to do right and someone always
> > wants to argue with them. . .
>
> I know it's against vmware licensing policy to publish benchmark data.
> But all I was asking was kind of friend-to-friend recommendation, not
> something you'd publish seriously on your blog.

I would like to know this too. However....

> In fact, all I wanted to know was a quick
>
>  /sbin/hdparm -t /dev/sda
>
> test result inside and outside vm.
>
> Using VirtualBox as an example, I don't think there will be a great
> difference between an expert's result and mine, as far as the comparative
> figure between vm and host goes (but I could be wrong).
>
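For what it's worth, pulling the comparative figure out of that output is easy
to script. A rough sketch (the sample output line and its numbers are
illustrative only, not a real measurement; the `ssh guest` call in the comment
is a placeholder for however you reach your guest):

```shell
#!/bin/sh
# Extract the MB/sec figure from `hdparm -t` output.
# A typical line looks like (numbers here are made up for illustration):
#   Timing buffered disk reads: 300 MB in  3.01 seconds =  99.67 MB/sec
extract_mbps() {
    awk -F'= *' '/Timing buffered disk reads/ { split($2, a, " "); print a[1] }'
}

# On a real setup you would run it twice and compare, e.g.:
#   host_mbps=$(sudo /sbin/hdparm -t /dev/sda | extract_mbps)
#   guest_mbps=$(ssh guest sudo /sbin/hdparm -t /dev/sda | extract_mbps)
sample='Timing buffered disk reads: 300 MB in  3.01 seconds =  99.67 MB/sec'
echo "$sample" | extract_mbps    # prints 99.67
```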

So, when you say benchmark, you're talking about some type of I/O data over
time. This is all good until you consider being inside a vm. If you want to
know about timekeeping inside a vm, look at Google. Also look at Google
results for sending a fax with Asterisk in a vm, or running precision ntp
servers in a vm.

My point here is that timekeeping inside a vm is unreliable, so you're
looking at inaccurate benchmarks.

> It's all evidence-based and it's a quick gauge, not some serious
> measure. . . Anyway, it's totally OK for someone not to give such a
> result; I'm just trying to explain my point here.

I'm writing this hoping that someone will take the time to do some accurate
benchmarks.

The proper way of doing it would be to set up a default install on a host and
install one vm. You would need to set up the guest once for each hypervisor
and keep a pristine copy of your disk file (to be copied back before each
test). You could of course convert that image between the different hosts'
formats, however I'm not sure what extra baggage comes with that conversion.
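The copy-back step might look something like this sketch. The paths, the
`VBoxManage` command in the comment, and the DRY_RUN guard are all my own
placeholders, not anyone's actual setup; the guard just lets the sketch be
read without touching real files:

```shell
#!/bin/sh
# Reset the guest disk to a pristine copy before each trial, so every
# run starts from identical state. Paths are hypothetical placeholders.
PRISTINE=${PRISTINE:-/srv/images/guest-pristine.img}  # master copy, never modified
WORKING=${WORKING:-/var/lib/vm/guest.img}             # copy the hypervisor boots
DRY_RUN=${DRY_RUN:-1}                                 # 1 = only print what would happen

restore_image() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would copy $PRISTINE -> $WORKING"
    else
        cp "$PRISTINE" "$WORKING"
    fi
}

for trial in 1 2 3; do
    restore_image
    echo "trial $trial: guest disk reset; boot the guest and run the benchmark here"
    # e.g. (VirtualBox):  VBoxManage startvm guest --type headless
done
```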

You would need to have the host start a process on the guest and use the
host's polling facility to collect your results.
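A minimal sketch of that host-side loop, assuming some control channel into
the guest (the `ssh guest` lines in the comments are stand-ins for whatever
your hypervisor provides, e.g. guest additions or `VBoxManage guestcontrol`):

```shell
#!/bin/sh
# Poll from the host until the guest workload signals completion,
# counting how many samples were taken along the way.
poll_until_done() {
    check_cmd=$1   # command that exits 0 once the workload is finished
    interval=$2    # seconds between samples
    samples=0
    until eval "$check_cmd"; do
        samples=$((samples + 1))
        # record whatever you're measuring here (host-side iostat, etc.)
        sleep "$interval"
    done
    echo "$samples"
}

# On a real setup you might run something like:
#   ssh guest 'nohup ./benchmark.sh >/tmp/bench.log 2>&1 &'
#   poll_until_done "ssh guest test -f /tmp/bench.done" 5
```

The important part is that the sampling (what you check and how often) is
driven entirely from the host, so it can be kept identical across hypervisors.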

A few more considerations are: the guest OS, the amount of processor time the
host gives the guest, where the guest's disk file is placed on the physical
hdd (inner or outer tracks), and making sure that each host does the same
polling of the guest (same interval, same data, etc). If you are doing a real-
world test, you also need to make sure the 'tools' for that environment are
properly installed. It might be appropriate to make sure that the same types
of services and processes are running on each host. However, doing so might
make it less of a real-world test. I.e., if a service is a security
vulnerability on a host it will probably be disabled in production, while a
service needed for centralized monitoring might be enabled in production.
Lastly, if you add Hyper-V to your tests, normalization becomes harder, since
you might have to adopt different tools for sampling your results.

... obviously you'd want each environment on the same hardware, and maybe do
tests on AMD vs. Intel, since they have slightly different instruction sets.

If anyone knows of any unbiased tests, I would be interested.
