Hi all.
I am at a client site currently running SCO Unix 5.0.6 as a guest in a VMware ESX environment. We are moving to Hyper-V for reasons I won't detail. Our initial design, intended to avoid the complexity of porting the applications to 5.0.7V, was functionally successful using nested hypervisors: SCO Unix runs in VMware Workstation on Windows 2008, which is in turn hosted on Hyper-V.
Everything functions fine (with some work) under the nested hypervisors, but performance suffers significantly on some fairly rudimentary file-creation and copy tests, and that translates directly to the application. We expected a performance hit, say arbitrarily 20%, but are seeing a 200-300% penalty. It is really bad (13x slower than ESX-hosted) when creating a file with a tiny block size (dd if=/dev/zero of=/u/mytestfile.out bs=1 count=0), but as mentioned, more reasonable block sizes of 512 bytes or 4k still come in 2-3 times slower. Perhaps relevant: %sys time (per sar) is high during these operations, typically pegged at 99% with a 1-byte block size, but still in the 90s regardless. File copies show the same pattern: 2-3 times slower with high %sys CPU.
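For concreteness, here is a sketch of the kind of micro-benchmark we are running. Paths and sizes are illustrative only; /tmp stands in for the /u filesystem on the SCO guest, and the file size is arbitrary:

```shell
# Illustrative micro-benchmark; run the same commands on each platform
# (nested hypervisors, native Hyper-V, ESX) and compare elapsed times.
OUT=/tmp/mytestfile.out   # placeholder for /u/mytestfile.out

# Write test at a "reasonable" block size (4k); bs=1 is the pathological case.
time dd if=/dev/zero of=$OUT bs=4k count=25600   # ~100 MB

# Copy test on the same filesystem.
time cp $OUT $OUT.copy

# In another session, watch %sys while the tests run, e.g.:
#   sar -u 1 10

rm -f $OUT $OUT.copy
```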
Thinking 5.0.7V hosted natively on Hyper-V might hold the answer, we downloaded and installed the 5.0.7V image for Hyper-V. Lo and behold, performance is worse: typically 4-5 times slower out of the box.
There are no other competing processes on the box. We're presenting a single processor to the guest, having read that SMP is not supported in either virtualized environment, even with 5.0.7V, let alone 5.0.6. The physical host has 16 cores and nothing else is running on it. When CPU usage is high in the guest, we can see correspondingly high usage on the physical host.
Is there an expected performance penalty for SCO Unix 5.0.6, or the officially supported 5.0.7V, under ESX and/or Hyper-V versus bare metal, assuming a single CPU of the same characteristics?
Is there some tuning that could be expected to undo a doubling or tripling of I/O times?
Thanks for any advice.