The vdc* devices (virtual disk client) are virtualized disks; the c1d* devices are symbolic links to slices (partitions) on a disk, whether virtualized or not.
You are running in a logical (guest) domain, and the real devices are connected to a control or I/O domain (another Solaris instance).
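One quick way to confirm this from inside the guest is virtinfo (a sketch; the exact output format varies by Solaris release):

/usr/sbin/virtinfo -a
# Domain role: LDoms guest     <- reported when running in a guest domain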
Hello jilliagre,
Due to our security rules, I cannot use the real name of the mount point, so it is replaced with </mount> below.
fstyp /</mount>
which returned
unknown_fstyp (no matches)
# id
uid=0(root) gid=0(root)
I am not an OS admin but an application admin, though I have root access.
I want to investigate an I/O issue with iostat, so I need to know how the vdc* virtual disks reported by iostat map to the mount points where the applications and databases are located.
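A rough sketch of the mapping chain I am after (dataset and device names below are hypothetical):

df -h /</mount>                  # mount point -> ZFS dataset and pool
zpool status mypool              # pool -> cXdY device names
ls -l /dev/dsk/c1d0s0            # cXdY -> physical device path, e.g.
#   ../../devices/virtual-devices@100/channel-devices@200/disk@0:a
grep vdc /etc/path_to_inst       # physical path -> vdc instance, e.g.
#   "/virtual-devices@100/channel-devices@200/disk@0" 0 "vdc"  -> vdc0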
I am no Solaris expert by any stretch, but some principles of performance tuning remain the same in every OS: does the production server have "real" disks, or is it a virtual guest operating on virtual disks too? If the latter is the case, you are probably looking in the wrong place anyway. Under the virtual disks there have to be some real devices - LUNs on a storage box, members of a RAID in the host server, whatever. It is on these systems that you have to measure I/O, not on your virtualised guest.
Consider this (hypothetical) scenario: a server with 5 guests, g1-g5, and a disk in this server where the virtual disks for these guests reside. If g5 has heavy I/O, this will reduce the bandwidth remaining for g1-g4. Measuring on g1 because that guest has "intermittent performance issues" will therefore tell you nothing about the real issue; in fact it will only tell you when g5 has load peaks. You may not even know what you are measuring, because perhaps you don't know what g5 is doing and when.
It is worthwhile to first get a detailed picture of the setup so that you can visualise the "flow" between the various interdependent parts of the machinery. Only then test/measure one component after another to find out where the bottleneck is located.
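As a minimal illustration (assuming you or the OS admin can log in to the control domain), run the same measurement in both places and compare:

# inside the guest (virtual view)
iostat -xnz 5
# on the control/service domain (real backend devices)
iostat -xnz 5

If the guest shows high service times while the backend devices are nearly idle, the bottleneck is likely in the virtualisation layer rather than in the disks themselves.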
I have had a little think about this problem during my break, and I now realise that the information you are looking for will not be easy to come by; the nature of ZFS makes it increasingly difficult as you add more disks to the zpool.
ZFS dynamically creates a block-to-vdev relationship based on the block size (recordsize) and the number of disks in the pool. So if we create a pool with four disks and a block size of 128K (the default), blocks are allocated on an essentially round-robin basis across the four disks.
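For example (a sketch with hypothetical device names):

zpool create tank c1d1 c1d2 c1d3 c1d4   # four-disk striped pool
zfs get recordsize tank
# NAME  PROPERTY    VALUE  SOURCE
# tank  recordsize  128K   default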
So identifying a file-system-to-vdev relationship will not be easy. You could tackle it like this:
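One way (a sketch; the dataset name, object number placeholder, and zdb verbosity are assumptions to adjust to your pool layout) is to write a test file and then dump its block pointers to see which vdevs they landed on:

mkfile 3m /export/home/e415243/tmp/delete_me       # ~3 MB test file
ls -i /export/home/e415243/tmp/delete_me           # note the object (inode) number
zdb -dddddd rpool/export/home/e415243 <object#>    # dump its block pointers
# each DVA is printed as vdev:offset:size, e.g. 0:240000:20000 -
# the leading number is the vdev the block was written to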
Now you have to go and have a look at the output and find what you want - but be warned:
root@fvssphsun01:~# cd /export/home/e415243/tmp
root@fvssphsun01:/export/home/e415243/tmp# ls -l
total 6153
-rw-r--r-- 1 root root 3072629 Oct 15 13:03 delete_me
root@fvssphsun01:/export/home/e415243/tmp#
Where I have a single line across the bottom (beginning with 0), your pool should show 5 lines - one for each vdev - and you should be able to see which vdev the output was written to. If you write a file bigger than 640K (5 vdevs x 128K), it will write at least one block to each; ZFS manages that bit. As for the ZFS file systems, they are striped across however many disks are in the pool.
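Alternatively (pool name hypothetical), you can simply watch the per-vdev write counters while the test file is being written:

zpool iostat -v mypool 1
# prints one line per vdev; the write operations/bandwidth columns show
# which vdev(s) are receiving the test file's blocks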
Can you tell us what the hardware is? This looks suspiciously like the view from inside an ldom.
Please post the output from echo | format (or a part of it, if it's too big) and, if possible, /usr/sbin/virtinfo -a; this will give a good starting point.
Solaris 11 on x86 or SPARC ?
I'll presume it's SPARC as far as Oracle VM info goes ...
Try the following iostat command:
iostat -xcnzCTd 3 10
As the manual states:
....
-x      Report extended disk statistics. By default, disks are
        identified by instance names such as ssd23 or md301.
        Combining the x option with the -n option causes disk
        names to display in the cXtYdZsN format, more easily
        associated with physical hardware characteristics.
        Using the cXtYdZsN format is particularly helpful in
        the FibreChannel environments where the FC World Wide
        Name appears in the t field.
...
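Inside the guest you can expect output along these lines (a hypothetical sample; device names and figures will differ, and -T d adds a timestamp line before each interval):

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.5   12.3    4.1  158.7  0.0  0.1    0.0    4.2   0   3 c1d0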
Outside of your ldom, on the control/service domain which is hosting that disk service, you will need to match the disks added to the virtual disk service (vds) with the ID chosen when each disk was added to your ldom.
That ID is the N you see for the disk inside the ldom in iostat/format/zpool commands (cXdN), and it matches the numbering of the disk(s) you see when doing ldm list -l yourldom.
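For example (hypothetical names; the column layout may vary slightly between ldm versions):

# on the control domain
ldm list -l yourldom
...
DISK
    NAME     VOLUME             TOUT ID   DEVICE  SERVER    MPGROUP
    vdisk2   vol2@primary-vds0       2    disk@2  primary
# ID 2 corresponds to c0d2 (vdc2) inside the guest; ldm list-services
# then shows which backend device the volume vol2 on primary-vds0 points at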
This assumes you are not using ZVOLs or metadevices as disk backends on the control/service domain.
If you are, more work will be needed to match the physical disk to the virtual one.
But a ZVOL as the disk backend to an ldom, and then vxfs alongside zfs file systems inside the ldom, sounds like a nightmare....
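You can check which backend type is in use like this (a sketch; names are hypothetical and columns may differ by ldm version):

# on the control/service domain
ldm list-services
...
VDS
    NAME          LDOM     VOLUME  OPTIONS  MPGROUP  DEVICE
    primary-vds0  primary  vol2                      /dev/zvol/dsk/rpool/ldom1
# a DEVICE under /dev/zvol/... is a ZVOL backend; a /dev/dsk/cXtYdZs2
# path is a physical disk or LUN slice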
For further analysis, I would require the output of the following commands. It can be quite long, so you could attach it as files or something.
# On control/service domain
ldm list-services
ldm list -l <yourldom>
ldm list
echo "::memstat" | mdb -k
tail -10 /etc/system
# On the LDOM, for a start
echo | format