How to map devices to mount points?

Solaris 11,
iostat -xncz 5


                    extended device statistics
 r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device


    6.0   18.2  226.7  205.6  0.0  0.0    0.0    1.7   0   3 vdc206
    6.8   39.6  252.6  341.0  0.0  0.1    0.0    1.5   0   4 vdc207
    0.2   30.0    9.4  266.9  0.0  0.0    0.0    1.1   0   1 vdc208
    6.6   19.4  242.2  330.1  0.0  0.0    0.0    1.9   0   4 vdc209
    6.8   36.4  230.9  371.2  0.0  0.1    0.0    1.4   0   4 vdc210


  

The mounts look like this:

df

...     
/S0T1          (ds/S0T1       ): 7815505 blocks  7815505 files     
/S0Q1          (ds/S0Q1       ): 9602502 blocks  9602502 files    
...
 

I want to know how these vdc* devices map to the /S0* mounts.

Thanks!

Assuming this is ZFS, use:

zpool iostat -v

to get pool and underlying devices statistics and

zfs list 

to figure out which pool a file system uses.
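
For example (a minimal sketch based on the df output you posted; adjust the names to your system), you can map a mount point to its dataset and then to its pool like this:

zfs list -o name,mountpoint | grep '/S0T1'
# e.g. ds/S0T1   /S0T1  -> the pool is the first component of the dataset name, "ds"
zpool iostat -v ds 5
# restricts the per-vdev statistics to that pool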

By the way, please use CODE tags, not QUOTE ones to show code samples.

 zpool iostat -v
             capacity     operations    bandwidth
pool      alloc   free   read  write   read  write
--------  -----  -----  -----  -----  -----  -----
ds        2.99T  1.46T    113    147  13.4M  3.90M
  c1d212   667G   349G     23     30  2.90M   850K
  c1d213   666G   350G     23     30  2.89M   845K
  c1d214   668G   348G     23     30  2.90M   856K
  c1d215   666G   350G     23     30  2.89M   839K
  c1d216   398G  98.3G     18     25  1.84M   598K
--------  -----  -----  -----  -----  -----  -----
rpool     32.6G  26.9G      0      5  36.5K  36.7K
  c1d203  32.6G  26.9G      0      5  36.5K  36.7K
--------  -----  -----  -----  -----  -----  -----

I wanted the above results to relate to the vdc* devices whose I/O performance is shown by iostat -xncz. How is c1d* related to vdc*?

zfs list

There are too many file systems shown:

ds/S0*

And none of them refer to c1d* or vdc*.

Thanks!

What does this report?

fstyp /S0T1

The vdc* devices (virtual disk client) are virtualized disks; the c1d* devices are symbolic links to slices, i.e. partitions on a disk (whether virtualized or not).

You are running in a logical (guest) domain and the real devices are connected to a control or i/o domain (another Solaris instance).

See "Introduction to Virtual Disks" in the Oracle(R) VM Server for SPARC 3.6 Administration Guide.
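
From inside the guest you can also see the relationship yourself; here is a hedged sketch (the instance numbers and device paths below are only placeholders):

ls -l /dev/dsk/c1d212s2
# the link target is a vdc device node under /devices, typically something like
# ../../devices/virtual-devices@100/channel-devices@200/disk@N:c
grep vdc /etc/path_to_inst
# maps each of those /devices paths to its instance number,
# i.e. the vdcN names that iostat -x prints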

Hello jilliagre,
due to our security rules, I cannot use the real name of the mount here.

The real name is therefore replaced with </mount> below.

fstyp /</mount>

I got:
unknown_fstyp (no matches)

# id
uid=0(root) gid=0(root)

I am not an OS admin but an application admin, and I do have root access.
I want to investigate an I/O issue with iostat, so I need to know how the vdc* virtual disks that iostat reports map to the mount points where the applications and databases are located.

Hmm, you might be using vxvm/vxfs, not my cup of tea...

Someone else, familiar with this product might help.

In any case, the iostat numbers you posted do not look to show any issue.

Correct, this is a test server, not the production server, which has the performance issue. The test and production servers are set up the same.

I wanted to show people how to use iostat to tie the I/O figures to the mount points,
but I do not have root access on production.

I am no Solaris expert by any stretch, but some principles of performance tuning remain the same in every OS: does the production server have "real" disks, or is it a virtual guest operating on virtual disks too? If the latter is the case, you are probably looking in the wrong place anyway. Under the virtual disks there have to be some real devices - the LUNs on a storage box, members of a RAID in the host server, whatever. It is at these systems that you have to measure I/O, not on your virtualised guest.
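
In an LDoms setup like the one described above, that means taking the measurements on the control or I/O domain that owns the physical devices, for example (a minimal sketch, not output from the systems in this thread):

# on the control/io domain, not inside the guest
iostat -xnz 5         # the real cXtYdZ devices / LUNs backing the virtual disks
zpool iostat -v 5     # if the vdisk backends are ZFS volumes or files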

Consider this (hypothetical) scenario: a server with 5 guests, g1-g5, and a disk in this server where the virtual disks for these guests reside. If g5 has heavy I/O, this will reduce the remaining bandwidth available to g1-g4. Therefore, measurements on g1, taken because this guest has "intermittent performance issues", will tell you nothing about the real issue; in fact they will only tell you when g5 has load peaks. You may not even know what you are measuring, because you may not know what g5 is doing and when.

It is a worthwhile effort to first get a detailed setup so that you can visualise the "flow" between the various interdependent parts of the machinery. Only then test/measure one component after the other to find out where the bottleneck is located.

I hope this helps.

bakunin

Hi Sean,

I have had a little think about this problem during my break, and I now realise that the information you are looking for will not be easy to come by; the nature of ZFS makes it increasingly difficult as you add more disks to the zpool.

ZFS dynamically creates a block to vdev relationship based on block size (recordsize) and the number of disks in the pool. So if we create a pool with four disks and a block size of 128k (default), the blocks are allocated basically on a round robin basis across the four disks.

So identifying a file-system-to-vdev relationship will not be easy, but you could tackle it like this:

root@fvssphsun01:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                             325G   224G  73.5K  /rpool
rpool/ROOT                       9.29G   224G    31K  legacy
rpool/ROOT/s11331                9.20G   224G  4.13G  /
rpool/ROOT/s11331/var            3.07G  60.9G  1.69G  /var
rpool/ROOT/solaris               92.9M   224G  2.93G  /
rpool/ROOT/solaris/var           7.08M  64.0G  1.32G  /var
rpool/S11.3_GA                     31K   224G    31K  /Shared/S11.3_GA
rpool/S11.3_REPO                  134G   224G   134G  /export/s11repo
rpool/S11.3_SRU_17.5               31K   224G    31K  /Shared/S11.3_SRU_17.5
rpool/VARSHARE                   9.47G   224G  36.8M  /var/share
rpool/VARSHARE/pkg               9.43G   224G    32K  /var/share/pkg
rpool/VARSHARE/pkg/repositories  9.43G   224G  9.43G  /var/share/pkg/repositories
rpool/VARSHARE/zones               31K   224G    31K  /system/zones
rpool/backup                      144K   224G   144K  /backup
rpool/backups                      31K   224G    31K  /backups
rpool/dump                        132G   228G   128G  -
rpool/export                     8.08G   224G    33K  /export
rpool/export/home                8.08G   224G  4.04G  /export/home
rpool/export/home/e400007        44.1M   224G  44.1M  /export/home/e400007
rpool/export/home/e415243        4.00G   224G  4.00G  /export/home/e415243
rpool/patrol                     2.84G   224G  2.84G  /usr/local/patrol
rpool/patroltmp                  1.27M   224G  1.27M  /usr/local/patrol/tmp
rpool/swap                       16.5G   225G  16.0G  -
rpool/swap2                      12.4G   225G  12.0G  -
root@fvssphsun01:~# dd if=/dev/zero of=/export/home/e415243/test bs=128k count=1
1+0 records in
1+0 records out
root@fvssphsun01:~# zdb -dddddddd rpool/export/home/e415243/test
zdb: can't find 'rpool/export/home/e415243/test': No such file or directory
root@fvssphsun01:~# zdb -dddddddd rpool/export/home/e415243 > /export/home/e415243/tmp/delete_me

Now you have to go and have a look at the output and find what you want - but be warned:

root@fvssphsun01:~# cd /export/home/e415243/tmp
root@fvssphsun01:/export/home/e415243/tmp# ls -l
total 6153
-rw-r--r--   1 root     root     3072629 Oct 15 13:03 delete_me
root@fvssphsun01:/export/home/e415243/tmp#

Getting the required output:

  Object  lvl   iblk   dblk  dsize  lsize   %full  type
        73    1    16K   128K   128K   128K  100.00  ZFS plain file (K=inherit) (Z=inherit)
                                        168   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 0
        path    /test
        uid     0
        gid     0
        atime   Mon Oct 15 13:00:33 2018
        mtime   Mon Oct 15 13:02:29 2018
        ctime   Mon Oct 15 13:02:29 2018
        crtime  Mon Oct 15 13:00:33 2018
        gen     3333994
        mode    0100644
        size    131072
        parent  4
        links   1
        pflags  0x40800000204
Indirect blocks:
                 0 L0 0:0x6633a25a00:0x20000 0x20000L/0x20000P F=1 B=3334017/3334017 ---

                segment [000000000000000000, 0x0000000000020000) size  128K

Where I have a single line across the bottom (beginning with 0), your pool should show 5 lines, one for each vdev, so you should be able to see which vdev the data was written to. If you write a file bigger than 640K, at least one block will be written to each vdev - ZFS manages that bit. As for the ZFS file systems, they are striped across however many disks are in the pool.
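
If you want to pull that information out without reading the whole dump, something like this should do it (a hedged sketch, assuming the vdev:offset:asize DVA format shown in the indirect block line above; the dataset name is only an example):

zdb -dddddddd ds/S0T1 | grep ' L0 ' | awk '{print $3}' | cut -d: -f1 | sort | uniq -c
# counts the L0 (data) blocks of the dataset per vdev index; the indices should
# correspond to the vdev order shown by zpool status / zpool iostat -v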

Can you tell us what the hardware is? This looks suspiciously like the view from inside an LDOM.

Please post the output from echo | format (or a part of it if it's too big) and, if possible, /usr/sbin/virtinfo -a; this will give a good starting point.

Regards

Gull04

Solaris 11 on x86 or SPARC ?
I'll presume it's SPARC as far as Oracle VM info goes ...

Try the following iostat command :

iostat -xcnzCTd 3 10

As the manual states:

....
     -x          Report extended disk statistics. By default, disks are
                 identified by instance names such as ssd23 or md301.
                 Combining the x option with the -n option causes disk names
                 to display in the cXtYdZsN format, more easily associated
                 with physical hardware characteristics. Using the cXtYdZsN
                 format is particularly helpful in the FibreChannel
                 environments where the FC World Wide Name appears in the
                 t field.
...

Outside of yourldom, on the control/service domain that hosts the disk service, you will need to match the disks added to the virtual disk service (vds) with the ID chosen when the disk was added to yourldom:

ldm add-vdisk id=N backend-disk backend-disk@some-vds yourldom 

Where N above is the number you see for that particular disk inside the LDOM in the iostat/format/zpool commands, and also the numbering of the disk(s) you see when doing ldm list -l yourldom.
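
For example, on the control/service domain (a minimal sketch; yourldom is a placeholder and the exact section layout depends on your ldm version):

ldm list -l yourldom
# in the DISK section, note each vdisk's VOLUME (volume@vds-name) and its ID
ldm list-services
# in the VDS section, see which physical device, slice or file backs each volume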

This assumes you are not using ZVOLs or metadevices as disk backends on the control/service domain.
If you are, more work will be needed to match the physical disks to the virtual disks.

But a ZVOL as a disk backend to the LDOM, and then vxfs alongside ZFS file systems inside the LDOM, sounds like a nightmare...

For further analysis, I would require the output of the following commands, which can be quite long, so you can attach them as files or similar.

# On control/service domain 
ldm list-services
ldm list -l <yourldom>
ldm list
echo "::memstat" | mdb -k
tail -10 /etc/system
# On LDOM for start
echo | format

Hope that helps
Regards
Peasant.