Not able to increase ZFS file system on NGZ

I have a Solaris 10 server running ZFS. ctdp04_vs03-pttmsp01 is one of its non-global zones. I want to increase the /ttms/prod file system of the zone, which is actually /zone/ctdp04_vs03-pttmsp01/ttms/prod on the global server.
I added a new 9 GB disk, emcpower56a, and I can now see all four disks in the zpool. All four disks are 9 GB each.

root@ctdp04_vs03:/# zpool status pttmsp01_app_pool
  pool: pttmsp01_app_pool
 state: ONLINE
 scrub: none requested
config:
        NAME           STATE     READ WRITE CKSUM
        pttmsp01_app_pool  ONLINE       0     0     0
          emcpower51c  ONLINE       0     0     0
          emcpower52c  ONLINE       0     0     0
          emcpower53c  ONLINE       0     0     0
          emcpower56a  ONLINE       0     0     0
errors: No known data errors
root@ctdp04_vs03:/# df -h | grep -i pttmsp01_app_pool
pttmsp01_app_pool       26G    18K   1.6G     1%    /pttmsp01_app_pool
pttmsp01_app_pool/ttms   205M    21K   205M     1%    /zone/ctdp04_vs03-pttmsp01/ttms
pttmsp01_app_pool/ttms_apps    12G   6.5G   5.5G    55%    /zone/ctdp04_vs03-pttmsp01/ttms/apps
pttmsp01_app_pool/ttms_prod    21G   8.0G   5.6G    59%    /zone/ctdp04_vs03-pttmsp01/ttms/prod
root@ctdp04_vs03:/#
root@psapip03:/# zlogin ctdp04_vs03-pttmsp01 df -h /ttms/prod
Filesystem             size   used  avail capacity  Mounted on
/ttms/prod              14G   8.0G   5.6G    59%    /ttms/prod

When I log in to the zone, it shows only 14 GB. Why? I thought the output below should show the increased size, i.e. 9 * 4 = 36 GB.

root@ctdp04_vs03:/# zpool list -o name,size | grep -i pttmsp01_app_pool
pttmsp01_app_pool   26.2G

Did someone enable quotas?
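
If so, a quick hedged check (dataset names taken from your df output):

# zfs get -r quota pttmsp01_app_pool

A quota on pttmsp01_app_pool/ttms_prod would cap what df reports inside the zone, no matter how much space the pool itself has.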

Yes, I had set a quota. But I think I see the reason: I added emcpower56a instead of emcpower56c. I should have added the third slice, which is the complete disk, instead of the first partition. The challenge now is how to remove this disk. Can we remove a disk from ZFS? "zpool remove pool device" does not allow it.

I think you could try attaching the proper LUN to the one you want to remove, then, after synchronization is done, detaching it. Check "zpool attach".
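
A hedged sketch of that sequence, assuming the storage team presents a fresh 9 GB LUN as, say, emcpower57 (a hypothetical name; substitute whatever device you actually get):

# zpool attach pttmsp01_app_pool emcpower56a emcpower57c
# zpool status pttmsp01_app_pool
# zpool detach pttmsp01_app_pool emcpower56a

attach turns the emcpower56a vdev into a two-way mirror, status lets you watch the resilver, and once it has completed, detach drops the wrong slice and leaves the whole-disk slice in its place.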

Hi

Where are you with this?

What commands did you use to add the disk in the first place?

Gabriel Smith

zpool pttmsp01_app_pool emcpower56a
zfs quota=21G pttmsp01_app_pool/ttms_prod

This is a critical file system that holds an important application, so we do not want to take any risk that could break the system. Maybe we can take downtime for it and then try something.

You probably meant you ran:

zpool add pttmsp01_app_pool emcpower56a
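
The second command is also missing its subcommand; presumably it was:

# zfs set quota=21G pttmsp01_app_pool/ttms_prod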

Do me a favor, just run a plain old zpool list and a zfs list.

# zpool list

# zfs list

This will be a long list, so here I just grep the entries for that zone:

root@ctdp04_vs03:/# zpool list | grep -i ttms
pttmsp01_app_pool   26.2G  14.6G  11.6G    55%  ONLINE  -
pttmsp01_root_pool  8.69G  4.04G  4.65G    46%  ONLINE  -
root@ctdp04_vs03:/#
root@ctdp04_vs03:/# zfs list | grep -i ttms
pttmsp01_app_pool            24.2G  1.57G    18K  /pttmsp01_app_pool
pttmsp01_app_pool/ttms         21K   205M    21K  /zone/ctdp04_vs03-pttmsp01/ttms
pttmsp01_app_pool/ttms_apps  6.51G  5.49G  6.51G  /zone/ctdp04_vs03-pttmsp01/ttms/apps
pttmsp01_app_pool/ttms_prod  8.05G  3.95G  8.05G  /zone/ctdp04_vs03-pttmsp01/ttms/prod
pttmsp01_root_pool           8.01G   554M    18K  /pttmsp01_root_pool
pttmsp01_root_pool/zone      4.03G  3.97G  4.03G  /zone/ctdp04_vs03-pttmsp01/root

Well, the size of the zpool pttmsp01_app_pool is not 36 GB. Is there a reason why you believe that each disk is 9 GB?

Also, what does the following command show?

zfs list -r | grep -i ttms

Yes Busi286, the storage team gave a 9 GB LUN, and that is emcpower56. All the other disks are the same size when I check them with inq or format.

root@ctdp04_vs03:/# zfs list -r | grep -i ttms
pttmsp01_app_pool            24.2G  1.57G    18K  /pttmsp01_app_pool
pttmsp01_app_pool/ttms         21K   205M    21K  /zone/ctdp04_vs03-pttmsp01/ttms
pttmsp01_app_pool/ttms_apps  6.51G  5.49G  6.51G  /zone/ctdp04_vs03-pttmsp01/ttms/apps
pttmsp01_app_pool/ttms_prod  7.98G  4.02G  7.98G  /zone/ctdp04_vs03-pttmsp01/ttms/prod
pttmsp01_root_pool           8.01G   554M    18K  /pttmsp01_root_pool
pttmsp01_root_pool/zone      4.03G  3.97G  4.03G  /zone/ctdp04_vs03-pttmsp01/root
root@ctdp04_vs03:/# inq -nodots | grep -i emcpower56
/dev/rdsk/emcpower56c                :EMC     :SYMMETRIX       :5773  :17!jn000   :9144000
root@ctdp04_vs03:/#
root@ctdp04_vs03:/# zpool status pttmsp01_app_pool
  pool: pttmsp01_app_pool
 state: ONLINE
 scrub: none requested
config:
        NAME           STATE     READ WRITE CKSUM
        pttmsp01_app_pool  ONLINE       0     0     0
          emcpower51c  ONLINE       0     0     0
          emcpower52c  ONLINE       0     0     0
          emcpower53c  ONLINE       0     0     0
          emcpower56a  ONLINE       0     0     0
errors: No known data errors

For laughs and giggles, I created a pool with 4x 9 GB disks.
I lost about 200 MB, probably due to the pool labels and metadata.

The mounted file system shows a loss of another 600 MB; this is due to the fact that some of the space is reserved by ZFS for its own operations.

This is a far cry from your 12 GB loss. I think it's time to have a talk with your storage team to see what their take on this is.

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c1::dsk/c1t4d0                 disk         connected    configured   unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok
usb0/3                         unknown      empty        unconfigured ok
usb0/4                         unknown      empty        unconfigured ok
usb0/5                         unknown      empty        unconfigured ok
usb0/6                         unknown      empty        unconfigured ok
usb0/7                         unknown      empty        unconfigured ok
usb0/8                         unknown      empty        unconfigured ok
usb1/1                         unknown      empty        unconfigured ok
usb1/2                         unknown      empty        unconfigured ok
usb1/3                         unknown      empty        unconfigured ok
usb1/4                         unknown      empty        unconfigured ok
usb1/5                         unknown      empty        unconfigured ok
usb1/6                         unknown      empty        unconfigured ok
usb1/7                         unknown      empty        unconfigured ok
usb1/8                         unknown      empty        unconfigured ok

# zpool create data c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
data  35.8G  79.5K  35.7G     0%  ONLINE  -
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data    75K  35.2G    21K  /data
#

Hi, what happened with this issue? Did you guys figure out the problem?
Was the storage team able to provide any insight?

busi, the storage team said everything was fine on their side. After some struggle, we took downtime from the application team, took 5 disks of 9 GB each, mounted them as _new, and resynced the data. Later we unmounted the old file systems and renamed the _new ones to match the originals. In other words, we had to redo everything from scratch :)
The storage team suggested that we use emcpower56c, as c represents the 3rd slice (the whole disk).
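
For anyone hitting the same wall, a hedged sketch of that rebuild (the new device and pool names are hypothetical stand-ins, and snapshot send/receive is just one way to do the copy; our actual method may have differed):

# zpool create pttmsp01_app_pool_new emcpower60c emcpower61c emcpower62c emcpower63c emcpower64c
# zfs snapshot pttmsp01_app_pool/ttms_prod@move
# zfs send pttmsp01_app_pool/ttms_prod@move | zfs recv pttmsp01_app_pool_new/ttms_prod
# zfs set mountpoint=none pttmsp01_app_pool/ttms_prod
# zfs set mountpoint=/zone/ctdp04_vs03-pttmsp01/ttms/prod pttmsp01_app_pool_new/ttms_prod

You can confirm that the c slice really spans the whole disk with prtvtoc /dev/rdsk/emcpower56c; by Solaris convention, slice 2 (the c slice) maps the entire disk.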

I see. Glad to see you made progress with it.

On a side note, if you have any new employees needing to learn ZFS, please refer them to my video:

Zeta file system - YouTube

Thanks Gabriel. Great tutorials and really helpful, especially for a company like mine, where admins come and go frequently.
I am going through the rest of your videos.

Thank you!!!

Let me know if you need anything simple that's not hardware- or ok-prompt-specific.
I am a bit busy these days, but I could definitely put your request on my to-do list.

gabriel, I posted another question here - Sliced vx cds veritas format - just in case you are familiar with Veritas Volume Manager. Otherwise, I am also trying to dig into this myself.