Resize LUNs and ZFS pool on Sun Cluster

Hi,

I need to increase the size of a ZFS filesystem which lives on two mirrored SAN LUNs.

root@xxxx1:/tttt/DB-data-->zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
xxxx-data-zpool      3.97G   2.97G   1.00G    74%  ONLINE     /
xxxx-logs-zpool      15.9G   3.42G   12.5G    21%  ONLINE     /

root@xxxx1:/tttt/DB-data-->zpool status
  pool: xxxx-data-zpool
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        xxxx-data-zpool                          ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B80001138280000A63C48183A82d0  ONLINE       0     0     0
            c3t600A0B800011384A00005A5548183AF1d0  ONLINE       0     0     0

errors: No known data errors

  pool: xxxx-logs-zpool
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        xxxx-logs-zpool                          ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B8000115C2C0000A1F548182CFAd0  ONLINE       0     0     0
            c3t600A0B80001159220000610D48182893d0  ONLINE       0     0     0

errors: No known data errors


root@xxxx1:/tttt/DB-data-->zfs list
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
xxxx-data-zpool                        2.97G   964M  26.5K  /xxxx-data-zpool
xxxx-data-zpool/tttt               2.97G   964M  24.5K  /xxxx-data-zpool/tttt
xxxx-data-zpool/tttt/DB-data       2.97G   547M  2.97G  /tttt/DB-data
xxxx-logs-zpool                        3.42G  12.2G  26.5K  /xxxx-logs-zpool
xxxx-logs-zpool/apache2-data            451M  1.56G   451M  /tttt/apache2-data
xxxx-logs-zpool/tttt               2.98G  12.2G  24.5K  /xxxx-logs-zpool/tttt
xxxx-logs-zpool/tttt/DB-backups    2.81G  9.19G  2.81G  /tttt/DB-backups
xxxx-logs-zpool/tttt/DB-translogs   182M   118M   182M  /tttt/DB-translogs

I need to increase the LUNs of xxxx-data-zpool, and with them the filesystem /tttt/DB-data.

root@xxxx1:/-->showrev
Hostname: xxxx1
Hostid: 84a8de3c
Release: 5.10
Kernel architecture: sun4v
Application architecture: sparc
Hardware provider: Sun_Microsystems
Domain:
Kernel version: SunOS 5.10 Generic_127127-11

Storage is an IBM DS4800

The machine is part of a two-node Sun Cluster; in case of a failover, the LUNs and the zpool are brought online on the second node.

On AIX you increase the LUNs on the storage and then run chvg -g vgname. Is there such a command for a ZFS pool on Solaris, and is it possible while the system is in operation?

cheers funksen

Perhaps someone has experience with this without a cluster, i.e. just with extending a LUN under a mirrored zpool?

I don't think you can do it with ZFS directly. Rather, you need to add another LUN to the zpool.

I had this problem too, and I created a new LUN and added it to the zpool (the safe way, and it works). In your case it looks like you are mirroring across two different controllers/disk units, so you would have to create one LUN on each and add them to the zpool as a mirror.
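A rough sketch of that add-a-mirror approach (the new device names here are hypothetical placeholders, not real LUNs; substitute the names your new LUNs get after the storage work):

```shell
# Grow the pool by adding a second mirrored pair of LUNs
# (c3tNEWLUN1d0 / c3tNEWLUN2d0 are hypothetical placeholder names):
zpool add xxxx-data-zpool mirror c3tNEWLUN1d0 c3tNEWLUN2d0
zpool status xxxx-data-zpool   # verify the new mirror vdev is ONLINE
```

The extra space becomes available to all filesystems in the pool immediately.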

Another thought, but try it with files before implementing it:

  • Fail one drive (say c3t600A0B80001138280000A63C48183A82d0).
  • Delete this LUN on the disk unit and recreate it with a bigger size.
  • Attach the newly created LUN to the same zpool (mirror mode).
  • Wait until it syncs.
  • Fail the other LUN (c3t600A0B800011384A00005A5548183AF1d0).
  • Delete this LUN on the disk unit and recreate it with a bigger size.
  • Attach the LUN to the same pool.
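The steps above can be sketched as zpool commands against the pool from the first post (device names taken from the zpool status output there; verify them on your system before running anything, and do the LUN delete/recreate on the DS4800 side in between):

```shell
# Take one side of the mirror offline:
zpool offline xxxx-data-zpool c3t600A0B80001138280000A63C48183A82d0
# ...delete the LUN on the DS4800 and recreate it at the bigger size...
# Replace the device with its (now larger) recreated self:
zpool replace xxxx-data-zpool c3t600A0B80001138280000A63C48183A82d0
zpool status xxxx-data-zpool   # wait until the resilver completes
# Then repeat the same sequence with c3t600A0B800011384A00005A5548183AF1d0
```

The pool only grows once every device in the mirror vdev has the larger size.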

thank you houston

the database is very small, so just adding new LUNs would leave the pool with 20 LUNs after two years :slight_smile:

the mirror/unmirror method seems to be the best approach, I guess; the problem is I can't test it, since that's our only SAN-attached Solaris system

# mkfile 1g file1
# mkfile 1g file2
# zpool create zphouston mirror /tmp/file1 /tmp/file2
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 984M 24K 984M 1% /zphouston
# mkfile 20m /zphouston/20megfile
# sum /zphouston/20megfile |tee /zphouston/sum
0 40960 /zphouston/20megfile
# zpool offline zphouston /tmp/file2
Bringing device /tmp/file2 offline
# zpool status zphouston
pool: zphouston
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scrub: none requested
config:

    NAME            STATE     READ WRITE CKSUM
    zphouston       DEGRADED     0     0     0
      mirror        DEGRADED     0     0     0
        /tmp/file1  ONLINE       0     0     0
        /tmp/file2  OFFLINE      0     0     0

errors: No known data errors
# rm file2
# mkfile 2g file2
# zpool replace zphouston /tmp/file2 /tmp/file2
# zpool status
pool: zphouston
state: DEGRADED
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:

    NAME                  STATE     READ WRITE CKSUM
    zphouston             DEGRADED     0     0     0
      mirror              DEGRADED     0     0     0
        /tmp/file1        ONLINE       0     0     0
        replacing         DEGRADED     0     0     0
          /tmp/file2/old  UNAVAIL      0     0     0  cannot open
          /tmp/file2      ONLINE       0     0     0

errors: No known data errors
# (after couple of minutes)
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:

    NAME            STATE     READ WRITE CKSUM
    zphouston       ONLINE       0     0     0
      mirror        ONLINE       0     0     0
        /tmp/file1  ONLINE       0     0     0
        /tmp/file2  ONLINE       0     0     0

errors: No known data errors
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 984M 20M 964M 3% /zphouston
# zpool detach zphouston /tmp/file1
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 2.0G 20M 1.9G 1% /zphouston
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:

    NAME          STATE     READ WRITE CKSUM
    zphouston     ONLINE       0     0     0
      /tmp/file2  ONLINE       0     0     0

errors: No known data errors
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 2.0G 20M 1.9G 1% /zphouston
# rm file1
# mkfile 2g file1
# zpool attach zphouston /tmp/file2 /tmp/file1
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:12:38 2009
config:

    NAME            STATE     READ WRITE CKSUM
    zphouston       ONLINE       0     0     0
      mirror        ONLINE       0     0     0
        /tmp/file2  ONLINE       0     0     0
        /tmp/file1  ONLINE       0     0     0

errors: No known data errors
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 2.0G 20M 1.9G 1% /zphouston
# sum /zphouston/20megfile
0 40960 /zphouston/20megfile
# cat /zphouston/sum
0 40960 /zphouston/20megfile
#
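One note on the behaviour the demo shows (hedged, since it depends on the ZFS version): on this 2008-era Solaris 10 build the pool only grows once the last smaller device leaves the vdev, which is why the size jumps at the detach step. Later ZFS releases added options to pick up a grown LUN without the detach:

```shell
# On newer ZFS releases (not necessarily this Solaris 10 build):
zpool set autoexpand=on zphouston      # grow automatically when devices grow
zpool online -e zphouston /tmp/file2   # or expand one device explicitly
```

If these options are absent on your release, the replace/detach sequence demonstrated above is the way to go.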

hey houston,
thanks a lot for this detailed guide; I had no idea it was possible to build a zpool from plain files. Great filesystem!
I'll let you know how we'll finally solve this problem

Sorry for the late answer; I was not at work last week.