Mirroring root disk using SVM - but no free slices for metadbs

Hi all,

We have an existing system that was configured using just one of the (two) internal disks. I want to mirror the disk using SVM, but have realised there is no free slice for creating the metadbs. Is there a workaround I can use for this?
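For reference, a quick way to confirm that every slice on the boot disk is already allocated (just a sketch - check the output against your own layout):

```shell
# print the VTOC for the whole disk; look for any unassigned space
prtvtoc /dev/rdsk/c1t0d0s2
# confirm which slice is currently in use for swap
swap -l
```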

In the past we have always kept slice 7 free - but this is not available:

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1f,700000/scsi@2/sd@0,0
       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1f,700000/scsi@2/sd@1,0
Specify disk (enter its number): ^D
# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0    4133838 2560427 1532073    63%    /
/proc                      0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c1t0d0s1    4133838  699047 3393453    18%    /var
swap                 11470176      48 11470128     1%    /var/run
swap                 11470128       0 11470128     0%    /tmp
/dev/dsk/c1t0d0s6    6198606  621245 5515375    11%    /opt
/dev/dsk/c1t0d0s3      46079    1041   40431     3%    /mdb
/dev/dsk/c1t0d0s5    5165838    5147 5109033     1%    /var/crash
/dev/dsk/c1t0d0s7    42645282 8677569 33541261    21%    /export/home
# cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s4       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/dev/dsk/c1t0d0s1       /dev/rdsk/c1t0d0s1      /var    ufs     1       no      -
/dev/dsk/c1t0d0s7       /dev/rdsk/c1t0d0s7      /export/home    ufs     2       yes     -
/dev/dsk/c1t0d0s3       /dev/rdsk/c1t0d0s3      /mdb    ufs     2       yes     -
/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /opt    ufs     2       yes     -
/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /var/crash      ufs     2       yes     -
swap    -       /tmp    tmpfs   -       yes     -

Are all those partitions useful?
/var and /var/crash - the latter doesn't seem necessary ...
/mdb ???

I'll have to check who is using which partition.

So is the only option to get rid of a filesystem and install the metadbs in the free space?

Wipe out one of the partitions and recreate the filesystem using newfs.
Then you can configure the metadbs.
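A sketch of that approach, assuming /var/crash on slice 5 is the one you sacrifice (substitute whichever filesystem you can actually spare):

```shell
# Sketch only -- assumes /var/crash (c1t0d0s5) is the slice being freed.
umount /var/crash                 # unmount the doomed filesystem
# ...and remove its line from /etc/vfstab

# place three state database replicas on the freed slice
metadb -a -f -c 3 c1t0d0s5
# once the second disk carries the same VTOC, put replicas there too
metadb -a -c 3 c1t1d0s5

metadb                            # verify the replica status
```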

cool, many thanks. I was wondering what my options were. But it looks like i'll have to free up one partition.

Using metaset you might be able to keep the metadbs only on the managed drives, but I've never tried it ...
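For what it's worth, diskset replicas are created automatically when drives are added to a set - though note that a diskset cannot contain the boot disk, so it wouldn't help with mirroring root itself. A very rough, untested sketch (set and host names are examples only):

```shell
# Untested sketch -- 'demoset', 'myhost' and the drive are examples.
metaset -s demoset -a -h myhost   # create a diskset owned by this host
metaset -s demoset -a c1t1d0      # adding a drive creates its replicas
```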

The Solstice manual I used to teach myself included a bit about this situation; there is a small chance it won't work with the current version of SVM. It stated that you can use a swap partition to hold the metadbs IF you put the dbs on the partition before using it for swap. So you'd need to comment out the swap line in vfstab, reboot, add the metadbs, reboot, enable the swap, then reboot again.
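As a sketch of those steps (again, untested here and worth verifying against the current SVM docs):

```shell
# 1. comment out the c1t0d0s4 swap line in /etc/vfstab, then reboot
# 2. with swap no longer in use, add the replicas to the swap slice:
metadb -a -f -c 3 c1t0d0s4
# 3. reboot, re-enable the swap entry in vfstab, then reboot once more
```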
I've never actually tried it and it's worth checking the current SVM doc on docs.sun.com.

seg mentioned using the swap partition for the metadb's.

This way works well, and at a previous place I worked, all our Solaris servers were set up this way.

A reboot is not needed to disable swap. All you need to do is run swap -d, add the metadbs to the swap partition, create a metadevice for the swap partition, and then add that metadevice as the swap partition. Update /etc/vfstab to reflect the fact that swap is now on a metadevice and not a direct partition.
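A sketch of those steps (the metadevice names d20/d21/d22 are just examples, and the slice names assume the layout shown above):

```shell
swap -d /dev/dsk/c1t0d0s4        # remove the slice from swap, no reboot
metadb -a -f -c 3 c1t0d0s4       # replicas go at the start of the slice
metainit -f d21 1 1 c1t0d0s4     # submirror on the existing swap slice
metainit d22 1 1 c1t1d0s4        # submirror on the second disk
metainit d20 -m d21              # one-sided mirror for swap
swap -a /dev/md/dsk/d20          # enable swap on the metadevice
metattach d20 d22                # attach the second submirror
# finally, change the vfstab swap entry from /dev/dsk/c1t0d0s4
# to /dev/md/dsk/d20
```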

Available swap space will be reduced by the amount of space allocated to the metadbs.

Many thanks TanDeeJay & Seg, that looks like the way to go. It looks a lot 'cleaner' than shrinking swap, creating a new partition for the metadbs, etc.

I'll give that a go this week on 2 of our hosts & report back on how I get on.