Please help: DiskSuite on Solaris 8, FS full!!!

I am new to Solaris, so please bear with me. I have spent enough time searching to get somewhat of a grip here, but I am not sure what to do next. I am trying to grow a file system on a Solaris 8 server.

B_root@server:>df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 30257446 21037051 8917821 71% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/md/dsk/d5 20174761 6267861 13705153 32% /var
swap 8554064 24 8554040 1% /var/run
swap 8603824 49784 8554040 1% /tmp
/dev/md/dsk/d30 136261305 80675586 54223106 60% /oracle01
/dev/md/dsk/d7 11094316 10768394 214979 99% /server01

I need to make /server01 bigger by at least 2 GB....

When I look at metastat, I see that d7 has submirrors d17 and d27, and they are on two separate physical disks (c2t0d0s7, c2t1d0s7).

B_root@server:>metastat -p
d0 -m d10 d20 1
d10 1 1 c2t0d0s0
d20 1 1 c2t1d0s0
d1 -m d11 d21 1
d11 1 1 c2t0d0s1
d21 1 1 c2t1d0s1
d5 -m d15 d25 1
d15 1 1 c2t0d0s5
d25 1 1 c2t1d0s5
d7 -m d17 d27 1
d17 1 1 c2t0d0s7
d27 1 1 c2t1d0s7
d30 4 1 c1t0d0s0 \
1 c1t1d0s0 \
1 c1t2d0s0 \
1 c1t3d0s0

Here is how they are partitioned:

c2t0d0s7
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 6037 29.30GB (6038/0/0) 61442688
1 swap wu 6038 - 7686 8.00GB (1649/0/0) 16780224
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 7687 - 7727 203.72MB (41/0/0) 417216
4 unassigned wm 0 0 (0/0/0) 0
5 var wm 7728 - 11753 19.54GB (4026/0/0) 40968576
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 11754 - 13967 10.74GB (2214/0/0) 22529664

c2t1d0s7
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 6037 29.30GB (6038/0/0) 61442688
1 swap wu 6038 - 7686 8.00GB (1649/0/0) 16780224
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 7687 - 7727 203.72MB (41/0/0) 417216
4 unassigned wu 0 0 (0/0/0) 0
5 var wm 7728 - 11753 19.54GB (4026/0/0) 40968576
6 unassigned wu 0 0 (0/0/0) 0
7 unassigned wm 11754 - 13967 10.74GB (2214/0/0) 22529664

This is all I know. I have no idea what to do now. I have more HDs, and I have slots to put them into, but I just don't know where to go from here. Do I need to physically add more disks? Or can I change the partition table of the existing ones to make more room that way? Any help is appreciated; I feel like I am drowning here...

If you can add more disks, just do that and copy the data. Or does it have to be done online, with no downtime?

It can go down. First of all, thank you so much for responding; I am tired of being alone in the dark...

As I said, I can take the system down if need be. I just don't know how to go about adding a disk.

I mean, I know: shut it down, put in the disk, and do a "reconfigure" reboot. Then the system will see the new disks, right?
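
What I have in mind, if I'm reading the docs right, is something like:

ok boot -r

or, from the running system:

# touch /reconfigure
# shutdown -y -g0 -i6

and then checking that the new disks show up in "format".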

Do I have to use DiskSuite to add it and make it usable?

Sorry for the stupidity, but I am new to this OS and to Solstice DiskSuite.

Thank you again for your help!

You don't have to use DiskSuite, but it's better to build a mirror for higher availability! After building the new mirror, just mount it to /mnt and copy all the data over (maybe better to use "ufsdump" for the copy), then unmount it from /mnt and edit /etc/vfstab with the new device. You may need to do it in single-user mode so that no application is using the data on your disk!
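
For the copy, something like this (just a sketch with made-up device names; say the new mirror came out as d6):

# mount /dev/md/dsk/d6 /mnt
# ufsdump 0f - /server01 | (cd /mnt; ufsrestore rf -)
# umount /mnt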

good luck...

You can do as dukenuke suggests, which is a good, safe method: you create another, bigger filesystem, then ufsdump or tar the data from the old filesystem to the new one.

cd /DATA
tar cvpf - . | (cd /mnt; tar xvpf -)

(using "." rather than "*" so hidden files get copied too)

If anything goes wrong later, you still have the old filesystem to fall back on. The not-so-pleasant thing about this is that you may need a long downtime, depending on how much data you have to copy; at least a few hours, looking at the size of your current filesystem.

Or you can try growfs.

Example from the man pages:

d80 already contains two submirrors, d9 and d10; attach the additional disks c0t2d0 and c0t3d0 to d9 and d10:

# metattach d9 c0t2d0s5
# metattach d10 c0t3d0s5
# growfs -M /files /dev/md/rdsk/d80

Check the man pages on how to do growfs; I think you need to schedule some downtime to unmount the FS before you can do a growfs. Make sure adequate backups are done prior to doing this.

I think the man pages say you can do growfs online, but an extra 1-2 hours of caution in a downtime window is better than spending many sleepless hours trying to resolve a mistake if something goes bad :wink:
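
Applied to your d7 mirror, it would look something like this (a sketch; the new slice names are made up, use whichever slices you actually add):

# metattach d17 c1t4d0s6
# metattach d27 c1t5d0s6
# growfs -M /server01 /dev/md/rdsk/d7

That is: grow each submirror by concatenating a new slice onto it, then grow the filesystem to fill the bigger mirror.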

So growfs may work; I didn't think it would. Thanks, I will look at that possibility again.

OK, here is where I am:

I finally got onsite today, and I find the disks need to be installed into a D2 SCSI array...

It is a 280R server with a D2 array hanging off it.

The file system they need made bigger is on internal disks managed by SDS. It is mirrored (d7) with two submirrors (d17, d27), one slice on each internal disk; they use slice 7, hence the naming convention.

The array has 4 HDs in it, and metastat shows a concatenated stripe running across all of them, called d30, with about 136 GB all together.

The array will hold up to 12 drives, so 8 more HDs can be added.

I have enough 18.2 GB Sun HDs to fill it.

This is what I want to do to start:

  1. Add 4 drives to the array.

  2. Format them to put all available space on slice 6 of each drive (roughly 18 GB per HD; see the sketch after this list).

  3. Use SDS to make a new metadevice; call it d6.

  4. Make submirrors... This is where I get lost.
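
For step 2, my thought is to partition the first new disk in "format" and then copy its label to the other three (a sketch; it assumes all four disks are the same 18.2 GB model):

# prtvtoc /dev/rdsk/c1t4d0s2 | fmthard -s - /dev/rdsk/c1t5d0s2
# prtvtoc /dev/rdsk/c1t4d0s2 | fmthard -s - /dev/rdsk/c1t6d0s2
# prtvtoc /dev/rdsk/c1t4d0s2 | fmthard -s - /dev/rdsk/c1t7d0s2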

How do I make a mirrored metadevice using more than two drives?

I know I can do it like before and have 2x the redundancy/speed or whatever, but I need to end up with a 36 GB mirrored file system with two submirrors, NOT an 18 GB file system mirrored twice over with four submirrors...

Can someone please help me with the steps from here? The HDs will be:
c1t4d0
c1t5d0
c1t6d0
c1t7d0

Any help will be appreciated, thank you in advance.

Firstly, consult the documentation for Sun Volume Manager (see below for why this product). You have GOT to know what you're doing before you execute commands in a business/corporate/enterprise environment.

Secondly, I'll give you the secret of DiskSuite (which was then renamed Solstice DiskSuite, and is now Sun Volume Manager).
DiskSuite mirrors metadevices, which essentially represent VOLUMES.
And as a volume can span more than one disk... (ah!)

# metainit d10 1 2 c1t4d0s3 c1t5d0s3 -i 32k
# metainit d20 1 2 c1t6d0s3 c1t7d0s3 -i 32k

d10 or d20 - metadevice name
1 - total number of stripes
2 - number of slices added to the stripe, followed by the slice names
-i - interlace size; chunks of data are written alternately across the slices
s3 = slice 3 = represents the entire disk

# metainit d10 -m d21

d21 being the mirror of d10

-

Those are the essential bones; you have the documentation to construct the body... Go, Frankenstein!
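
A quick sanity check as you go: metastat shows each layer you have built so far.

# metastat d10 d20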

---------------
Ms Stevie

Correction:

# metainit d30 -m d10 d20 1

(in this scenario; the mirror needs its own metadevice name, d30 here, since d10 and d20 are already the stripes. Building the two-way mirror in one shot like this is fine only because the metadevice is brand new and holds no data yet.)

Thank you very much.

I just about had it right.

This is what I was thinking:

Format the drives to have all their space on slice 6, then...

# metainit d16 1 2 c1t4d0s6 c1t5d0s6 -i 32k
# metainit d26 1 2 c1t6d0s6 c1t7d0s6 -i 32k

# metainit d6 -m d16

# metattach d6 d26
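
The metattach will kick off a resync of d26 against d16; I can watch it finish with:

# metastat d6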

Then make the new file system:

# newfs /dev/md/rdsk/d6

Then edit /etc/vfstab, adding a line for the new metadevice.
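
The new entry would look something like this (the mount point /data is just a placeholder, I haven't picked the real one yet):

/dev/md/dsk/d6   /dev/md/rdsk/d6   /data   ufs   2   yes   -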

Test the edit by running:

# mountall -v

to mount the new FS on the new metadevice using the new vfstab entry.

Write data to it.

Done.

Is that very much different from what you are doing? It looks like you just know a way to do it in fewer steps.

Thank you for the advice/help,