Want to expand Solaris 10_x86 root UFS partition

OS: Solaris 10_x86.

Problem:

Server needs to be patched, but root "/" is near full.

/dev/dsk/c1t1d0s0 4.2G 3.9G 284M 94% /

The /export/home dir has a lot more space, and I'd like to either move root "/" onto it or delete it altogether:

/dev/dsk/c1t1d0s7 12G 4.7G 7.1G 40% /export/home

Note: Server *can* have downtime, but I *don't* have another disk in the system I can install onto or partition.

/etc/vfstab:

# cat /etc/vfstab 
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t1d0s1       -       -       swap    -       no      -
/dev/dsk/c1t1d0s0       /dev/rdsk/c1t1d0s0      /       ufs     1       no      -
/dev/dsk/c1t1d0s7       /dev/rdsk/c1t1d0s7      /export/home    ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -

Ideas?

looking for the output of format for the disk with the root partition on it. we can make a better assessment with that.

however, you would have to back up /export/home, change the start cylinder of /export/home (shrinking it), change the cylinders for swap, and then change the end cylinder for root (thus making it larger). save, label, etc.

once out of format: growfs /dev/rdsk/c1t1d0s0, then newfs /export/home and restore /export/home.

something to this effect.
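
A rough command-level sketch of that sequence (the backuphost name and /backup path are hypothetical placeholders; and, as discussed further down, the growfs step only works while / is not mounted, e.g. from a shell booted off install media):

# 1. back up /export/home first (remote host and path here are hypothetical)
ufsdump 0f - /export/home | ssh backuphost 'cat > /backup/export_home.ufsdump'

# 2. repartition interactively: in format -> partition, move the start cylinder of
#    slice 7 up (shrinking /export/home), extend the end cylinder of slice 0
#    (growing /), then label the disk
format c1t1d0

# 3. grow the root filesystem into the enlarged slice
#    (run this while / is not mounted -- see the discussion below)
growfs /dev/rdsk/c1t1d0s0

# 4. rebuild /export/home and restore the backup into it
newfs /dev/rdsk/c1t1d0s7
mount /dev/dsk/c1t1d0s7 /export/home
cd /export/home && ssh backuphost 'cat /backup/export_home.ufsdump' | ufsrestore rf -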

Do any of these steps need to be done in single-user mode, per se?

I'm following your thought here - makes sense - but I'm just trying to apply what I know.

Thanks for the insight. I will start doing some RTFM before I post more 'WTF'.

:wink:

I don't think there is anything to change with swap, which should be located elsewhere, assuming you used the default layout.

To be sure, post the output of this command:

# prtvtoc /dev/dsk/c1t1d0s2

However, there is an issue with these steps. The growfs command cannot be used with the root partition according to its documentation. That means this won't work even in single user mode.

One possible workaround would be to boot on an installation media and run all of that from a shell there.
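
Roughly, once booted from the Solaris 10 install media into a shell (the exact menu option for getting a single-user shell varies with the media revision), the on-disk root slice is not mounted, so something like this becomes possible:

# after the slice has been extended with format, grow the filesystem to fill it
growfs /dev/rdsk/c1t1d0s0

# sanity-check the filesystem before rebooting from disk
fsck -y /dev/rdsk/c1t1d0s0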

In any case, you should really make a reliable backup of your system before attempting this.

interesting. i see what you mean, jiliagre. thanks for pointing that out. i thought i had seen it before, but i guess the documentation proves me wrong (i've actually never attempted to grow the root fs, but i've done this with other FSs).

when i get home i want to try this out on the root fs though. see if it truly is the case.

# prtvtoc /dev/dsk/c1t1d0s2
* /dev/dsk/c1t1d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*    2211 cylinders
*    2209 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector 
*    35471520     16065  35487584
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00    1092420   8964270  10056689   /
       1      3    01      16065   1076355   1092419
       2      5    00          0  35487585  35487584
       7      8    00   10056690  25414830  35471519   /export/home
       8      1    01          0     16065     16064

I found a huge vmcore file that was 600+MB. None of my lab people needed it, so I backed it up to another server, then rm -rf'ed it locally. Now I've got some space to work with, and the patching seems to be coming along just fine.

I'm keeping my eye on this thread anyhow, since you make some valid points.

Thanks again for sharing your knowledge.

P.S. The command I used to find the vmcore file:

find / -size +100000
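
For reference, a bare number with -size counts 512-byte blocks, so +100000 matches files larger than roughly 50 MB. A variant that stays on the root filesystem and prints the sizes (standard Solaris find options only) would be:

# search only the / filesystem and list anything over ~50 MB
find / -mount -size +100000 -exec ls -l {} \;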

I guess that file was located in the /var/crash/<your host name>/ directory.

If that is the case, that would mean you had a kernel panic.

As I suspected, your / and /export/home filesystems are contiguous, so growing the root filesystem might still be an option.
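
That can be read straight off the prtvtoc output above: slice 0 ends at sector 10056689 and slice 7 starts at 10056690, so they are adjacent, and the sector counts match the df sizes:

# 512-byte sectors converted to GB (figures taken from the prtvtoc output)
echo '8964270 * 512 / 1024 / 1024 / 1024' | bc -l     # slice 0, / : about 4.3 GB
echo '25414830 * 512 / 1024 / 1024 / 1024' | bc -l    # slice 7, /export/home : about 12.1 GB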

This is one reason many people simply set the core file size limit to zero (or disable core dumps completely): they don't debug them, since they are not application or kernel developers.

If you don't use them and do not actively look for or manage them, they can really cause problems, as you experienced.

So, if you are not using them (or do not know how to use them), just disable them.....

You can always turn them back on if you start to see problems where you might need a core file, etc.

Edit: See also the example in the link above, which walks through a typical scenario showing the current system configuration for core dumps.
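
For what it's worth, displaying and changing those settings on Solaris 10 looks roughly like this (a minimal sketch using coreadm and dumpadm; whether you actually disable anything is, as noted above, a policy decision):

# show the current per-process / global core file settings
coreadm

# show the current crash dump (savecore) configuration
dumpadm

# example: stop per-process and global core files, and tell savecore not to run at boot
coreadm -d process -d global
dumpadm -n

# to turn them back on later
coreadm -e process
dumpadm -y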

A kernel panic is worth investigating. Even if you don't have the skills to analyze the dumps yourself, reporting the issue and sending core/crash dumps on request to the software provider is what I would expect in a professional environment.

Does Solaris actually need the /export/home directory?

Sadly, even with the large junk files removed, it appears that I just don't have enough space in root "/" and "/var" - the script bails complaining that there's not enough space. Argh!

:-\

/export/home/ is not necessary, but it's good to have so that your users' home dirs can all be in there.

what script? whatever you are saving... save it to /export/home.

i feel like i am missing something here????

Maybe you need to look into the script itself :stuck_out_tongue: ??

do you mean that it fails when you go to patch the os?

What you "expect" is not necessarily the requirement of the user.

Professionally, it is better to give options, and let the user decide what is right, based on their configuration and business model.

Many folks turn off dumps, that is why the option is available. You are making the common mistake (again) of expecting everyone else to follow your best practices without knowledge of their business and/or operational model.

good point as usual, neo.

Teach the n00b here:

1) I download the patches, the first being the 'Sun Patch Alert Cluster'. This is the one I've been starting with on all the boxes I've been patching.

2) Place the .zip archive in /tmp (since /tmp seems to be fairly large on each system) and unzip -q it there.

3) 'cd' to the new directory and read the CLUSTER*README or README files, which contain the instructions and passcodes necessary to continue.

4) Run the ./installer script and watch lots of 'Error code 1 - failed' and 'Error code 0 - succeeded' messages go by, sometimes waiting up to 2 hours for the patch run to finish. (A rough command-level sketch of steps 2-4 follows below.)
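
Something like the following, assuming the cluster file is called 10_x86_Recommended.zip (the real file and directory names will differ):

# step 2: stage and unpack the cluster in /tmp
cd /tmp
unzip -q 10_x86_Recommended.zip

# step 3: read the instructions and passcode first
cd 10_x86_Recommended
more CLUSTER*README

# step 4: kick off the installer
./installer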

Does this sound right? Steer me straight here.

Thanks!

Short answer: this sounds right.

A side note: beware that the available space reported for /tmp can be misleading. /tmp is by default (and on your system) backed by virtual memory (a.k.a. swap, a concept sometimes misunderstood). Using this filesystem will be much faster than using a physical disk as long as you have plenty of RAM available; however, if you fill the RAM, you might run into serious performance issues. You can check how much RAM is available, in KB, with this command:

vmstat 1 2 | tail -1 | nawk '{print $5}'
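
Two related checks, using standard Solaris commands, to confirm that /tmp really is swap-backed tmpfs and to see overall virtual memory usage:

# /tmp should show up as a tmpfs (swap) filesystem
df -k /tmp

# summary of allocated / reserved / available virtual memory
swap -s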

Most of the patchadd failures are harmless. Common ones are:

  • the package for which the patch is intended is not installed so there is nothing to patch
  • this patch is already installed
  • a newer version of the patch is already there

Before Solaris 10, the error codes helped figure out the root cause, but that is no longer the case unless you use the "patchadd -t" option (or, in this case, edit the installer script to use "patchadd -t").
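
One quick way to confirm that a "failed" patch was simply already present is to check the installed patch list (the patch ID below is only an example):

# list installed patches and look for a specific patch ID
showrev -p | grep 118855
# or, equivalently
patchadd -p | grep 118855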

Patching can be a lengthy operation. A lot of work is done to make sure the patch installation is consistent and to allow backing out the patch if necessary. If you have zones, the process is even longer, as zones have to be booted if they aren't already and then patched sequentially.

the only thing i prefer to do (and just to clarify, this is my opinion) is to run all patching from single user.

if you perform patching in single user, obviously don't put the patch cluster in /tmp :smiley:
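
A minimal sketch of that approach (the staging directory under /export/home is just a suggestion, since it sits on real disk rather than swap-backed /tmp, and the file name is a placeholder):

# stage the cluster somewhere disk-backed instead of /tmp
mkdir -p /export/home/patches
cp 10_x86_Recommended.zip /export/home/patches/
cd /export/home/patches && unzip -q 10_x86_Recommended.zip

# then drop to single-user mode before running the installer
init S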

ARGH! Still running out of space. :slight_smile:

It's just a lab machine, so I'm going for a full-blown re-install.

Meh.