I was looking for the output of the format command for the disk with the root partition on it; we can make a better assessment with that.
However, you would have to back up /export/home, change the start cylinder of /export/home (shrinking it), change the cylinders for swap, and then change the end cylinder of root (making it larger). Save, label, etc.
Once out of format, run growfs /dev/rdsk/c1t1d0s0, then newfs /export/home and restore /export/home.
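A rough sketch of that sequence (the slice number for /export/home — s7 here — and the backup path /backup/export_home.dump are assumptions for illustration; verify your own layout with format and prtvtoc first):

```shell
# 1. Back up /export/home (assuming it lives on slice s7)
ufsdump 0f /backup/export_home.dump /export/home

# 2. In format -> partition: shrink /export/home's slice, adjust swap,
#    extend the root slice's end cylinder, then label the disk
format

# 3. Grow the root filesystem into the enlarged slice
growfs /dev/rdsk/c1t1d0s0

# 4. Recreate /export/home and restore the backup
newfs /dev/rdsk/c1t1d0s7
mount /dev/dsk/c1t1d0s7 /export/home
cd /export/home && ufsrestore rf /backup/export_home.dump
```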
I don't think there is anything to change with swap, which should be located elsewhere, assuming you used the default layout.
To be sure, post the output of this command:
# prtvtoc /dev/dsk/c1t1d0s2
However, there is an issue with these steps: according to its documentation, the growfs command cannot be used on the root partition. That means this won't work even in single-user mode.
One possible workaround would be to boot from installation media and run all of that from a shell there.
In any case, you should really make a reliable backup of your system before attempting this.
Interesting. I see what you mean, jiliagre — thanks for pointing that out. I thought I had seen it done before, but I guess the documentation proves me wrong (I've actually never attempted to grow the root FS, though I've done this with other filesystems).
When I get home, I want to try this out on the root FS anyway, to see if it truly is the case.
I found a huge vmcore file that was 600+MB. None of my lab people needed it, so I backed it up to another server, then rm -rf'ed it locally. Now I've got some space to work with, and the patching seems to be coming along just fine.
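For anyone else hunting for files like that: savecore writes kernel crash dumps under /var/crash/ by default, and a quick sweep for large files can find them (the size threshold here is arbitrary):

```shell
# List files larger than ~500 MB under /var
# (find's -size unit is 512-byte blocks, so +1000000 blocks ~ 500 MB)
find /var -type f -size +1000000 -exec ls -l {} \;
```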
I'm keeping my eye on this thread anyhow, since you make some valid points.
This is one reason many people simply set the file size of core files and dumps to zero (or disable them completely): they don't debug them, since they are not application or kernel developers.
If you don't use them and do not actively look for or manage them, they can really cause problems, as you experienced.
So, if you are not using them (or do not know how to use them), just disable them.
You can always turn them back on if you start to see problems where you might need a core file, etc.
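A sketch of how that is typically done on Solaris (check the coreadm(1M) and dumpadm(1M) man pages before applying this; the exact defaults vary by release):

```shell
# Disable global and per-process application core dumps
coreadm -d global -d process

# Tell dumpadm not to run savecore at reboot (kernel crash dumps)
dumpadm -n

# Per-shell/per-user: cap core file size at zero
ulimit -c 0
```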
Edit: See, for example, this typical scenario from the link above, which shows the current system configuration for core dumps:
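A typical Solaris 10 coreadm listing looks roughly like this (a sketch — the exact patterns and settings will vary per system):

```shell
# coreadm
        global core file pattern:
        global core file content: default
          init core file pattern: core
          init core file content: default
               global core dumps: disabled
          per-process core dumps: enabled
         global setid core dumps: disabled
    per-process setid core dumps: disabled
        global core dump logging: disabled
```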
A kernel panic is worth investigating.
Even if you don't have the skills to analyze them, reporting the issue and sending core/crash dumps on request to the software provider is what I would expect in a professional environment.
Does Solaris actually need the /export/home directory?
Sadly, even with the large junk files removed, it appears that I just don't have enough space in root "/" and "/var" - the script bails complaining that there's not enough space. Argh!
What you "expect" is not necessarily the requirement of the user.
Professionally, it is better to give options, and let the user decide what is right, based on their configuration and business model.
Many folks turn off dumps, that is why the option is available. You are making the common mistake (again) of expecting everyone else to follow your best practices without knowledge of their business and/or operational model.
1) I download the patches, the first being the 'Sun Patch Alert Cluster'. This is the one I've been starting with on all the boxes I've been patching.
2) Place the .zip archive in /tmp (since the /tmp filesystem seems to be fairly large on each system) and unzip -q it there.
3) 'cd' to the new directory, read the CLUSTER*README, or README files, which contain the instructions and passcodes necessary to continue.
4) Run the ./installer script and watch lots of 'Error code 1 - failed' and 'Error code 0 - succeeded' messages go by, sometimes waiting up to 2 hours for the patching to finish.
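Condensed, those steps look like this (the archive and directory names are hypothetical; the actual names, README file, and passcode come from the cluster you download):

```shell
cd /tmp
unzip -q patch_cluster.zip     # hypothetical archive name
cd patch_cluster               # directory created by the archive
more CLUSTER_README            # instructions and passcode
./installer                    # can run for hours
```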
A side note: Beware that /tmp available space can be misleading. /tmp is by default (and on your system) backed by virtual memory (a.k.a. swap, a concept sometimes misunderstood). Using this filesystem will be much faster than using a physical disk as long as you have a lot of RAM available. However, if you fill the RAM, you might have serious performance issues. You can check how much RAM is available, in KB, with this command:
vmstat 1 2 | tail -1 | nawk '{print $5}'
Most of the patchadd failures are harmless. Common ones are:
the package for which the patch is intended is not installed so there is nothing to patch
this patch is already installed
a newer version of the patch is already there
Before Solaris 10, the error codes helped figure out the root cause, but that is no longer the case unless you use the "patchadd -t" option (or, in this case, edit the installer script to make it use "patchadd -t").
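For example (the patch ID shown is hypothetical):

```shell
patchadd -t 123456-01    # -t keeps the pre-Solaris 10 return codes
```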
Patching can be a lengthy operation. A lot of work is done to make sure the patch installation is consistent and to allow backing out the patch if necessary. If you have zones, the process is even longer, as zones have to be activated if not yet booted, and are patched sequentially.