flarcreate error 28, No space left on device

I want to image Solaris 8 with flarcreate, like Ghost on Windows, but I get an error: errno 28, No space left on device.
This is the error message:

absapp@nepalabs1 # flarcreate -n "sol8utl" -S -R / -x /var/tmp /var/tmp/s8.utl.061222
Determining which filesystems will be included in the archive...
Creating the archive...
cpio: problem writing to tmpfile /var/tmp/cpioVhayw5, errno 28, No space left on device
1 errors
Archive creation complete.

It looks like you don't have enough space in /var/tmp; try creating the flash archive somewhere else.

I tried a different directory also; everything gives the same error.

I guess I have to encapsulate the root disk in VxVM. That is really hard for me.

How much space do you have remaining under /var/tmp?

What is the output from the following command?

df -k

Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 10080200 10072392 0 100% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 7787888 136 7787752 1% /var/run
/dev/dsk/c1t1d0s4 20645791 18699829 1739505 92% /oracle
/dev/dsk/c1t2d0s5 20645791 13964517 6474817 69% /work
swap 7788440 688 7787752 1% /tmp
/dev/dsk/c1t4d0s6 35009161 14362368 20296702 42% /oper2
/dev/dsk/c1t3d0s6 35009161 5064115 29594955 15% /oper1
/dev/did/dsk/d6s4 288509 4525 255134 2% /global/.devices/node@1
/dev/vx/dsk/utldg/oradata1 10321884 2433607 7785059 24% /oradata1
/dev/vx/dsk/utldg/oradata2 10321884 5102382 5116284 50% /oradata2
/dev/vx/dsk/utldg/oradata4 10321884 734421 9484245 8% /oradata3
/dev/vx/dsk/utldg/absdata1 51609487 5173737 45919656 11% /absdata1
/dev/vx/dsk/utldg/absdata2 51609487 41031377 10062016 81% /absdata2
/dev/vx/dsk/utldg/absdata3 40254451 5162473 34689434 13% /absdata3
/dev/vx/dsk/utldg/absdata4 51609487 33419515 17673878 66% /absdata4
/dev/vx/dsk/utldg/absdata5 51609487 51217 51042176 1% /absdata5
/dev/vx/dsk/utldg/absdata6 51609487 51217 51042176 1% /absdata6
/dev/vx/dsk/utldg/absdata7 51609487 4486385 46607008 9% /absdata7
/dev/vx/dsk/utldg/absindex1 30965686 20520817 10135213 67% /absindex1
/dev/vx/dsk/utldg/absindex2 30965686 30737 30625293 1% /absindex2
/dev/vx/dsk/utldg/absindex3 30965686 30737 30625293 1% /absindex3
/dev/vx/dsk/utldg/orabackup 10321884 8120756 2097910 80% /orabackup
/dev/vx/dsk/utldg/operdata1 51609487 32162386 18931007 63% /operdata1
/dev/vx/dsk/utldg/operdata2 51609487 39924869 11168524 79% /operdata2
/dev/vx/dsk/utldg/operdata3 51609487 10907848 40185545 22% /operdata3
/dev/did/dsk/d34s4 288509 4430 255229 2% /global/.devices/node@2

I don't know how much space is left in the directory.
Some site said that unencapsulating the root disk is the first step with VxVM, but the system is in use and cannot be unencapsulated; our server has to run 24/7, and I don't want to touch VxVM right now.

total 8158
drwxrwxrwt 4 root sys 11776 Dec 28 11:17 .
drwxr-xr-x 33 root sys 512 Apr 7 2003 ..
drwxrwxrwx 2 nobody nobody 512 Dec 17 13:43 .flexlm
drwxrwxrwx 2 oracle dba 512 Dec 17 13:45 .oracle
-rw-r--r-- 1 root other 936 Feb 10 2003 111909-06.log.26518
-rw-rw-r-- 1 oracle dba 20603 Feb 9 2003 BEQ1451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 DCE1451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 DEC1451
-rw------- 1 oracle dba 0 Jul 19 2004 Ex2haqMy
-rw------- 1 oracle dba 0 Sep 17 2004 ExVYaaOt
-rw------- 1 absopr #absopr 24576 Jun 30 2006 ExlyaajG
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 ISPX1451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 ITCP1451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 LU621451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 NMP1451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 RAW1451
-rw------- 1 absapp absapp 8192 May 17 2003 Rx.saWE5
-rw-r--r-- 1 root other 0 Dec 22 13:00 S8.utl.061222
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 SPX1451
-rw-rw-r-- 1 oracle dba 32659 Feb 9 2003 TCP1451
-rw-rw-r-- 1 oracle dba 29492 Feb 9 2003 TCPS1451
-rw-rw-r-- 1 oracle dba 17638 Feb 9 2003 US1451
-rw-rw-r-- 1 oracle dba 0 Feb 9 2003 VI1451
prw------- 1 root root 0 Dec 17 13:43 _vmsa_cmd_
-rw-r--r-- 1 root other 72 Dec 18 2003 aaaCUaOM1
-rw-r--r-- 1 absapp absapp 1338 May 9 2005 aaaDeaaj3
-rw-r--r-- 1 absapp absapp 28 Nov 18 2003 aaaGaaG3g
-rw-r--r-- 1 absapp absapp 2769 Dec 18 2003 aaaK0aWhg
-rw-r--r-- 1 absapp absapp 458 Dec 18 2003 aaaLOa4OX
-rw-r--r-- 1 absapp absapp 28 Nov 18 2003 baaIaaG3g
-rw-r--r-- 1 absapp absapp 2769 Dec 18 2003 baaM0aWhg
-rw-r--r-- 1 absapp absapp 2769 Dec 18 2003 baaNOa4OX
-rw-r--r-- 1 nobody nobody 3052 Dec 25 07:43 license_log
-rw-r--r-- 1 nobody nobody 2619 Mar 25 2003 license_log.old.030325100215
-rw-r--r-- 1 nobody nobody 2619 Mar 25 2003 license_log.old.030325100444
-rw-r--r-- 1 nobody nobody 2867 Mar 25 2003 license_log.old.030325100735
-rw-r--r-- 1 nobody nobody 2579 Mar 25 2003 license_log.old.030325100913
-rw-r--r-- 1 nobody nobody 2512 Mar 25 2003 license_log.old.030325104955
-rw-r--r-- 1 nobody other 3943 Mar 25 2003 license_log.old.030325105658
-rw-r--r-- 1 nobody nobody 2415 Mar 25 2003 license_log.old.030325105927
-rw-r--r-- 1 nobody nobody 2867 Mar 25 2003 license_log.old.030325111102
-rw-r--r-- 1 nobody nobody 3203 Mar 25 2003 license_log.old.030325111828
-rw-r--r-- 1 nobody nobody 2414 Mar 25 2003 license_log.old.030326092952
-rw-r--r-- 1 nobody nobody 1893 Mar 26 2003 license_log.old.030327092306
-rw-r--r-- 1 nobody nobody 1893 M

You have a problem there: your root filesystem is full, and since it's a production server that needs to be up 24/7, you had better clean it up.
Look in /var/adm for old log files that you might be able to remove. Check for any core dumps in /var/crash as well. Syslogs can also get big.
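
Something along these lines will show where the space is going (just a sketch, adjust the paths to your layout):

# per-directory usage on the root filesystem (Solaris du: -d does not cross mount points)
du -dk / | sort -n | tail -20
# any saved crash dumps?
ls -l /var/crash/*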

If you want to flar the root partition you cannot use /tmp, as it is only 7 GB and your root partition is 10 GB.

Use /absdata5 or /absdata6, as you have about 51 GB available on each of those.
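
For example, something like this, based on your original command (just a sketch; the extra -x keeps the archive from trying to include itself):

# write the flash archive onto /absdata5 and exclude it (and /var/tmp) from the archive contents
flarcreate -n "sol8utl" -S -R / -x /var/tmp -x /absdata5 /absdata5/s8.utl.061222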

I just had a look at your first post again.
You were using /var/tmp, which is on your root partition; I guess that's why your root partition is full.

If I were you I would clean up /var/tmp ASAP, and in future, before trying something like this on a 24/7 production server, make sure you know what it is you are doing.
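
Before deleting anything, list what is actually old in there (a sketch only; check what the files are before removing them):

# list files under /var/tmp that have not been touched for 90 days
find /var/tmp -type f -mtime +90 -exec ls -l {} \;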

I agree with the others. Root (/) is full and you must clean it up.

Check /var/tmp and make sure you clear out old flarcreate files and unneeded temp files (it is a temporary directory; anything not reasonably current should be able to be deleted).

Obvious additional choices are /var/adm and /var/log. Check your logs. Make sure they're being rolled (moved daily to something like syslog.0 or syslog.20061227), then make sure they're compressed (looking like syslog.0.gz or syslog.20061227.gz). If they are not being rolled and compressed, check your scripts to get that automatically taken care of. Also remove old log files; how far back you keep them depends on company policy, but generally 45 to 90 days is what's used.

Next are home directories. Run a du -ks * | sort -n in /export/home or /home depending on your setup. That'll give you the largest directories at the bottom and you can address them for cleanup.
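
For example (a sketch only; the exact log names depend on your syslog.conf and cron jobs, and this assumes gzip is installed):

# see whether the system logs are being rolled and whether the rolled copies are compressed
ls -l /var/adm/messages* /var/log/syslog*
# compress a rolled copy that is still plain text (never the live log)
gzip /var/adm/messages.0
# largest home directories at the bottom
cd /export/home && du -ks * | sort -n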

Another issue is your plan for flarcreate. In my location, we use flarcreate to create a bare-metal OS-level backup, not a backup of the entire system, so we exclude all the Oracle file systems, for instance. Looking at your setup, you have about 230 GB of data on this system and you're trying to put it all into a filesystem that has about 8 GB free at best. Regular backups of the production data need to be done some other way, not with flarcreate; an external tape drive, for instance.

Another piece of advice: when posting to the forums, please enclose your output in code tags (that's the # tag in the bar above your message). That keeps the fixed-font screen output fixed-font rather than proportional. For example, I whipped up a quick perl script to massage your df -k output:
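
(The perl script itself isn't shown here; a rough awk equivalent, as a sketch rather than the actual script, would be something like

df -k | awk '{ printf "%28s %10s %10s %10s %10s %25s\n", $1, $2, $3, $4, $5, $6 }'

and the reformatted output comes out like this.)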

                 Filesystem     kbytes       used      avail   capacity                   Mounted
          /dev/dsk/c1t0d0s0   10080200   10072392          0       100%                         /
                      /proc          0          0          0         0%                     /proc
                         fd          0          0          0         0%                   /dev/fd
                     mnttab          0          0          0         0%               /etc/mnttab
                       swap    7787888        136    7787752         1%                  /var/run
          /dev/dsk/c1t1d0s4   20645791   18699829    1739505        92%                   /oracle
          /dev/dsk/c1t2d0s5   20645791   13964517    6474817        69%                     /work
                       swap    7788440        688    7787752         1%                      /tmp
          /dev/dsk/c1t4d0s6   35009161   14362368   20296702        42%                    /oper2
          /dev/dsk/c1t3d0s6   35009161    5064115   29594955        15%                    /oper1
          /dev/did/dsk/d6s4     288509       4525     255134         2%   /global/.devices/node@1
 /dev/vx/dsk/utldg/oradata1   10321884    2433607    7785059        24%                 /oradata1
 /dev/vx/dsk/utldg/oradata2   10321884    5102382    5116284        50%                 /oradata2
 /dev/vx/dsk/utldg/oradata4   10321884     734421    9484245         8%                 /oradata3
 /dev/vx/dsk/utldg/absdata1   51609487    5173737   45919656        11%                 /absdata1
 /dev/vx/dsk/utldg/absdata2   51609487   41031377   10062016        81%                 /absdata2
 /dev/vx/dsk/utldg/absdata3   40254451    5162473   34689434        13%                 /absdata3
 /dev/vx/dsk/utldg/absdata4   51609487   33419515   17673878        66%                 /absdata4
 /dev/vx/dsk/utldg/absdata5   51609487      51217   51042176         1%                 /absdata5
 /dev/vx/dsk/utldg/absdata6   51609487      51217   51042176         1%                 /absdata6
 /dev/vx/dsk/utldg/absdata7   51609487    4486385   46607008         9%                 /absdata7
/dev/vx/dsk/utldg/absindex1   30965686   20520817   10135213        67%                /absindex1
/dev/vx/dsk/utldg/absindex2   30965686      30737   30625293         1%                /absindex2
/dev/vx/dsk/utldg/absindex3   30965686      30737   30625293         1%                /absindex3
/dev/vx/dsk/utldg/orabackup   10321884    8120756    2097910        80%                /orabackup
/dev/vx/dsk/utldg/operdata1   51609487   32162386   18931007        63%                /operdata1
/dev/vx/dsk/utldg/operdata2   51609487   39924869   11168524        79%                /operdata2
/dev/vx/dsk/utldg/operdata3   51609487   10907848   40185545        22%                /operdata3
         /dev/did/dsk/d34s4     288509       4430     255229         2%   /global/.devices/node@2

As you can see, it's a lot more readable this way.

Good luck.

Carl

In my location, we use flarcreate to create a bare-metal OS level backup,

I want just the OS-level root (meaning the internal disk). What would be the command to back up just the OS level to tape or to another drive? (Meaning that when the system breaks, I recover the OS level from the flarcreate archive and use another tape for DB recovery.)

Is it OK if I flarcreate only the internal disk? What is the actual meaning of an OS-level backup? Will it be OK to recover in two steps: 1. flarcreate, 2. cold-backup tape?

Is it something like this:

flarcreate -n "utlabsflar" -c -S -R / -x /oradata1 -x /oradata2 -x /oradata3 -x /absdata1 -x /absdata2 -x /abadata3 -x /absdata4 -x /absdata5 -x /absdata6 -x /absdata7 -x /absindex1 -x /absindex2 -x /absindex3 -x /orabackup -x /operdata1 -x /operdata2 -x /operdata3 -t /dev/rmt/0n

I finished the backup of my server, internal disk only.

One question: why did the external disks give error messages during the backup?

There is also the message "too large to archive in current mode"; how do I avoid this?

This is my log from flarcreate and flar info.

absapp@nepalabs1 # sh volumebackup.sh
WARNING: hash generation disabled when using tape (-t)
Determining which filesystems will be included in the archive...
Creating the archive...
cpio: cpio: absdata1/nccbs_ref.dbf: too large to archive in current mode
cpio: cpio: absdata2/nccbs_app.dbf: too large to archive in current mode
cpio: cpio: absdata2/nccbs_app6.dbf: too large to archive in current mode
cpio: cpio: absdata4/expdat.dmp: too large to archive in current mode
cpio: cpio: absdata6/vol072017.flar: too large to archive in current mode
cpio: cpio: absdata7/nccbs_app_temp.dbf: too large to archive in current mode
cpio: cpio: absindex1/nccbs_idx3.dbf: too large to archive in current mode
cpio: Error with lstat() of "oper1/absopr/cpm/ST_TFD/UTL_cef988_cefb3f_20070226_235004.TXT", errno 2, No such file or directory
cpio: Error with lstat() of "oper1/absopr/cpm/ST_TFD/UTL_cef5a0_cef793_20070226_221148.TXT", errno 2, No such file or directory
errno 2, No such file or directory
of "oper1/absopr/cpm/ST_TFD/UTL_cdf389_cdf57c_20070226_100108.TXT", errno 2, No such file or directory
cpio: cpio: oper1/temp/utlapp_0209.dmp: too large to archive in current mode
cpio: cpio: operdata1/nccbs/log/cpm/cpm_up_20631010_104709.log: too large to archive in current mode
cpio: cpio: operdata1/nccbs/log/cpm/cpm_up_20631011_124141.log: too large to archive in current mode
cpio: cpio: operdata2/orabackup/export/expdat.dmp: too large to archive in current mode
cpio: File size of "oracle/ora9i/920/network/log/listener.log" has increased by 183
cpio: cpio: oracle/ora9i/seo/nepal/20040209/utlapp_0209.dmp: too large to archive in current mode
cpio: cpio: oradata2/ORA9/undotbs01.dbf: too large to archive in current mode
155845519 blocks
199 error(s)
Archive creation complete

nepalabs1:/oper1/absapp> flar info -t /dev/rmt/0
files_archived_method=cpio
creation_date=20070228032613
creation_master=nepalabs1
content_name=utlabsflar
creation_node=nepalabs1
creation_hardware_class=sun4u
creation_platform=SUNW,Sun-Fire-880
creation_processor=sparc
creation_release=5.8
creation_os_name=SunOS
creation_os_version=Generic_108528-18
files_compressed_method=compress
content_architectures=sun4u

Why not just use ufsdump? You can use something like rkbackup to make it easier:
http://linuxmafia.com/faq/Admin/tape-backup.html
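
For example, a level 0 dump of the root filesystem straight to tape would look something like this (a sketch; the root slice c1t0d0s0 is taken from your df output, and the tape device may differ):

# full (level 0) dump of the root filesystem, updating /etc/dumpdates (u), to the no-rewind tape device (f)
ufsdump 0uf /dev/rmt/0n /dev/rdsk/c1t0d0s0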

Your link is about Linux backup. I want a Solaris volume backup.

I already use ufsdump for the cold backup of my DB and directories; what I need is a backup of the Solaris OS volume, not Linux.

I am almost done. What I am asking is whether my script is OK or not. I haven't restored from the backup yet, but I think it is OK.

Can you confirm that if my system crashes I can recover from this backup?

rkbackup is just a script that you can use; it's not just for Linux.
I use ufsdump in the rkbackup script to back up a V490 running Solaris 10.

I've never used flash archives and I wouldn't even attempt to use them as my backup solution.

Our server is configured with the VxVM volume manager, and that is not working with flarcreate.

I may choose rkbackup or another method for the volume backup.