File system creation script on AIX 6.1 using while loop

#!/bin/sh

echo "VG: "
read VG
echo "LP: "
read LP
echo "SAP: "
read SAP
echo "NUM: "
read NUM
echo "SID: "
read SID


while [[ $NUM -lt 2 ]]; read VG LP SAP NUM SID ; do

mklv   -y   $SAP$NUM   -t   jfs2   -e   x   $VG   $LP;

crfs   -v   jfs2   -d   /dev/$SAP$NUM   -m   /oracle/$SID/$SAP$NUM  -A   yes   -p   rw -a   log=INLINE    -a   options=cio;

NUM=$((NUM+1)) OR (( NUM++ ))

done

I want to create file systems on AIX as priyank1, priyank2 and so on...

VG is the volume group name, LP is the number of logical partitions (the size of the FS), SAP is the name "priyank" and SID is the directory under /oracle.

Please let me know if any further details are needed.
Please help: the above script is not working; it is not reading the variables properly while executing the commands.

Also, I have placed two variables together as $SAP$NUM; will this be a problem?

The error is that it is unable to find the VG name, while the VG is already available.

Regards,
Priyank

Is this a valid use of the -e option of mklv?:

... -e   x ...

You should also test for the success of mklv before proceeding to the next (crfs) command. That would save on unnecessary output and more errors.
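
Something like this, for example (a minimal, untested sketch using the same variable names as your script):

if mklv -y $SAP$NUM -t jfs2 -e x $VG $LP
then
        crfs -v jfs2 -d /dev/$SAP$NUM -m /oracle/$SID/$SAP$NUM -A yes -p rw -a log=INLINE -a options=cio
else
        echo "mklv failed for $SAP$NUM, skipping crfs" >&2
fi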

Not sure what this read statement is doing here:

while [[ $NUM -lt 2 ]]; read VG LP SAP NUM SID ; do

Perhaps you should remove that?

I presume you're trying to increment NUM here?:

NUM=$((NUM+1)) OR (( NUM++ ))

Try:

NUM=$((NUM + 1))

But I guess you want to go from 1 up to NUM?

...
printf "NUM: "
read NUM
CUR=1
...
...
while [ $CUR -lt $NUM ]; do
  ...
  CUR=$((CUR + 1))
  ...
done

Hi Scott,

Thanks a lot, that worked.

I think the only problem now is that it is still giving the below error:

" crfs: /oracle/E6P/sapdata1 file system already exists "

when the file system doesn't exist at all and I am creating a new one.

#!/bin/bash
echo "VG: "
+ echo VG:
VG:
read VG
+ read VG
utilvg
echo "LP: "
+ echo LP:
LP:
read LP
+ read LP
1
echo "SAP: "
+ echo SAP:
SAP:
read SAP
+ read SAP
sapdata
echo "NUM: "
+ echo NUM:
NUM:
read NUM
+ read NUM
3
echo "SID: "
+ echo SID:
SID:
read SID
+ read SID
E6P
echo "CUR: "
+ echo CUR:
CUR:
read CUR
+ read CUR
1
while [ $CUR -lt $NUM ]; do
  (mklv -y $SAP$CUR -t jfs2 -e x $VG $LP)
  crfs -v jfs2 -d /dev/$SAP$CUR -m /oracle/$SID/$SAP$CUR -A yes -p rw -a log=INLINE -a options=cio;
  mount /oracle/$SID/$SAP$CUR
  CUR=$((CUR + 1))
done
+ [ 1 -lt 3 ]
+ mklv -y sapdata1 -t jfs2 -e x utilvg 1
sapdata1
+ crfs -v jfs2 -d /dev/sapdata1 -m /oracle/E6P/sapdata1 -A yes -p rw -a log=INLINE -a options=cio
crfs: /oracle/E6P/sapdata1 file system already exists
+ mount /oracle/E6P/sapdata1
+ CUR=2
+ [ 2 -lt 3 ]
+ mklv -y sapdata2 -t jfs2 -e x utilvg 1
sapdata2
+ crfs -v jfs2 -d /dev/sapdata2 -m /oracle/E6P/sapdata2 -A yes -p rw -a log=INLINE -a options=cio
crfs: /oracle/E6P/sapdata2 file system already exists
+ mount /oracle/E6P/sapdata2
Replaying log for /dev/sapdata2.
mount: /dev/sapdata2 on /oracle/E6P/sapdata2: Unformatted or incompatible media
+ CUR=3
+ [ 3 -lt 3 ]

I don't think it would be telling you that if it weren't true.

What does lsvg -l utilvg and lsfs show?

(also make sure the filesystem is not already in /etc/filesystems)
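
For example (VG and FS names taken from your trace):

lsvg -l utilvg
lsfs | grep sapdata
grep sapdata /etc/filesystems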

Is this for rebuilding your non-rootvg volume groups for DR?

If so, there is a better way. If not, I will keep out of it.

Robin

@Scott: sorry for bothering you on this; it was actually my mistake.

The script works fine now. I need to merge several such scripts for FS creation on AIX.

I am going to need some more help on this for sure.

@rbatte1: yes, this is for new LPAR builds as well as rebuilding non-rootvg's for DR.

Please let me know if there is any better way. :)

If you have an existing volume group whose structure you want to recover before restoring the content, then you can use savevg and restvg

At our site, we have created an exclude file for each volume group so that the files/directories are not backed up, just the logical volume and filesystem information. For a volume group datavg we have a file called /etc/exclude.datavg that just contains one line:-

^\./

This can then be used with the following command:-

savevg -ef /disaster/datavg.structure datavg

What this is doing is saving the volume group, but excluding everything it finds that matches the pattern in the file. The pattern we have picked means it excludes everything, so you just get the structure, for which the definitions are a few small files. The directory /disaster needs to be in the root volume group, but not necessarily in the root filesystem. It just needs to be somewhere that will be included on your mksysb media.

Be sure to remove the directory /tmp/vgdata before you start and create your mksysb afterwards. Of course you can use whatever file name you like to save the information. You can copy that to another server that is already running and use it if you wish. You may need to be aware of conflicting logical volumes or filesystems in this case.
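
Putting those steps together, a typical run on the source server might look something like this (only a sketch; datavg and the /disaster file name are just the examples used above):

rm -rf /tmp/vgdata                              # clear working data from any previous run
echo '^\./' > /etc/exclude.datavg               # exclude all file content, keep only the structure
savevg -ef /disaster/datavg.structure datavg    # save just the VG/LV/filesystem definitions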

After restoring your mksysb elsewhere, you can use the following to build your volume group:-

restvg -q -f /disaster/datavg.structure hdiskx hdisky hdiskz

You need to give it enough disks to build what you had before.

I used to script this up with a mklv then a crfs using the LV I'd just built and on some servers it could take 90 minutes to run the script. When we got wise and used restvg it was about 10 minutes and the filesystems were all prepared, added to /etc/filesystems and mounted ready for data restore.

It is such a joy to recover with such ease and it emphasises to me why AIX is an excellent OS choice.

I hope that this helps. Let me know if I've not made anything clear.

Robin
Liverpool/Blackburn
UK



Hi Robin,

Thanks a lot for the information.

However, a few doubts:

/etc/exclude.datavg - will this file be detected by the savevg command automatically?

"Be sure to remove the directory /tmp/vgdata before you start and create your mksysb afterwards." - what data does this directory contain, and won't it affect the online VGs that are in use on the LPAR?

"After restoring your mksysb elsewhere" - can we take a backup of just this VG structure through mksysb and restore it elsewhere?

Or does it have to be taken along with the complete mksysb image, with the new LPAR build done using that image, which will then import the VG structure?

Please help on this.

Also, if you have any scripts for FS creation, that would be really helpful. :)

Regards,
Priyank

Hello arorap,

Thanks for the follow up questions.

The -e flag asks savevg vgname to read /etc/exclude.vgname, so yes, it should be picked up automatically. Have a look at the man page just to check that it is supported in your version.

The directory /tmp/vgdata contains working information about the volume group as savevg runs. I have found that it is not always overwritten by subsequent savevg commands. I have used it to tweak the definition before saving. We have a server that has local LVM-mirrored disk; there is SAN storage on the DR server, but not enough to create the mirrors too, so I got in and changed the definition of the LVs to be a single copy only.

The directory is not generally in use, so it's best to clean it up.

For the last part, I'm slightly confused on your question. I think you are wanting to take the file created by savevg and restore it wherever you like. This is fine with two clauses:-

  1. Do you have enough disk (total space on free disk) to restore the VG?
  2. Are all the LVs (including jfslogs) and filesystems unique when comparing to the target server?

If the answer to both is Yes, then you should be okay. The file should really only contain information like "Build the logical volume aaa with these properties and then make a filesystem from it to be mounted as /bbb."

If there are conflicts, then there may be a way to use the content of /tmp/vgdata to adjust the save so you can use it on the target. If not, there are ways to extract the commands from the file and tweak them before executing the lot, but that is slower if you have many or large filesystems.

Is this what you need?

Robin

Hi Robin,

The directory /tmp/vgdata contains working information about the volume group as savevg runs. I have found that it is not always overwritten by subsequent savevg commands. I have used it to tweak the definition before saving. We have a server that has local LVM-mirrored disk; there is SAN storage on the DR server, but not enough to create the mirrors too, so I got in and changed the definition of the LVs to be a single copy only.

The directory is not generally in use, so it's best to clean it up.

What I meant was: does /tmp/vgdata contain the outputs from the savevg command only? Or does it hold any data from the live volume groups that could affect the functionality of the VGs if we empty the directory /tmp/vgdata?

And the other question was...

I now understand that the file that will be created by the command:

savevg -ef /disaster/datavg.structure datavg

i.e. /disaster/datavg.structure

Can we copy this file manually (using scp or rcp) to another server and use the restvg command there?

Also, can we edit this file and make a few changes according to the destination server's requirements?

Suppose the source has file systems as below:

/dev/priyank1 100.00 98.41 1.59 99% /priyank/XXX/arora1
/dev/priyank2 100.00 98.41 1.59 99% /priyank/XXX/arora2

Now, the 'XXX' part here changes in every new LPAR we create.

So I want to edit the file (/disaster/datavg.structure), change the XXX, and then restore it using restvg on the destination. Is that possible?

Let me know if I didn't make it clear enough.

Note: the above scenario is w.r.t. a new LPAR build; in the case of a Prod-DR copy, I know this will work just fine.

Again, thanks a lot. I really appreciate you helping me out.

Regards,
PriyanK

Hello again,

Q1
The directory /tmp/vgdata is only created by a savevg (or mksysb) and is not in use during normal running. It is not a problem for open volume groups, mounted filesystems, etc.

Q2
Yes, you can copy the file /disaster/datavg.structure (or whatever you choose to call it) to the target server. It is just a data file, so you could also rename it if you wish. You might need to be careful about how you copy it: do a cksum on the file on the source and on the target to make sure it transferred without error. Consider it a binary file.
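
For example, run the same command on both servers and compare; the checksum and byte count should match exactly:

cksum /disaster/datavg.structure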

Q3
You have three choices:-

  1. Update before backup
    Here you would run the savevg, then edit a file in /tmp/vgdata and re-run the savevg command
  2. Update before restore
    Here you would (on the target server) extract the details from the datavg.structure file into a script and edit them before executing - can be a bit slow
  3. Update after restore
    After running the restvg (assuming no LV or FS conflicts) then you rename the filesystems as you wish with a chfs command and remount

Which would you prefer? If you have no preference, then I would suggest we look at option 3.

Robin

  1. I tried the first option, but it didn't work out and I got an error. I had modified the <vgname>.data file in the /tmp/vgdata directory.

I had modified the file as per the destination server's file system requirements, and then while running savevg again I got an error saying "File system doesn't exist".

  2. I really don't have any idea about this procedure. Please provide more details if possible.

  3. Update after restore: you mean to say I create the same FS as per the source LPAR, and then edit the FS on the destination?

One more thing:

How about copying just the modified <vgname>.data from /tmp/vgdata to the new LPAR and then creating the file systems using savevg?

Regards,
Priyank

Option 1 was untested. Presumably, the savevg reads the file on finding it and expects to search the filesystems listed. Maybe creating dummy directories of the required name might get this through, maybe not.

Option 2 requires you to use a few commands:-

cd /
restore -x -f /disaster/datavg.structure

You can then read the files in /tmp/vgdata/datavg to build the structures you need based on the original. A bit of scripting to read in a loop will do it.
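
As a rough illustration only: if you pull the LV name, size (in LPs) and mount point out of the restored .data file into a simple three-column list (say /tmp/datavg.list, a name I have made up here), a loop much like your earlier script would rebuild them:

# /tmp/datavg.list holds lines like:  sapdata1 10 /oracle/E6P/sapdata1
while read LV LPS MOUNT
do
        mklv -y $LV -t jfs2 -e x datavg $LPS || continue     # skip the crfs if mklv fails
        crfs -v jfs2 -d /dev/$LV -m $MOUNT -A yes -p rw -a log=INLINE
        mount $MOUNT
done < /tmp/datavg.list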

Option 3 is perhaps the easiest.

restvg -q -f /disaster/datavg.structure hdisk2 hdisk3

....or whatever disks are appropriate. This will restore the structure of your volume group along with a format and mount of the filesystems assuming that there are no conflicts.

You can then:-

lsvg -l datavg | egrep -v "MOUNT|N\/A|:" | tr -s " " | cut -f7 -d " " | sort -r | xargs -tn 1 umount

For each filesystem you need to change, you simply:

chfs -m  newfs  origfs
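
For example, using the naming from the earlier posts (the new SID P01 is made up purely for illustration):

chfs -m /oracle/P01/sapdata1 /oracle/E6P/sapdata1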

Then to remount them (first time only)

lsvg -l datavg | egrep -v "MOUNT|N\/A|:" | tr -s " " | cut -f7 -d " " | sort | while read fs
do
   if [ ! -d $fs ]         # If the mount point does not exist....
   then
      mkdir $fs            # .....create it
   fi
   mount $fs
done

The loop is done in this way to ensure that all the mount points exist and that, if there are filesystems mounted under other filesystems, they are all in the right place. There have been many occasions where people would create filesystems such as:-

/a
/a/b
/a/c
/a/d

... creating the mount points, then mounting /a, which works, and then getting confused about why /a/b does not exist when they have just created it. Of course, they will have created /a/b in the root filesystem and then mounted /a, which is empty.

You may find some mount points for renamed filesystems get left behind by this, as chfs will not delete them.

If you want to rename the logical volumes too, you need to add a step:-

chlv -n  newlv   oldlv

You may need to edit /etc/filesystems afterwards and verify that each logical volume update has been applied. I have had occasions where this didn't happen, but I think that has now been fixed, so it depends on the patches applied.
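
For example (hypothetical names, just to show the order of the arguments and a quick check afterwards):

chlv -n sapdata1new sapdata1
grep sapdata1 /etc/filesystems     # the dev = entry should now show the new LV name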

Robin

Robin,

 
# restvg -r -d /tmp/appvg.data hdisk15 hdisk16
Will create the Volume Group: appvg
Target Disks: hdisk15 hdisk16
Allocation Policy:
Shrink Filesystems: no
Preserve Physical Partitions for each Logical Volume: no

Enter y to continue: y
appvg
oracle_XX
oracle_XX
oracle_XX
oracle_XX
ora_XX
sap_XXX
st_XXX
saXXX
saXX
usr_XXX
usr_XXX
usr_XXX
usr_XXX
XXXXXX
XXXXXX
XXXXXX
grep: can't open /tmp/vgdata/appvg/filesystems
grep: can't open /tmp/vgdata/appvg/filesystems
grep: can't open /tmp/vgdata/appvg/filesystems
(the same grep error was repeated once for each filesystem)

As you can see above, the file systems were created and mounted. I don't know why I am getting this error.

Thanks
Priyank

I would suggest that this is because you have used the -d flag in your restvg command. The manual page is very confusingly written. You have a volume group backup file, I agree, but you need to specify it as the input device instead. I can't even work out what the file option is for; I've never got it to work properly.

Of course, using -d for a file and -f for a device doesn't make it any more sensible.

I would suggest that you unmount and remove the filesystems, then the volume group, and try again with:-

restvg -f /tmp/appvg.data hdisk15 hdisk16

The -r flag is not required if you have excluded the files when you did the backup. You might want to add the -q flag to minimise the output and any prompts if you want to automate this.
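
So, for your case, something like this (same disks as your earlier attempt):

restvg -q -f /tmp/appvg.data hdisk15 hdisk16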

Robin