NIMADM migration 5.3 to 7.1

I am attempting this for the first time, having in the past used a DVD migration. Unfortunately in this instance I do not have that luxury. The migration starts correctly, creates the alt_inst disk, and then proceeds to create the filesystems and export them. The first failure was that the /admin filesystem on the hd11admin logical volume did not exist on the client, which caused a mount problem. That filesystem is expected to be missing, as I believe /admin first appeared in AIX 6.1.

I tried to get around this by creating a dummy LV on the 5.3 system, which seemed to work! However, it still fails with the following output:
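For reference, the dummy-LV workaround can be sketched roughly as below. This assumes rootvg and the standard hd11admin/hd11admin naming; the 1-LP size is illustrative, and the block is guarded so it is a no-op on systems without the AIX LVM tools.

```shell
# Hypothetical sketch of pre-creating /admin on the 5.3 client so the
# nimadm clone has a matching LV; only runs where mklv(1) exists.
if command -v mklv >/dev/null 2>&1; then
    mklv -y hd11admin -t jfs2 rootvg 1          # 1 LP placeholder LV
    crfs -v jfs2 -d hd11admin -m /admin -A yes  # filesystem over it
    mount /admin
fi
```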

 nimadm -d hdisk4 -c client1 -s AIX71_TL1spot -l AIX71_TL1 -Y
Initializing the NIM master.
Initializing NIM client client1.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/client1_alt_mig.log
Starting Alternate Disk Migration.

+-----------------------------------------------------------------------------+
Executing nimadm phase 1.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -M 7.1 -P1 -d "hdisk4"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
LOGICAL_VOLUME= hd11admin
FS_LV= /dev/hd11admin
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_lg_dumplv.
Creating logical volume alt_hd11admin.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.
Generating a list of files
for backup and restore into the alternate file system...
Backing-up the rootvg files and restoring them to the alternate file system...
Phase 1 complete.

+-----------------------------------------------------------------------------+
Executing nimadm phase 2.
+-----------------------------------------------------------------------------+
Exporting alt_inst filesystems from client client1
to NIM master NIM MASTER:
Exporting /alt_inst/admin from client.
Exporting /alt_inst from client.
exportfs: /alt_inst: sub-directory (/alt_inst/admin) already exported
exportfs: /alt_inst not found in /tmp/nfsexpT-ukMb
Exporting /alt_inst/home from client.
Exporting /alt_inst/opt from client.
Exporting /alt_inst/tmp from client.
Exporting /alt_inst/usr from client.
Exporting /alt_inst/var from client.
0505-154 nimadm: Error exporting client alt_inst filesystems.
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client client1.
Unexporting alt_inst filesystems on client client1:
Client alt_disk_install command: alt_disk_install -M 7.1 -X
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Bootlist is set to the boot disk: hdisk2 blv=hd5

Just an idea: NIM can become pretty capricious if directories within NIM's working directories are exported independently (even if only indirectly, via an export of one of their parent directories).

It might be a good idea to have a look in /etc/exports and clean up every entry covering NIM's working directories, or any parent directory of them, prior to attempting any NIM operation.
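That check can be sketched as a small script. The NIM working directory (/export/nim) and the sample entries are assumptions; on a real master you would run `check_overlap` against the live /etc/exports.

```shell
# Flag /etc/exports entries that are a given NIM working directory or an
# ancestor of it (the overlap that triggers the exportfs complaint above).
check_overlap() {
    # $1 = exports file, $2 = NIM working directory
    awk -v dir="$2" '
        # field 1 is the exported path; report it when it equals the NIM
        # directory or is a proper ancestor of it
        $1 == dir || index(dir "/", $1 "/") == 1 { print $1 }
    ' "$1"
}

# demo against a throwaway file (entries are illustrative):
printf '%s\n' '/export -ro' '/home host1' > /tmp/exports.sample
check_overlap /tmp/exports.sample /export/nim   # prints /export
```

Any path it prints is a candidate to unexport (or remove from /etc/exports) before re-running nimadm.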

This is, alas, one of the less predictable aspects of NIM.

I hope this helps.

bakunin

The problem with a Google search is that if you do not ask the question correctly, you get spurious answers. I asked the right question and found a link to an IBM known-problem fix (APAR):

IBM IV13892: NIMADM FAILS TO SORT /ALT_INST FILESYS PROPERLY WHEN LANG=EN_GB

I downloaded the fix, updated the LPP_SOURCE and SPOT, ran nimadm again, and it ran all the way through phase 12. The server came back correctly and reported the OS level as:

7100-00-03-1115

A success after a lot of searching!
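For anyone following the same path, applying a downloaded fix to the NIM resources before re-running nimadm looks roughly like this. The resource names match the command shown earlier (AIX71_TL1 / AIX71_TL1spot); /tmp/iv13892 is an assumed download directory, and the block is guarded so it only runs on a NIM master.

```shell
# Hedged sketch: refresh the lpp_source with the fix filesets, then
# customize the SPOT from it; only runs where nim(1) exists.
if command -v nim >/dev/null 2>&1; then
    nim -o update -a packages=all -a source=/tmp/iv13892 AIX71_TL1
    nim -o cust -a lpp_source=AIX71_TL1 -a fixes=update_all AIX71_TL1spot
fi
```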

Thank you for keeping us updated. This makes a valuable contribution to the knowledge base.

bakunin