Using AIX HACMP and NFS together

Hi,

need advice on this.

Is it possible to assign a mountpoint from SAN storage to both server1 and server2, and then have server2 access the same mountpoint over NFS so that concurrent access is allowed?
Can this setup be used together with HACMP?
If server1 crashes, the mountpoint resource will swing to server2; will HACMP automatically remove the NFS mount so that server2 can take over the mountpoint resource and become active?

Would a simpler approach be to use PFSS?

Any ideas?

Regards
Solo

Hi,

Can anyone advise me on the following setup?

1) Servers A and B are both assigned the same VG from SAN storage.

2) Server A takes possession of the /TEST resource and is active (HACMP).

3) Server B mounts /TEST from Server A over NFS.

A clustered WAS application (active:active) on Servers A and B writes to /TEST concurrently.

4) When Server A crashes or goes offline, HACMP automatically swings the /TEST resource over to Server B, which becomes active.

Server B's NFS connection to Server A is lost when Server A goes offline, but the empty mountpoint remains. When HACMP swings the resource over (same mountpoint name), will there be problems?

Thanks and best regards

My answers are right after your questions.

1) Servers A and B are both assigned the same VG from SAN storage.

ANS: This is possible, but then only one of them can have the VG varied on (exclusive access) at a time. I think you could use a concurrent VG; I am not too sure whether that works with HACMP, since that forms another cluster of its own.

2) Server A takes possession of the /TEST resource and is active (HACMP).

ANS: I would keep the share on another server if you have one, or use a NAS share.

3) Server B mounts /TEST from Server A over NFS.

ANS: That is possible. Don't put the entries in /etc/filesystems on Server B; instead, mount it manually as part of a cluster start script.
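
For instance, a minimal sketch of what that mount step in the start script could look like (serverA, /TEST and the mount options are placeholders, not taken from your setup):

    #!/bin/ksh
    # Sketch: NFS-mount the share from Server A as part of the cluster/application
    # start script on Server B, instead of listing it in /etc/filesystems.
    # "serverA" and "/TEST" are placeholder names.
    mkdir -p /TEST
    mount -o soft,intr serverA:/TEST /TEST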

A clustered WAS application (active:active) on Servers A and B writes to /TEST concurrently.

4) When Server A crashes or goes offline, HACMP automatically swings the /TEST resource over to Server B, which becomes active.

ANS: The /TEST filesystem must be in /etc/filesystems, and the cluster start script must export it as soon as the node gets access to the VG and has mounted it.
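
A rough sketch of the order of operations in that start script, assuming the VG is called testvg (testvg and /TEST are placeholder names, and HACMP normally does the varyon and mount itself when the filesystem is part of the resource group):

    #!/bin/ksh
    # Sketch: once this node owns the VG, mount the filesystem and export it at once.
    varyonvg testvg     # acquire the shared VG (HACMP usually does this step for you)
    mount /TEST         # works because /TEST has a stanza in /etc/filesystems
    exportfs -i /TEST   # export immediately, without depending on /etc/exports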

Server B's NFS connection to Server A is lost when Server A goes offline, but the empty mountpoint remains. When HACMP swings the resource over (same mountpoint name), will there be problems?

ANS: That could be a problem. Why not mount the filesystem under a different name and then link it to a specific directory for WAS to write to?
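
As a rough illustration of that idea (the directory names /TEST_nfs, /TEST_local and /wasdata are made up):

    # Sketch: mount the NFS share and the local (takeover) filesystem under different
    # names and give WAS one stable path through a symbolic link.
    mkdir -p /TEST_nfs
    mount serverA:/TEST /TEST_nfs      # normal operation: NFS mount from Server A
    rm -f /wasdata
    ln -s /TEST_nfs /wasdata           # WAS always writes to /wasdata
    # On takeover, the start script would mount the real filesystem under another
    # name and repoint the link:
    #   umount -f /TEST_nfs
    #   mount /TEST_local
    #   rm -f /wasdata; ln -s /TEST_local /wasdata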

Make the VG a JFS2 enhanced concurrent VG and auto-mount it on both.
Then they can both write to it whatever happens to the other system, HA or not.

dukessd, are you talking about automounting over NFS? If not, can you please explain the procedure, or give a document link so that I can update myself?

thanks, Kaps

No. Forget NFS.
When you create the VG, make it JFS2 enhanced concurrent.
Zone the SAN disks to both systems.
Search the IBM HA docs for JFS2 enhanced concurrent.
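
From what I remember it goes roughly like this (hdisk2 and datavg are just example names, and the exact options depend on your AIX/HACMP level, so check the IBM docs):

    # Sketch: create an enhanced concurrent capable VG on the SAN disks that are
    # zoned to both systems, then build a JFS2 filesystem in it.
    mkvg -C -y datavg hdisk2                       # -C = enhanced concurrent capable
    crfs -v jfs2 -g datavg -m /TEST -a size=10G    # JFS2 filesystem in that VG
    # HACMP then varies the VG on in concurrent mode on both nodes as part of the
    # resource group; see the JFS2 enhanced concurrent sections of the HA docs.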

I did read about this. This is what I understood:

  1. A normal concurrent VG does not support filesystems (applications need to manage their storage on raw LVs).

  2. Enhanced concurrent mode supports filesystems and allows the VG to be varied on on a set of systems. BUT HACMP and LVM work together to hold a lock, so that only one system can perform write operations to a filesystem from this concurrent VG.
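
As far as I can tell, lsvg shows whether a given VG is enhanced concurrent capable and in which mode it is currently varied on, e.g. (datavg is an example name):

    # Check the concurrent attributes of a VG.
    lsvg datavg | grep -i concurrent
    # Fields of interest: "Concurrent:" (e.g. Enhanced-Capable) and "VG Mode:".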

Correct me, or direct me to better reading.

Regards,

Kaps

Sorry if I do not understand the problem completely.

There are two possible situations. In the first, you have an HACMP cluster (let's call the machines A and B) which holds some filesystem between them that an application uses. HACMP should make sure that this filesystem is exported as an NFS share for some third machine C. Normal operation would be that the FS is mounted on A and exported from there to C; if A fails, B should mount the FS and export it to C again.

If this is what you want, it is called HA-NFS and has been included in standard HACMP for more than a decade (IIRC the first version of HACMP to include the previously standalone HA-NFS feature was 4.2, from 1995 or so). You simply enter the necessary mount and exportfs commands into the start/stop scripts of the TAKEOVER event in HACMP so they get executed whenever a takeover takes place.
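
To make that concrete, here is a very rough sketch of the takeover side (/TEST, the client name nodeC and the service IP are invented; in practice HA-NFS is set up through the resource group definitions rather than hand-rolled scripts):

    #!/bin/ksh
    # Sketch: fragment of the takeover start script on whichever cluster node
    # currently owns /TEST: re-export the filesystem to the NFS client C.
    exportfs -i -o rw,root=nodeC /TEST
    # On machine C, mounting against the cluster's service IP address (not a
    # node-specific address) keeps the same mount valid after a takeover:
    #   mount -o bg,soft,intr serviceip:/TEST /TEST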

The other possibility is that you have an external machine which exports some filesystem to the cluster nodes, and you want the active cluster node to mount this filesystem.

My first piece of advice is: don't do it. NFS is by no means a mechanism for mounting filesystems long term. Since your machine has one and only one IP stack (which NFS uses), your network performance will probably go down when you use the filesystem. NFS is a great tool if you want to quickly distribute data from one system to another: mount the FS, do whatever you want, unmount it. Using it to mount, e.g., filespace for a database or something similar is very poor design, regardless of whether it works or not.

NFS won't be able to reliably remount the FS once the connection gets temporarily disrupted; you can't cleanly "cut the line" to the disks when something fails (like you could with SCSI or FC-AL devices), etc. If you have a hanging NFS mount, quite often the only way to get rid of it is to reboot the machine. This is NOT a desirable setup.

Having said this, it will (with the restrictions mentioned) work quite normally when you include the NFS shares in a "resource group". See your HACMP documentation on how to manage resource groups and what they consist of. Basically a resource group is a set of filesystems, some start/stop scripts to start and stop an application, some service IP addresses, etc. These resource groups will swing back and forth between the cluster nodes when a takeover takes place. An "active" cluster node is basically the node which has the resource group(s) active.
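
One hedged pointer: recent HACMP releases ship a utility called clRGinfo which reports which node currently has each resource group online, so you can see at a glance which node is the "active" one:

    # Show the state and current location of the cluster's resource groups.
    /usr/es/sbin/cluster/utilities/clRGinfo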

I hope this helps.

bakunin

Can the moderators merge this thread with the other HACMP + NFS thread, "AIX 6.1 and NFS problem after HACMP config"?

thanks