Mount file systems from node-1 onto node-2

Hi,

We have an HP-UX ServiceGuard cluster on 11.23. The SAN team recently presented 40+ LUNs to both nodes, but I was asked to mount them on only one node. I created the required VGs/LVs, created VxFS file systems, and mounted all of them; they are working fine. Now the client has requested those file systems on the 2nd node as well.
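For reference, the steps I followed on node-1 were along these lines (the device file and VG/LV names below are placeholders, not the real ones):

    pvcreate /dev/rdsk/c10t0d1                  # initialize a LUN as an LVM physical volume
    mkdir /dev/vg_app01
    mknod /dev/vg_app01/group c 64 0x010000     # VG group file with an unused minor number
    vgcreate vg_app01 /dev/dsk/c10t0d1
    lvcreate -L 10240 -n lvol_data vg_app01     # 10 GB logical volume
    newfs -F vxfs /dev/vg_app01/rlvol_data      # create VxFS on the raw LV device
    mkdir -p /app01/data
    mount -F vxfs /dev/vg_app01/lvol_data /app01/data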

My queries:

  1. Is it possible to mount a VxFS file system on both nodes at the same time? If yes, what do I need to do to replicate the VG and LV info from node-1 to node-2?
    If no, then we may plan to mount them manually on the 2nd node when the 1st node goes down. I guess, for this too, I need to replicate all the VG and LV info from node-1 to node-2?

  2. How do I configure all the volumes in cluster mode? There's already one SG package running; can we add these new LVs to it, or should I configure a NEW package to include these volumes? If a new package, do I need another IP address for it? What tasks would I have to do?

I'm no expert, so I would greatly appreciate any help with more details.

Many thanks!!

1) Yes, but with disastrous results (just a question of time, and a short time at that). That is why you mount the VG in MC/ServiceGuard as exclusive; the same is true with AIX HACMP etc.
Normally you choose one server as the master; the reason is that that server gets all new/changed configuration, which, once validated, is replicated to the other node (when there are 2 nodes).
So, to be sure you are talking about the same VG on both sides, it is wise to give it the same name, and the same minor number on its group file, when you create the VG on both sides.
Writing from home, and not having touched an HP cluster for 8 years now, it is difficult for me to say more (my notes are at work).
One thing: be sure you have the same devices on both sides. When a VG (on the master) is complete, with all its PVs and LVs defined, you use the vgexport command with -p (preview) to create a map file, which you copy to the second node and use with vgimport.
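A minimal sketch of that replication flow, assuming a VG named vg_app01 and a second node reachable as node2 (both names are placeholders):

    # On node-1: preview-export the VG to a map file (-p leaves the VG in place, -s records the VGID)
    vgexport -p -s -m /tmp/vg_app01.map vg_app01
    rcp /tmp/vg_app01.map node2:/tmp/           # or scp, if secure shell is installed

    # On node-2: create the group file with the SAME minor number used on node-1
    mkdir /dev/vg_app01
    mknod /dev/vg_app01/group c 64 0x010000     # 0x010000 must match node-1
    vgimport -s -m /tmp/vg_app01.map vg_app01   # -s scans the disks for the matching VGID

    # Mark the VG cluster-aware, then let the package activate it exclusively:
    vgchange -c y vg_app01                      # requires the cluster to be running
    vgchange -a e vg_app01                      # exclusive activation (normally done by the package)

The vgchange -c y is a one-time step; the exclusive activation and deactivation is what the package control script does for you on failover.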

2) A cluster is, in short, 2 servers configured to "host" one or more packages, which can be seen as virtual servers. If the LUNs you added are for an existing package, they should be included in it... which is not the same as creating a new package.
There are scripts to help you through, and most importantly there are scripts to run after every change, to update the package on both sides and TEST both sides. A very important part of an admin's life with a cluster is checking its integrity: if the MC/SG checking script fails, correct it promptly before anything happens, or you will be in big trouble...
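For example, after editing a package, the usual validate/apply/test cycle looks roughly like this (the pkg1 file and package names are placeholders):

    # Copy the edited control script to the other node, then validate and apply the config:
    rcp /etc/cmcluster/pkg1/pkg1.cntl node2:/etc/cmcluster/pkg1/
    cmcheckconf -P /etc/cmcluster/pkg1/pkg1.conf   # validate the package configuration
    cmapplyconf -P /etc/cmcluster/pkg1/pkg1.conf   # apply it to the cluster binary file

    # Then prove the failover both ways before you walk away:
    cmhaltpkg pkg1
    cmrunpkg -n node2 pkg1                         # start the package on the other node
    cmviewcl -v                                    # check package and node status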

If you want to run different applications that reference them on both servers, then you are not really in a cluster any more. The service that is protected by being in a cluster will consist of resources to be made available: typically disk, an IP address, and service processes.

These resources in a standard cluster are only available on one node at a time, and for good reason. If you have people connecting to the standby server directly, then they need to be prevented from accessing the services being protected by your cluster.

You might have an Active-Active configuration, but this is really two discrete sets of services being protected by two clusters using the same hardware. The only exception to this would be a true database cluster such as Oracle RAC, which truly shares the disk and offers services on both sides.

If you really need to provide what you describe, you could do the following (a mount sketch follows the list):

  • Manage the disk as a cluster service, with its own IP address and the NFS server included, with node 1 active and node 2 standby
  • Manage your 1st business service with node 1 active and node 2 standby, mounting the disk with NFS from the IP address above.
  • Manage your 2nd business service with node 1 standby and node 2 active, mounting the disk with NFS from the IP address above.
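On whichever node is running a business service, the cross-mount might look like this (the package address nfssrv-pkg and export path /shared are assumed names, not real ones):

    # Always mount via the package's relocatable IP/hostname, never a node's own address:
    mount -F nfs -o hard,intr nfssrv-pkg:/shared /shared

An /etc/fstab entry of the form "nfssrv-pkg:/shared /shared nfs hard,intr 0 0" then gives the same view from either node, which is the point of the exercise.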

You will need to stop people referring to the disk directly and get them to use the NFS-mounted view.

I think that this would give you something nearer to what you need. If node 1 were to fail, the services running on node 2 would remain, but you would lose the NFS disks until node 2 detected that node 1 had failed and brought up the disks and NFS to serve itself. At the same time, the business service from node 1 could be brought up on node 2, as they both rely on the NFS-mounted disks. It might seem a bit odd to NFS-mount from the same server, but that means you see the same view from either side.

An alternative would be to have the disks on a physically separate cluster just providing NFS to these two nodes.

It's a shame I don't know how to post a picture to better describe this. :o

Robin

Rbatte, there is a setup using clustered file systems (VxCFS), so SAN disks can be mounted on both nodes simultaneously.

So in theory, failover for the NFS cluster service would involve only the IP address and the NFS service, not the actual volume group, which should be faster.
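For anyone with the (separately licensed) VERITAS Cluster File System bundle installed, the shared mounts are managed with the cfs* commands rather than per-node mounts. A rough sketch, with the disk group, volume, and mount point names all assumed:

    # Register a shared VxVM volume as a cluster mount, then mount it on all nodes at once:
    cfsmntadm add appdg appvol /app_shared all=rw
    cfsmount /app_shared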

Is that a purchasable extra? I've not got it on my HP-UX servers. I am only on 11.11 though :o

Robin