NFS mounts query

We have two servers in a cluster. Node1 has an ext3 mount for backups, and the other node connects to node1 over NFS.
I believe it was configured this way to avoid duplicating backups, since this is a database server, but I'm not sure that was the reason. Right now, if node1 goes down, all backups fail and the existing backups are not accessible.

I believe we could share this backup folder if it were presented from the SAN to both nodes, but that would duplicate the data. Any idea how to proceed?

Can you answer the following to give us something to work with:-

  • What Operating System version(s) are in the cluster?
  • What software makes this a cluster?
  • What is the database software?
  • Is the area NFS shared in its own volume group?
  • Is the volume group correctly configured to be available to both sides, but only one at a time?
  • What resources are managed by the cluster software?
  • Does the cluster work properly if you force it to fail-over?

It could be that the backups are actually using the private IP address rather than a service IP address. How do your backups connect to the active server?

Some of these might seem trivial, but perhaps there is something simple to consider first. It might also give us something to work on rather than just a description of a symptom.

Thanks in advance,
Robin
Liverpool/Blackburn
UK

You can have that same ext3 filesystem on the SAN, using LVM.

On node1 it is active (the volume group is activated); on the other node it is not.
If things fail on node1, you can activate the volume group on node2 and mount the filesystem.

You need to reconfigure your backup to use the local disk instead of the NFS mount.

Do not have entries in /etc/fstab or scripts that activate/mount it automatically (that is what clusters are for).
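
Roughly like this, as a sketch (the volume group and logical volume names backupvg/backuplv are placeholders for whatever yours are actually called):

  # on the active node (node1): activate the volume group and mount the filesystem
  vgchange -a y backupvg
  mount -t ext3 /dev/backupvg/backuplv /backup

  # on the standby node (node2): keep the volume group deactivated
  vgchange -a n backupvg

  # neither node has a /backup line in /etc/fstab, so nothing mounts it at boot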

If you have clusterware, check whether you can configure a simple disk resource for that disk/volume group.

Hope that helps
Regards
Peasant.

Hello, sorry, I was caught up with a few things.
Answers to the questions:

  1. OS - Red Hat 5.5
  2. Cluster Software: Oracle RAC.
  3. Database: Oracle 11g
  4. It is not a separate volume group.
  5. The volume group can be accessed from both sides. We have had complaints from the DB team that backups cannot be initiated from node2, which is the NFS client.
  6. No cluster at the OS level. The DB cluster does fail over.

Peasant: Our storage engineer said we cannot have a local ext3 mount shared between two nodes because of Fibre Channel protocols. We could do it if we had an OS-level cluster.

What the cluster does is the same as I mentioned in my initial post.

In case of failure, activate the volume group on the other node and mount the filesystem.
Later on, if the first node becomes available again, the cluster knows the volume group is active on the other node, so it will not activate it there.

You can do this yourself; just don't use /etc/fstab or any other scripts to mount it automatically after reboot.
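
A manual failover would then look something like this (same placeholder names backupvg/backuplv as above):

  # on node2, once node1 is down
  vgchange -a y backupvg
  mount -t ext3 /dev/backupvg/backuplv /backup

  # to move it back later: unmount and deactivate on node2 first,
  # then activate and mount on node1 again
  umount /backup
  vgchange -a n backupvg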


Thanks, Peasant,
I guess that will be the simplest solution to this problem. I was trying to make sure this could be done automatically, but I guess we will have to do manual mounts when we face issues.

With the current setup, there is a problem with the backups not initiating from node2. Even when both nodes are running, only node1 can start the backups. I'm not sure what is wrong. My fstab entries for both nodes are below for reference.

Node1: /dev/mapper/mpath7 /backup ext3 defaults 1 2
Node2: 172.16.184.220:/backup /backup nfs defaults 1 2

The IP above is for node1.
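
For reference, I assume the export itself can be checked from node2 with the standard NFS client tools, something like:

  showmount -e 172.16.184.220     # does node1 export /backup, and to which clients?
  rpcinfo -p 172.16.184.220       # are portmap/NFS services reachable from node2?
  mount -t nfs 172.16.184.220:/backup /mnt    # manual test mount from node2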