Unable to mount shared folder from Linux server

Hi all,

I have several HP-UX servers: an rx2600, an rx2660, and two rx2800s.

I also have an x86 server running SUSE Linux, with an NFS-exported directory named /public.

From the rx2600 server (intaqa) I can mount that shared folder:

intaqa:/>mount 10.1.2.82:/public /bkup
intaqa:/>cd /bkup

But from the other servers I am unable to mount it:

intabck:/>mount 10.1.2.82:/public /bkup
NFS server 10.1.2.82 not responding still trying

On my Linux server, /etc/exports contains only this line:

/public 10.1.2.0/20(rw,sync,no_root_squash,no_all_squash)
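For reference, 10.1.2.0/20 spans 10.1.0.0 through 10.1.15.255, so every client address mentioned in this thread (10.1.2.4, .6, .8, .103) falls inside the allowed range. One thing worth checking on the SUSE side (a hedged suggestion, not something shown in this thread) is that the export table was actually re-published after /etc/exports was edited; the usual Linux nfs-utils commands for that are:

```shell
# On the SUSE server, run as root after editing /etc/exports:
exportfs -ra            # re-export everything in /etc/exports
exportfs -v             # show what is currently exported, with options
showmount -e localhost  # what clients should see
```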

intabck:/>ipfstat -io
empty list for ipfilter(out)
empty list for ipfilter(in)
intabck:/>

The IP addresses of the other servers:

rx2600 : 10.1.2.6 (intaqa) ----> this one can mount

rx2660 : 10.1.2.8 (intabck)

rx2800 : 10.1.2.4 (intadb)

rx2800 : 10.1.2.103 (intaapp2)

and all of them can ping 10.1.2.82.

intabck:/>rpcinfo -p 10.1.2.82
rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
intabck:/>showmount -e 10.1.2.82
export list for 10.1.2.82:
/public 10.1.2.8/24
intabck:/>

 

intaqa:/>telnet 10.1.2.82 2049
Trying...
Connected to 10.1.2.82.
Escape character is '^]'.

intaqa:/>rpcinfo -p 10.1.2.82
program vers proto port service
100000 4 tcp 111 rpcbind
100000 3 tcp 111 rpcbind
100000 2 tcp 111 rpcbind
100000 4 udp 111 rpcbind
100000 3 udp 111 rpcbind
100000 2 udp 111 rpcbind
100005 1 udp 58286 mountd
100005 1 tcp 49516 mountd
100005 2 udp 58566 mountd
100005 2 tcp 51156 mountd
100005 3 udp 39827 mountd
100005 3 tcp 56390 mountd
100024 1 udp 53581 status
100024 1 tcp 42035 status
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049
100227 3 tcp 2049
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049
100227 3 udp 2049
100021 1 udp 41197 nlockmgr
100021 3 udp 41197 nlockmgr
100021 4 udp 41197 nlockmgr
100021 1 tcp 52940 nlockmgr
100021 3 tcp 52940 nlockmgr
100021 4 tcp 52940 nlockmgr
intaqa:/>showmount -e 10.1.2.82
export list for 10.1.2.82:
/public 10.1.2.8/24
intaqa:/>

 

intaapp2:/>telnet 10.1.2.82 2049
Trying...
Connected to 10.1.2.82.
Escape character is '^]'.

intaapp2:/>rpcinfo -p 10.1.2.82
rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
intaapp2:/>showmount -e 10.1.2.82
export list for 10.1.2.82:
/public 10.1.2.8/24
intaapp2:/>

What should I check, and what settings should I change, to solve this problem?

Thanks

It looks like the NFS client is not running on the two servers giving you trouble.
With the given info it's hard to say why. I would start by looking at the system logs such as rc.log (to see whether the client is configured at all and, if so, whether there are errors), because, if I remember correctly (it's been a long time since I last touched an HP box), it is not enabled by default.
If it's not configured, the easiest way is to use SAM, unless you know your way around the command line.

Hi,

I have tried stopping and starting the NFS client and server:

intabck:/>/sbin/init.d/nfs.server stop
killing nfsd
killing rpc.mountd
intabck:/>/sbin/init.d/nfs.client stop
killing rpc.lockd
killing rpc.statd
killing biod
killing automountd
intabck:/>/sbin/init.d/nfs.core stop
stopping rpcbind
intabck:/>/sbin/init.d/nfs.core start
    starting NFS CORE networking

    starting up the rpcbind
        /usr/sbin/rpcbind
intabck:/>/sbin/init.d/nfs.client start
    starting NFS CLIENT networking

    starting up the rpcbind
        rpcbind already started, using pid: 21533
    starting up the BIO daemons
        /usr/sbin/biod 16
    starting up the Status Monitor daemon
        /usr/sbin/rpc.statd
    starting up the Lock Manager daemon
        /usr/sbin/rpc.lockd
    Starting up the AutoFS daemon
        /usr/sbin/automountd
        Running the AutoFS command interface
        /usr/sbin/automount
    mounting remote NFS file systems ...
    mounting CacheFS file systems ...
intabck:/>/sbin/init.d/nfs.server start
    starting NFS SERVER networking

    starting up the rpcbind daemon
        rpcbind already started, using pid: 21533
    starting up the mount daemon
        /usr/sbin/rpc.mountd
    starting up the NFS daemons
        /usr/sbin/nfsd 30
    starting up the Status Monitor daemon
        rpc.statd already started, using pid: 21731
    starting up the Lock Manager daemon
        rpc.lockd already started, using pid: 21737

But I am still unable to mount:

intabck:/>mount 10.1.2.82:/public /bkup
NFS server 10.1.2.82 not responding still trying

Having read your post a number of times, I'm still very confused!

The /public NFS export is published by 10.1.2.82, yes?

When you query the exports available on 10.1.2.82 it says:

and from a different client:

So both are being told that /public is for 10.1.2.8/24.

Can you please clarify what I'm missing? I just don't get that.

Yes, the /public NFS export is published by 10.1.2.82.
But I do not know why it says /public is for 10.1.2.8/24.
All I know is that I can mount it from server 10.1.2.6, but not from 10.1.2.8, 10.1.2.4, or 10.1.2.103.

I have changed the exports file on the NFS server 10.1.2.82, and when I run the showmount command from a client it now looks like this:

intabck:/var/adm>showmount -e 10.1.2.82
export list for 10.1.2.82:
/public 10.1.2.0/24

Do you have any suggestions to fix this problem?
Thanks

So if you issue this command:

 # showmount -e 10.1.2.82

from 10.1.2.6 (the client that can successfully mount), does it give the correct answer?

---------- Post updated at 08:51 AM ---------- Previous update was at 08:45 AM ----------

I'm not an HP-UX expert (although I've used it a lot in the past), so I'm answering in generic terms only here.

I know that you are only referencing IP addresses here and not node names but, anyway, can you check all your /etc/hosts files and ensure that the IP addresses against all node names are correct (especially the ones on each node referring to itself)?

I'll continue to scratch my head here.

Hi,

This is the result from 10.1.2.6:

intaqa:/>showmount -e 10.1.2.82
export list for 10.1.2.82:
/public 10.1.2.0/24

This the hosts file from 10.1.2.6

10.1.2.6        intaqa
10.1.2.4      intadb
10.1.2.22     intaqas
10.1.2.103    intaapp2
10.1.2.24     intaapp1
10.1.2.8      intabck
127.0.0.1       localhost       loopback
intaqa:/>

and this one from 10.1.2.8

10.1.2.4        intadb
10.1.2.103      intaapp2
10.1.2.6        intaqa
10.1.2.8        intabck
127.0.0.1       localhost       loopback
10.1.2.82       intalinux # fileshare

and this one from NFS server 10.1.2.82:

127.0.0.1       localhost

# special IPv6 addresses
::1             localhost ipv6-localhost ipv6-loopback

fe00::0         ipv6-localnet

ff00::0         ipv6-mcastprefix
ff02::1         ipv6-allnodes
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts
127.0.0.2       linux-4zr9.site linux-4zr9
10.1.2.82       osuse osuse

I'm still scratching my head on this one. Hopefully an HP-UX expert will chime in at some point.

Another thought: could the NFS versions on these systems be different?
Often you can specify the version (2, 3, or 4) on the 'mount' command line, but I don't know whether that option (e.g., '-o vers=3') is available in HP-UX. If the NFS versions differ, that will cause trouble.

Also, perhaps you should be putting '-F nfs' option on your mount command. Again, I'm not sure about HP-UX.

Also, I would think about setting the protection mask on the /public share to 777 until this issue is resolved to ensure security isn't getting in the way.

I'll continue to think about this one.

---------- Post updated at 12:55 PM ---------- Previous update was at 12:35 PM ----------

This page from the HP forum:

how to check NFS version - Hewlett Packard Enterprise Community

states that you can specify the NFS version to use on the mount command with '-o vers=<2, 3 or 4>'

Also it would be worth checking the supported NFS versions on all HP-UX boxes (in case they are different) and the Suse box.

Both servers 10.1.2.6 and 10.1.2.8 have NFS versions 2 and 3, and the Linux box has NFS versions 2, 3, and 4.

I have tried to run

mount -o vers=3 10.1.2.82:/public /bkup

Still no success on 10.1.2.8, but it succeeds on 10.1.2.6.
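One pattern worth noting from the outputs earlier in the thread: telnet to TCP port 2049 succeeds from intabck, while the rpcinfo queries (which go through the portmapper, typically over UDP first) fail. A hedged thing to try is forcing the mount over TCP. The option syntax below follows HP-UX mount_nfs conventions as I remember them, so treat the exact spelling as an assumption; this sketch only assembles the command as a string so you can eyeball it before running it as root:

```shell
# Hypothetical helper: build an NFS mount command forcing NFSv3 over TCP.
# proto=tcp matters here because TCP/2049 was shown to connect from intabck
# while the UDP-based portmapper queries fail from that client.
build_mount_cmd() {
  server=$1; share=$2; mountpoint=$3
  echo "mount -F nfs -o vers=3,proto=tcp,hard ${server}:${share} ${mountpoint}"
}

build_mount_cmd 10.1.2.82 /public /bkup
# prints: mount -F nfs -o vers=3,proto=tcp,hard 10.1.2.82:/public /bkup
```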

Can you please post the subnet mask setting for each of your systems?

I just wonder whether the NFS mount request to 10.1.2.82 is actually reaching that box (or perhaps going out onto the internet). You say "all can ping 10.1.2.82". Can you prove that by downing/unplugging that box and checking that a subsequent ping fails on all boxes?

You can use 10.x.x.x on your own network but you have to be sure it's configured correctly and all your boxes know that such addresses are on the local subnet.

As I say, I'm beginning to consider whether this is a network config problem and the NFS mount request is never reaching its target.

Hi,

Subnet mask setting for each server :

intabck:/>ifconfig lan0
lan0: flags=1843<UP,BROADCAST,RUNNING,MULTICAST,CKO>
        inet 10.1.2.8 netmask fffff000 broadcast 10.1.15.255
intaqa:/>ifconfig lan0
lan0: flags=843<UP,BROADCAST,RUNNING,MULTICAST>
        inet 10.1.2.6 netmask fffff000 broadcast 10.1.15.255
intaqa:/>
linux-4zr9:~ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 12:50:75:EC:B8:87
          inet addr:10.1.2.82  Bcast:10.1.15.255  Mask:255.255.240.0
          inet6 addr: fe80::1050:75ff:feec:b887/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:61509649 errors:1449 dropped:223434 overruns:0 frame:1449
          TX packets:70176 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4150003355 (3957.7 Mb)  TX bytes:3901140 (3.7 Mb)

linux-4zr9:~ #
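The three interfaces above all report a /20 mask (fffff000 in hex is 255.255.240.0), which makes the local network 10.1.0.0 through 10.1.15.255. As a sanity check that the server and clients really do share a subnet under that mask, here is a small pure-shell calculation (an illustration added for this write-up, not something run in the original thread):

```shell
# Check whether two addresses fall in the same network under a /20 mask.
mask=$(( 0xfffff000 ))   # 255.255.240.0

ip_to_int() {            # dotted quad -> 32-bit integer
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

net_of() { echo $(( $(ip_to_int "$1") & mask )); }

[ "$(net_of 10.1.2.8)" -eq "$(net_of 10.1.2.82)" ] \
  && echo "10.1.2.8 and 10.1.2.82 are on the same /20" \
  || echo "different subnets"
# prints: 10.1.2.8 and 10.1.2.82 are on the same /20
```

So with these masks, traffic between any of the clients and 10.1.2.82 should never need a router at all.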

Ping from 10.1.2.8, and after a while down the 10.1.2.82

64 bytes from 10.1.2.82: icmp_seq=76. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=77. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=78. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=79. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=80. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=81. time=0. ms

----10.1.2.82 PING Statistics----
82 packets transmitted, 20 packets received, 75% packet loss
round-trip (ms)  min/avg/max = 0/54/1040

Ping from 10.1.2.6 when 10.1.2.82 down for a while and up the 10.1.2.82 :

64 bytes from 10.1.2.82: icmp_seq=37. time=61. ms
64 bytes from 10.1.2.82: icmp_seq=38. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=39. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=40. time=0. ms
64 bytes from 10.1.2.82: icmp_seq=41. time=0. ms

----10.1.2.82 PING Statistics----
42 packets transmitted, 7 packets received, 83% packet loss
round-trip (ms)  min/avg/max = 0/459/2081

I can't see anything obviously wrong in that.

Can you retrieve the MTU setting for all boxes?

(Which version of HP-UX are you using? I don't think you've said.)

10.1.2.6 and 10.1.2.8 run HP-UX B.11.23 U ia64 (tb).
10.1.2.103 runs HP-UX B.11.31 U ia64 (tb).

MTU for 10.1.2.8:

intabck:/>netstat -i
Name      Mtu  Network         Address         Ipkts   Ierrs Opkts   Oerrs Coll
lan1      1500 192.168.2.0     192.168.2.8     0       0     0       0     0
lan0      1500 10.1.0.0        intabck         5039336 0     69686   0     0
lo0       4136 loopback        localhost       306720  0     306720  0     0

For 10.1.2.6 :

Name      Mtu  Network         Address         Ipkts   Ierrs Opkts   Oerrs Coll
lan1      1500 192.168.2.0     192.168.2.6     809368  0     809449  0     0
lan0      1500 10.1.0.0        intaqa          2527675585 94677797 876270315 0   0
lo0       4136 loopback        localhost       21359196 0     21359196 0     0

Is the MTU on the Suse box also 1500?

Please show us the output of the following when the error occurs:

# Your mount try on intabck box
fuser /bkup
mount
bdf
netstat -rn
traceroute 10.1.2.82
dmesg | tail -5

Regards
Peasant.

When you use the showmount command you are implicitly stating that you use NFSv3, so just to clarify: you do use NFSv3, yes?

On the outside chance you don't: you need to have the "NFS domain" on server and client set to the same value. You do that in /etc/idmapd.conf. My suggestion is to use the same value as the DNS domain, but in fact it can be any value.
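A minimal sketch of what that looks like, assuming the stock idmapd layout (the domain value "example.com" below is a placeholder, not taken from this thread):

```
# /etc/idmapd.conf -- relevant only for NFSv4; the Domain value must
# match on both client and server.
[General]
Domain = example.com
```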

Also check that the non-working systems have the right (i.e., corresponding) services started, because NFSv3 and NFSv4 use completely different daemons.

I hope this helps.

bakunin

Your problem is here:

intabck:/>rpcinfo -p 10.1.2.82
rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)

Is port 111 (rpcbind) defined in /etc/services on intabck for both udp and tcp?
Is a firewall running on the NFS server that restricts port 111 to certain IP addresses?
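To check the firewall question on the SUSE side, something like the following should work (the service name is an assumption for this particular box; SuSEfirewall2 was the stock firewall on older SUSE releases):

```shell
# On the NFS server, as root -- look for rules touching portmapper/NFS ports:
rcSuSEfirewall2 status
iptables -L -n | grep -E '111|2049'
```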

Hi,

The MTU on the SUSE server is:

linux-4zr9:~ # netstat -i
Kernel Interface table
Iface   MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0   1500   0 110413187   2431 327835      0   126598      0      0      0 BMRU
lo    16436   0      118      0      0      0      118      0      0      0 LRU
linux-4zr9:~ #

Mount try from intabck :

intabck:/etc>mount 10.1.2.82:/public /bkup
NFS server 10.1.2.82 not responding still trying
NFS server 10.1.2.82 not responding still trying

Fuser :

intabck:/etc>fuser /bkup
/bkup:

Mount :

intabck:/etc>mount
/ on /dev/vg00/lvol7 ioerror=nodisable,log,dev=40000007 on Thu Sep 21 22:17:08 2017
/stand on /dev/vg00/lvol1 ioerror=mwdisable,log,nodatainlog,tranflush,dev=40000001 on Thu Sep 21 22:17:09 2017
/var on /dev/vg00/lvol12 ioerror=mwdisable,delaylog,nodatainlog,dev=4000000c on Thu Sep 21 22:17:23 2017
/usr on /dev/vg00/lvol11 ioerror=mwdisable,delaylog,nodatainlog,dev=4000000b on Thu Sep 21 22:17:23 2017
/usr/sap/trans on /dev/vg00/lvol13 ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=4000000e on Thu Sep 21 22:17:23 2017
/usr/sap/INQ on /dev/vg00/lvol17 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4000001e on Thu Sep 21 22:17:23 2017
/usr/sap/DEV on /dev/vg00/lvol18 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4000001f on Thu Sep 21 22:17:23 2017
/tmp on /dev/vg00/lvol10 ioerror=mwdisable,delaylog,nodatainlog,dev=4000000a on Thu Sep 21 22:17:24 2017
/sapmnt/INQ on /dev/vg00/lvol15 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4000001b on Thu Sep 21 22:17:24 2017
/sapmnt/DEV on /dev/vg00/lvol16 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4000001d on Thu Sep 21 22:17:24 2017
/oracle on /dev/vg00/lvora ioerror=mwdisable,largefiles,delaylog,nodatainlog,dev=4000000d on Thu Sep 21 22:17:24 2017
/oracle/INQ/sapreorg on /dev/vg00/lvreorg ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000019 on Thu Sep 21 22:17:25 2017
/oracle/INQ/sapdata8 on /dev/vgsap/lvdata8 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4001000e on Thu Sep 21 22:17:25 2017
/oracle/INQ/sapdata7 on /dev/vgsap/lvdata7 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4001000d on Thu Sep 21 22:17:25 2017
/oracle/INQ/sapdata6 on /dev/vgsap/lvdata6 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4001000c on Thu Sep 21 22:17:25 2017
/oracle/INQ/sapdata5 on /dev/vgsap/lvdata5 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4001000b on Thu Sep 21 22:17:25 2017
/oracle/INQ/sapdata4 on /dev/vgsap/lvdata4 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4001000a on Thu Sep 21 22:17:26 2017
/oracle/INQ/sapdata3 on /dev/vgsap/lvdata3 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010009 on Thu Sep 21 22:17:26 2017
/oracle/INQ/sapdata2 on /dev/vgsap/lvdata2 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010008 on Thu Sep 21 22:17:26 2017
/oracle/INQ/sapdata1 on /dev/vgsap/lvdata1 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010007 on Thu Sep 21 22:17:26 2017
/oracle/INQ/saparch on /dev/vg00/lvarch ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4000000f on Thu Sep 21 22:17:27 2017
/oracle/INQ/origlogB on /dev/vg00/lvolb ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000010 on Thu Sep 21 22:17:27 2017
/oracle/INQ/origlogA on /dev/vg00/lvola ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000011 on Thu Sep 21 22:17:27 2017
/oracle/INQ/mirrlogB on /dev/vg00/lvmirb ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000012 on Thu Sep 21 22:17:27 2017
/oracle/INQ/mirrlogA on /dev/vg00/lvmira ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000013 on Thu Sep 21 22:17:27 2017
/oracle/DEV/sapreorg on /dev/vg00/lvdreor ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=4000001a on Thu Sep 21 22:17:27 2017
/oracle/DEV/sapdata6 on /dev/vgsap/lvol25 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010006 on Thu Sep 21 22:17:28 2017
/oracle/DEV/sapdata5 on /dev/vgsap/lvol24 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010005 on Thu Sep 21 22:17:28 2017
/oracle/DEV/sapdata4 on /dev/vgsap/lvol23 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010004 on Thu Sep 21 22:17:28 2017
/oracle/DEV/sapdata3 on /dev/vgsap/lvol22 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010003 on Thu Sep 21 22:17:28 2017
/oracle/DEV/sapdata2 on /dev/vgsap/lvol21 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010002 on Thu Sep 21 22:17:28 2017
/oracle/DEV/sapdata1 on /dev/vgsap/lvol20 ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40010001 on Thu Sep 21 22:17:28 2017
/oracle/DEV/saparch on /dev/vg00/lvdar ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000014 on Thu Sep 21 22:17:29 2017
/oracle/DEV/origlogB on /dev/vg00/lvdolb ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000015 on Thu Sep 21 22:17:29 2017
/oracle/DEV/origlogA on /dev/vg00/lvdola ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000016 on Thu Sep 21 22:17:29 2017
/oracle/DEV/mirrlogB on /dev/vg00/lvdmib ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000018 on Thu Sep 21 22:17:29 2017
/oracle/DEV/mirrlogA on /dev/vg00/lvdmia ioerror=mwdisable,nolargefiles,delaylog,nodatainlog,dev=40000017 on Thu Sep 21 22:17:29 2017
/opt on /dev/vg00/lvol9 ioerror=mwdisable,delaylog,nodatainlog,dev=40000009 on Thu Sep 21 22:17:29 2017
/home on /dev/vg00/lvol8 ioerror=mwdisable,delaylog,nodatainlog,dev=40000008 on Thu Sep 21 22:17:30 2017
/net on -hosts ignore,indirect,nosuid,soft,nobrowse,dev=a on Mon Sep 25 14:00:22 2017

bdf :

intabck:/etc>bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol7    2097152  616568 1480584   29% /
/dev/vg00/lvol1    2097152  223656 1858944   11% /stand
/dev/vg00/lvol12   20971520 5238456 15638224   25% /var
/dev/vg00/lvol11   15728640 6282888 9371984   40% /usr
/dev/vg00/lvol13   4194304 2281447 1793353   56% /usr/sap/trans
/dev/vg00/lvol17    655360   16629  598818    3% /usr/sap/INQ
/dev/vg00/lvol18    655360   16629  598818    3% /usr/sap/DEV
/dev/vg00/lvol10   2097152 1967112  129088   94% /tmp
/dev/vg00/lvol15   2097152   16982 1950167    1% /sapmnt/INQ
/dev/vg00/lvol16    655360   16629  598818    3% /sapmnt/DEV
/dev/vg00/lvora    20971520 10356584 9951586   51% /oracle
/dev/vg00/lvreorg  2097152   16982 1950167    1% /oracle/INQ/sapreorg
/dev/vgsap/lvdata8 204800000   66665 191937509    0% /oracle/INQ/sapdata8
/dev/vgsap/lvdata7 92160000   39060 86363389    0% /oracle/INQ/sapdata7
/dev/vgsap/lvdata6 81920000   36547 76765745    0% /oracle/INQ/sapdata6
/dev/vgsap/lvdata5 81920000   36547 76765745    0% /oracle/INQ/sapdata5
/dev/vgsap/lvdata4 81920000   36547 76765745    0% /oracle/INQ/sapdata4
/dev/vgsap/lvdata3 81920000   36547 76765745    0% /oracle/INQ/sapdata3
/dev/vgsap/lvdata2 112640000   44080 105558682    0% /oracle/INQ/sapdata2
/dev/vgsap/lvdata1 92160000   39060 86363389    0% /oracle/INQ/sapdata1
/dev/vg00/lvarch   2097152    1622 1964567    0% /oracle/INQ/saparch
/dev/vg00/lvolb    1048576   16725  967368    2% /oracle/INQ/origlogB
/dev/vg00/lvola    1048576   16725  967368    2% /oracle/INQ/origlogA
/dev/vg00/lvmirb   1048576   16725  967368    2% /oracle/INQ/mirrlogB
/dev/vg00/lvmira   1048576   16725  967368    2% /oracle/INQ/mirrlogA
/dev/vg00/lvdreor  2097152   16982 1950167    1% /oracle/DEV/sapreorg
/dev/vgsap/lvol25  8208384   18484 7678039    0% /oracle/DEV/sapdata6
/dev/vgsap/lvol24  7176192   18227 6710600    0% /oracle/DEV/sapdata5
/dev/vgsap/lvol23  7176192   18227 6710600    0% /oracle/DEV/sapdata4
/dev/vgsap/lvol22  7176192   18227 6710600    0% /oracle/DEV/sapdata3
/dev/vgsap/lvol21  9224192   18733 8630125    0% /oracle/DEV/sapdata2
/dev/vgsap/lvol20  7176192   18227 6710600    0% /oracle/DEV/sapdata1
/dev/vg00/lvdar    1048576   16725  967368    2% /oracle/DEV/saparch
/dev/vg00/lvdolb   1048576   16725  967368    2% /oracle/DEV/origlogB
/dev/vg00/lvdola   1048576   16725  967368    2% /oracle/DEV/origlogA
/dev/vg00/lvdmib   1048576   16725  967368    2% /oracle/DEV/mirrlogB
/dev/vg00/lvdmia   1048576   16725  967368    2% /oracle/DEV/mirrlogA
/dev/vg00/lvol9    10485760 4434000 6004536   42% /opt
/dev/vg00/lvol8    2097152   37056 2044056    2% /home
intabck:/etc>

netstat -rn :

intabck:/etc>netstat -rn
Routing tables
Destination           Gateway            Flags   Refs Interface  Pmtu
127.0.0.1             127.0.0.1          UH        0  lo0        4136
10.1.2.8              10.1.2.8           UH        0  lan0       4136
192.168.2.8           192.168.2.8        UH        0  lan1       4136
192.168.2.0           192.168.2.8        U         2  lan1       1500
10.1.0.0              10.1.2.8           U         2  lan0       1500
127.0.0.0             127.0.0.1          U         0  lo0           0

traceroute 10.1.2.82 :

intabck:/etc>traceroute 10.1.2.82
traceroute to 10.1.2.82 (10.1.2.82), 30 hops max, 40 byte packets
 1  intalinux (10.1.2.82)  1.140 ms  0.225 ms  0.195 ms
intabck:/etc>

dmesg | tail -5 :

intabck:/etc>dmesg | tail -5
NFS server 10.1.2.82 not responding still trying
NFS server 10.1.2.82 not responding still trying
NFS server 10.1.2.82 not responding still trying
NFS server 10.1.2.82 not responding still trying
NFS server 10.1.2.82 not responding still trying
intabck:/etc>

And yes, I do use NFSv3.

/etc/services on intabck :

supdup        95/tcp                 #
hostnames    101/tcp  hostname       # NIC Host Name Server
tsap         102/tcp iso_tsap iso-tsap # ISO TSAP (part of ISODE)
pop          109/tcp postoffice pop2 # Post Office Protocol - Version 2
pop3         110/tcp  pop-3          # Post Office Protocol - Version 3
portmap      111/tcp  sunrpc         # SUN Remote Procedure Call
portmap      111/udp  sunrpc         #
auth         113/tcp  authentication # Authentication Service
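The question above can be answered in one go. The snippet below greps a sample file built from the excerpt so it is self-contained for this write-up; on the real box you would point the grep at /etc/services instead:

```shell
# Self-contained check that portmap/rpcbind is registered on port 111
# for both transports.  /tmp/services.sample stands in for /etc/services.
cat > /tmp/services.sample <<'EOF'
portmap      111/tcp  sunrpc         # SUN Remote Procedure Call
portmap      111/udp  sunrpc         #
EOF

for proto in tcp udp; do
  if grep -q "111/$proto" /tmp/services.sample; then
    echo "portmap 111/$proto: present"
  else
    echo "portmap 111/$proto: MISSING"
  fi
done
# prints:
# portmap 111/tcp: present
# portmap 111/udp: present
```

So on intabck both entries are present, which points back at the server side (a firewall or rpcbind access restriction) rather than the client's services file.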

These may seem like silly questions, but I ran a lot of HP boxes years ago (and saw strange issues)...

  • Do all the HP boxes have their HP peers defined in /etc/hosts?
  • What do you have in /etc/nsswitch.conf and /etc/resolv.conf?

I'm obviously still confused about this one but still convinced this is a network issue.

In your last post you showed the routing table on intabck.

Surely that means that if 'intabck' is trying to reach any address starting with 10.1, it will route via gateway 10.1.2.8, which I guess is your internet gateway, and NOT treat 10.1.2.82 as being on the local subnet. Yes?

---------- Post updated at 08:37 AM ---------- Previous update was at 08:33 AM ----------

I said many posts back that the 'showmount' output telling you the /public export was for 10.1.2.8 did not look right.