Source port on AIX for NAS: is it always the same?

On our AIX servers, only ports 1021, 1022 and 1023 are used as source ports for mounting NAS mount points. This happens on more than 300 servers, while the destination port on the storage end is 2049. Is there a setting on the servers where these source ports are defined for mounting NAS mount points? Has anybody faced this scenario?

Thanks

Do you mean NAS? Or did you mean NFS?


Yes, NAS

Hmmmm... ok... can you elaborate on why you are asking this question, please?

Are you trying to make a NAS accessible from a large number of servers concurrently? What problems are you facing?

Certainly mountd and lockd can be configured to use a different port but now I'm not sure that is relevant to your question.

IBM: How to force mountd/lockd to use a specific port.

"services" is only about the destination port.
Tweaking the source port range and others But this is Linux.
In AIX might be hard coded. Some parameters are changeable with the no command.
A list of these:

no -a
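
For example, the ranges the stack uses for non-reserved source ports show up among the no tunables; a quick, read-only way to look at just those (assuming a reasonably current AIX level):

no -a | grep ephemeral    # shows tcp_ephemeral_low/high and udp_ephemeral_low/high,
                          # the ranges used for non-reserved source ports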

This post (in this forum!) suggests the nfso command.

On an AIX system, we face a NAS issue.

Working:

root]df -gt
/ 
/boot
/var
10.208.108.9:/data /data <- NAS  mount point

Not working:

/
/boot
/var
The NFS mount from 10.208.108.9:/data is missing; NFS is still trying to mount it.

In AIX, if we type "nfso -a" we see some parameters, one of which states:

nfs_use_reserved_ports = 1 (use ports less than 1024)
nfs_use_reserved_ports = 0 (use ports greater than 1024)

However, keeping the value "0" here does resolve the NAS mounting issue, but as per the SCD it is not safe to allow NAS communication between the AIX client and the storage on random ports.

But when we keep it at 1 we face the issue, as it only uses ports 1021, 1022 and 1023 as source ports for mounting.

So my question is: can we specify that only ports 3000 to 3020 be used for the NAS mount points?

Thanks

According to this link you can add a line to /etc/environment

NFS_PORT_RANGE=udp[4000-5000]:tcp[7000-8000]

that will be inherited in a new login shell.
(I am not sure how/if this will spread to a "NFS mount at boot".)
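A minimal sketch of how this could be tried (the 3000-3020 range is taken from the question above; verify the exact syntax against the linked documentation before rolling it out):

echo 'NFS_PORT_RANGE=udp[3000-3020]:tcp[3000-3020]' >> /etc/environment   # as root
# log in again (or reboot) so the variable is inherited, then remount and check:
umount /data
mount 10.208.108.9:/data /data
netstat -an | grep 2049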
I do not understand why you bother with the source ports; it does not matter at all, and usually a firewall does not have rules about source ports.

In AIX (many) daemons are started with a sort-of "super-daemon" called SRC (System Resource Controller). It is possible to change the way a process controlled by it is started (or run) by using the command chssys . It is also possible to start daemons via the /etc/inittab directly, as AIX has a SystemV-style boot sequence. It also sports RC-scripts, which can also be configured. (Some ssh-versions are an example of a service started by such an RC-script, although newer ssh-packages usually start it via inittab .) The group of daemons used for NFS depends on the NFS version(s) the system is using: biod , lockd , portmapper and statd are used for NFSv3, nfsrgyd for NFSv4.
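
A sketch of how such a change might look (the subsystem name is the usual one in the nfs group, but the port argument is purely illustrative; check the IBM note linked above for the exact flags each daemon accepts on your AIX level):

lssrc -g nfs                                     # list the NFS subsystems the SRC knows about
chssys -s rpc.mountd -a '-p 33333'               # illustrative: change the arguments the daemon is started with
stopsrc -s rpc.mountd ; startsrc -s rpc.mountd   # restart the subsystem so the new arguments take effect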

Amen to that. Furthermore, in post #3 the question was definitely about NAS and not NFS.

I hope this helps.

bakunin


I will get back to you after trying this.

This does not work; we don't know what we are missing here.

Kindly advise.


Below is a detailed explanation of the issue:

Let me explain the scenario.

- There are 100 AIX clients which have a few NAS volumes mounted on them.

- These NAS volumes are created on NetApp storage.
- The AIX clients have a separate IP (the NAS IP) for NAS volume operations.
- The NetApp storage has a LIF IP (Logical Interface).
- The destination ports on the storage for NAS communication are 2049 and 111.
- NAS communication happens between the NAS IP on the AIX clients and the LIF IP on the NetApp storage.
- The following setting exists on the AIX clients, which you can check with nfso -a:
  nfs_use_reserved_ports = 1 (use ports less than 1024)
  nfs_use_reserved_ports = 0 (use ports greater than 1024)
- As per the security rule we should keep it at "1".
- Keeping the value "0" does resolve the NAS mounting issue, but as per the SCD it is not safe to allow NAS communication between the AIX client NAS IP and the storage LIF IP on random ports.
- But when we keep it at 1 we face the issue, as it only uses ports 1021, 1022 and 1023 as source ports for mounting.
- Now I will explain the issue we are currently facing with the nfs_use_reserved_ports = 1 setting.

So when we keep the nfs_use_reserved_ports = 1 setting:
The client sends a SYN from source port 1021 to port 2049 on the storage.
The storage sends a SYN,ACK from 2049 back to 1021.
The client sends an ACK from 1021 to 2049 on the storage.
The three-way handshake is done, and at the end of it a connection on the storage is established on port 1021 and is active.
Next:
The client sends a SYN from source port 1022 to port 2049 on the storage.
The storage sends a SYN,ACK from 2049 back to 1022.
The client sends an ACK from 1022 to 2049 on the storage.
The three-way handshake is done, and at the end of it a connection on the storage is established on port 1022 and is active. Now both connections from the client, on ports 1021 and 1022, are active on the storage.
Now here comes the problem part:
Somehow the connection on the client side gets broken on one port, let's say 1021, and the client starts sending SYN requests from port 1021 again, BUT the information that the connection is broken never reaches the storage, so it stays active there on port 1021. When the client sends a SYN again from source port 1021, the storage responds with an ACK (as the connection is already established) rather than a SYN,ACK. The firewall that sits between the client and the storage drops this packet from the storage instead of resetting the connection. As a result the client keeps sending SYN requests from the same source port 1021, and the NAS mount points do not get mounted on the client.
But when we keep nfs_use_reserved_ports = 0 it uses random ports, and so far we have not faced any NAS issue on those clients.
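
To confirm which source ports the client is actually holding toward the filer when this happens, something like the following can be checked on the AIX side (the address is the one from the df example earlier; AIX netstat appends the port number to the address):

netstat -an | grep 2049
# the local address column shows the source ports in use, e.g. <client-ip>.1021,
# and whether each connection is ESTABLISHED or stuck in SYN_SENT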

So my question is: how do we define specific NAS source ports on the AIX clients?

I hope you all understand my issue now.

Let us know if you have any queries.
Thanks

It seems to me your problem is because of the firewall in between that is interfering with NFS communication. I think that the reason that you are having more problems with nfs_use_reserved_ports=1 is that there are fewer ports in the pool and you are therefore more likely to reuse a port that the Netapp SVM thinks is still in use. I think this can happen when the firewall interferes with normal communication and therefore the Netapp SVM has not learned that a port is no longer in use.

The firewall is probably configured to drop, rather than reject, packets, so that is something that you could look into. Another thing to investigate is keep-alive signals and timeouts, to ensure that the firewall does not interfere.

That being said, it may be that your particular brand of firewall just does not work well with NFS, no matter what you try.

I am guessing that you are using a firewall to limit which systems are allowed to approach the filer, but I think it would be better if you put the firewall around the systems and the storage SVM so that there is a clear path between them, while also limiting which servers can approach the SVM.


Well, we talked with the firewall team as well, but they are saying that it is normal firewall behavior to drop the packets rather than send a reset.

Another plan of action to resolve this issue is:
Plan 1
Keep both the NAS IP and the storage LIF IP in the same VLAN and don't keep any firewall in between. (Currently the NAS IP and the storage LIF IP are in different VLANs with a firewall in between.)

But I would also like to know about
Plan 2
Keep the same setup, with communication happening from random source ports on the client end to the storage LIF ports, with the firewall in between.

Which will be more secure, Plan 1 or Plan 2?

Thanks

That is a matter of choice. Dropping packets is more legitimate in an Internet-facing situation, but if you are using it for internal segmentation dropping will break stuff, while a reject is more graceful. There are pros and cons, but it is not "normal behavior" in the sense that it is the only possibility.

Besides this, there are options to keep connections alive, to change timeouts or to make the time longer before the firewall interferes.

With Plan 2 I think you may still have the problem once in a while, just less frequently. I would personally avoid sharing NFS through a firewall unless you are using NFS with Kerberos. If you are using standard NFS with auth_sys authentication, then in my opinion that is usually not a very secure situation, and using reserved ports is not going to help that. But even with all that you described, I do not know enough about your situation...

Besides this, there are options to keep connections alive, to change timeouts or to make the time longer before the firewall interferes.

How do we keep it alive? What do you mean by this; which connection should be kept alive? Kindly suggest.

NFS_PORT_RANGE=udp[4000-5000]:tcp[7000-8000]

How do we make this work? Setting the port range for NFS could also resolve the issue.

Thanks.

At regular intervals packets are sent over an existing connection to make sure the partner is still there. These packets are called "keepalive" packets. If these packets are not received, the partner assumes that the other side went dead and closes the connection.

Think of a connection like a telephone call: when you talk to someone you expect some sort of acknowledgement that the other is still listening at times, be it "aha" or "hmm" or something such. If you don't get that you may ask "are you still there" - and if there is no answer you hang up. This is quite the same mechanism.

I hope that helps.

bakunin

In addition to what Bakunin said:
A firewall drops or rejects a connection after a certain period if there is no activity.
A keep-alive message can be sent as a null packet periodically by the client to keep a service alive. This keeps the firewall from dropping the connection. Of course, if this were done liberally by every host for every connection, then the connection table in the firewall would become too long. That is why some firewalls detect this behavior and ignore keepalive messages.
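
On AIX the TCP keepalive timers are no tunables, so one way to make keepalives start before the firewall's idle timeout expires is to lower the idle time. A sketch (the values are illustrative, the units on AIX are half-seconds, and keepalives only help for sockets that actually have the keepalive option set):

no -o tcp_keepidle            # idle time before probing starts (default 14400 half-seconds = 2 hours)
no -o tcp_keepintvl           # interval between keepalive probes
no -o tcp_keepidle=1200       # illustrative: start probing after 10 minutes of idle time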

I know this is an old discussion - but your problem is, imho, self-inflicted.

michael@x071:[/home/michael]nfso -h nfs_use_reserved_ports
Purpose:
Specifies using nonreserved IP port number.
Values:
        Default: 0
        Range: 0 - 1
        Type: Dynamic
        Unit: On/Off
Tuning:
Value of 0 will use nonreserved IP port number when the NFS client communicates with the NFS server.

The default is zero (0)

michael@x071:[/home/michael]nfso -o nfs_use_reserved_ports
nfs_use_reserved_ports = 0
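
For completeness, should you decide to go back to the default, nfso can set the tunable on the running system and record it for the next boot in one go (a sketch; see nfso(1) for the exact persistence options on your level):

nfso -o nfs_use_reserved_ports=0      # change the running value only
nfso -p -o nfs_use_reserved_ports=0   # change it now and make it permanent across reboots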

In the early 1980s there was this idea that port numbers less than 1024 could be "trusted" because only the super-user (aka root) could access them. This trust has been misplaced since the late 1980s, as too many processes can access these so-called trusted ports. Why trust NFS (on port 2049)? It is well above 1024. Why would that number, 2049, be trustworthy and not other numbers above 1024?

In short, "trusted ports" exist in that it is still specified that a kernel privilege is needed to "open" aka request a connection from/to any other port.

If someone, even from your local security team, says they MUST be 1023 and smaller, of course you can comply, BUT they are causing another security concept to be breached: availability. Not enough ports means no connectivity.

In short: there is no added trust just because a specific port number is being used. There might be a technical reason (e.g., firewall rules) to stay in a particular range, but the port number itself neither adds to nor subtracts from the application's security.

My 4 cents - hope it gets you decent coffee :-)