Sub-folders in share disappear when mounting from another server

Hello,

I have 3 Solaris 11.2 servers:

  • Servers 1 and 2 are just file servers, each with one ZFS share
  • Server 3: I want to use this one to connect to the rest of our network (Windows machines and a few Solaris machines)

I created the shares on all servers like this (replace x with the number of the server, so jobsx becomes jobs1 on server 1):

zfs create rpool/jobsx
zfs set share=name=jobsx,path=/rpool/jobsx,prot=smb,guestok=true rpool/jobsx
zfs set sharesmb=on rpool/jobsx
zfs set share=name=jobsx,path=/rpool/jobsx,prot=nfs rpool/jobsx
zfs set sharenfs=on rpool/jobsx
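
(To double-check that the shares are active, something like this should work on each server, using the same jobsx naming as above:)

# List all active shares on this server
share

# Show the ZFS share properties for the dataset
zfs get sharenfs,sharesmb rpool/jobsx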

On server 3, I mount the shares from server1 and server2 onto sub-folders:

mount server1:/rpool/jobs1/ /rpool/jobs3/server1
mount server2:/rpool/jobs2/ /rpool/jobs3/server2
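
(To confirm the mounts on server 3, something along these lines should do it:)

# Check that both remote shares are mounted
df -h /rpool/jobs3/server1 /rpool/jobs3/server2

# Show the NFS mount options for each mounted share
nfsstat -m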

On server 3 I can see the files from servers 1 and 2, but when I mount server 3's share on another Solaris machine in the network, I can't see the sub-folders in /rpool/jobs3 (the same happens when I try to connect from a Windows machine).

How can I resolve this?

Regards
Wim

I've read your post several times but I'm finding it difficult to comprehend what is going on here.

You have:
a) Created an NFS share on Server1 (Solaris) and an NFS share on Server2 (Solaris). From Server3 (Solaris) you are able to mount both of these shares, and they work.

b) Created an NFS share on Server3 (Solaris). Are you trying to encompass the two mounts from Server1 & Server2 within this share, and then mount this share from other systems? Subdirectories within the share on Server3 are then not accessible from the remote clients?

Is that right? Can you add any detail to this please?

I'm not understanding what you did, either. Are servers 1, 2 & 3 on the same physical box (zones)? In a cluster? That might help to explain it.

Hello,

@hicksd8: yes, that's it.
So basically I want server 3 to be connected to the rest of the network, and servers 1 and 2 accessible through server 3.
Servers 1 and 2 are physical servers; server 3 is virtual (Hyper-V).

Hmmmm... I must admit that I don't have much experience with passing NFS mounts within NFS mounts, so I'm still thinking about this one (and I'll be interested to see what my fellow moderators think), but these are my initial thoughts.

I expect that when you logged into Server3 and could see the two mounts from Server1 & Server2, you were using an account with reasonable privileges; perhaps your own account or even root. Be aware that when you connect from a remote NFS client to Server3, it is probably mapped to a lowly account, e.g. nobody, and so access can easily get caught up in standard Unix security. Therefore, I would be tempted to open up security by setting the mount points on Server3 to 777. Create some small files in the root of the share on Server3 and test whether you can see those from the remote NFS client(s). You might also try opening up security on the Server1 and Server2 shared directories to 777. Also check how the NFS server security is configured in the /etc/exports file on each NFS server.

Do be professional: record the owner, the group, and the permission mask of every directory/file you are changing security on before you change it, so that you know how to reverse it all.
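
Something along these lines, using the paths from your post (test only; adjust as needed):

# Record the current owner/group/permissions so the change can be reversed
ls -ld /rpool/jobs3 /rpool/jobs3/server1 /rpool/jobs3/server2 > /var/tmp/perms.before

# Temporarily open up security on the mount points
chmod 777 /rpool/jobs3 /rpool/jobs3/server1 /rpool/jobs3/server2

# Create a small test file in the root of the share, then check whether
# the remote NFS clients can see it
touch /rpool/jobs3/testfile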

Obviously, if changing security does make it work, you still might not want to leave such open security in place (I understand that), but at least it will prove what the problem is. I'll post again if/when I have more thoughts on this.

Sorry, but I'm confused too. What do you actually need to achieve for your business? Can you help me understand?

  • Are servers 1 & 2 on a private segment of the network or not routable so that server 3 has to be a conduit? Why are they like this?
  • Does '...connect to the rest of our network...' mean that server 3 actively opens connections, or just that everything is on the same subnet or routable?
  • Are you trying to achieve HA with a single point to contact?
  • Are you trying to present a single 'share' for the end-clients rather than two?
  • Are you trying to ensure that the data is read-only and only certain portions are visible at all?

Sorry for being confused, but each of these possible desired outcomes has all sorts of other, better ways to deliver it.

Thanks in advance,
Robin

Security is set to 777 for all directories.

@rbatte1

All servers are in the same network/subnet

Are you trying to present a single 'share' for the end-clients rather than two?

Yes, because there are several machines that need to connect to the shares on servers 1 and 2, and last week we had a crash on server 1 and it took quite some time to find all the machines connected to this server. So if this happens again, I only need to adjust the mount point on server 3.

I think this might be a limitation of Solaris: it doesn't allow you to cascade NFS mounts.

Some Linux versions implement a crossmnt option in /etc/exports to specifically allow this kind of 'route through' of NFS mounts.
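
For illustration, on a Linux NFS server (not Solaris) the export would look something like this in /etc/exports, borrowing the paths from this thread:

# /etc/exports on a Linux NFS server
# crossmnt lets clients cross into filesystems mounted below the export point
/rpool/jobs3    *(rw,crossmnt,no_subtree_check)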

I'm still thinking about this one.

Have you considered using Solaris Cluster on server 1 and server 2 with a VIP address?
Do you have shared storage (FC, iSCSI)?

If I have understood your problem correctly, I would suggest going back to the drawing board...

Is this what you are trying to do?

        *------------*
        |            |
        |   SERVER1  |
        |            |
        |/nfs/share1 ---------------*
        *------------*              |
                                    |
                                    |
                                    |   *---------------------------------------*
                                    |   |                                       |
                                    |   |       SERVER 3                        |
                                    |   |                                       |
                                    *--->server1:/nfs/shareX/share1 [nfs mount] |
                                *------->server2:/nfs/shareX/share2 [nfs mount] |
                                |       |/nfs/shareX/[nfs export]------------------->-->--> [various number of nfs clients]
                                |       |                                       |
                                |       *---------------------------------------*
                                |
                                |
        *------------*          |
        |            |          |
        |   SERVER2  |          |
        |            |          |
        |/nfs/share2 -----------*
        *------------*

Regards
Peasant.

@Peasant: Yes, that's what I'm looking for!
What's the advantage of Solaris Cluster?
We have no shared storage.

@hicksd8: there's no /etc/exports file or folder.
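
It seems Solaris doesn't use /etc/exports at all; as far as I can tell, the active shares show up in /etc/dfs/sharetab instead:

# List the currently active shares on a Solaris server
share
cat /etc/dfs/sharetab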

For info: the data on servers 1 and 2 is all temporary images (between 25,000 and 50,000 per day: .tiff, .eps or .ps files) that get deleted after roughly 14 days, so there's no need to make it complicated and no need for backups. :)