Zone fails to boot due to mount issue, dir exists in zone.

I have two physical servers, with zones that mount local storage.

We were using a "raw device" entry in the zonecfg to point to a metadevice in the global zone (it was not mounted in the global zone at any point).
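
For context, the fs entry looked roughly like this (reconstructed from memory; d100 stands in for the real metadevice name):

add fs
set dir=/var/syslog
set special=/dev/md/dsk/d100
set raw=/dev/md/rdsk/d100
set type=ufs
end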

It failed to mount on every boot because the directory existed in the zone.

I changed it to a lofs mount, so the filesystem is now mounted in the global zone and loopback-shared into the zone.
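
On the global-zone side it is just a normal UFS mount now; the vfstab entry looks something like this (device and path are placeholders):

/dev/md/dsk/d100  /dev/md/rdsk/d100  /Zone/ZONE_MOUNTS/zonename_var_syslog  ufs  2  yes  -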

I still get the same error: I have to delete the directory in the zone before booting for the mount to succeed, or for the zone to boot at all. It fails because the directory exists, and the zone will not boot when the physical server reboots...

Has anyone seen this before? I'm stumped; I don't want to have to make a script that deletes the directory.

Can you post the output of:

zonecfg -z zone_in_question export

The filesystem type is UFS.

create -b
set zonepath=/Zone/#zonename3
set autoboot=true
set ip-type=shared
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add fs
set dir=/var/syslog
set special=/Zone/ZONE_MOUNTS/#zonename#_var_syslog
set type=lofs
end
add net
set address=10.xx.xx.xx
set physical=aggr1
end

---------- Post updated at 02:51 PM ---------- Previous update was at 02:27 PM ----------

Do you think it has anything to do with the fact that I have "create -b" but it is a sparse root zone? I just noticed that myself...
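
If I understand it right, a plain create applies the default template, which is what pulls in the inherit-pkg-dir entries, while create -b starts from a blank config (somezone is just a placeholder name):

# sparse root from the default template (adds the inherit-pkg-dirs):
zonecfg -z somezone "create"
# blank configuration, no inherited directories:
zonecfg -z somezone "create -b"

Not sure whether that mismatch matters for the mount failure, though.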

And what does this show when the zone is halted:

ls -la /Zone/#zonename3/root/var/syslog

The config seems a little bit strange. If you used

create -b

why are inherited directories present?
Anyway, if you are accessing the metadevice from inside the local zone, it should be accessed as a raw device (but this is not a hard rule). If you are using lofs, see the config example below.

global# newfs /dev/rdsk/c1t0d0s0
global# mount /dev/dsk/c1t0d0s0 /mystuff
global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/usr/mystuff
zonecfg:my-zone:fs> set special=/mystuff
zonecfg:my-zone:fs> set type=lofs
zonecfg:my-zone:fs> end
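
For comparison, the direct (non-lofs) UFS variant, where the global zone mounts the device into the zone at boot, would look something like this (device names are examples only):

global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/usr/mystuff
zonecfg:my-zone:fs> set special=/dev/dsk/c1t0d0s0
zonecfg:my-zone:fs> set raw=/dev/rdsk/c1t0d0s0
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> end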

Can I ask about the hashes in the output? Are they replacing something that shouldn't be visible in public?

<server> 89 # ls -al //Zone/zonename/root/var/syslog
total 4
drwxr-xr-x 2 root root 512 Nov 30 15:26 .
drwxr-xr-x 48 root sys 1024 Nov 30 15:26 ..
<zonename> 90 #

---------- Post updated at 09:02 PM ---------- Previous update was at 09:01 PM ----------

That would be correct; I have to filter server names and IP addresses, hence the hashes.

---------- Post updated at 09:04 PM ---------- Previous update was at 09:02 PM ----------

And yeah, don't even go there, I hate sparse root zones.

---------- Post updated at 09:08 PM ---------- Previous update was at 09:04 PM ----------

Figured out something else: the zone boot only fails if I reboot the global zone.
This is quite mysterious.
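
One thing I can check after the global reboot is whether the backing filesystem actually got mounted before the zones service came up (paths are my placeholders again):

df -h /Zone/ZONE_MOUNTS/zonename_var_syslog
svcs -xv svc:/system/zones:default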

OK, try this then:

  1. Set autoboot to false:
zonecfg -z zonename set autoboot=false
  2. Boot up the zone (or leave it running if it is already up).
  3. Reboot the global zone.
  4. After the global zone is up again, run:
ls -al /Zone/zonename/root/var/syslog
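
Then try booting the zone by hand and, if it comes up cleanly, re-enable autoboot (zonename is your zone's name):

zoneadm -z zonename boot
zonecfg -z zonename set autoboot=true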