Filesystems automatically unmounted (closed/syncd)

Hello friends,

I am confused by an AIX filesystem problem.
On one of my servers, some of the rootvg filesystems show closed/syncd status, e.g. /home and /var/adm/ras/platform.
Every day I have to mount these filesystems manually.

What could be causing these filesystems to go into the closed/syncd state?
The /etc/filesystems attributes look normal. Please see the outputs below.

# lsvg -l rootvg 
rootvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5                 boot       1     2     2    closed/syncd  N/A
hd6                 paging     32    64    2    open/syncd    N/A
hd8                 jfs2log    1     2     2    open/syncd    N/A
hd4                 jfs2       16    32    2    open/syncd    /
hd2                 jfs2       120   240   2    open/syncd    /usr
hd9var              jfs2       24    48    2    open/syncd    /var
hd3                 jfs2       40    80    2    open/syncd    /tmp
hd1                 jfs2       24    48    2    closed/syncd  /home
hd10opt             jfs2       24    48    2    open/syncd    /opt
fwdump              jfs2       1     2     2    closed/syncd  /var/adm/ras/platform
tsmtestlv           jfs        10    10    1    closed/syncd  N/A
fslv00              jfs2       1     1     1    closed/syncd  /testtsm
loglv06             jfslog     1     1     1    closed/syncd  N/A
fslv07              jfs2       8     8     1    closed/syncd  /tsmdata/toc
lv02                jfs        117   117   1    closed/syncd  /mkcd/cd_images

/etc/filesystems

 
/home:
        dev             = /dev/hd1
        vfs             = jfs2
        log             = /dev/hd8
        mount           = true
        check           = true
        vol             = /home
        free            = false

Please help ASAP.

The "mount = TRUE" indicates that the FS would be mounted automatically during a reboot, so that rules a reboot out.

The only other way to get an FS into the "closed" state is to umount it. Maybe this is done by some script that runs frequently?

You could write a little script which tests at regular intervals whether /home is still mounted and writes a timestamp to a log file each time it is. This way you could find out exactly when the umount happens. Then have a look in the crontabs; maybe you can find the "offender".
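A minimal sketch of such a watchdog (the script name, log path and interval are just examples, adjust as you like):

#!/bin/ksh
# log once a minute whether /home is still mounted
LOG=/tmp/home_mount.log
while true ; do
    if mount | grep -w '/home' >/dev/null 2>&1 ; then
        print "$(date) /home is mounted" >> $LOG
    else
        print "$(date) /home is NOT mounted" >> $LOG
    fi
    sleep 60
done

Once you know the time of the umount, something like this lists every crontab that calls umount:

# grep -l umount /var/spool/cron/crontabs/* 2>/dev/null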

Just guessing, but could it be that some script mounts an NFS share, tries to umount it and simply gets it wrong - unmounting not the NFS share but the /home FS?

I hope this helps.

bakunin

Do you have automount enabled for these filesystems? If so, they might appear unmounted as long as they are not in use.
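A quick way to check (just the obvious commands):

# ps -ef | grep -i automount
# lssrc -s automountd

If automountd is active and /home is one of the maps it manages, it will only show up as mounted while somebody is actually using it.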

Rgds
zx

Sorry for the late reply, guys. Thanks, bakunin.
Yes, you're right. There is a backup script run by Tivoli which issues "umount all" and varyoffvg for some of my application VGs. Later, after the backup activity, another script varies the same VGs back on and mounts my application filesystems.
So now the question is: when the first script runs "umount all", why does /home end up in closed/syncd state, even though it is a system-related filesystem?
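For reference, the sequence boils down to something like this (heavily simplified; VG and mount point names are just placeholders):

# pre-backup script
umount all            # unmount filesystems before the backup
varyoffvg appvg       # vary off the application VG

# post-backup script
varyonvg appvg        # vary the application VG back on
mount /appdata        # remount the application filesystems only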

Unmount them manually and then use whatever the script uses to mount them (probably mount -a) and see if it fails. Sometimes /etc/filesystems needs pruning, though that is usually only an issue when you have nested filesystems like /home and /home/fsname, etc.
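For example (assuming /home is the one you want to test):

# umount /home
# mount -a
# mount | grep /home

mount -a tries to mount everything in /etc/filesystems that has mount = true, so any error for /home should show up right there. And if the root cause really is the backup script's "umount all", a gentler alternative (the VG name below is just a placeholder) would be to unmount only the filesystems belonging to the application VG instead of everything:

for fs in $(lsvgfs appvg) ; do umount $fs ; done
varyoffvg appvg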