Hi,
could anyone please let me know how to write a shell script to find missing mount points after a server reboot?
I want to capture the mount point information before the server reboots, and validate the mount points after the reboot to see if any are missing. Please walk me through the shell script from beginning to end, as I have no knowledge of scripting.
Thanks,
Venkat
Hi VenkatReddy786,
Welcome to the forum. Please use code tags for any commands or code, and do go through the forum rules once. Let me give you some basic ideas here; please try them yourself and let us know if you face any issues with what you have tried so far.
i- You can make a script that checks mount points using the df command: capture them before and after the activity and compare; they should be the same as before the activity.
ii- For a permanent solution, check the /etc/fstab file and make sure the mount points there are not soft. If they are defined as soft mount points they will not come back up after a reboot. This should be the permanent fix for the issue.
EDIT: On Solaris the file is /etc/vfstab.
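As an illustration of point (ii), the "mount at boot" column is field 6 of /etc/vfstab, so a small awk filter can flag entries that will not be mounted by mountall at boot (a sketch only; note that some such entries, like /devices, are mounted by the system anyway, and the function reads stdin so the sample lines below stand in for the real file):

```shell
# Print the mount point (field 3) of vfstab entries whose "mount at boot"
# column (field 6) is "no", skipping comment lines.
not_mounted_at_boot() {
    awk '$1 !~ /^#/ && NF >= 6 && $6 == "no" { print $3 }'
}

# On a live Solaris box you would run:  not_mounted_at_boot < /etc/vfstab
# Demonstration with two sample vfstab lines:
not_mounted_at_boot <<'EOF'
fd - /dev/fd fd - no -
swap - /tmp tmpfs - yes -
EOF
```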
kindly try out the steps and let us know if you face any issues.
Thanks,
R. Singh
Hi Venkat & Ravinder,
Please note that on Solaris there is no /etc/fstab; it is in fact /etc/vfstab:
root@ekbvsollic01:~# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
/devices - /devices devfs - no -
/proc - /proc proc - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
fd - /dev/fd fd - no -
swap - /tmp tmpfs - yes -
/dev/zvol/dsk/rpool/swap - - swap - no -
You can glean a great deal of additional information from the mount table, /etc/mnttab:
root@ekbvsollic01:~# cat /etc/mnttab
rpool/ROOT/solaris / zfs dev=4490002 0
/devices /devices devfs dev=8540000 1405604525
/dev /dev dev dev=8580000 1405604525
ctfs /system/contract ctfs dev=8640001 1405604525
proc /proc proc dev=85c0000 1405604525
mnttab /etc/mnttab mntfs dev=8680001 1405604525
swap /system/volatile tmpfs xattr,dev=86c0001 1405604525
objfs /system/object objfs dev=8700001 1405604525
sharefs /etc/dfs/sharetab sharefs dev=8740001 1405604525
/usr/lib/libc/libc_hwcap1.so.1 /lib/libc.so.1 lofs dev=4490002 1405604533
fd /dev/fd fd rw,dev=8840001 1405604535
rpool/ROOT/solaris/var /var zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490003 1405604535
swap /tmp tmpfs xattr,dev=86c0002 1405604535
rpool/VARSHARE /var/share zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490004 1405604536
rpool/data /data/dataarch zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490005 1405604541
rpool/export /export zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490006 1405604541
rpool/export/home /export/home zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490007 1405604541
rpool /rpool zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490008 1405604541
rpool/space /space zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4490009 1405604541
-hosts /net autofs nosuid,indirect,ignore,nobrowse,dev=8900001 1405604546
auto_home /home autofs indirect,ignore,nobrowse,dev=8900002 1405604546
-fedfs /nfs4 autofs ro,nosuid,indirect,ignore,nobrowse,dev=8900003 1405604546
/dev/dsk/c7t0d0s2 /media/SOL_10_113_X86 hsfs ro,nosuid,noglobal,maplcase,rr,traildot,dev=3380042 1405604546
Regards
Dave
Hi Ravinder Singh
Thanks for the prompt response. I know how to check manually with the df command, but I want to validate this with a shell script. I am at a basic level; could you please help me with how to write the shell script for this?
Hello Dave,
I have already edited that in my post.
Thanks,
R. Singh
Hi Guys,
Check the post again as I have subsequently updated with more information.
Dave
Hi Dave,Ravinder Singh,
I want to place a shell script on the server to validate the mount points. It would be very helpful if you could provide the script.
Thanks,
Venkat
Hi Venkat,
Show me what you have and I'll see if I can help. The forum is here for people to help each other, and to do that we need a starting place.
To benefit from being a member of the forum it is necessary to learn, and if someone just hands you a script you will not learn. Have you searched the previous posts in the forum? The information is probably already there.
Regards
Dave
Hi Dave,
I tried searching previous posts but couldn't find anything related to this. Could you please provide a shell script for this? I will test it on my server and get back to you with the result. Below is my plan for the script:
1) Take the mount point information before the server reboot.
2) Validate the mount points after the reboot by comparing against the information taken before the reboot.
3) If nothing is missing, output a message saying no mount points are missing.
4) If anything is missing, output a message saying which mount points are missing.
Thanks,
Venkat
Since no sample data was provided, the following example shows how it could work in theory. It may or may not work for you.
script to run before reboot: collect-info-pre.sh
#!/bin/bash
df -k | awk 'NR>1 {print $1}' >df.pre
script to run after reboot: collect-info-post.sh
#!/bin/bash
df -k | awk 'NR>1 {print $1}' >df.post
if $(diff -q df.pre df.post >/dev/null 2>&1); then
echo "No missing filesystems found."
else
echo "Missing filesystems:"
grep -vx -f df.post df.pre
echo "*** Further in-depth analysis is absolutely necessary! ***"
fi
Hi junior-helper,
I appreciate your help. I will test it on the server and get back to you.
Thanks,
Venkata
Hi junior-helper,
I have tested your script. It runs, but gives the wrong output.
The if branch ("No missing filesystems found.") should have been taken, but the else branch ran instead.
Below is the output for your reference. Please check and get back to me with a correction.
FYI.
BRKC101@dog105 [/home/oracle]
>#!/bin/bash
BRKC101@dog105 [/home/oracle]
>df -k | awk 'NR>1 {print $1}' >df.pre
BRKC101@dog105 [/home/oracle]
>more df.pre
/dev/dsk/c0d0s3
/devices
ctfs
proc
mnttab
swap
objfs
sharefs
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
fd
/dev/dsk/c0d0s4
swap
swap
dog105_INT_pool/etc_opt_opsware
dog105_INT_pool/home
dog105_INT_pool/opt_opsware
dog105_INT_pool/systools
dog105_INT_pool/tivoli
dog105_INT_pool/var_log_opsware
dog105_INT_pool/OV
dog105_INT_pool/var_opt_opsware
dog105_INT_pool/perf
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/oracle
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/dbtools
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/software
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/admin
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/bkup01
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/arch01
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/admin
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/db01
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/redo01
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/redo02
dog105_INT_pool/systools_monitoring
BRKC101@dog105 [/home/oracle]
>df -k | awk 'NR>1 {print $1}' >df.post
BRKC101@dog105 [/home/oracle]
>if $(diff -q df.pre df.post >/dev/null 2>&1); then
echo "No missing filesystems found."
else
echo "Missing filesystems:"
grep -vx -f df.post df.pre
echo "*** Further in-depth analysis is absolutely necessary! ***"
> echo "No missing filesystems found."
> else
> echo "Missing filesystems:"
> grep -vx -f df.post df.pre
> echo "*** Further in-depth analysis is absolutely necessary! ***"
> fi
Missing filesystems:
grep: illegal option -- x
grep: illegal option -- f
Usage: grep -hblcnsviw pattern file . . .
*** Further in-depth analysis is absolutely necessary! ***
BRKC101@dog105 [/home/oracle]
Hi Venkat,
Post the output of the following commands;
cat /etc/vfstab
AND
cat /etc/mnttab
It looks like you have a mixture of four different types of file systems here.
Also could you please use code tags.
Regards
Dave
Hi Dave,
Below is the output you asked for.
BRKC101@dog105 [/home/oracle]
>cat /etc/vfstab
#live-upgrade:<Sun May 18 22:29:34 CDT 2014> updated boot environment <14R1-0518-2035>
#live-upgrade:<Tue Oct 8 18:17:44 CDT 2013> updated boot environment <2H2013-1008-1637>
#live-upgrade:<Mon Feb 11 12:42:34 CST 2013> updated boot environment <2H2012-0211-1113>
#live-upgrade:<Tue Jun 5 20:08:54 CDT 2012> updated boot environment <1H2012-0605-1830>
#live-upgrade:<Tuesday, July 26, 2011 06:43:13 PM CDT> updated boot environment <2ndHalf2011>
#live-upgrade:<Tuesday, January 11, 2011 11:20:36 AM CST> updated boot environment <1stHalf2011>
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0d0s1 - - swap - no -
/dev/dsk/c0d0s3 /dev/rdsk/c0d0s3 / ufs 1 no -
/dev/dsk/c0d0s4 /dev/rdsk/c0d0s4 /var ufs 1 no -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
# start TEO 4S: orardbmspart_102041q09_sol10_sparc
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/oracle - /oracle nfs - yes ro,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/software - /oracle/g01/software nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/admin - /oracle/g01/admin nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/dbtools - /oracle/dbtools nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
# end TEO 4S: orardbmspart_102041q09_sol10_sparc
# start TEO 4S: orardbmspart_102041q09_sol10_sparc
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/admin - /oracle/g01/admin/BRKC101 nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/arch01 - /oracle/g01/arch01/BRKC101 nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/bkup01 - /oracle/g01/bkup01/BRKC101 nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/db01 - /oracle/g01/db01/BRKC101 nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/redo01 - /oracle/g01/redo01/BRKC101 nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/redo02 - /oracle/g01/redo02/BRKC101 nfs - yes rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr
# end TEO 4S: orardbmspart_102041q09_sol10_sparc
BRKC101@dog105 [/home/oracle]
>cat /etc/mnttab
/dev/dsk/c0d0s3 / ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=2640003 1400607609
/devices /devices devfs dev=5980000 1400607590
ctfs /system/contract ctfs dev=59c0001 1400607590
proc /proc proc dev=5a00000 1400607590
mnttab /etc/mnttab mntfs dev=5a40001 1400607590
swap /etc/svc/volatile tmpfs xattr,dev=5a80001 1400607590
objfs /system/object objfs dev=5ac0001 1400607590
sharefs /etc/dfs/sharetab sharefs dev=5b00001 1400607590
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1 /platform/sun4v/lib/libc_psr.so.1 lofs dev=2640003 1400607600
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1 /platform/sun4v/lib/sparcv9/libc_psr.so.1 lofs dev=2640003 1400607600
fd /dev/fd fd rw,dev=5c80001 1400607611
/dev/dsk/c0d0s4 /var ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=2640004 1400607619
swap /tmp tmpfs xattr,dev=5a80002 1400607619
swap /var/run tmpfs xattr,dev=5a80003 1400607619
dog105_INT_pool/etc_opt_opsware /etc/opt/opsware zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010002 1400607630
dog105_INT_pool/home /home zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010003 1400607630
dog105_INT_pool/opt_opsware /opt/opsware zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010004 1400607630
dog105_INT_pool/systools /systools zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010005 1400607630
dog105_INT_pool/tivoli /usr/local/Tivoli zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010006 1400607630
dog105_INT_pool/var_log_opsware /var/log/opsware zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010007 1400607631
dog105_INT_pool/OV /var/opt/OV zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010008 1400607631
dog105_INT_pool/var_opt_opsware /var/opt/opsware zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=4010009 1400607631
dog105_INT_pool/perf /var/opt/perf zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=401000a 1400607631
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/oracle /oracle nfs ro,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00001 1400607638
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/dbtools /oracle/dbtools nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00002 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/software /oracle/g01/software nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00003 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/sfw01_dog105_189_orardbmspart_102041q09_sol10_sparc/admin /oracle/g01/admin nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00004 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/bkup01 /oracle/g01/bkup01/BRKC101 nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00005 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/arch01 /oracle/g01/arch01/BRKC101 nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00006 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/admin /oracle/g01/admin/BRKC101 nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00007 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/db01 /oracle/g01/db01/BRKC101 nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00008 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/redo01 /oracle/g01/redo01/BRKC101 nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d00009 1400607639
daen0234-dm3-ipsan.corp.domain.com:/vol/test04/data01_test1_evn_jun10/orardbmspart_102041q09_sol10_sparc_410866/BRKC101/redo02 /oracle/g01/redo02/BRKC101 nfs rw,bg,hard,vers=3,proto=tcp,rsize=32768,wsize=32768,nointr,xattr,dev=5d0000a 1400607639
-hosts /net autofs nosuid,indirect,ignore,nobrowse,dev=5d40001 1400607640
dog105:vold(pid681) /vol nfs ignore,noquota,dev=5d0000b 1400607644
dog105_INT_pool/systools_monitoring /systools/monitoring zfs rw,devices,setuid,nonbmand,exec,rstchown,xattr,atime,dev=401000b 1401926943
BRKC101@dog105 [/home/oracle]
>
Hi Venkat,
Your system currently has a number of different file system types mounted, and some of them automount. This means the set of mounted file systems can change simply because someone else is logged onto the server.
This will not be an easy script to write, mainly because of the automounter.
What you will probably have to do is prepare a list of the required file systems, one entry per line, in a file, let's say file.txt:
/
/var
/etc/opt/opsware
This file will need to contain a complete list of the required file systems.
Then you will have to do something like this.
#!/bin/bash
while read line
do
    ... check that the file system is mounted ...
done < file.txt
Here is an example of a subroutine I would use. It only checks and reports status, and it is for zfs; beware that the input file has different fields in it.
CHECKMOUNT ()
{
    # MSG and ERR are logging helpers from my own script library.
    for FS in $(awk -F":" '{ print $1 }' "${FSFILE}")
    do
        if zfs list | grep "^${FS} " > /dev/null
        then
            MSG "${FS} is mounted"
            ERR "Unmount all filesystems under ${FS} before re-running"
        fi
    done
}
In order to mount and error-check failed mounts you will need several modules like the one above.
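Putting the while-read skeleton together with a df-based check, one such module might look like this (a sketch only: the file.txt layout follows the suggestion above, and the mount point is assumed to be the last df column, which may need adjusting on your system):

```shell
# check_mounts: report required mount points missing from the current
# df output.  Expects a file with one mount point per line.
check_mounts() {
    reqfile=$1
    # Snapshot the currently mounted mount points (last df column).
    df -k | awk 'NR>1 {print $NF}' | sort > /tmp/mounted.$$
    status=0
    while read mp
    do
        if grep "^${mp}\$" /tmp/mounted.$$ > /dev/null
        then
            echo "${mp} is mounted"
        else
            echo "${mp} is MISSING"
            status=1
        fi
    done < "$reqfile"
    rm -f /tmp/mounted.$$
    return $status
}

# Demonstration with a throwaway list: "/" should exist everywhere,
# the second entry is deliberately bogus.
printf '/\n/no/such/mountpoint\n' > /tmp/required.$$
check_mounts /tmp/required.$$
rm -f /tmp/required.$$
```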
Regards
Dave
Hi Dave,
Thanks a lot for your initiative on this.
I just want to compare two files: one holding the mount point information from df -k before the server reboot, the other holding it from df -k after the reboot. I need to find any mount points that are missing after the reboot by comparing the two files. I don't want the script to mount the missing mount points; I will contact the SA if any mount point is missing. I would really appreciate it if you could prepare the script for this.
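The file-vs-file comparison described here can also be done with comm, which reports lines unique to each of two sorted files (a sketch; the sample lists below stand in for real df -k output, which on the server would be captured with df -k | awk 'NR>1 {print $1}' before and after the reboot):

```shell
# Sample "before" and "after" filesystem lists (real ones would come
# from df -k before and after the reboot).
printf '/dev/dsk/c0d0s3\nswap\n/oracle\n' > list.before
printf '/dev/dsk/c0d0s3\nswap\n' > list.after

# comm needs sorted input; -23 suppresses column 2 (lines unique to the
# second file) and column 3 (common lines), leaving only the lines found
# in the first file but not the second -- i.e. the missing mounts.
sort list.before > list.before.s
sort list.after  > list.after.s
missing=$(comm -23 list.before.s list.after.s)

if [ -z "$missing" ]
then
    echo "No missing filesystems found."
else
    echo "Missing filesystems:"
    echo "$missing"
fi
```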
Hi Venkat,
Before restarting your host, you can take a backup of your mount point information using:
#df > df
This stores all the df output in a file in your home directory, which you can verify against after the reboot.
Hi seenuvasan,
We can check manually; I want to do this with a shell script.
Thanks,
Venkat
Try two changes:
- diff instead of diff -q
- /usr/xpg4/bin/grep instead of grep
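With those two changes applied (and the $(...) wrapper dropped, since diff's exit status can drive if directly), the post-reboot script would read roughly as below. This is a sketch: /usr/xpg4/bin/grep is the XPG4 grep shipped with Solaris, which understands -v -x -f; the fallback and the demo seeding of df.pre are only there so the sketch runs on other systems too.

```shell
#!/bin/sh
# Corrected post-reboot check, per the two suggested changes.
GREP=/usr/xpg4/bin/grep
[ -x "$GREP" ] || GREP=grep   # fall back where the XPG4 path is absent

# Demo seed: on the real server, df.pre is written BEFORE the reboot
# by the companion script; seed it here only if it was never captured.
[ -f df.pre ] || df -k | awk 'NR>1 {print $1}' > df.pre

df -k | awk 'NR>1 {print $1}' > df.post
if diff df.pre df.post > /dev/null 2>&1
then
    echo "No missing filesystems found."
else
    echo "Missing filesystems:"
    "$GREP" -vx -f df.post df.pre
    echo "*** Further in-depth analysis is absolutely necessary! ***"
fi
```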
Oops...
I just saw your vfstab output. It is a very large one... tough to check everything manually.
It is also complicated to create a script for this; only shell scripting experts can help you.
Regards,
Sri
Hi junior-helper,
The script is working as expected. Thanks a lot for your help, you are a tiger! I really appreciate it.
Thanks,
Venkat