FYI, this is what I've done to reduce the size of the /var directory:
nulled log files such as /var/adm/wtmp
trimmed /var/mail/oracle with the vi editor
deleted files such as /var/preserve/Ex*
deleted the /var/adm/crash/crash.5 directory and its contents
deleted an application log file in /var/tmp. Initially I tried to null it with "cat /dev/null > trasym.ulma", but that didn't work, so I deleted it. After I deleted it, the file reappeared at the same size as before the deletion. After several deletions, the file finally disappeared. Unfortunately, I forgot to check with the "fuser" command when deleting it to make sure the process writing to it had already stopped.
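A side note on the reappearing file: truncating a log in place (rather than deleting it) avoids that problem, because a process that still has the file open keeps writing to the same inode. A minimal sketch, using a throwaway demo path rather than the real application log:

```shell
# Demo path only; substitute the real log file (e.g. under /var/tmp).
LOG=/tmp/trasym_demo.log

printf 'old log data\n' > "$LOG"   # create a sample "log"

# Before touching it, you could check who holds it open, e.g.:
#   fuser -u "$LOG"

# Truncate in place: the inode survives, so an open descriptor stays valid.
> "$LOG"

ls -l "$LOG"                       # now 0 bytes, same file
```

Note that a bare `> "$LOG"` works in sh/ksh/bash; C-shell users would need `cat /dev/null > "$LOG"` instead.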
.... to get the biggest at the bottom. You can also:-
ls -l | sort -nk 5
.... to get the biggest files in the current directory.
Does that direct you anywhere?
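A related trick, shown here on a throwaway directory since scanning /var needs root: summarize per-directory usage with du and sort it numerically, so the biggest subtree lands at the bottom.

```shell
# Build a small demo tree (illustrative paths only):
mkdir -p /tmp/du_demo/a /tmp/du_demo/b
dd if=/dev/zero of=/tmp/du_demo/a/big bs=1024 count=100 2>/dev/null
dd if=/dev/zero of=/tmp/du_demo/b/small bs=1024 count=10 2>/dev/null

# Per-directory usage in KB, largest last; on the real system you
# would point this at /var/* instead of the demo tree.
du -k /tmp/du_demo/* | sort -n
```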
I see that /var/preserve is quite large. This is normally where editor recovery files are left. Perhaps they could be pruned. There is also mail waiting to be read. Have a look in /var/mail and for each large file, get the user to read their mail. These are normally the output from cron jobs, so perhaps you have something that works, but issues messages that you need to take care of.
I hope that this helps, but feel free to ask more if you still need help.
There's no file missing. du reports the disk usage of files/directories. But there's more to it: filesystem infrastructure/metadata such as superblocks also consume disk space. Plus, the OS reserves some space for emergency root access (I'm not sure whether this holds true for data filesystems). This is what df reports. It goes without saying that the two figures differ.
I forgot to say something else, quite important. If you have removed a large file (or a large total size of files) and there was no change in the usage, then it could be that the file was in use. The only thing you will have done is remove the entry from the directory it is in. The filesystem as a whole will still have the blocks marked as used until the process that has it open ends / closes it.
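This effect is easy to demonstrate: remove a file while the shell still holds a descriptor on it, and the directory entry vanishes while the blocks stay allocated until the descriptor is closed. A small sketch with demo paths:

```shell
FILE=/tmp/held_open_demo.dat

# Open fd 3 on the file, write some data, then unlink it while fd 3 is open.
exec 3> "$FILE"
dd if=/dev/zero bs=1024 count=50 >&3 2>/dev/null
rm -f "$FILE"

# The name is gone, but the filesystem still counts those blocks as used,
# which is why df shows no change after the rm:
ls -l "$FILE" 2>/dev/null || echo "directory entry removed"

# Only closing the last descriptor actually frees the space:
exec 3>&-
```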
You can use the fuser command to look for deleted files that are still in use on a filesystem. Which flags to use depends on your OS.
You are in the HP-UX thread, but I'm only on 11.11, so my fuser does not have this option. On other platforms, you could use something like:-
fuser -duV /var
Not sure if this helps, but I thought I should bring it up.
Hi rbatte1,
Yes, I removed a single large file, about 1.5 GB, from the /var/tmp directory. It was an application log file and I thought it was safe to delete. It took several deletions before the file finally disappeared. After that I ran the bdf command and saw no change in usage. I thought this was the root cause, but that only lasted a while: the application generated the log file again, though with a smaller size.
Besides this file, I also deleted some other files. I panicked at the time, so I didn't properly record which files in which directories I deleted. Maybe one or more of them caused this problem. My current conclusion is that I have to find the processes still accessing these files and kill them in order to release the files.
I'm using HP-UX 11.11 too, but the fuser command only supports the following options:
-c Display the use of a mount point and any file beneath that
mount point. Each file must be a file system mount point.
-f Display the use of the named file only, not the files
beneath it if it is a mounted file system.
-u Display the login user name in parentheses following each
process ID.
-k Send the SIGKILL signal to each process using each file.
Correct me if I'm wrong, since English isn't my native language. Based on the above options, the fuser command cannot be used to find processes that are using deleted files.
Is there any other command that I can use for this purpose?
I hope that this helps. I don't have the crashinfo tool available on my HP-UX server. It seems that you have to request it from HP support, according to this article:-
On October 8th 2013, the %used reported by the bdf command had already reached 89%. I escalated this issue to the HP Call Center and they suggested stopping the diagnostic service, deleting or moving the files under /var/stm/logs/os, and starting the diagnostic service again. This helped reduce %used to 61%. They also asked me to update the Event Monitoring System (EMS) and onlinediag to the latest versions to resolve the difference between the disk usage reported by bdf and du. I decided not to update them since %used had dropped to 61%.
On October 10th 2013, %used became 62%, and the next day 63%. This was not good because it kept growing again. Fortunately I found the "The /var filesystem is full." thread on the HP forum (sorry, I can't post the URL because I have fewer than 5 posts, but you can google it with the "hp-ux /var full" keywords). So I downloaded the lsof utility, installed it, and ran it. Finally I managed to find the processes that were locking deleted files. After I killed them, %used dropped to 38%.
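For anyone landing on this thread later: the lsof option that finds such files is `+L1`, which lists open files whose link count is less than 1, i.e. deleted but still held open. A guarded sketch, since lsof may not be installed everywhere and scanning /var needs root (/tmp is used here as a stand-in):

```shell
# +L1 = open files with link count < 1 (removed but still open).
# Run as root against the affected filesystem on the real server.
if command -v lsof >/dev/null 2>&1; then
    lsof +L1 /tmp 2>/dev/null || echo "no deleted-but-open files on /tmp"
else
    echo "lsof is not installed on this host"
fi
```

Once the offending PID shows up in that listing, killing or restarting the process releases the blocks, which is exactly what brought %used down from 63% to 38% here.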