Slow copy (cp) performance

Hi all

We are having trouble copying a 2.6 GB file from one folder to another.
This is not the first issue we have had on this box recently, so I will try to explain everything we have done over the past two days.

We got a message two days back saying that our production filesystem was 98% full. So we started on the old files: we moved half of them to a backup server and compressed the remaining ones in place, which brought the disk space occupancy down to 79%.
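(For reference, the cleanup amounted to something like the following; the paths and the age cutoff are placeholders, not our real values:)

```sh
# Compress old production files in place (path and 90-day cutoff are placeholders)
find /prod/archive -type f -mtime +90 ! -name '*.gz' -exec gzip {} +

# Copy the oldest batch off to the backup server before removing it locally
# (host and paths are placeholders)
scp /prod/archive/old_batch/* backup-host:/backup/prod/
```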

This morning, when I tried to run my process, copying a 2.6 GB file from folder A to folder B took ages. The SA checked the CPU utilization and told me it was high, so I went ahead and killed some orphaned processes to bring it down to 20%. Now, even during heavy activity, CPU utilization stays between 20% and 35%.

Even after bringing the CPU utilization down, the file copy is still painfully slow.

Any guesses as to what might have gone wrong? :confused:

Let me know if I need to elaborate further.

Thanks
Sri

Since you don't tell us anything about your OS, your disk layout, or anything else, we obviously have to guess, but in any case a slow copy from A to B is much more likely an I/O issue than a CPU problem.
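You can verify that before anything else: watch the disks while the copy runs and see whether they, rather than the CPU, are saturated. The exact tools depend on your platform, but something along these lines usually works (intervals and paths are just placeholders):

```sh
# In one session, watch disk activity while the copy runs
# (tool availability varies by platform; 5-second samples are only an example)
sar -d 5 60        # per-disk busy %, queue length, service times
# or: iostat 5 60
# or: vmstat 5 60  # compare blocked/waiting processes against CPU use

# In another session, time the copy itself
# (timex exists on HP-UX/AIX; use plain 'time' on most other systems)
timex cp /folderA/bigfile /folderB/bigfile
```

If the disks sit near 100% busy while the CPU is mostly idle or waiting, it is an I/O problem.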
My best guess is that both filesystems are on the same disk and perhaps even have different block sizes. Since your filesystem was almost full, fragmentation is very likely high: the OS had to put new data wherever space was left, so the data got scattered across the remaining disk space instead of being laid out contiguously, as it would have been with plenty of free space in the volume group. And I assume you haven't run a defragfs after cleaning up your disk space.
When you now copy data from A to B and both locations are on the same disk, your system takes a lot more time to (1) locate the data in the 'correct' order in filesystem A and read it, because it is spread across the physical volume, and (2) write the data back to filesystem B in a suitable order, because the system again has to find free blocks big enough for your data chunks, and those free blocks are likely also scattered across the entire disk.
Try defragmenting your disk space a few times; that may improve performance. If not, back up your data, drop the filesystems, defrag, recreate them, and restore the content from your backups.
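On AIX that means defragfs; if you turn out to be on HP-UX with VxFS (OnlineJFS), the equivalent is fsadm. Roughly like this (mount point is a placeholder, and the VxFS reorganisation needs the OnlineJFS license):

```sh
# AIX: query fragmentation, then defragment the filesystem
defragfs -q /folderA           # report current state only
defragfs /folderA              # actually defragment

# HP-UX / VxFS: report, then reorganise, extents and directories
fsadm -F vxfs -E -D /folderA   # -E extent report, -D directory report
fsadm -F vxfs -e -d /folderA   # -e reorganise extents, -d reorganise directories
```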

Kind regards
zxmaus

zxmaus

Thanks for the response. We are on an HP-UX server.

You are right, we had a fragmentation problem on the box. My SA said we had buffer cache fragmentation as well, which kept adding to our problems.

The admin ran an overnight defrag process and increased the kernel parameter bcvmap_size_factor.
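I don't have the admin's exact commands, but on HP-UX 11i v2/v3 checking and raising that parameter looks roughly like this (the value below is only an example; older releases use kmtune or SAM instead of kctune):

```sh
# Show the current buffer-cache vmap setting
kctune bcvmap_size_factor

# Raise it (example value only; pick one sized for your buffer cache)
kctune bcvmap_size_factor=16
```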

We rebooted the box and things look much better now :slight_smile:

We are planning to keep an eye on the system and see when it starts to choke, so that we can schedule a defrag periodically.

Thanks

Rule of thumb:
Seriously consider never letting a given filesystem get above 80-85% full.

Filesystems under heavy I/O load suffer from various kinds of latency issues when free space becomes tight, and file allocation times increase as well.
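A small cron job makes that rule easy to keep. Here is a sketch (the 85% threshold is only the example above, and bdf is the HP-UX flavor of df -k; long device names that wrap onto two lines may need extra handling):

```sh
#!/bin/sh
# Warn when any filesystem crosses the chosen threshold (example: 85%).
# bdf prints: Filesystem  kbytes  used  avail  %used  Mounted on
THRESHOLD=85

bdf | awk -v limit="$THRESHOLD" 'NR > 1 && $5 + 0 > limit {
    printf "WARNING: %s is %s full\n", $6, $5
}'
```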

The other caveat:
Assuming loads of available free inodes, huge directory files (the directory file itself, not what is in the directory) are the result of adding lots of files to a single directory. As the directory file itself grows, system performance against it - ls, find, stat, etc. - becomes very poor.

This is because any operation that does a readdir, which is a sequential scan, is really slow when it has to read through 2 million entries to find one filename.

When files are deleted from a bloated directory, the directory file does not shrink. You have to park the remaining files somewhere, delete the directory, recreate it, and then move the files back in; only then do you get a new, smaller directory file. Using broader directory trees avoids the problem in the first place.
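For example, something like this finds the bloated directory files and rebuilds one. Paths are placeholders, it assumes the directory holds only plain files, and nothing should be writing to it while you do this:

```sh
# Find directory files that have themselves grown past ~1 MB
# (-size counts 512-byte blocks, so +2048 means larger than 1 MB)
find /data -type d -size +2048 -exec ls -ld {} \;

# Rebuild one bloated directory; a slight variant of the procedure above
# that renames the fresh directory into place instead of moving files twice
mkdir /data/logs.new
find /data/logs -type f -exec mv {} /data/logs.new/ \;
rmdir /data/logs               # refuses to run if anything was left behind
mv /data/logs.new /data/logs   # the new, compact directory file takes over
```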