Because the folder has thousands of files, it takes too long, and I have trouble identifying the largest files and then compressing or deleting them, for instance.
If there are many, many files, then any search through them will take time, potentially a lot of time. If this is on a filesystem that is mounted over the network, then this time will be much greater. Far better to do the work where the disk really is.
Perhaps this will be slightly more efficient for you, though:-
You can create a list of filenames and sizes first and then take a closer look with different criteria. That way the long-running part - reading all the file sizes - is only done once.
Example
1) Read the sizes
find / -type f -exec stat -c "%n %s" "{}" + >$HOME/files_sizes.txt
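2) Query the saved list with whatever criteria you like, for example the twenty largest files. A sketch only - it assumes filenames without embedded spaces, so the size is always the second field:
sort -k2,2 -rn $HOME/files_sizes.txt | head -20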
Can you sign on to the server at address 10.80.1.83? If you can, running your code there will be significantly faster than running over the network. This applies not just to the searching, but to the actual compression too. If you compress over the network, you have to read the file across the network into your local memory, compress it, and then write the resultant file back across the network to the server disk.
It really could be a massive difference in performance.
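For instance, something along these lines (the user name and file path here are invented for illustration):
# compress on the server itself instead of across the network
ssh admin@10.80.1.83 'gzip /export/data/huge_logfile'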
Stomp - there is no stat command on vanilla SunOS, AFAIK. Since Oracle took over, the Solaris freeware site has died as well.
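If stat is missing, a rough substitute can be built from standard tools. A sketch only - it assumes the usual ls -l column layout and filenames without embedded spaces:
# size is column 5, name is the last column of ls -l output
find / -type f -exec ls -l {} + | awk '{print $NF, $5}' > $HOME/files_sizes.txt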
I think the OP also has another problem - Solaris 10 file systems (not ZFS) and earlier all had a problem: with large numbers of files in a single directory, some file-related commands, notably ls, bog down. A lot.
We had a directory with >30K small files in it. I fixed the performance problems with a cron job that moved files off the primary directory every day, while still keeping them on the same file system. With about 5000 files, performance was acceptable.
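A minimal sketch of such a cron job, with made-up paths; the destination must be on the same file system so each mv is just a cheap rename:
# run daily from cron; SRC and DST are hypothetical paths on the same file system
SRC=/data/incoming
DST=/data/incoming_archive/`date +%Y%m%d`
mkdir -p "$DST"
# move everything older than a day out of the primary directory
find "$SRC" -type f -mtime +1 -exec mv {} "$DST" \;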
alexcol - please post the output of a command that gives the physical size in bytes of the exact directory with the problem.
Since I do not know the name of the directory, here is an example; note the lowercase "d" in the command:
ls -ld /path/to/directory
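The output might look something like this (the numbers are invented); the fifth column is the size of the directory file itself, which on UFS does not shrink when entries are removed:
drwxr-xr-x   2 appuser  staff   9234432 Jun 10 09:15 /path/to/directory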
Please post the result so we can help.
And if you happen to have too many individual files on a file system, regardless of their size, you can run out of inodes as well. This is pretty hard to do, but it can happen if the file system was created with unusual parameters.
To see used inodes, try:
df -i /path/to/mountpoint
where mountpoint is the place in the file system where your interesting directory is mounted.
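With GNU df the output might look like this (device name and numbers are made up) - watch the IUse% column:
Filesystem         Inodes  IUsed IFree IUse% Mounted on
/dev/dsk/c0t0d0s5  640000 638990  1010  100% /data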