Facing file processing timeouts

Hi

In our ETL application we use simple scripts to move and split files. These jobs are intermittently hitting timeouts, which suggests we are running out of resources.

How can I confirm that the resources are actually inadequate?

I know we have commands like vmstat and iostat that display performance stats. The problem is that they report on specific disks in their output, and I am not sure how the paths/directories where those files are being processed map to those disks.

Any suggestions on how to track this down?
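
To give an idea of what I mean (the path and device names below are made up, not our real ones):

df /data/etl/incoming
iostat -d 5 3

df tells me the directory sits on a filesystem such as /dev/datalv, while iostat reports activity against names like hdisk0 and hdisk1, and I cannot see how the two relate.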

I love chasing performance issues :slight_smile: First, what operations exactly are you doing? What language is being used for them? How big are the files, where do you see the timeouts, how powerful is your hardware, and what is your OS?
Use "top" command to determine what's going on like :
"top -p PID-number, second-PID-number"
replace the PID-number with the respective PIDs of your scripts and you'll see where the resources are going.
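
If you don't know the PIDs offhand, something like this should turn them up (the script name is just a placeholder for yours):

ps -ef | grep split_and_move.sh

Then feed the PIDs it prints into the top command above.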

We have simple shell scripts that split the files and move them.

The ETL application we are using is DataStage. Although the timeouts show up in DataStage and are reported by ControlM as well, we have also seen the issue with simple 'mv' commands.
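
To get some hard numbers out of those scripts, I am thinking of wrapping the two steps with 'time', roughly like this (the file names and split size are placeholders, not our real ones):

time split -b 512m /data/in/bigfile.dat /data/work/part_
time mv /data/work/part_* /data/out/

That way each run would show how long the split and the move actually take.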

The OS we are using is AIX.

As you suggested, I looked into 'top', but if I type 'man top' I get a message saying the manual entry is not installed.

How would you find out how powerful the hardware is, or what its configuration looks like? I understand the problem mostly comes down to a hardware limitation, but before I can say that I should back it up with numbers, such as how many GB of memory we have and what percentage of it is being utilized.

use "topas" instead, that's why I asked about the OS.