Inconsistent run times across parallel processes

Hi All,

I am running a parallel aggregation job on a file. I split the work into 7 separate parallel processes, all reading the same input file and doing the same aggregation. The issue I am having is that, for some reason, the 1st parallel process completes first in the least time, the 2nd completes second, and so on, with a significant difference in completion time between the processes.

I looked at CPU usage with the top command while the processes were running; every process is occupying 97% of a CPU, so I am not sure why there is a difference between the parallel runs.

Is there a way I can trace the processes and find out whether the problem is with I/O, memory, or CPU?

Note: Each process reads the same file from a NAS mount and does the aggregation. I am using Red Hat.

Thanks
Arun

More details, please.
That file almost certainly will be buffered locally when being accessed from NAS, so the first process should take longest. Will the file be updated / written back? Per process? Are the processes doing identical operations on the file? Do these influence each other? How do user and system times compare between processes? Do you have lock information available?
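
For example, something like this would show the per-process user/system times while the jobs run (the command name "aggregate" is just a stand-in for whatever your job is called):

    # Cumulative CPU time vs. elapsed wall-clock time, per process
    ps -o pid,etime,time,pcpu,pmem,comm -C aggregate

    # Or wrap each split in time(1) to get real/user/sys on completion
    /usr/bin/time -v ./aggregate part1.dat > part1.out 2> part1.stats

A process whose user+sys total is far below its elapsed time is waiting on something (I/O, locks) rather than computing.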

Why would they be identical? Especially if they're I/O bound. More details needed.

No, we are not writing the file. Basically we are reading the file from NAS, comparing it against the qualified records, and doing the aggregation.

The file on NAS is the full set, with details about the customers; the file it is compared against is on SAN and has the customers' transaction records. The file from NAS is compared with the transaction records and then the aggregation happens. We split that into 7 parallel processes to improve performance.

Are the seven splits identical? Sounds like you are breaking a transaction file into seven parts to look up against a master file.
Because I cannot fathom any reason to do the same thing seven times - my only guess is that you are doing it seven times BUT with different data elements.
Please provide more details.
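
If that is the pattern, the usual shape is something like this (illustrative names only; a reasonably recent GNU coreutils split is assumed):

    # Break the big file into 7 roughly equal line-based pieces
    split -n l/7 -d transaction_data.dat txn_part_

    # One worker per piece against the master file; wait for all seven
    for part in txn_part_0*; do
        ./aggregate "$part" master.dat > "$part.out" &
    done
    wait

That way each worker touches different data, rather than all seven repeating identical work.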

Below is what I traced back. Basically there is a huge file, and we are processing it in parallel. The file transaction_data.dat is compared with spend.dat, which is a small file. We match the transactions between these files and do the aggregation.
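
Conceptually the matching step is something like this awk sketch (the delimiter, field positions, and join key are made up for illustration):

    # Pass 1 (spend.dat): remember the qualifying keys.
    # Pass 2 (transaction_data.dat): sum amounts for matching keys.
    awk -F'|' '
        NR == FNR { spend[$1] = 1; next }
        $1 in spend { total[$1] += $3 }
        END { for (k in total) print k, total[k] }
    ' spend.dat transaction_data.dat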

I placed transaction_data.dat on SAN. Even with that, I am seeing the first parallel process take the least time, with the processing time increasing for each subsequent split.

Below is the log of the processes. I can see the file was split into almost equal parts, but I am not sure why the times differ between the parallel runs.

Once you've maxed out your I/O bandwidth, adding more processes will just make a task slower. How many processes it takes to max out your I/O bandwidth could well be "one". Spinning disks especially lose a lot of bandwidth when split between competing tasks.

Beyond that, it's difficult to say what's happening. We still don't know what you're doing. "Processing" is a fine word but tells us little.
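
One way to check is to watch device utilization while all seven are running, e.g. with iostat from the sysstat package (assuming it is installed):

    # Extended device stats every 5 seconds: %util near 100 and
    # climbing await values mean the disks, not the CPUs, are the wall.
    iostat -x 5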


Thanks again. I don't think I/O bandwidth is the issue, because the whole time the parallel runs are going I see CPU usage at 99% for each process. If I/O were the problem, the processes wouldn't all be running at 99%, right?

When I forked the separate processes, each should have the same priority, right? And I/O is a shared resource that should be divided equally among all the processes. That's what confuses me.

I am using top to determine the speed. Is there any utility that gives more insight?
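
For what it's worth, pidstat (from the sysstat package, if installed) and vmstat break things down further than top does:

    # Per-process CPU, disk I/O, and memory, sampled every 5 seconds
    pidstat -u -d -r 5

    # System-wide view: a consistently high 'wa' (iowait) column
    # points at I/O rather than CPU as the bottleneck
    vmstat 5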