Can you provide us with the output of a df command for starters? Are you sure, for instance, that /filesystem was mounted? It could be that you filled the root filesystem by mistake.
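A quick way to check where a path actually lives before writing to it. "/filesystem" is the mount point from your test; /tmp is used here only as a stand-in so the example is safe to run anywhere:

```shell
# The "Mounted on" column tells you which filesystem really backs the path.
# If it shows "/" instead of your intended mount point, the FS was not
# mounted and the writes went to the root filesystem.
target=/tmp                 # substitute your own mount point here
df -k "$target"
```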
Some OSes create /tmp in memory (and therefore in paging/swap space). If you were writing to the wrong place, that is another possibility, and it would explain why /tmp is empty after the machine restarts.
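You can check what backs /tmp directly. Note that `df -T` is the GNU/Linux form; on AIX, plain `mount` output lists the vfs type for each mount point:

```shell
# A memory-backed type (e.g. tmpfs) means anything written to /tmp consumes
# RAM/swap and vanishes on reboot; a disk-backed type (jfs2, ext4, ...) does
# not. Falls back to grepping mount output where df -T is unavailable.
df -T /tmp 2>/dev/null || mount | grep -w /tmp
```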
There's not much else to go on. Perhaps don't run them in parallel; memory may have been exhausted by the processing load.
Hmm, judging from errpt and the format of the error message (0511-051 Read failed), I suppose it is indeed AIX (at least it looks like it). Still, it would help to know the version.
You do know that
nohup time dd if=/dev/zero of=/filesystem/file_$count bs=512m count=4 &
will have all the writer processes run in parallel in the background, don't you? If I had to take a wild guess, I'd ask whether the block buffer is kept in memory and whether the 10 parallel dd instances taxed memory too much. You might want to run this again with a smaller blocksize and a higher count to compensate.
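The total data per file is the same either way, since it is just bs times count. A scaled-down sketch of the suggested change (paths and sizes here are illustrative, and note that GNU dd wants "1M" where AIX dd accepts "1m"):

```shell
# Original:  bs=512m count=4    -> 2048 MiB per file, 512 MiB buffer each
# Suggested: bs=1m   count=2048 -> same 2048 MiB, but only a 1 MiB buffer
# Scaled down here so it is safe to run anywhere:
dd if=/dev/zero of=/tmp/dd_smallbs bs=1M count=4 2>/dev/null
wc -c /tmp/dd_smallbs      # 4 MiB written in 1 MiB blocks
```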
What rbatte1 suggested (an unmounted FS) still seems the most likely cause to me. If that can be ruled out, you might consider running "vmstat" in one window and then starting the job again in another to get a more detailed picture of what is happening.
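Something like this in the second window, while the dd jobs are running:

```shell
# Free memory falling to zero together with sustained paging activity
# (pi/po columns on AIX, si/so on Linux) would confirm memory exhaustion.
vmstat 2 3      # one sample every 2 seconds, 3 samples
```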
3) I am aware that all the dd processes will run in parallel; that is the aim, to look into the throughput on the LVM (SAN-attached disks) and the benefit (if any) of setting the inter-disk policy to maximum.
With regard to the blocksize, you may well be onto something. When I originally set this to 1m with a count of 2000, the dd's ran through without any issue. So your theory of the bs being held in memory might be correct.
I was trying a bs of 512m with the thought that it might be faster than using a bs of 1m.
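The thread does not show the surrounding loop, but a minimal reconstruction of the multi-writer test (hypothetical, and scaled down to 3 writers of 2 MiB each so it is safe to run) would look like:

```shell
# Hypothetical reconstruction of the test driver -- the actual loop is not
# shown in the thread. Each dd runs in the background, so all writers hit
# the filesystem at once; "wait" blocks until every one has finished.
for count in 1 2 3; do
    dd if=/dev/zero of=/tmp/file_$count bs=1M count=2 2>/dev/null &
done
wait
ls -l /tmp/file_1 /tmp/file_2 /tmp/file_3
```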
Good. Sorry for sometimes stating the obvious superfluously; the general experience of long-standing members here is that the painfully obvious is less obvious than one might think to the better part of the audience.
If you are carrying out performance tests, you might consider taking the filesystem driver out of the equation by addressing the raw device instead of the filesystem:
dd if=/dev/zero of=/dev/somelv ....
or even the raw hard disk. I once did exactly that for the same reasons; you can read here an account of the risks involved - afterwards it was quite funny.
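One pitfall worth spelling out with such dd invocations: the input must be /dev/zero, because /dev/null returns end-of-file immediately and nothing at all gets written. A safe, file-based demonstration (deliberately not the destructive write to an LV):

```shell
# /dev/null as input: dd sees EOF on the first read and writes 0 bytes.
dd if=/dev/null of=/tmp/from_null bs=1M count=4 2>/dev/null
# /dev/zero as input: dd writes the full 4 MiB of zero bytes.
dd if=/dev/zero of=/tmp/from_zero bs=1M count=4 2>/dev/null
wc -c /tmp/from_null /tmp/from_zero
```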
Give the LPAR more memory, then (6GB should suffice). Don't increase just the "max" in the profile; increase the "Desired" value too. Fork()-ing 10 such processes will be done faster than the hypervisor can react.
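The back-of-the-envelope behind that 6GB figure (my arithmetic, not stated explicitly in the thread): ten dd processes each buffering a 512MB block need about 5GB before any OS overhead.

```shell
# 10 parallel dd instances * 512 MB block buffer each = 5120 MB (~5 GB),
# which is why a 6 GB "Desired" allocation leaves comfortable headroom.
echo "$(( 10 * 512 )) MB"
```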