Would that really work, though? To me, it sounds like the script would be run as 4 separate processes at the same time (... I wonder if that makes sense or not...).
To me, the problem is that I want all of the results to go into one output textfile. If the script were split into 4 different processes running at the same time, would they be able to write into the same output file simultaneously with no errors?
Writing into the same file will be a problem for sure. But you can have each process write to a different file based on its input parameters, and once all the processes finish, you can 'cat' them together and write the output to the final file.
Just an idea - I am sure there are better ways of doing it
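Something like this sketch is what I mean: one background ping per host, one output file per host, merged at the end. The file name servers.txt and the host list are just made-up examples here.

```shell
#!/bin/sh
# Sketch: fan out one ping per host in the background, writing one
# output file per host, then concatenate once every child finishes.
# The host list is a stand-in; in practice servers.txt already exists.
printf 'localhost\n127.0.0.1\n' > servers.txt

mkdir -p results
while read -r host; do
    # -c 3 limits each ping to three echo requests; stdout and stderr
    # both go to the per-host file so failures are captured too.
    # The trailing & runs the ping as a background process.
    ping -c 3 "$host" > "results/$host.txt" 2>&1 &
done < servers.txt

wait                                  # block until all pings finish
cat results/*.txt > final_output.txt  # single merge, so no write races
```

Because only the final `cat` touches final_output.txt, there is never more than one writer on that file.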
Actually, that sounds pretty good.
Just means more scripting for me to remember, hahaha.
What about this scenario, though?
Since pinging can take a while (especially with a lot of servers and IPs), reading through each server one at a time can be time-consuming.
What if all of this info was in one text file?
Would forking be recommended then if I had no choice but to read from one file only?
If all server names are in one file, then pinging each of them one by one will take a long time, especially if some of them are unreachable (which WILL happen). That is why I want to fork the processes, so that multiple servers can be pinged at once and the total run time goes down.
Writing a simple script like soleil4716 suggested would work fine and run in parallel. Using Perl and fork() is also possible, but slightly more complex. I suggest you limit your ping command to 3 requests and set a small timeout. Depending on how many processors you have, you probably don't want to create 1,000 concurrent pings, so keep it manageable: for example, run only 20 pings concurrently and, as each one completes, kick off another so you don't tax the system.
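One way to get that "only 20 at a time, start the next as one finishes" behaviour without hand-rolling fork() bookkeeping is `xargs -P`. The host names below are invented, and note the timeout flag is platform-specific: `-W` is the per-reply timeout in seconds on Linux ping, while BSD/macOS ping spells it differently.

```shell
#!/bin/sh
# Sketch: xargs -P 20 keeps at most 20 pings running at once and
# launches the next host as soon as a slot frees up.
# Made-up host names stand in for the real server list.
printf 'host-%02d.example.com\n' 1 2 3 > servers.txt

mkdir -p results
# -I {} substitutes each host name; each child writes its own file,
# so there is no contention on a shared output file.
xargs -P 20 -I {} sh -c \
    'ping -c 3 -W 2 "$1" > "results/$1.txt" 2>&1' _ {} < servers.txt

cat results/*.txt > final_output.txt  # merge after everything is done
```

The Perl equivalent would loop over the hosts calling fork(), keeping a count of live children and calling wait() whenever the count hits 20, but `xargs -P` gives you the same throttling for free.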