Need to develop a script to create a report reading multiple server logs

I am currently trying to develop a script that connects to multiple servers, reads specific data from log files on each server, and appends the data from each file into a single tab-delimited row. At the end I plan to have a report with all the extracted data, one row per server. I am trying to understand whether I can log in to each of these servers and write the extracted data to a single remote file, or whether I need to generate a file on each server with a single row of extracted data and scp all of them back to the source server in order to build the report. Any suggestions, pointers, etc. will be really appreciated.

  1. set ssh passwordless for each remote server.

  2. update and run the script:

: > local_log_file                # start with an empty file, then append
while read server
do
  ssh "$server" "grep DATE /XXX/remote_log_file" >> local_log_file
done < server.list
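The loop above can be extended to build the one-row-per-server report directly, since ssh prints the remote command's output on the local side. A minimal local sketch of that idea (the ssh call is replaced by a local grep because real hosts aren't available here; the hostnames, paths, and sample log contents are all illustrative):

```shell
# Build one tab-delimited row per server in a single report file.
# Demo data stands in for the real remote log files.
tmp=$(mktemp -d)
printf 'DATE 2024-01-01 ok\nINFO noise\nDATE 2024-01-02 ok\n' > "$tmp/remote_log_file"

report="$tmp/report.tsv"
: > "$report"                          # truncate the report once, then append rows

for server in serverA serverB; do      # stand-in for: while read server ... < server.list
  # Real version: ssh "$server" "grep DATE /XXX/remote_log_file"
  grep DATE "$tmp/remote_log_file" |
    paste -s -d '\t' - |               # join all matches into one tab-delimited row
    awk -v s="$server" '{print s "\t" $0}' >> "$report"   # prefix the server name
done

cat "$report"
```

Because the extraction output lands on the local side of the ssh call, no per-server files or scp round trip are needed.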

Currently I have devised a logic to have files generated on the individual servers, which will be pulled over to the central server and appended to one large file, which will be emailed as a report. I am currently stuck at the point where I need the following output in a file:

/dev/emcpowera1      636382720 236480640 399902080  38% /datatransY
                     1252757932 337580588 851540864  29% /datatransXX/XXXXXXX1
                     626371912 161135116 433418864  28% /datatransXX/XXXXXXX5

I am trying to remove the other data fields and modify the file to only have the filesystem names,

/datatransXX/XXXXXXX1/datatransXX/XXXXXXX5

so that I can use the filesystems via a variable in my logic. I am trying to match only the strings of the form /datatrans**/ in the output.

Any ideas will be really appreciated.

Thanks

$ cat infile

/dev/emcpowera1 636382720 236480640 399902080 38% /datatransY
1252757932 337580588 851540864 29% /datatransXX/XXXXXXX1
626371912 161135116 433418864 28% /datatransXX/XXXXXXX5

$ awk '/datatrans/{print $NF}' infile

Hi rdcwayx,
I have tried it and it seems to work for me and the following is the output:

$ awk '/datatrans/{print $NF}' filesystem.txt
/datatransb
/datatransn1/XXXXXXX1
/datatransn1/YYYYYYY5

The thing is, I need to implement it in multiple locations/servers. The strings XXXXXXX1/YYYYYYY5 will vary, but will always end with a number. Is there a filter I can set while using the awk command to exclude /datatransb and only get the output for anything that, like XXXXXXX1, ends with a number?

Thanks

awk '/datatrans.*[0-9]$/{print $NF}' infile
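A quick local check of that digit-anchored pattern against the sample data from this thread, capturing the result in a variable as the original question asked (the scratch-file path is illustrative):

```shell
# Recreate the sample input in a scratch file.
tmp=$(mktemp -d)
cat > "$tmp/infile" <<'EOF'
/dev/emcpowera1 636382720 236480640 399902080 38% /datatransY
1252757932 337580588 851540864 29% /datatransXX/XXXXXXX1
626371912 161135116 433418864 28% /datatransXX/XXXXXXX5
EOF

# Match only lines whose mount point ends in a digit, then print that field.
filesystems=$(awk '/datatrans.*[0-9]$/{print $NF}' "$tmp/infile")
echo "$filesystems"

# The variable can then drive per-filesystem work in the report script.
for fs in $filesystems; do
  echo "processing $fs"
done
```

The `[0-9]$` anchor is what drops /datatransY (or /datatransb) while keeping the entries that end with a number.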