Below is my script, which filters failed logins for existing and non-existing users from the servers. The problem is that it takes longer to execute and complete than expected. Can anyone here edit the script to speed up its execution?
#!/bin/bash
LIST=`cat $1`
for i in $LIST
do
date=`date --date="yesterday" +'%b %d'`
ssh -q root@$i cat /var/log/secure | grep "$date" | grep Failed > failed-logins-$i.txt
#Splitting the failed logins into INVALID & VALID
cat failed-logins-$i.txt | grep -i invalid > invalid-failed-logins-$i.txt
cat failed-logins-$i.txt | grep -v invalid > valid-failed-logins-$i.txt
# Count the valid-failed-logins per host and print
echo -e "\n************"
echo -e "\n$i"
echo -e "\n--Failed Logins for EXISTING Accounts:"
cat valid-failed-logins-$i.txt | awk '{a[$1 FS $2 FS $9 FS "\t"$11]++} END {for (i in a) {print i " = " a[i]}}'
echo -e "\n--Failed Logins for NON-EXISTING Accounts:"
# Count the invalid-failed-logins per host and print
cat invalid-failed-logins-$i.txt | awk '{a[$1 FS $2 FS $11 FS "\t"$13]++} END {for (i in a) {print i " = " a[i]}}'
done
You are executing several external commands for each and every word of the file you are processing. You can reduce the number first by removing all those useless cats and giving the filename as an argument to awk, grep, or the other commands.
It is worth looking at the size of /var/log/secure and checking whether it is maintained. How old is the oldest entry in that file?
The ssh line causes the whole of /var/log/secure to be copied over the network link into a local pipe.
It would be more efficient to run the commands in a script on the remote server and then copy the output files back to the calling server.
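One way to apply that advice: filter on each remote host so only the matching lines cross the network. A minimal sketch, assuming the usual sshd "Failed password" format in /var/log/secure and GNU date (as in the original script); the filter reads the log from stdin so the same function can be exercised locally:

```shell
#!/bin/bash
# filter_failed: keep only yesterday's "Failed" entries from a secure
# log supplied on stdin. Intended to run ON the remote host, e.g.
#   ssh -q root@$i 'grep "<yesterday>" /var/log/secure | grep Failed'
# so the whole log never travels over the network.
filter_failed() {
  local d
  d=$(date --date="yesterday" +'%b %d')
  grep "$d" | grep Failed
}

# Demo on a fabricated log line (hostname and IP are placeholders):
printf '%s host sshd[123]: Failed password for root from 10.0.0.1 port 22 ssh2\n' \
  "$(date --date="yesterday" +'%b %d')" | filter_failed
```

On the remote side this would be invoked as `filter_failed < /var/log/secure`; only its output is written back to `failed-logins-$i.txt` on the calling server.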
sar -f sa15 | grep -v Average | grep -v "^$" | grep -vi restart | grep -v System | grep -v AIX | tail -144 > sa15.$$
mailx -s "Last 48 hours sar data at `date`" user@domain.com < sa15.$$
rm sa15.$$
I have a sar file which I run as "sar -f sa15" -- it has 7 days of data.
Each row has a timestamp: the first line tells me the date, and each subsequent line has the hour.
I want to email only the last 48 hours....
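As an aside, the five chained greps in that pipeline spawn five processes per run; a single grep with alternation does the same filtering in one pass. A sketch (note that -i now applies to every pattern, a minor behavioural change from the original, where only "restart" was matched case-insensitively):

```shell
# sar_trim: one grep -Ev with alternation replaces the five chained
# greps (Average, blank lines, restart, System, AIX), then keeps the
# last 144 rows as before.
sar_trim() {
  grep -Evi 'average|^$|restart|system|aix' | tail -144
}

# Original: sar -f sa15 | grep -v Average | grep -v "^$" | ... > sa15.$$
# Becomes:  sar -f sa15 | sar_trim > sa15.$$
```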
@noorm Hmm, you have two threads of your own open on your issue with sar.
As previously stated, the command "sar -f sa15" is flawed. Looking at your latest example, the file "sa15" is corrupt because it should not contain data for more than one day. It could contain a jumble of data from multiple previous 15ths of the month if sar data collection is not set up correctly on the system.
Data gets appended to the sa15 file automatically. We have to filter and send only the last 48 hours of data; every row is hourly data, i.e., tail -48.
How can I schedule it to run every hour without scheduling it in cron, as we don't have access to cron? Is there any other way (using sleep, autosys, etc.)?
Please suggest....
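Without cron access, one common fallback is a long-running sleep loop started with nohup (autosys, if your site has it, is the cleaner answer, since a real scheduler survives reboots). A minimal sketch; INTERVAL and MAX_RUNS are knobs I have added, the latter mainly so the loop can be tested rather than run forever:

```shell
#!/bin/bash
# run_every: poor man's cron -- run the given command, sleep, repeat.
# Start it so it survives logout:  nohup ./loop.sh >/dev/null 2>&1 &
run_every() {
  local n=0
  while :
  do
    "$@" || true                  # keep looping even if one run fails
    n=$((n + 1))
    if [ -n "${MAX_RUNS:-}" ] && [ "$n" -ge "$MAX_RUNS" ]; then
      break                       # test hook only; unset in production
    fi
    sleep "${INTERVAL:-3600}"     # 3600 s = hourly; drifts slightly
  done
}

# Hourly sar mail, forever (job script name is a placeholder):
#   run_every /path/to/sar-mail-job.sh
```

Note the loop drifts by the job's runtime each cycle; for a "last 48 hours" report that slack is usually acceptable.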
@noorm
Other correspondents have replied to your questions on your other posts. I've lost the plot. Any chance you can consolidate the whole lot into a private document? While doing this, the lightbulb might light up before your project work is due in.
Hi, I am not sure how much time you are going to gain, but you can provide the file name as an argument to grep -- you don't have to cat the file every time. The same goes for awk:
cat failed-logins-$i.txt | grep -i invalid > invalid-failed-logins-$i.txt
can be => grep -i invalid failed-logins-$i.txt > invalid-failed-logins-$i.txt
cat failed-logins-$i.txt | grep -v invalid > valid-failed-logins-$i.txt
can be => grep -v invalid failed-logins-$i.txt > valid-failed-logins-$i.txt
cat valid-failed-logins-$i.txt | awk '{a[$1 FS $2 FS $9 FS "\t"$11]++} END {for (i in a) {print i " = " a[i]}}'
can be => awk '{a[$1 FS $2 FS $9 FS "\t"$11]++} END {for (i in a) {print i " = " a[i]}}' valid-failed-logins-$i.txt
cat invalid-failed-logins-$i.txt | awk '{a[$1 FS $2 FS $11 FS "\t"$13]++} END {for (i in a) {print i " = " a[i]}}'
can be => awk '{a[$1 FS $2 FS $11 FS "\t"$13]++} END {for (i in a) {print i " = " a[i]}}' invalid-failed-logins-$i.txt
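Taking that advice one step further, the intermediate valid/invalid files can go away entirely: a single awk pass over failed-logins-$i.txt can do the split and both counts at once. A sketch that keeps the original script's field-number assumptions about sshd's "Failed password" lines ($9/$11 for existing users, $11/$13 when "invalid user" is present):

```shell
#!/bin/bash
# count_failed: one awk pass replaces the two grep passes, the two
# temp files, and the two awk passes of the original script.
count_failed() {
  awk '
    tolower($0) ~ /invalid/ { inv[$1 FS $2 FS $11 FS "\t" $13]++; next }
                            { val[$1 FS $2 FS $9  FS "\t" $11]++ }
    END {
      print "--Failed Logins for EXISTING Accounts:"
      for (k in val) print k " = " val[k]
      print "--Failed Logins for NON-EXISTING Accounts:"
      for (k in inv) print k " = " inv[k]
    }
  ' "$1"
}

# Inside the per-host loop:  count_failed failed-logins-$i.txt
```

This removes four pipeline stages and two temp files per host, though the big win remains filtering on the remote side so the full log never crosses the network.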