I have the following code:
# Keys to pull out of the summary file
declare -a search
search=(name label elapsed totalfilecount totalfilebytes starttime exitcode errors warnings fatals)
for (( i = 0 ; i < ${#search[@]} ; i++ ))
do
    # RS="\" " splits the file into key="value records; the -F regex
    # matches records that start with the wanted key, leaving the value in $2
    data=`awk -F '^'${search[$i]}'=\"' 'BEGIN{RS="\" "; OFS="\n"; ORS=""} { print $2 }' summary.alg`
    echo "${search[$i]}: $data"
done
It scans through a log file like this:
name="myserver01" label="ashburn_daily-ash_hpux-1228953600121" excluded="0" dirbytes_sent="2144078"
dirbytes_hashcache="13155171" filebytes_ispresent="26919788" compbytes_ispresent="15160"
filebytes_sent="7524448242" filebytes_skipped="0" totalprimarybytes="1375375088179" dirbytes_ispresent="1728"
compbytes_sent="23386944" filebytes_hardlink="430957859" chunkcount_hashcache="3231729" errors="0"
warnings="0" fatals=""
In the log file it scans there are, for some unknown reason, duplicates of some of the data.
The script I have pulls all of the matching data.
So if the file contains name="myserver01" ..Blah.Blah.. name="myserver01", it will output the data twice.
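If I'm reading my own awk right, with ORS set to the empty string the two values come out run together, something like:
name: myserver01myserver01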
How do I change the following so that it only returns the first occurrence?
.
.
data=`awk -F '^'${search[$i]}'=\"' 'BEGIN{RS="\" "; OFS="\n"; ORS=""} { print $2 }' summary.alg`
.
.
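One direction I was wondering about (just a rough sketch on my part, untested): have awk quit after the first record that actually split on the field separator, using NF > 1 as the match test and exit to stop reading:

# Only records that matched the key split into two fields, so NF > 1;
# print the first such value and exit so later duplicates are never reached
data=`awk -F '^'${search[$i]}'=\"' 'BEGIN{RS="\" "; OFS="\n"; ORS=""} NF > 1 { print $2; exit }' summary.alg`

Would that work, or is there a cleaner way to grab just the first occurrence?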