Parse output path to set variable

I am looking to parse a text file output and set variables based on what is extracted by the parsing.

Below is the script I am looking to add this feature to.
All it does is scan a certain area of user directories for anyone using more than X amount of disk space. It then writes the results to an output file and, when done, emails that file to a number of people defined in the script.
I also want it to email each user whose name is found in the output file.
Below the script is a sample of the output that gets emailed.

#!/bin/bash
_dufile="/tmp/testing-results.log"
MAILTO=myname@mydomain.com

rm -f "$_dufile"

# Log any user directory over the ~2GB (1960MB) limit.
du -m -s /home/project_A/users/* | perl -ne '@l = split(); print "@l\n" if $l[0] >= 1960' > "$_dufile"

if [ -s "$_dufile" ]
then
        echo "$_dufile has found users over the allowed data amount, emailing"
        mailx -s "Testing user exceeded 2GB limit Quota Notification" $MAILTO < "$_dufile"
else
        echo "$_dufile is empty."
fi

--------------------------
output file data in /tmp/testing-results.log:

39292 /home/project_A/users/jdoe
49200 /home/project_A/users/bsmith
89019 /home/project_A/users/rguy

Basically, I need to read the output of /tmp/testing-results.log seen above, crop out everything except the usernames "jdoe", "bsmith", and "rguy", and set each one to a variable. Then I can plug those variables into my mailx recipient list with a domain name appended.

This will then send the output to myname@mydomain.com, and also send the output to each person/user: "jdoe", "bsmith", etc.
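
To make that concrete, here is roughly what I am picturing (untested, and the @mydomain.com suffix just stands in for our real mail domain):

# Rough sketch (untested): pull the username out of each line and
# mail that user at our domain.
while read -r size dir
do
        user=$(basename "$dir")      # e.g. "jdoe"
        echo "You are using ${size}MB, over the 2GB limit." | mailx -s "Quota Notification" "${user}@mydomain.com"
done < /tmp/testing-results.log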

I'm sure there are a million ways of doing this, and I would highly appreciate any suggestions on how to read and parse my output file /tmp/testing-results.log.

Thanks in advance.

while read size dir
do
   user=$( echo "${dir}" | awk -F/ ' { print $NF } ' )
   echo "$user"                                                 # Got username in variable: user
done < /tmp/testing-results.log

Running awk 10,000 times to handle 10,000 individual lines is like making 10,000 phone calls to say 10,000 words. If you're not going to use awk on more than one line at a time, you shouldn't bother using it at all. It's not a builtin; it's a programming language. Imagine running perl 10,000 times to handle 10,000 lines. Same problem.
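
If you do use awk here, let one awk process read the whole file; a single call prints every username (the log path is the one from the script above):

# One awk invocation handles every line of the file at once.
awk -F/ '{ print $NF }' /tmp/testing-results.log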

You don't even need it. Ordinary shell operations will work here. Here's one method:

OLDIFS="$IFS"
IFS=" /"

while read LINE
do
        # Divide "39292 /home/project_A/users/jdoe" into
        # $1=39292, $2=home, $3=project_A, $4=users, $5=jdoe
        set -- $LINE

        COUNT=$1 # Save the total
        shift $(($# - 1)) # Turf all but the last argument
        USER=$1 # That will be the username

done < /tmp/testing-results.log

IFS="$OLDIFS"

Zero external commands per line instead of two.
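
To tie this back to the original goal, here is a sketch (assuming an account name maps straight to an address at mydomain.com) that mails each user from inside that same loop:

OLDIFS="$IFS"
IFS=" /"

while read LINE
do
        set -- $LINE

        COUNT=$1
        shift $(($# - 1))
        USER=$1

        # Mail this user their own usage figure.
        echo "You are using ${COUNT}MB, over the 2GB limit." |
                mailx -s "Quota Notification" "${USER}@mydomain.com"
done < /tmp/testing-results.log

IFS="$OLDIFS"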

echo "body" | mailx -s "subject"  `awk -F/ ' { print $NF "@mydomain" } ' testing-results.log`

Thank you, this fits well :) as do the other responses. Thank you, everyone.