Reading ls -l output line by line, awk the user name, and su to the user to run commands

Using ksh on AIX, what I am trying to do is read the ls -l output from a file in a while loop, line by line; extract the user name (3rd field) and the directory/file name (9th field) using awk and save them into variables; then su -c to the user and change the directory/file permissions to 777. The script I wrote is:

#!/bin/ksh

cd /saswork/sastemp
ls -ltr  | egrep -i -v 'total [0-9]+' > /u/sasp/scripts/purge_clean/test_list
 
while read line
do
        # su to the owner of each work directory and change permissions to 777 for the work directory
        sas_user=`awk '{print $3}'`
        sas_work_dir=`awk '{print $9}'`
        echo $sas_user
        echo $sas_work_dir
        su $sas_user -c "whoami; cd /saswork/sastemp; chmod 777 $sas_work_dir"
done < /u/sasp/scripts/purge_clean/test_list

However, the problem is that once the first awk is executed, all iterations of the while loop seem to be consumed inside that first awk: it gives me all the user IDs in one line (returned in the sas_user variable), the output from the second awk is blank because it has nothing left to process, and then it tries to su to all the user IDs at once, so su gives an error.

What I expected it to do was to awk the user ID and directory/file name for the first row of the ls -l output, su -c to that first user to change the permissions, then go to the next line of ls -l, process it the same way, and keep doing that until it has read all lines of the ls -l output. This is why I used a while loop reading the file line by line. If I wanted awk to process all lines of the ls -l output together, I could have just piped the ls -l output to awk.

Following is what I get when I run the script as root:

root:/u/sasp/scripts/purge_clean]./test_su4
dsm04 dsm04 sasp omecea hbharuch hbharuch mdu vsingh dsm04 vsingh ssun sli yyao aaronwne ssun dsm01 ssun sli yyao alawson vfan vsethi vneto twang yyao vneto szhi szhi mdu
ksh: dsm04:  not found.

If I run the script as a non-root user, then I get:

sasp:/u/sasp/scripts/purge_clean]test_su4
dsm04 dsm04 sasp omecea hbharuch hbharuch mdu vsingh dsm04 vsingh ssun sli yyao aaronwne ssun dsm01 ssun sli yyao alawson vfan vsethi vneto twang yyao vneto szhi szhi mdu
dsm04's Password: 

Any help would be greatly appreciated.

sas_user=`awk '{print $3}'`

Don't you want to be echoing $line into that?

You might have better luck with a much simpler approach:

#!/bin/ksh

cd /saswork/sastemp
ls -ltr | while read -r x x sas_user x x x x x sas_work_dir
do      if [ "$sas_work_dir" = "" ]
        then    continue        # skip total line
        fi
        # su to the owner of each work directory and change permissions to 777 for the work directory
        echo $sas_user
        echo $sas_work_dir
        su $sas_user -c "whoami; chmod 777 /saswork/sastemp/$sas_work_dir"
done

Note, however, that your original script and this replacement will both fail miserably if there are any whitespace characters in any of the filenames in /saswork/sastemp. Both will also change the permissions of any files found, not just directories.
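
If only the directories should get mode 777, a test on each name inside the loop will filter out plain files. This is just a sketch reusing the same layout and variable names as above, not a tested replacement:

#!/bin/ksh

cd /saswork/sastemp
ls -ltr | while read -r x x sas_user x x x x x sas_work_dir
do      if [ "$sas_work_dir" = "" ]
        then    continue        # skip total line
        fi
        if [ ! -d "/saswork/sastemp/$sas_work_dir" ]
        then    continue        # skip plain files; only work directories get 777
        fi
        echo $sas_user
        echo $sas_work_dir
        su $sas_user -c "whoami; chmod 777 /saswork/sastemp/$sas_work_dir"
done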


Thanks very much Don.

The script you gave did the job. I am not concerned about blank spaces in the directory/file names because in my final script I filter the ls -l output with egrep, to keep only the directories that match a pattern, before reading it in the while loop to do the su.

Thanks for your help

Note that I replaced the egrep in your pipeline with a simple if statement inside the loop. The only thing your egrep was doing was to remove the line in the ls output that is of the form:

total <digits>

Nothing in our scripts verifies that there are no spaces, tabs, or newlines in the names of the files in the directory. Hopefully it won't be an issue because you won't create filenames that contain these characters. Scripts can be written to handle such filenames (one is sketched below), but it is MUCH SIMPLER for everyone involved if you just don't create filenames that need special handling.
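
For illustration only, a sketch of one way to cope with spaces in names (newlines would still break it) is to let the shell glob the entries instead of splitting ls -ltr output, quote every expansion, and single-quote the path inside the su command string so a name containing a space stays in one piece:

#!/bin/ksh

# Sketch only: glob the entries instead of parsing ls -ltr fields.
cd /saswork/sastemp || exit 1
for sas_work_dir in *
do      if [ ! -d "$sas_work_dir" ]
        then    continue                # directories only
        fi
        sas_user=`ls -ld "$sas_work_dir" | awk '{print $3}'`    # owner is field 3 of ls -ld
        echo "$sas_user"
        echo "$sas_work_dir"
        su "$sas_user" -c "whoami; chmod 777 '/saswork/sastemp/$sas_work_dir'"
done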

Actually the script I showed was only a part of a bigger script. In the main script I have another egrep filter to keep only the directories that match the pattern of SAS work directories in the ls -l output. This is why I know that there won't be any directories with blanks in their names in the ls -l output I am going to use in the while loop.

I did not show the complete script because I did not want to lose focus from my issue, which was how to extract each owner ID and su to the user to change permissions, and that was solved by your script.

I agree with you that creating files/directories in UNIX with blanks in their names is a very bad idea... but from time to time we do see users who do that.

Thanks again :)

The plain fix that CarloM means is:

sas_user=`echo "$line" | awk '{print $3}'`

Otherwise awk reads from stdin, i.e. the rest of the loop's input!
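For example, with some made-up input (not your listing) you can see the drain:

printf 'line1 a\nline2 b\nline3 c\n' |
while read line
do
        field=`awk '{print $2}'`        # awk gets no input of its own, so it drains the remaining lines
        echo field: $field              # prints "field: b c" once
done                                    # stdin is now empty, so the loop stops after one pass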
The read into individual variables is much better of course.
The su remains problematic. I suggest the following safety enhancements:

</dev/null su $sas_user -fc "whoami; chmod 777 /saswork/sastemp/'$sas_work_dir'"

</dev/null prevents any accidental reading from stdin.
-f skips the user's .bashrc/.kshrc/.cshrc processing.
The 'ticks' can handle special characters in the file name. NB: it must be 'ticks' here because the string is already within "quotes".

:) Thanks MadeInGermany and CarloM for your input... Now I understand what CarloM meant... Yes, echoing $line works... I was confused about why the while loop's input ends up in the awk subshell; you guys cleared that up.

For this script I will use the read into individual variables as suggested by Don, as that is a simpler approach.

Regarding the additional checks that you proposed: </dev/null works fine, but I run into some issues using the -f option for su and the 'ticks'. With the -f option for su it looks like I can't use -R for chmod (my original script did not have -R for chmod, but I have now added it) and I get:

-R: 0403-010 A specified flag is not valid for this command.

With the 'ticks' it looks like the variable can't be resolved (which is what I thought):

chmod: /saswork/sastemp/$sas_work_dir: A file or directory in the path name does not exist.

The error message you showed above should not be printed as a result of using the command:

</dev/null su $sas_user -fc "whoami; chmod 777 /saswork/sastemp/'$sas_work_dir'"

The double quotes around the string whoami; chmod 777 /saswork/sastemp/'$sas_work_dir' make the single quotes regular characters with no special meaning, so $sas_work_dir should be expanded before su is invoked. Assuming as an example that $sas_work_dir expands to some/dir, the shell should invoke su with a final operand that is the string whoami; chmod 777 /saswork/sastemp/'some/dir', and the shell that su invokes would then remove those single quotes.
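
A quick way to check what su will actually receive is to let echo stand in for su first (a sketch; some/dir is just an example name):

sas_work_dir='some/dir'
echo "whoami; chmod 777 /saswork/sastemp/'$sas_work_dir'"
# prints: whoami; chmod 777 /saswork/sastemp/'some/dir'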

Please show us the exact command line that you are using to invoke su.

Don,

Please see below the complete while loop:

 
SCRIPT_DIR=/u/sasp/scripts/purge_clean

SASWORK_LIST_SASWORK=$SCRIPT_DIR/saswork_list_saswork

SASWORK=$1

cd $SASWORK
ls -ltr | egrep -i -v 'total [0-9]+' > $SASWORK_LIST_ALL_BEFORE
egrep -i 'SAS_[0-9a-z]{16}_[0-9a-z]+' $SASWORK_LIST_ALL_BEFORE > $SASWORK_LIST_SASWORK

while read -r x x sas_user x x x x x sas_work_dir
do
    # su to the owner of each work directory and change permissions to 777 for the work directory
    echo $sas_user
    echo $sas_work_dir
    </dev/null su $sas_user -c "chmod -R 777 /saswork/sastemp/$sas_work_dir"  # </dev/null prevents any accidental reading from stdin. Just a safety check
done < $SASWORK_LIST_SASWORK

Thanks

Omer

And by just adding the ticks (single quotes)

 </dev/null su $sas_user -c "whoami; chmod -R 777 /saswork/sastemp/'$sas_work_dir'"

you get what error or malfunction?

Actually, you and Don are right... single quotes within double quotes are not causing any error.

I must have done something wrong the first time when I tested and got the error about the directory not being found. I tested again with single quotes around '$sas_work_dir' and it worked fine this time.

Thanks again for your help

Do you use the contents of the files named by $SASWORK_LIST_ALL_BEFORE and $SASWORK_LIST_SASWORK after this loop completes? If $SASWORK_LIST_ALL_BEFORE isn't used after this loop, the first egrep is superfluous and the code:

ls -ltr  | egrep -i -v 'total [0-9]+' > $SASWORK_LIST_ALL_BEFORE
egrep -i 'SAS_[0-9a-z]{16}_[0-9a-z]+' $SASWORK_LIST_ALL_BEFORE  > $SASWORK_LIST_SASWORK

can be replaced by:

ls -ltr | egrep -i 'SAS_[0-9a-z]{16}_[0-9a-z]+' > $SASWORK_LIST_SASWORK

If $SASWORK_LIST_SASWORK isn't used after this loop either, then neither file is needed:

ls -ltr | egrep -i 'SAS_[0-9a-z]{16}_[0-9a-z]+' |
while read -r x x sas_user x x x x x sas_work_dir
do
    # su to the owner of each work directory and change permissions to 777 for the work directory
   echo $sas_user
   echo $sas_work_dir
   </dev/null su $sas_user -c "chmod -R 777 /saswork/sastemp/$sas_work_dir"  # </dev/null prevents any accidental reading from stdin. Just a safety check
done

Thanks for your comments Don :)

I actually do use SASWORK_LIST_BEFORE again. The script deletes the orphaned SAS work directories and afterwards runs sdiff on SASWORK_LIST_BEFORE against SASWORK_LIST_AFTER.

Also, another reason why I keep these files temporarily (they are deleted at the end of the script) is that I generate a report from them.
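
Roughly, that before/after comparison could look like the sketch below; the AFTER file name and the report path are my assumptions, not the real script:

# Sketch only (run from $SASWORK after the cleanup); file names below are assumed.
SASWORK_LIST_AFTER=$SCRIPT_DIR/saswork_list_after
ls -ltr | egrep -i -v 'total [0-9]+' > $SASWORK_LIST_AFTER
sdiff $SASWORK_LIST_ALL_BEFORE $SASWORK_LIST_AFTER > $SCRIPT_DIR/purge_report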