Rename files - need help with array?

I am writing a basic shell script to rename files in a filesystem and then scp them to a server. The problem is that there are 11 directories and files. I know I can write this two ways: the long way and the short way. I know how to do the long way... I'm curious about the short way. Here is the basic script I've got so far:

#!/usr/xpg4/bin/sh
#
#
MAINDIR=/apps/main1
FILECOMP=/apps/scripts/rename.txt
SCPINFO=box1:/path/to/location

# Directories
DIRNAME1=Test1/Ready/ 
DIRNAME2=Test2/Ready/ 
DIRNAME3=Test2_Part2/ 
DIRNAME4=Test3_Part3/Ready/ 
DIRNAME5=Test4/Ready/ 
DIRNAME6=Test5/Ready/ 
DIRNAME7=Test6/Ready/ 
DIRNAME8=Test7/Ready/ 
DIRNAME9=Test8/Ready/ 
DIRNAME10=Test9/Ready/ 
DIRNAME11=Test10/Ready/

# File Names
FNAME_1=file1.csv 
FNAME_2=file2.csv 
FNAME_3=file3.csv 
FNAME_4=file4.csv 
FNAME_5=file5.csv 
FNAME_6=file6.csv 
FNAME_7=file7.csv 
FNAME_8=file8.csv 
FNAME_9=file9.csv 
FNAME_10=file10.csv 
FNAME_11=file11.csv

umask -S
# Dates the log file
echo "" >>$LOGDIR 2>&1
/usr/bin/date  >>$LOGDIR 2>&1

# ------- START PROCESSING  ---------
# Go into first directory  
cd $MAINDIR/$DIRNAME1

# Get newest file - Files are named by date: ex:   Data_081202-021150_000.txt
FNAMETMP=/usr/bin/find . -name *.txt -newer $FILECOMP 2>>$LOGDIR

# Rename
/usr/bin/mv $FNAMETMP $FNAME_1 2>>$LOGDIR

# Move to archive
/usr/bin/mv $FNAMETMP archive  2>>$LOGDIR

# SCP to another server
/usr/bin/scp $FNAME_1 $SCPINFO 2>>$LOGDIR


# Update the File for comparison
touch $FILECOMP
exit

I didn't go on and copy the parts I'm wanting to do, since it seems like I could do this more easily with some for loop and an array. I'm just not sure how to do that... still learning scripting :) Thanks.

Btw... the files won't actually be called file1, file2, file3, and so on, and the same goes for the directories. I'm just using those names as examples for now.

I don't see any reason for keeping directories and files separate, since you are mapping one file to each directory in a one-to-one correspondence. You could create two arrays and then access the items as you like, but I think a simpler and more straightforward approach would be to create a single list of files that includes the full path. You could loop through your list of files like this:

#!/usr/xpg4/bin/sh
#
#
MAINDIR=/apps/main1
FILECOMP=/apps/scripts/rename.txt
SCPINFO=box1:/path/to/location

theFilesList='Test1/Ready/file1.csv Test2/Ready/file2.csv Test3_Part3/Ready/file3.csv ....'

umask -S
# Dates the log file
echo "" >>$LOGDIR 2>&1
/usr/bin/date >>$LOGDIR 2>&1

# ------- START PROCESSING ---------
for i in $theFilesList
do
    FNAMETMP=/usr/bin/find . -name *.txt -newer $FILECOMP 2>>$LOGDIR
    # Rename
    mv $FNAMETMP $i 2>>$LOGDIR
    # Move to archive
    mv $FNAMETMP archive 2>>$LOGDIR
    # SCP to another server
    scp $i $SCPINFO 2>>$LOGDIR
    # Update the File for comparison
    touch $FILECOMP
done
exit

If you are interested in using arrays, then you can define two arrays like:
set -A theDirectories Test1/Ready Test2/Ready ....
set -A theFiles file1.csv file2.csv ...

To access an array element:
${theDirectories[position_of_the_item]}

To loop through an array:
for i in ${theFiles[@]}
do
    echo $i
done

To get the number of elements:
${#theFiles[@]}

That is using the Korn Shell though...
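If portability matters, the same one-to-one pairing can be done without ksh arrays by walking two word lists in lockstep with an index counter. A minimal POSIX-sh sketch (directory and file names are placeholders, as above):

```shell
# Two parallel word lists (placeholder names)
theDirectories="Test1/Ready Test2/Ready Test3_Part3/Ready"
theFiles="file1.csv file2.csv file3.csv"

i=1
for dir in $theDirectories; do
    # Pick the i-th word out of $theFiles
    file=$(echo $theFiles | cut -d' ' -f$i)
    echo "$dir -> $file"
    i=$((i + 1))
done
```

Each pass prints one pair, e.g. `Test1/Ready -> file1.csv`, so you can do the rename and scp on the pair inside the loop body.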

This looks pretty straightforward. How does it know to go into each directory, though? I see you have MAINDIR defined but then don't use it above. Would I just put $MAINDIR in the find?

FNAMETMP=/usr/bin/find $MAINDIR -name *.txt -newer $FILECOMP 2>>$LOGDIR

A couple of things:

First of all, this:

FNAMETMP=/usr/bin/find $MAINDIR -name *.txt -newer $FILECOMP 2>>$LOGDIR

Will not work.
I think what you want is:

/usr/bin/find $MAINDIR -name '*.txt' -newer $FILECOMP | head -1 | read FNAMETMP
echo $FNAMETMP

Secondly, it's better to redirect all logging output at once.
There are a variety of ways to do this... however, my preferred way
is through a subshell and redirection. It's simple and straightforward:

(
date
echo some other commands
echo more commands
echo etc...
date
) > LOG 2>&1

This way, you don't clutter up code with a bunch of ">> $LOGFILE" stuff.
Plus, you save stdout and stderr in one fell swoop and cannot lose anything.
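A quick way to convince yourself that both streams end up in the one file (demo.log is just a throwaway name for this sketch):

```shell
# Everything inside the subshell - stdout and stderr alike -
# is captured by the single redirection at the end
(
    echo "a line on stdout"
    echo "a line on stderr" >&2
) > demo.log 2>&1

grep -c "a line" demo.log    # prints 2: both lines were captured
```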

I don't understand why you'd want to clobber a bunch of *.txt files with *.csv
files in this way....

it seems to me that this:

(
find $MAINDIR -name \*.csv -print |
while read file_nm ; do

  ### change csv to txt
  tmp=${file_nm%.csv}.txt

  ### remove directory name ( like basename )
  new_file_nm=${tmp##*/}

  mv $file_nm archive/$new_file_nm

  scp archive/$new_file_nm $SCPINFO

done
) > $LOGFILE 2>&1

... is what you're looking for.

It finds a bunch of *.csv files.
Moves them to an archive directory as *.txt files.
Then scp's them over to somewhere else.
And stores any error messages in a log file.
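The two parameter expansions in that loop do all the renaming work; here is a quick illustration of what each one produces (the path is just a made-up example):

```shell
file_nm=/apps/main1/Test1/Ready/data.csv

# ${var%.csv} strips the shortest matching suffix; .txt is then appended
tmp=${file_nm%.csv}.txt
echo $tmp            # /apps/main1/Test1/Ready/data.txt

# ${var##*/} strips the longest match up to the last slash (like basename)
new_file_nm=${tmp##*/}
echo $new_file_nm    # data.txt
```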

Actually, what I'm wanting to do is

1.) Go through each directory (DIR1, DIR2, DIR3, DIR4 ... )
2.) Find newest .txt file in that directory
3.) Copy the .txt file to archive directory under that directory with .timestamp
4.) Rename .txt file from list to (file1.csv file2.csv file3.csv)
5.) Move to archive directory with new .csv file
6.) scp file to server

good thing i'm bored today...

date '+%Y-%m-%d-%H-%M-%S' |
  sed -e 's/-//g' | ### in case of sccs...
  read time_stamp

for dir in DIR1 DIR2 DIR3 DIR4 ; do

  cd $dir
  /bin/ls -t *.txt | head -1 | read latest_file

  new_file_nm=${latest_file%.*}.csv

  cp $latest_file $ARCH_DIR/$latest_file.$time_stamp

  mv $latest_file $ARCH_DIR/$new_file_nm

  scp $ARCH_DIR/$new_file_nm $SCPINFO

  cd ..

done

That will fail in all shells except ksh because each element of a pipeline is executed in its own subshell. This will work in any POSIX shell:

latest_file=$( ls -t *.txt | head -1 )
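To see the difference for yourself, compare what each form leaves behind (in bash or dash the pipeline version comes back empty, because `read` ran in a subshell; ksh is the exception):

```shell
# In most POSIX shells the last pipeline stage runs in a subshell,
# so this assignment vanishes when that stage finishes
echo "hello" | read var1
echo "var1='$var1'"    # empty in bash/dash; 'hello' only in ksh

# Command substitution assigns in the current shell - portable
var2=$(echo "hello")
echo "var2='$var2'"    # always 'hello'
```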

Yes, I'm using ksh. When you say "POSIX", do you mean Bourne?

No, I mean the standard UNIX shell as defined by the SUS/POSIX specification. See the POSIX sh specification.