Here's my problem:
I have a file that contains, say for this example, three records, each twenty bytes long.
I have two other very large files (over 500,000 records each): one has 500-byte records, the other 200-byte records. These two files contain the CustNum from the first file as well as MANY more that I don't want.
I want to extract the CustNum from the first file, then loop through the other two files, matching on CustNum and writing out only the matching records, all 500 or 200 bytes of each.
Essentially, I want to reduce these 500,000+ record files to a manageable size.
I have tried cut to extract the CustNums into a variable, then looped with grep, but the result is a file of one continuous record with no newlines. Should I use awk instead?
Any help would be appreciated.
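Since the question of awk comes up: a single pass per large file avoids running grep once per key entirely. Here is a minimal sketch under the assumptions stated above (CustNum in columns 1-10 of every record); the file names and sample data are hypothetical, for illustration only:

```shell
# Hypothetical sample data: 20-byte driver records, CustNum in cols 1-10.
printf '%-20s\n' "CUST000001extra" "CUST000002extra" > driver.dat
printf '%s\n' "CUST000001 big record A" \
              "CUST000003 big record B" \
              "CUST000002 big record C" > bigfile.dat

# Build the key list once: the first 10 bytes of each driver record.
cut -c1-10 driver.dat > keys.txt

# One pass: load the keys into an array while reading the first file
# (NR==FNR), then print any record from the large file whose first
# 10 bytes appear in that array.
awk 'NR==FNR { keys[$0]; next }
     substr($0, 1, 10) in keys' keys.txt bigfile.dat > bigfile.new
```

Because awk writes each matching record with its own newline, this also sidesteps the one-continuous-record problem.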
Here is my code:
#== Local Variables ==#
num=0
stat=1

if [ -s "$driver" ]; then
    bids=`cut -c1-10 "$driver"`
else
    echo "Function: $0 - No data found in $driver or file does not exist."
    echo "Aborting script with a status of $stat"
    exit $stat
fi

#== For each bid picked up, check each of the CP and   ==#
#== SM files for a match and just write those records. ==#
Match_Files()
{
    file=$1
    for i in $bids
    do
        match=`grep -s "$i" "$file"`
        stat=$?
        case "$stat" in
        0) echo "$match" >> "file_$num.new"   # quoting $match preserves the newlines
           echo "Status is $stat" ;;
        2) echo "Function: $0 - The file $file is not accessible - grep status is $stat" ;;
        esac
    done
}

#== For each file, execute the Match_Files function ==#
for data in "$file_1" "$file_2"
do
    let num="$num + 1"
    Match_Files "$data"
done
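For comparison, the filtering can also stay in grep without a per-key loop: `grep -f` reads all the patterns from a file in one invocation. A sketch under the same assumptions (file names and data here are hypothetical); anchoring each key with `^` keeps the match on the leading CustNum only, so a CustNum appearing elsewhere in a record cannot cause a false hit:

```shell
# Hypothetical sample data: a key list plus one large file.
printf '%s\n' "CUST000001" "CUST000002" > keys.txt
printf '%s\n' "CUST000001 record A" \
              "CUST000003 record B" \
              "CUST000002 record C" > big.dat

# Prepend ^ to every key so grep matches only at the start of a record,
# then filter the large file in a single pass.
sed 's/^/^/' keys.txt > anchored.txt
grep -f anchored.txt big.dat > big.new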