#!/bin/bash
file="/home/CSV/data.csv"
badfile="/home/CSV/bad/"
while IFS= read -r line
do
num_fields=$(echo "$line" | awk -F'|' '{print NF}')
field1=$(echo "$line" | awk -F'|' '{print $1}')
echo "$num_fields"
echo "$field1"
done <"$file"
The code checks the number of fields and the value of the first field on each line. The number of fields can vary in data.csv. I want to move these variables and commands into one config file, something like this:
Sourcing the config file with . ./config.cfg will read the variables in as you suspect, but it actually executes the commands, so the values are set for that one instance only. If $line has no value at that point, then that empty value is what you are asking it to process. Once only.
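To make the pitfall concrete, here is a minimal sketch. The file name config.cfg and the sample assignment inside it are my own illustration, not your actual config:

```shell
# a config file that contains a command, not just plain settings
cat > config.cfg <<'EOF'
num_fields=$(echo "$line" | awk -F'|' '{print NF}')
EOF

line=""            # $line is empty at the moment we source
. ./config.cfg     # the command runs here, once, against the empty $line
echo "num_fields=$num_fields"
```

Sourcing it again inside the loop would re-run the command each time, which defeats the point of a config file.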
Perhaps you would do better to consider another approach, like this:-
declare -a my_array
while IFS= read -r line
do
    IFS='|' read -r -a my_array <<< "$line"   # split on the | delimiter
    echo "number of fields ${#my_array[@]}"
    echo "First field ${my_array[0]}"
done < "$file"
Would that suit your purpose okay?
I hope that this helps and saves the performance costs of repetitive awk calls.
You could also use awk to process the files as a whole:-
awk -F'|' '{print "Field one is \"" $1 "\" with field count " NF}' "$file"
...but I suppose it depends what you want to do with the values after this.
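If you do need the values back in the shell afterwards, one way is to let awk print both per line and read them into bash variables via process substitution. A sketch, assuming pipe-delimited sample data (the file contents here are my own illustration):

```shell
# sample pipe-delimited data for illustration
printf 'a|b|c\nx|y\n' > data.csv

# awk emits "count|first" once per input line; bash reads both back
while IFS='|' read -r count first; do
    echo "fields=$count first=$first"
done < <(awk -F'|' '{print NF "|" $1}' data.csv)
```

This keeps a single awk invocation for the whole file instead of two per line.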
I'm not sure how to assess your code snippet: num_fields and field1 will hold meaningful values only if the file has a single line. Why, then, the while loop? With it, the two variables will end up holding the last line's values.
Please explain what exactly you want the config file to do.
It is a bit strange to assume external resources in the config file,
e.g. to use $line, hoping that $line exists in the calling script.
If you really want that, you should put the per-line code in a function.
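That way the config file is sourced once, and the per-line work becomes an explicit call. A minimal sketch, assuming a function name process_line and file names config.cfg and data.csv of my own choosing:

```shell
# config.cfg holds the per-line logic as a function definition only
cat > config.cfg <<'EOF'
process_line() {
    local -a fields
    IFS='|' read -r -a fields <<< "$1"   # split the passed line on |
    num_fields=${#fields[@]}
    field1=${fields[0]}
}
EOF

printf 'a|b|c\nx|y\n' > data.csv   # sample data for illustration

. ./config.cfg                     # source once, up front
while IFS= read -r line; do
    process_line "$line"           # call per line instead of re-sourcing
    echo "$num_fields $field1"
done < data.csv
```

The dependency on $line disappears: the function receives the line as an argument, so nothing in the config file relies on variables in the caller's scope.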