Hi,
I have a file which contains the following data
claim_src|clm_id,typ_id
pat_src|pat_id
prov_src|prov_id,clm_id,prov_name
The first field is the table name and the second field lists the primary keys of that table.
I also have three files containing the DDL of each table:
claim_src.sql
pat_src.sql
prov_src.sql
So claim_src.sql would have something like
create table claim_src (
clm_id double,
typ_id double,
name varchar(30)
.
.
.
)
I want my shell script to read the primary keys listed in the first file and change the datatype of those fields in the matching DDL file,
so the new claim_src.sql would be:
create table claim_src (
clm_id bigint,
typ_id bigint,
name varchar(30)
.
.
.
)
I'm stuck on extracting the primary keys from the first file, since the number of keys varies per table.
I'm trying to use awk but keep getting it messed up.
My plan was to pull out each primary key and then run sed on the corresponding DDL file.
Any help is appreciated.
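Since the sticking point is the variable-length key list, here is a minimal sketch using awk's split() to emit one table/key pair per line. The key-file name file1.txt is an assumption, and the heredoc just recreates the sample data from above so the snippet runs on its own:

```shell
# Recreate the sample key file from the post (assumption: it is
# named file1.txt and is pipe-delimited as shown).
cat > file1.txt <<'EOF'
claim_src|clm_id,typ_id
pat_src|pat_id
prov_src|prov_id,clm_id,prov_name
EOF

# Split the comma-separated second field into one "table key" pair
# per line, so the varying number of keys no longer matters.
awk -F'|' '
{
    n = split($2, keys, ",")
    for (i = 1; i <= n; i++)
        print $1, keys[i]
}' file1.txt > pairs.txt
```

pairs.txt then holds six lines such as `claim_src clm_id`, which a later loop can consume one key at a time without ever counting keys.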
---------- Post updated at 12:21 PM ---------- Previous update was at 10:21 AM ----------
Hi,
This is what I'm thinking of doing, but I'm sure there is a more efficient way:
awk -F'|' '
{
flds = split($2, F, ",")
printf "%s|%s|%d\n", $1, $2, flds
}' file1.txt >out.txt
out.txt would be
claim_src|clm_id,typ_id|2
pat_src|pat_id|1
prov_src|prov_id,clm_id,prov_name|3
Next I would do something like this
while read LINE
do
TABLE_NAME=`echo ${LINE} |cut -d '|' -f1`
KEY_LIST=`echo ${LINE} |cut -d '|' -f2`
NUM_KEYS=`echo ${LINE} |cut -d '|' -f3`
i=1
while [ $i -le $NUM_KEYS ]
do
KEY=`echo ${KEY_LIST} |cut -d ',' -f$i`
# double quotes so $KEY expands; anchor at line start so one key
# cannot match inside another column name
sed -i "s/^\( *${KEY}  *\)double/\1bigint/" "${TABLE_NAME}.sql"
i=`expr $i + 1`
done
done< out.txt
I'm sure there is an easier way to do this; please suggest a better approach.
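For comparison, one possible simplification (a sketch only, assuming each key line in the DDL looks like `key_name double,` and that a temp file is acceptable instead of the non-POSIX sed -i): let the shell split the key list via IFS, so the key-count column is not needed at all. The heredocs recreate the sample inputs so the snippet is self-contained:

```shell
# Recreate the sample inputs from the post (file names are assumptions).
cat > file1.txt <<'EOF'
claim_src|clm_id,typ_id
EOF
cat > claim_src.sql <<'EOF'
create table claim_src (
clm_id double,
typ_id double,
name varchar(30)
)
EOF

while IFS='|' read -r TABLE_NAME KEY_LIST
do
    # let the shell itself split the comma-separated key list
    IFS=','
    for KEY in $KEY_LIST
    do
        # anchor on the column name at the start of the line, then
        # rewrite via a temp file for portability
        sed "s/^\( *${KEY}  *\)double/\1bigint/" "${TABLE_NAME}.sql" \
            > "${TABLE_NAME}.sql.tmp" &&
        mv "${TABLE_NAME}.sql.tmp" "${TABLE_NAME}.sql"
    done
    unset IFS
done < file1.txt
```

The same pattern extends unchanged to pat_src and prov_src; with GNU sed available, `sed -i` on `${TABLE_NAME}.sql` replaces the temp-file step.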