Awk/sed problem to write DB INSERT statements

Hi There,

I am trying to load data from a CSV file into a DB during our DB migration phase. I am able to successfully export all the data into .csv files, but those records have to be rewritten as INSERT statements so that the same data can be loaded into a different DB.

My existing CSV record format:

10013, 23-AUG-2013,02:02:03, 1111111, 22222222

The expected output format:

INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( '10013', TO_DATE('23-AUG-2013,02:02:03','DD-MON-YYYY,HH24:MI:SS'),'1111111','22222222');

I am trying to use awk with a variable definition in it, but no luck so far. The main challenge I am facing is introducing the single quote: awk -F : variable="'" is not working.

There is no restriction that the conversion has to be performed in a single awk statement.

Any help, much appreciated!!

Regards,
Bhaskar

Did you try SQL*Loader or something similar?

Here is one way to do it:

awk -v sq="'" -F", *" '{printf ("INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( %s, TO_DATE(%s,%sDD-MON-YYYY,HH24:MI:SS%s),%s,%s);\n", sq $1 sq, sq $2 "," $3 sq, sq, sq, sq $4 sq, sq $5 sq)}' input
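To see what that produces, here is a self-contained run against the sample record (the file name input is simply what the command above expects as its last operand):

```shell
# Put the sample record in a file named "input", then run the conversion.
printf '%s\n' "10013, 23-AUG-2013,02:02:03, 1111111, 22222222" > input
awk -v sq="'" -F", *" '{printf ("INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( %s, TO_DATE(%s,%sDD-MON-YYYY,HH24:MI:SS%s),%s,%s);\n", sq $1 sq, sq $2 "," $3 sq, sq, sq, sq $4 sq, sq $5 sq)}' input
# → INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( '10013', TO_DATE('23-AUG-2013,02:02:03','DD-MON-YYYY,HH24:MI:SS'),'1111111','22222222');
```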

Another option is to use a dblink between the two databases.

Hi Don,

Thanks for your effort and posting.

I have tried

[wlapp@~/scripts] $ line="10015, 23-AUG-2013,02:03:46, 1111111, 22222222"
[wlapp@~/scripts] $ echo $line
10015, 23-AUG-2013,02:03:46, 1111111, 22222222
[wlapp@~/scripts] $ awk -v sq="'" -F", *" '{printf ("INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( %s, TO_DATE(%s,%sDD-MON-YYYY,HH24:MI:SS%s),%s,%s);\n", sq $1 sq, sq $2 "," $3 sq, sq, sq, sq $4 sq, sq $5 sq)}' $line
awk: syntax error near line 1
awk: bailing out near line 1

Can you please suggest further? Is there any other way?

Actually, a db link is not going to work as these DBs are in two different environments.

Effectively, what we are trying to do is fetch the data from a few tables and export it into .csv files. Those .csv files will then be used to populate a DB in another environment. Our goal is to convert those .csv files into INSERT statements using shell scripts, and then run sqlplus to load the data.
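That workflow can be sketched as a small shell loop around the awk command from earlier in the thread (the export_*.csv file names and the sqlplus connect string are hypothetical placeholders):

```shell
# For each exported .csv, emit a matching .sql file of INSERT statements.
for csv in export_*.csv; do
    sql="${csv%.csv}.sql"
    awk -v sq="'" -F", *" '{
        printf ("INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( %s, TO_DATE(%s,%sDD-MON-YYYY,HH24:MI:SS%s),%s,%s);\n", sq $1 sq, sq $2 "," $3 sq, sq, sq, sq $4 sq, sq $5 sq)
    }' "$csv" > "$sql"
done
# Then load each .sql file in the target environment, e.g.:
# sqlplus user/password@target @export_partners.sql
```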

Cheers,
Bhaskar

The script I gave you read its data from a file named input (containing the data you wanted to process). You changed input to $line (a variable containing a single input line instead of the name of a file containing that line), which awk interpreted as four file names: "10015,", "23-AUG-2013,02:03:46,", "1111111,", and "22222222".
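The misinterpretation is easy to reproduce in isolation. Left unquoted, $line undergoes word splitting before awk ever sees it (a minimal demonstration; the exact error text varies by awk implementation):

```shell
line="10015, 23-AUG-2013,02:03:46, 1111111, 22222222"
# Unquoted, $line expands to four separate words, and awk takes each
# word after the program text as the name of an input file:
awk '{print}' $line
# awk fails, trying to open a file literally named "10015,"
```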

To feed that line to awk instead of having awk read it from a file, try:

$ line="10015, 23-AUG-2013,02:03:46, 1111111, 22222222"
$ echo $line
10015, 23-AUG-2013,02:03:46, 1111111, 22222222
$ printf '%s\n' "$line" | awk -v sq="'" -F', *' '{printf ("INSERT INTO TABLE_NAME(PARTNER_ID,CHARGE_DATE,CURRENT_DAILY_LIMIT,CURRENT_MONTHLY_LIMIT) VALUES ( %s, TO_DATE(%s,%sDD-MON-YYYY,HH24:MI:SS%s),%s,%s);\n", sq $1 sq, sq $2 "," $3 sq, sq, sq, sq $4 sq, sq $5 sq)}'

And, if you're running on a Solaris/SunOS system, use /usr/xpg4/bin/awk, /usr/xpg6/bin/awk, or nawk instead of awk.
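One caveat worth noting for real data: if a field can itself contain a single quote, SQL needs that quote doubled, or the generated statement will break. A defensive sketch using gsub() (the O'Brien value is invented purely for illustration, and only two fields are printed to keep the example short):

```shell
printf '%s\n' "10016, 23-AUG-2013,02:05:00, O'Brien, 22222222" |
awk -v sq="'" -F", *" '{
    # Double any embedded single quotes, field by field, before quoting.
    for (i = 1; i <= NF; i++) gsub(sq, sq sq, $i)
    printf ("VALUES ( %s, %s );\n", sq $4 sq, sq $5 sq)
}'
# → VALUES ( 'O''Brien', '22222222' );
```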

Hi Don,

Once again thanks a lot for your response.

Yes, it finally worked. I was using the wrong version of awk; this one

/usr/xpg4/bin/awk

worked.

Cheers,
Bhaskar