Regarding file input to SQL from command line

Hi friends,

I need your help again to find the best approach for the scenario below.

I currently have a shell script which accepts one or more request_ids as a command line argument.
single req_id arg: 1111
multiple req_id arg: 1111,2222,3333

The argument is passed to the IN operator of the embedded SQL, which updates the status to inactive.

Pseudo code (please ignore syntax errors):

#! /bin/bash
set -vx

req_ids=$1

sqlplus -s xxxx/yyyy@zzzz  << EOF >> /grs/sql_logs
update table_name  set status='INACTIVE' where req_id in ($req_ids);
commit;
exit;
EOF

The above method was working like a charm for small sets of request_ids.
As I am now getting a huge volume of request_ids (~1200), I fear the command line argument will not allow a string of 5000 bytes.
Even if it does, the IN operator will not allow more than 2499 bytes of data.

I tried inserting the records into a temp table and running a loop in SQL, but I cannot, as I am not privileged to create temp tables in the prod environment.
I also tried passing a .csv file as the command line argument, splitting it into files of 300 records each by pivoting with awk, then processing the files one by one and removing them once processed, but I am likewise restricted from creating temp files on the prod box.

Can you please suggest an idea to deal with this situation without using a temporary table or the split method?

Many Thanks to all the members and UNIX.com

Do you really (need to) supply the IDs on the command line? How do you construct the line? How about creating an SQL file with several UPDATE lines, each holding a maximum number of IDs, and then executing that?

Thanks RudiC. Creating multiple updates in a single SQL file is a good idea, but the ids are not constant. Every day I get requests 3-4 times with different sets of ids.

How do those IDs come in?

The user provides them in the form of a txt file (single column) through mail. It is then sent to the deployment team through a defined process.

However, I have access to transfer the files to the production box.

I am wondering whether it is possible to cut 300 lines at a time, pivot them into a string and pass that to the SQL.

Say there are 900 ids; then 3 loops in UNIX of the same update with different strings of ids would suffice. :wall:

I think the line length limitation applies to parameters passed to the script, not to what is in the script itself. You say there is a text file with arguments in a single column, so you mean this?

$ cat file.txt
1111
2222
3333
....

?

If so, you could try something like this.
Pass the filename as a parameter to the script:

#! /bin/bash
set -vx

req_ids=( $(<"$1") )      # read the single-column file into an array
IFS=,                     # make "${req_ids[*]}" expand comma-separated
sqlplus -s xxxx/yyyy@zzzz  << EOF >> /grs/sql_logs
update table_name  set status='INACTIVE' where req_id in ($(printf "%s" "${req_ids[*]}"));
commit;
exit;
EOF

Obviously this is just the principle. Your script would need exception handling, for example to test whether a filename is passed as an argument and whether it exists.
IFS=, is used for the expansion of the array so that the values are separated by commas. If you have more lines in your script, you'll probably want to save the contents of IFS and restore it afterwards:

oldIFS=$IFS
IFS=,
....
IFS=$oldIFS
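As an alternative to the save/restore dance, a minimal sketch (assuming bash; the IDs here are made-up sample values) confines the IFS change to a command substitution, so it never leaks into the rest of the script:

```shell
#!/bin/bash
# Scope the IFS change to the $( ) subshell: "${req_ids[*]}" joins the
# array elements with the first character of IFS at expansion time, and
# the parent shell's IFS is left untouched.
req_ids=( 1111 2222 3333 )   # sample IDs, stand-ins for the real file contents

joined=$( IFS=,; printf '%s' "${req_ids[*]}" )
echo "$joined"
```

The same `joined` string can then be interpolated into the sqlplus here-document exactly as in the script above.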

Yes! That is what I meant. I will simulate the solution you have provided in dev and post the result. Thanks for your time.

Maximum command line length (on Linux):

getconf ARG_MAX
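A minimal sketch of how that limit could be checked before invoking the script (assuming bash; note that ARG_MAX covers all arguments plus the environment, so the real headroom is somewhat smaller than the raw value):

```shell
#!/bin/bash
# Compare the length of the assembled ID string against the kernel's
# argument-size limit before passing it on the command line.
ids="1111,2222,3333"          # sample ID string
limit=$(getconf ARG_MAX)      # total bytes allowed for argv + environment

if [ "${#ids}" -lt "$limit" ]; then
    echo "fits: ${#ids} bytes vs ARG_MAX=$limit"
else
    echo "too long for a single command line" >&2
fi
```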

How about

 { tr $'\n' ' ' < $1; echo; } | fold -w80 | while read ID; do ID=${ID// /,}; echo "update table_name  set status='INACTIVE' where req_id in ($ID);" ; done
update table_name  set status='INACTIVE' where req_id in (001,002,003,004,005,006,007,008,009,010,011,012,013,014,015,016,017,018,019,020);
update table_name  set status='INACTIVE' where req_id in (021,022,023,024,025,026,027,028,029,030,031,032,033,034,035,036,037,038,039,040);
update table_name  set status='INACTIVE' where req_id in (041,042,043,044,045,046,047,048,049,050,051,052,053,054,055,056,057,058,059,060);
update table_name  set status='INACTIVE' where req_id in (061,062,063,064,065,066,067,068,069,070,071,072,073,074,075,076,077,078,079,080);
update table_name  set status='INACTIVE' where req_id in (081,082,083,084,085,086,087,088,089,090,091,092,093,094,095,096,097,098,099,100);
update table_name  set status='INACTIVE' where req_id in (101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120);
.
.
.

Save that to a file and have SQL run it.
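Putting that together, a minimal sketch of the save-and-run step (assuming bash; `ids.txt`, `form_sql.sql`, the table name and the connect string are placeholders from the thread, and the sqlplus invocation is shown as a comment only):

```shell
#!/bin/bash
# Build an .sql file of UPDATE statements from a single-column ID file,
# then hand the whole file to sqlplus in one shot instead of a here-document.
printf '%s\n' 1111 2222 3333 4444 > ids.txt   # sample input file

{ tr '\n' ' ' < ids.txt; echo; } | fold -w80 |
while read -r ID; do
    ID=${ID// /,}             # space-separated -> comma-separated
    echo "update table_name set status='INACTIVE' where req_id in ($ID);"
done > form_sql.sql

cat form_sql.sql
# To execute (not run here):
#   sqlplus -s xxxx/yyyy@zzzz @form_sql.sql >> /grs/sql_logs
```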


Thanks everyone for your help. In the meantime I got permission to split the file based on the number of lines, and moved the code successfully to prod.

Anyway, I learned a few new things from the discussion and extend my thanks again to you.

---------- Post updated at 12:21 PM ---------- Previous update was at 11:34 AM ----------

Hi RudiC,

Your approach is quite straightforward and simple. I was curious to try it in dev,
but upon executing the command below

{ tr $'\n' ' ' < to_pivot.txt; echo; } | fold -w80 

the fold functionality does not seem to work; it just outputs one id per line. If I remove the dollar ($) and keep the \n, then fold works correctly.

The command that formats the ids as comma-separated inside the while loop throws the error below:

test11.sh: ID=${ID// /,}: The specified substitution is not valid for this command.

Finally, fold wraps the lines at 80 characters regardless of whether an id is complete or incomplete:

update table_name set status='INACTIVE' where req_id in (11682 10550 11105 8855 11674 11670 11668 11666 11702 10841 11744 11743 11574 11);
update table_name set status='INACTIVE' where req_id in (573 11549 11548 11118 8376 11116 11115 9910 11581 11567 11793 11798 4642 11800 1);
Can you please suggest how to proceed?
Many Thanks.

---------- Post updated at 01:12 PM ---------- Previous update was at 12:21 PM ----------

Hi RudiC,

I have made some changes to the above method and now it is working perfectly. Many Thanks!

{ tr '\n' ' ' < to_pivot.txt; echo; } | fold -s | while read ID; do ID=`echo ${ID}|sed 's/ /,/g'`; echo "update table_name  set status='INACTIVE' where req_id in ($ID);" >> form_sql.sql ; done
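For completeness, a sketch of an alternative that chunks by ID count instead of by line width, so a numeric ID can never be cut in half the way a plain `fold -w80` can (assuming bash and standard `xargs`; `to_pivot.txt`, `form_sql.sql` and the table name are the thread's placeholders, and the sample IDs are taken from the broken output above):

```shell
#!/bin/bash
# xargs -n groups a fixed NUMBER of arguments per output line, so the
# chunk boundary always falls between IDs, never inside one.
printf '%s\n' 11682 10550 11105 8855 11674 > to_pivot.txt   # sample IDs

xargs -n 3 < to_pivot.txt |          # 3 IDs per statement (use e.g. 300)
while read -r ID; do
    echo "update table_name set status='INACTIVE' where req_id in (${ID// /,});"
done > form_sql.sql

cat form_sql.sql
```

With 900 ids and `-n 300` this produces exactly the 3 update statements mentioned earlier in the thread.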