Create Multiple UNIX Files for Multiple SQL Rows of Output

Dear All,

I am trying to write a Unix script which fires a SQL query. The output of the query gives multiple rows, and each row should be saved in a separate Unix file.
The number of rows of SQL output can be variable. I am able to save all the rows in one file, but not in separate files.

 Any pointers will be sincerely appreciated. 

Thanks and Regards,

Take your single output file and parse it line by line into separate files with awk, or stream your output into a "while read" loop.
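For instance, something like this (an untested sketch; it assumes your query output is already spooled to a file, here called query_output.txt, and that each row comes out as a single line):

#!/bin/sh
# Untested sketch: write each line of query_output.txt to its own file.
# Assumes one row per line; output file names are just sequential numbers.
n=0
while read -r line
do
    n=`expr $n + 1`
    echo "$line" > row_${n}.txt
done < query_output.txt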

Each row can be considered as a page of a book with multiple lines, including newline characters. Is there a way to use the number of output rows as a Unix script variable and then save the rows in separate files?

man split
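e.g. something along these lines (just guessing at your file names; this writes one output file per input line):

# "page_" is just an example prefix for the output pieces
split -l 1 your_output_file page_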

The output file contains only the first line of each row; the remaining lines are not written to the file.

Hope this will help.

#!/bin/sh
no_of_rows="1"
file_nm=new_file

if [ -f temp.dat ]
then
    rm -f temp.dat
fi

# Write the query output to a file (temp.dat)
sqlplus scott/tiger@database <<EOF
set feedback off;
set heading off;
spool temp.dat;
select * from table_name;
exit;
EOF

# Split the file according to the number of rows defined in the variable (no_of_rows)
split $no_of_rows temp.dat $file_nm

I tried using the above script with no_of_rows=4 for the 4 output rows of SQL, but it still prints only the first line of each row.

Also, in the SQL file I'm using the SET commands below, and I'm not sure what they do:-
set feedback off
set linesize 5000000
set pagesize 1000000
set trimspool on

What do you plan to do with your one-record files? Do you have a naming convention to follow based on the data, or just a sequential number?

If you can tell us more, there may be a better way to achieve your overall objective.

Robin

Please post a representative example of your data.

Hi,

Consider a table column which stores each page of a book in a separate row.

Now, we have to save each of these rows (each page of a book) in a separate Unix file. We can use any naming convention.

Row 1 :- Page 1 of a book
Row 2 :- Page 2 of a book
.
.
.
Row n :- Page n of a book, where n is the number of rows.

Example:-

Query:-
select page_text from books;

Suppose this query yields 1000 rows. Each row contains data in the form below:-

--------------------------------------
Row1:-

Rohan Bopanna and Aisam-ul-Haq Qureshi progressed to the semifinals of the Sydney International but Leander Paes failed to rescue his 2014 season from a poor start as he crashed out in the first round with Czech partner Radek Stepanek.
Bopanna and his Pakistani partner, seeded third, faced stiff resistance from Trat Huey of Philippines and Briton Dominic Inglot before prevailing 6-7(3) 7-6(5) 10-3 in the quarterfinals.

***********************

Row2:-
--------------------------------

They now face giant-killers Lukas Rosol and Joao Sousa, who dumped the top seeded American pair of Mike and Bob Bryan 3-6 6-3 14-12 in their quarterfinal.
For Paes, it has been a nightmarish start to the new season. At the Chennai Open, he did not get to play a single ball as his partner Fabio Fognini pulled out due to an injury.
***********************

Thanks

It all seems a bit like homework to me... can you elaborate?

Might you be better off writing this as a loop?

  • Firstly, an SQL query to count the rows.
  • Loop for each row, extracting it to a specific file.

Robin

Yes, I figured out this much, but I'm not able to implement this logic. Could you please provide sample code for this? Thanks.

What have you tried so far?

It would be better for me to suggest adjustments rather than just deliver a (potential) solution.

The shell script below is what I have for splitting the file.

#!/usr/bin/ksh
exec 1>/Temp/`basename ${0}`.log
exec 2>/Temp/`basename ${0}`.err

SQLPLUS=""
PATH=""
no_of_rows="1"
file_name=new_file

if [ -f Query.txt ]
then
    rm -rf Query.txt
fi

${SQLPLUS} scott/tiger@Server @Query.sql

split $no_of_rows Query.txt $file_name

SQL File:-

set feedback off
set linesize 5000
set pagesize 1000
set trimspool on
set colsep "    "
spool Query;
SELECT  text FROM table WHERE trunc(date_time)=trunc(sysdate);
spool off;
exit;

The above script generates a file, Query.lst, which contains only the first 8 lines of each row.

Well, I must admit I don't see how you get any output. You set SQLPLUS to null, then execute ${SQLPLUS}. I would expect you to get something like:-

scott/tiger@Server: not found

You are limiting the output linesize to 5000; perhaps your text column has more than that in it. You are also retrieving 8 rows with your WHERE clause.

You would be better off executing the query to extract one entire record at a time. Set the linesize value to something huge and use the rownum value in your WHERE clause; we can always trim off the excess characters later.

Run this in a loop, increasing the rownum value by 1 each time, and change the output file name using the rownum value each time too.

So:-

  • Query 1 to get the total number of rows
  • Loop for every row
      • Query 2 to extract one very long line into a file named after the row number
  • End loop

Perhaps coding the SQL query in a file is not the best approach. If you use a here document, you can use the row number to vary the output file name and the row selected in one go. You can also avoid having the credentials on the command line, where anyone can see them with a simple ps -ef | grep sqlplus while your script is running.
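Something along these lines, perhaps. This is a rough, untested sketch only: the books/page_text names come from your example, the scott/tiger@Server connect details from your script, and the file names are my own invention, so adjust to suit.

#!/usr/bin/ksh
# Rough, untested sketch: one sqlplus call per row, spooling each row
# to its own file named after the row number.

# Query 1: get the total number of rows
sqlplus -s /nolog <<EOF > row_count.txt
connect scott/tiger@Server
set feedback off
set heading off
set pagesize 0
select count(*) from books;
exit;
EOF
total_rows=`cat row_count.txt | tr -cd '0-9'`    # keep just the digits

# Loop for every row
row=1
while [ $row -le $total_rows ]
do
    # Query 2: extract just this one row (rownum in a subquery pins down a
    # single row; the order is not guaranteed without an ORDER BY)
    sqlplus -s /nolog <<EOF
connect scott/tiger@Server
set feedback off
set heading off
set pagesize 0
set linesize 32767
set trimspool on
spool page_${row}.txt
select page_text
  from ( select page_text, rownum rn from books )
 where rn = ${row};
spool off
exit;
EOF
    row=`expr $row + 1`
done

Putting the connect string inside the here document keeps it out of the ps output. One sqlplus session per row is not the fastest approach, but it keeps the logic simple until you have it working.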

Does this give you more to consider?

Robin