That's not a very efficient way of doing things.
First of all, you'll need an "exit" in there to quit sqlplus and get ready for the next iteration with a new filename. (Otherwise, the script opens up sqlplus, runs the script and then just sits there waiting for you to supply the next input to sqlplus.)
So the code would be something like this:
$ cat load_data_1.sh
#!/bin/bash
for i in data*.txt ; do
sqlplus test/test @procedure.sql "$i" <<EOF 1>/dev/null 2>&1
exit
EOF
done
$
Here, "procedure.sql" is a script similar to the one posted by the OP.
$
$ cat procedure.sql
exec prc_read_data('&1')
$
It calls the stored procedure "prc_read_data", which takes a file name as input. (The procedure opens the file, reads the data and loads it into a table.)
And there are 100 such files that match the pattern "data*.txt":
$
$ ls -1 data*.txt | wc -l
100
$
The problem with your approach is that it executes the "sqlplus" binary 100 times, one per iteration. So there are 100 connections, 100 executions, and 100 disconnections.
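Most of that elapsed time is pure connection overhead. To get a feel for that overhead by itself, you can time a session that does nothing but connect and exit:

$ time echo exit | sqlplus -s test/test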
What you'd want instead is 1 connection to Oracle, 100 executions and 1 disconnection. And for that you'd need something more powerful - Perl, for instance:
$
$ cat load_data.pl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# get the database handle; RaiseError makes DBI die on any Oracle error
my $dbh = DBI->connect("dbi:Oracle:xe", "test", "test",
                       { RaiseError => 1, AutoCommit => 1 }); # connect once

# prepare the call just once, with a bind variable for the file name
my $sth = $dbh->prepare("begin prc_read_data(:fname); end;");

# now loop through the files that match the glob pattern
for my $fname (glob("data*.txt")) {
    $sth->bind_param(":fname", $fname);
    $sth->execute();     # execute statement within loop
}

$dbh->disconnect();      # and disconnect once
$
This script uses the Perl DBI module, and is pretty self-explanatory. Note that the statement is prepared only once, outside the loop; only the bind variable (the file name) changes on each execution.
For the 100 file testcase, the execution times are shown below:
$
$ time . load_data_1.sh
real 0m16.963s
user 0m4.540s
sys 0m2.304s
$
$
$ time perl load_data.pl
real 0m0.828s
user 0m0.196s
sys 0m0.080s
$
$
Of course, if it is a one-time job, then the shell script approach may work fine for the task at hand. But if it is a scheduled, regular job, with the number of files potentially increasing down the line, then the OP may want to consider ways to minimize the overall execution time.
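Incidentally, if you'd rather not depend on Perl, you can get the same "connect once, execute 100 times, disconnect once" effect from the shell itself, by generating a single SQL script and invoking sqlplus just once. A rough sketch (here "load_all.sql" is just a scratch file name, and it assumes the data file names contain no single quotes):

$
$ cat load_data_2.sh
#!/bin/bash
# build one script containing all the calls, then run sqlplus once
{
for i in data*.txt ; do
echo "exec prc_read_data('$i')"
done
echo "exit"
} > load_all.sql
sqlplus -s test/test @load_all.sql 1>/dev/null 2>&1
$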
tyler_durden