How to limit output

Hello, I'm trying to figure out a way to limit the output from a SQL query that counts the number of occurrences of a value in a field. The problem is that when I run this query against a huge file with many unique values, the output is pretty huge.

Is there a way I can specifically LIMIT the output from the query, or query the size of the output file while it's being loaded and do something to LIMIT the output?

I tried the ulimit below, but so far in testing it appears to delete the file altogether instead of just leaving what has already been written to the file.

ulimit -s 512
cat > col.sql <<EOF
set serveroutput on
select 'loan_nbr' , count(*) , loan_nbr from xxx.yyyyy_CREDIT_SCORE group by loan_nbr order by count(*) desc;
exit;
EOF
sqlplus -s xxxxxxx/xxxxxxxxxx@xxxx < col.sql >> xxx.yyyyy_CREDIT_SCORE_LOAN_NBR.out
size=`ls -l xxx.yyyyy_CREDIT_SCORE_LOAN_NBR.out | awk '{print $5}'`
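A side note on the ulimit line (my reading, not from the original post): `ulimit -s` sets the shell's stack size, not an output-file limit; `ulimit -f` is the one that caps file size (in 512-byte blocks), and exceeding it sends SIGXFSZ, which can kill the writing process. A simpler way to cap the output at the first N lines is to pipe the sqlplus output through `head` before it hits the file. A minimal sketch, using a generated stream to stand in for the query output:

```shell
# head -n keeps only the first N lines of a stream, so piping the
# sqlplus output through it caps the output size without ulimit, e.g.:
#   sqlplus -s user/pass@db < col.sql | head -n 1000 >> out.file
# Demonstration with seq standing in for the query output:
seq 1 100000 | head -n 1000 > capped.out
wc -l < capped.out   # capped at 1000 lines, no matter how big the input is
```

Unlike `ulimit -f`, this never signals the writer; `head` simply closes the pipe after N lines and sqlplus exits normally.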

Just an FYI, I'm trying to do some data analysis on a bunch of files and identify the control or parent values, but LIMIT the output when I encounter an amount or unique value that is not control or meta-type data.

Since you're using Oracle, the SQL syntax for this is to add a ROWNUM limit to your WHERE clause.

SELECT field1, field2 FROM table_name WHERE <whatever> AND rownum < 1000;
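One caveat worth adding about Oracle: ROWNUM is assigned before ORDER BY is applied, so putting `rownum < 1000` directly in the WHERE clause of the original count query would grab an arbitrary 1000 groups and then sort them. To keep the top 1000 counts, the ordered query has to be wrapped in a subquery. A sketch using the table and column names from the post above:

```sql
-- Wrap the ordered aggregate so ROWNUM is applied after the sort.
SELECT *
  FROM (SELECT loan_nbr, COUNT(*) AS cnt
          FROM xxx.yyyyy_CREDIT_SCORE
         GROUP BY loan_nbr
         ORDER BY COUNT(*) DESC)
 WHERE ROWNUM <= 1000;
```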

Thanks very much ShawMilo, I will give that a try, appreciate it!

BobK