Variable Sized Array Length Question

I need to implement the following logic and would like some expert help from the UNIX community.

These are the steps in my Shell script.

  1. Analyze a file.
  2. Extract all the IDs in that file.
  3. Use the IDs from step 2 to run another filter on the file.

I've implemented steps 1 and 2 using shell/Perl scripts.
My problem is that the number of IDs can vary from run to run. How can I grep with a variable-sized array?

For example, my first run would be:
IDs = <id1, id2, id3>
cat <file1> | grep <id1, id2, id3>

For the second run, this would look like:
IDs = <id1, id2, id3, ... id60>
cat <file1> | grep <id1 ... id60>

thank you,
Kalpesh

What shell are you using? The standard Unix shell does not have arrays.

In shells that do (bash, ksh), an array can be any size and can grow as needed; it doesn't have to be declared with a specific size.
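
For example, here is a minimal bash sketch. The file name file1 comes from your example; extract_ids.sh is a placeholder for whatever your step 2 produces (assumed here to print one ID per line). It collects the IDs into an array and builds a single grep pattern from it, however many IDs there are:

  #!/bin/bash
  # Collect the IDs into an array. "./extract_ids.sh file1" is a placeholder
  # for your step 2 -- assume it prints one ID per line.
  ids=( $(./extract_ids.sh file1) )

  # Join the array elements with "|" and run a single extended-regex grep.
  # This works for 3 IDs or 60; the array grows as needed.
  pattern=$(IFS='|'; echo "${ids[*]}")
  grep -E "$pattern" file1

If you would rather avoid arrays altogether, you can also write the IDs to a file, one per line, and use grep -f idfile (or grep -F -f idfile for fixed-string matching).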

Can you post:
1) The format of the original file with sample data, making it clear whether there are field delimiters and which field is the "ID".
2) An estimate of the maximum number of IDs.
3) Sample IDs - especially if they are variable length and could be contained within another ID (e.g. "foobar" and "myfoobar"); see the sketch below.
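
To illustrate why point 3 matters: a plain grep for one ID will also match lines where that ID is embedded in a longer one. A quick sketch, assuming a hypothetical ID "foobar" and a colon-delimited file where the ID sits in field 2:

  # Plain match: also hits lines containing "myfoobar".
  grep 'foobar' file1

  # Word-boundary match: only hits "foobar" standing alone as a word.
  grep -w 'foobar' file1

  # Exact whole-field match with awk (delimiter and field number are assumptions).
  awk -F: '$2 == "foobar"' file1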

I resolved this without using arrays. I did a MOD operation on the total number of IDs: I collected the IDs in batches of five, performed the filtering operation on each batch, saved each result to a new file with a variable name, and then merged all the files together.
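
In case it helps anyone later, here is a rough sketch of that batch-and-merge approach in bash. It uses split instead of a modulo counter, but the batching effect is the same; ids.txt, file1, and the output names are assumptions, and the batch size of five comes from the description above.

  #!/bin/bash
  # ids.txt is assumed to hold one ID per line (the output of step 2).
  # Break it into chunks of 5 IDs each: idchunk_aa, idchunk_ab, ...
  split -l 5 ids.txt idchunk_

  for chunk in idchunk_*; do
      # Join the IDs in this chunk into one alternation pattern and filter.
      pattern=$(paste -sd'|' "$chunk")
      grep -E "$pattern" file1 > "result_$chunk.out"
  done

  # Merge the per-chunk results into a single file.
  cat result_idchunk_*.out > merged.out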

Thank you all for your suggestions.