Most efficient method to extract values from text files

I have a list of files defined in a single file, one per line. (The number of files may vary each time.)

e.g. the content of ETL_LOOKUP.dat:

/data/project/randomname
/data/project/ramname
/data/project/raname
/data/project/radomname
/data/project/raame
/data/project/andomname

The size of these files ranges from 5 to 20 MB. I want to scan each of these files one by one and extract multiple values from each of them. The extraction format is the same for every file.

The high-level code I am thinking of is:

while read lookupfilecontent; do
      # 'PATTERN' is a placeholder for whatever string identifies the record of interest
      wf_dir=$(grep 'PATTERN' "$lookupfilecontent" | cut -d ',' -f7)
      sess_dir=$(grep 'PATTERN' "$lookupfilecontent" | cut -d ',' -f10)
      # ...this goes on for 7-8 values
done < data/SrcFiles/ETL_LOOKUP.dat

Now I want to ask: is there a more efficient way to do this? Wouldn't running grep multiple times be a performance concern, since I re-read the same file for every value I extract?

Yes, that would be very inefficient.

You can run the greps on all the files at once, or, if there are too many files and the file names do not need to appear in the result, you can concatenate the files first and run your greps on that. You may also be able to combine several greps into one.
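For example, something along these lines searches every listed file in a single grep invocation (just a sketch; 'PATTERN' is a placeholder, since the real match string has not been shown):

    # xargs passes every file name listed in ETL_LOOKUP.dat to one grep call;
    # -H prefixes each matching line with the name of the file it came from.
    xargs grep -H 'PATTERN' < data/SrcFiles/ETL_LOOKUP.dat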

This is a bit of a guess, because at this point it is not clear what you need to do with the results, what the files and the output look like, and what results you are looking for.

I need the file names too, so I guess concatenation is not possible.

From the multiple values that I extract, I have to add them (comma-separated for each individual file) to an already existing file.
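Roughly, I am after something like this in a single pass per file (only a sketch; 'PATTERN', the field numbers, and result.csv are placeholders, and it assumes one matching record per file):

    # awk reads every file named on its command line once and can print several
    # fields per match; FILENAME is awk's built-in holding the current file name.
    xargs awk -F ',' '/PATTERN/ { print FILENAME "," $7 "," $10 }' \
        < data/SrcFiles/ETL_LOOKUP.dat >> result.csv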

Please provide us with the input, the expected output, and other details.

The input files are logs from a production environment and are sensitive, so I cannot share them. Let me know if my requirements are not clear.

We already let you know that we need additional information. If it is sensitive data, then you could anonymize it.