Finding data in a large number of files

I need to find some data in a large number of files. The data is in the following format:

VALUE A    VALUE B    VALUE C    VALUE D
10         4          65         1
12         4.5        65.5       2
10.75      5.1        87         3
9.8        4          67         4

All the files have data in the same format (above). I need to write a script that copies every file that has ANY row satisfying the search criteria 10.5 < VALUE A < 11.5 && 4.5 < VALUE B < 5.5 && 80 < VALUE C < 90 into a subdirectory (which the script should also create), and then displays VALUE D for the particular rows in each selected file that fulfill the criteria.
The files are in "bin/models", which has two subdirectories, "model 1" and "model 2", each of which contains 10 data files. The files end in ".track". The new subdirectories are to be named "new_sub" and are to be created in both "model 1" and "model 2".

Thanks a TON in advance; I really need to know this one quick for a project!!

Welcome to the forum, cooker97.
Did you try something? If yes, please post your efforts so that we can help you further.

Also, it is always better to post small points one by one instead of summing them all up into one long sentence, just to improve readability and understanding.

Regarding your problem, please post the sample output for the given input data.

thx for the pointers clx.
Will keep 'em in mind next time.
I am a complete noob to linux, but I need this script for a project I'm working on.

The sample output should be something like this:

The script should simply create the new subdirectories and copy into them any file in which at least one row satisfies the criteria I mentioned (i.e., number of rows satisfying the criteria >= 1).

Also, it should give me VALUE D of the row(s) in the copied files that satisfied the criteria.
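To get you started, here is a minimal sketch of one way to do this with bash and awk. It assumes the paths given in the original post ("bin/models/model 1" and "bin/models/model 2", relative to wherever you run the script), whitespace-separated columns, and a single header line ("VALUE A ... VALUE D") at the top of each ".track" file; adjust NR > 1 if your files have no header.

```shell
#!/bin/bash
# For each model directory: create "new_sub", then copy in every
# .track file with at least one row matching the criteria, printing
# VALUE D (column 4) for the matching rows of each copied file.

base="bin/models"   # assumed location of the model directories

for dir in "$base/model 1" "$base/model 2"; do
    mkdir -p "$dir/new_sub"

    for f in "$dir"/*.track; do
        [ -e "$f" ] || continue   # skip if no .track files exist

        # awk prints column 4 (VALUE D) for every row where
        # 10.5 < A < 11.5, 4.5 < B < 5.5, and 80 < C < 90.
        # NR > 1 skips the header line.
        matches=$(awk 'NR > 1 &&
                       $1 > 10.5 && $1 < 11.5 &&
                       $2 > 4.5  && $2 < 5.5  &&
                       $3 > 80   && $3 < 90   { print $4 }' "$f")

        # Copy the file only if at least one row matched.
        if [ -n "$matches" ]; then
            cp "$f" "$dir/new_sub/"
            echo "$f -> matching VALUE D:"
            echo "$matches"
        fi
    done
done
```

With the sample data above, only the row "10.75 5.1 87 3" satisfies all three conditions, so a file containing that data would be copied and "3" printed as its VALUE D. Note the quoting around "$dir" and "$f": it is required because the directory names contain spaces.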