I am trying to look into multiple directories and pluck out the latest version of a specific file, regardless of where it sits within the directory structure.
Ex:
The file is a .xls file and could sit anywhere from 1 to 5 levels deep in the directory tree.
Working directory - Folder1
Similar .xls files may exist in Folder1/x1/x2/x3 and also in Folder1/y1/y2, but I want to output only the most recently modified .xls file so I can pipe it into a cp command in a shell script.
I am using csh (I know, I know!) and any help in doing this would be appreciated.
I've tried using forms of find . -type f -name "*.xls", but I'm not sure how to take only the latest file from that listing of all the .xls files in the directory tree.
You don't give -l to ls when you want to automate it. Why bother fetching the username, file permissions, modification date, and all that when all you're going to do is throw it away?
You can use ls to sort the files by date with -t, then just grab the first.
find . -type f -name "*.xls" -exec ls -t '{}' '+' | head -n 1
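Assuming the newest file should then be copied somewhere, here is a sketch of wiring that pipeline into cp (POSIX sh shown for illustration; the `dest/` directory is a placeholder). One caveat worth knowing: with many thousands of matches, `-exec … +` may invoke `ls` more than once, and `head -n 1` then only sees the first batch, so the result is only guaranteed correct when all paths fit in a single `ls` invocation.

```shell
#!/bin/sh
# Sketch: copy the most recently modified .xls found anywhere under the
# current directory. "dest/" is a placeholder destination.
# In csh, the capture would instead be written with backquotes:
#   set latest = `find . -type f -name "*.xls" -exec ls -t '{}' '+' | head -n 1`
latest=$(find . -type f -name "*.xls" -exec ls -t '{}' '+' | head -n 1)
if [ -n "$latest" ]; then
    cp "$latest" dest/
fi
```

For huge trees where the batching caveat matters, one alternative (GNU find only) is to sort on the modification timestamp explicitly: `find . -type f -name "*.xls" -printf '%T@ %p\n' | sort -rn | head -n 1 | cut -d' ' -f2-`.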
Unfortunately the latest checklist file (.xls) could be in any of the 400 folders, so I will need to search them all. Sadly the script will need to loop through this routine for a small sampling from an input array, but that's still much better than manually hunting down a checklist file and comparing it against the others.
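The per-name loop described above might be sketched like this (POSIX sh shown for illustration; the checklist names and `dest/` are made-up placeholders, and in csh the loop would be written with `foreach name ($names) … end`):

```shell
#!/bin/sh
# Hypothetical sketch: for each checklist name in a small input list, copy
# the newest matching .xls found anywhere under the current directory.
# "alpha", "beta", and "dest/" are placeholders, not names from the question.
for name in alpha beta; do
    latest=$(find . -type f -name "${name}*.xls" -exec ls -t '{}' '+' | head -n 1)
    if [ -n "$latest" ]; then
        cp "$latest" dest/
    fi
done
```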
Disk cache should make finding the second and third files faster after you've searched once: the directory entries will already be in memory and won't need to be read from disk again.