Script to filter by date

Hello,

I currently need to perform a backup, naming the file by date. How can I get the script to choose the most current file (or the file for the current date) and then upload it? My script is related to this topic, which is already closed.


Can anyone help me?

The most popular tool for this is 'find ... -newer' with a marker file. Just remember that UNIX time is in seconds, and you do not want to miss files dated on the critical second. I recall touching a new marker file, waiting a second, and then finding and processing all files newer than the old marker. Files last modified in the same second the find ran might get picked up again in the next run, but they will be identical. You can also tell find to skip anything not older than the new marker ('! -newer'). Move the new marker over the old marker on successful completion. Check all exit codes!
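
A rough sketch of that marker idea (paths and the copy step are placeholders, and the old marker must exist before the first run):

touch /tmp/marker.new                                   # stamp the start of this run
sleep 1                                                 # step past the critical second
find /DIRECTORY -type f -newer /tmp/marker.old ! -newer /tmp/marker.new |
while read f
do
 cp "$f" /backup/ || exit 1                             # your backup step; check exit codes
done &&
mv /tmp/marker.new /tmp/marker.old                      # advance the marker only on success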

You can configure rsync and have backup be a continuous process if the output is another subtree or host.
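
For example, run from cron (host and paths are just placeholders):

rsync -a /www/ backuphost:/backup/www/   # -a keeps times and modes; only changes are sent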


Sorry, but I applied the command and it did not return the result I wanted. For example, I have 2 files in a directory and they are named like this:

www-Wed-24-Oct-20-00.tar.gz and www-Wed-24-Oct-10-48.tar.gz

How should I apply the command so that it returns the 8 o'clock (20-00) file, which is the most current?

For the most current, something like:

ls -t | head -1
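
If you need to hand that name to another step, catch it in a variable, e.g.:

newest=$(ls -t | head -1)
echo "most recent: $newest"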

If it's only for the same date and same month, then try using this (field 5, split on '-', is the hour):

ls www* | sort  -t"-" -k5 -r

Yes, sometimes files get accidentally touched (modify time updated) when they did not change.


Hi, good afternoon. I've managed to get to the command that I want. It is this:

find /DIRECTORY -iname "*.sql" -mtime -1

Now, I want the output of this command, which is the value "db-Thu-25-Oct-20-00.sql", to be recognized by a script that will upload that file. I already have the upload script; I just do not know how to pass this value to it. How do I do this?

The problem with find -mtime is that it checks just the last 86400 seconds, so if it starts at a different second some days, say after recovering from an outage, some files may be picked up twice and others not at all. It is great for casual file browsing, but for tighter control you are better off with a tight time bracket from two marker files, or with timestamps in the file names or in the file itself (header, trailer, etc.).

If you avoid backing up files still open for write (e.g., check with 'fuser'), and no file is reopened for append, then you can make a list of backed-up files and back up only those not on the list. If you put the date, length, or checksum into the list too, you can detect files that changed and back them up again.
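
A rough sketch of that list idea (paths are placeholders; backup.list holds one "checksum length name" line per file already backed up, and this sketch leaves stale lines behind when a file changes):

touch backup.list
cksum /DIRECTORY/*.sql | while read sum len name
do
 grep -qxF "$sum $len $name" backup.list && continue   # unchanged: skip
 cp "$name" /backup/ || exit 1                         # your backup step
 printf '%s %s %s\n' "$sum" "$len" "$name" >> backup.list
done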


I'll try that too. But do you know how I can pass the command output along to another script? Should I use a variable? How would that work?
Using the example command, what more should I add to it?

I like pipes. One command produces a list into a pipe, the next modifies that list, and so on down the pipe. A shell loop can sit on the pipe, like " | while read x y z ; do . . . done | ". Variables are nice for catching one field of one line, or the output of one command, for testing and output. You can process lists using the comm command, which takes sorted files or pipelines and tells you which lines are in file a only, which are in file b only, and which are in both. In bash and some systems' ksh, process substitution hands you the name of a pipe fed by a command, as in " <( sort file1 ) ". So, this produces a list of lines new in file b and passes it to a subshell loop for processing, one line $l at a time:

comm -13 <( sort filea ) <( sort fileb ) | while read l
do
 ...
done
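
To tie your earlier find output to your upload script, a variable works for a single file ('upload.sh' here just stands in for whatever your script is called):

file=$(find /DIRECTORY -iname "*.sql" -mtime -1 | head -1)
./upload.sh "$file"

or a loop if there may be several:

find /DIRECTORY -iname "*.sql" -mtime -1 | while read f
do
 ./upload.sh "$f"
done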


Could I use the find command in this script? How would I use the command's output in it?

DGPickett, can you help me?

If you have sftp, you may have ssh, so you can run scripts there on the ssh command line or by creating or installing them there. You can run 'ssh host find ... | whatever' to see what is up there, compare it to what is down here, make a list of files needing to be moved, read that list on the same pipeline, and scp or sftp the files. I prefer scp, as it has a simple syntax.
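
A rough sketch of that (host and paths are placeholders):

ssh host 'find /remote/dir -type f' | while read f
do
 scp "host:$f" /local/dir/    # one scp per file; fine for small lists
done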

Learning a little scripting is a good thing, enjoyable to most. It's a learn-by-doing, copying, one-bit-at-a-time, always-growing thing. It allows you to modify scripts or make special scripts or command sequences when you need them. Most of us use the same 20% of the shell tricks we know every day. I am a big fan of '... | while read var ; do . . . done | ....' and long pipelines: parallel processing, no temp files, good data and flow control. I learned 'while : ; do ... done' for unconditional loops here just this year, and I started UNIX in 1990 (with 23 years of other stuff not too different).

A summary email, sent periodically even when nothing is backed up, is good form, best practice, as it tells you the job is running.
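
For example, from cron ('backup.sh' and the address are placeholders, assuming mailx is available):

backup.sh 2>&1 | mailx -s "backup summary $(date)" admin@example.com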