Snapshot backup

Hi all:

I'm trying to do the following:

1) Each Monday (weekly or bi-weekly) I'll perform a full backup of my 2 TB RAID 1 system to an external 2 TB eSATA HDD. I'll move this HDD to a different physical location (e.g. my home).

2) Each day after Monday, until the next full backup, I want to perform an incremental/differential backup from my RAID to a file, which will be automatically uploaded to a cloud storage system (Dropbox, Google Drive, etc.). This incremental backup should be small, so it can be fully uploaded in a few hours each day.

3) The next Monday, perform a full backup to another disk, move it offsite to home, and start again the following Monday.

I was reading about ditto, rsync, cp, Time Machine, etc. All of them (if I'm not wrong) force me to keep the backup HDD attached in order to do an incremental backup.
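One tool that does not need the full-backup disk attached for the dailies is GNU tar in `--listed-incremental` mode: it records file state in a small local snapshot file, so each incremental archive only needs that metadata file, not the previous backup. A minimal sketch, assuming GNU tar is installed; the paths below are hypothetical:

```shell
#!/bin/sh
# Weekly full + daily incremental with GNU tar.
# The .snar state file lives on the RAID, so the external eSATA
# disk does not have to be present for the daily runs.

SRC=/srv/data             # hypothetical data directory
STATE=/backup/state.snar  # tar's incremental state file (kept locally)

# Monday: reset the state and write a level-0 (full) archive to the eSATA disk
rm -f "$STATE"
tar --listed-incremental="$STATE" -czf "/mnt/esata/full-$(date +%F).tar.gz" "$SRC"

# Tuesday-Sunday: each run archives only files changed since the previous run
tar --listed-incremental="$STATE" -czf "/backup/incr-$(date +%F).tar.gz" "$SRC"
# ...then upload /backup/incr-*.tar.gz to the cloud storage of your choice
```

The daily archives stay small because they contain only the changed files, which fits the "upload in a few hours per day" constraint.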

Is there any way to do this the way I'm looking for?

Thanks

---------- Post updated at 09:29 PM ---------- Previous update was at 09:24 PM ----------

Sorry, I forgot to mention that ZFS seems to do what I was looking for, but I'm not so confident in my ability to manage issues on ZFS.
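For reference, the ZFS feature in question is snapshots plus incremental `zfs send`: the delta between two snapshots can be written to a plain file without the previous backup disk being attached. A rough sketch, with hypothetical pool/dataset names:

```shell
# Monday: take a snapshot and send a full stream to the external disk
zfs snapshot tank/data@monday
zfs send tank/data@monday > /mnt/esata/full-monday.zfs

# Daily: snapshot again and send only the changes since Monday's snapshot;
# only the snapshots on the pool are needed, not the external disk
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday > /backup/incr-tuesday.zfs
# ...upload /backup/incr-tuesday.zfs to cloud storage
```

Restoring means replaying the full stream and then each incremental stream in order with `zfs receive`, so the incremental files are only useful together with the Monday full.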

Any help.. welcomed.

Thanks

I have a continuous Mozy Internet backup, so intermediate changes are not vulnerable for long, and a 1 TB USB hard drive that I plug into each box in rotation to catch it up. Last time, the Mozy restore alone took 3 days to bring back my saved files. So I have no gap.

Now, to do offsite backups with external hard drives, get extras so that the disk on site being written is not the most recent prior copy, just as with rotating backup tapes. The Internet product is still wise, as it makes the window of vulnerability very short. If you use the network, not sneakers, to move the backup data offsite, you can merge the two: a local copy that is updated very quickly and great for restores, and a remote copy that may lag more and is slow for restores, but ensures the data is still online if that site goes down. Disk is cheap; data is priceless. Mirrored data centers can ensure that data and processing are both in multiple distinct places. A compressed stream of updates in both directions can keep them fairly close in sync without slowing either host.

Good old MULTICS had no hard links, and any change rippled up the tree into directory status all the way to the root, so you could find just the modified files with zero effort. Modern file systems and volume managers can support backup systems with similar change lists. Discovering new and modified files after the fact is slow and loads the system more.
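Without such a built-in change list, after-the-fact discovery usually means walking the whole tree with `find` and comparing modification times against a stamp from the previous run; a minimal sketch with hypothetical paths:

```shell
# List files modified since the last backup run by comparing mtimes
# against a stamp file touched at the end of the previous run.
find /srv/data -type f -newer /backup/last-run.stamp -print
# ...archive/upload the listed files, then refresh the stamp:
touch /backup/last-run.stamp
```

This is exactly the "slow scan that loads the system" case: every file's metadata is read on every run, which is why filesystem-level change tracking (as in ZFS snapshots) scales better.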

So... you propose 100% online backup, right? Sure, that is by far the easiest way to do it. It seems like $8 per month can do the job.

Thanks for your comment. I'll take a look at that approach.

Does anyone else have a comment about the first approach? Just so I know whether I should kill it and go to the cloud :slight_smile:

Cheers

Like I say, use both a local backup and a remote Internet backup, so you can get everything back fast and never lose much.