Advice on a backup script, maybe one is out there already?

Hi,

Not sure whether this is the right place to post it. I decided to post it here 'coz Advanced and Expert users will most likely have the answer to what I am looking for.

I want to back up the scripts that I have access to into a tar file and gzip it. At the moment I am creating a directory structure that sort of mimics the directory structure containing the files I want to back up.

For example, the source directory may look like this:

/u01/db/mysql
/u01/db/mysql/scripts
/u01/db/mysql/scripts/user01
/u01/db/mysql/scripts/user02
/u01/db/mysql/sql

And at the moment, I am creating a backup directory structure that looks like the below:

/u02/bak/db/mysql
/u02/bak/db/mysql/scripts
/u02/bak/db/mysql/scripts/user01
/u02/bak/db/mysql/scripts/user02
/u02/bak/db/mysql/sql

I then manually copy the files from each directory - the .sh, .ksh, .sql, .log etc. - to the corresponding backup directory, then I do the below.

cd /u02
tar -cvf bak.tar ./bak
gzip bak.tar

As you may have guessed, this takes a loooong time :eek: So I am wanting to know if someone can recommend a shorter process that will give more or less the same result.

In its simplest form, I want to find files of type .sh, .ksh, .sql and a few others, tar them into a file and gzip the tar file. I only want to find files that I have permission to read. Yeah, some of the files are root owned and I do get permission denied when I am doing the cp manually.

Kinda hoping that maybe someone has done something similar to what I am trying to do, preferably as a script. Any advice will be very much appreciated. Thanks in advance.

There seems to be a misunderstanding. A backup means an exact copy, not a parallel copy. Plus if the data is important, consider getting the backup onto separate media as well.

Why: 'restore a backup' means extracting the backup file into many correctly named files. Correctly named includes the path name. Your backups all have files with the wrong name, unless - to restore - you want to unpack the tarball, then manually copy each file back where it came from.

Not that this will not work - it will. But it is duplicated effort and wastes disk space - you actually have 3 copies of each file (the source, the staging copy under /u02/bak, and the tarball).

tar cvfz /path/to/backup/backup_`date +%Y%m%d`.tar.gz /u01/db/mysql

This creates a backup file with the backup date as part of the name, already compressed, on whatever device you choose to place it. Suppose someone then accidentally trashes a file, /u01/db/mysql/scripts/foo.ksh. So. You want to restore it to the way it was before it got munged:
choose the date you want to restore back to, say 20161201 - Dec 1, 2016.

cd /
tar xvfz /path/to/backup/backup_20161201.tar.gz u01/db/mysql/scripts/foo.ksh

(GNU tar strips the leading / when it creates the archive, so the member name is relative; extracting from / puts the file straight back where it came from.)
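If you really do want only selected, readable file types (as in the original post), something like the below is a minimal sketch, assuming GNU find (for -readable) and GNU tar (for --null and -T). The throwaway temp directories it builds just stand in for /u01/db/mysql and /u02/bak so the example is self-contained - point SRC and DEST at the real paths in practice.

```shell
# Sketch only: back up just the readable .sh/.ksh/.sql/.log files under SRC.
SRC=$(mktemp -d)   # pretend this is /u01/db/mysql
BAK=$(mktemp -d)   # pretend this is /u02/bak
DEST=$BAK/backup_$(date +%Y%m%d).tar.gz

# Demo files so the example runs anywhere.
mkdir -p "$SRC/scripts/user01"
echo 'select 1;' > "$SRC/scripts/user01/query.sql"
echo 'echo hi'   > "$SRC/scripts/user01/run.sh"
echo 'not mine'  > "$SRC/scripts/user01/data.dat"   # wrong type, should be skipped

# -readable (GNU find) silently drops anything you lack permission to read,
# so no "permission denied" noise; -print0/--null handle awkward filenames.
find "$SRC" -type f \
     \( -name '*.sh' -o -name '*.ksh' -o -name '*.sql' -o -name '*.log' \) \
     -readable -print0 |
  tar -cz --null -T - -f "$DEST"

tar -tzf "$DEST"   # query.sql and run.sh are in; data.dat is not
```

Note it inherits the restore caveat above: the archive stores the source paths, so you get an exact-path backup, not the parallel /u02/bak copy.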

If the content is a database, then backing up this way may not give you a consistent restore in any case, especially if you are unable to read some of the files!

You might be better off getting the database tools to make a backup. Another alternative (but I'm not guaranteeing it) might be to use LVM snapshots, if your OS supports them. That way you can be sure the files are idle, and although the database might go into crash recovery (as though you'd had a power loss) it should give you a point-in-time restore from the moment you split off the copy.

What OS are you using?

One worries that this is not a good plan though. If you cannot read all the data, how would you plan to restore it?

On your point about getting it to run faster, it will always depend on a number of factors, such as volume of data, if there is I/O contention etc. I/O contention can occur for various reasons, such as sharing physical media/access path (shared controllers, fibre etc.) or other users even.

Is the device SAN based? There may be a way to use SAN tools to copy a LUN that would give a suitable restore point. Of course it would revert everything on that LUN, so that might not be what you want, but perhaps you could mount the copy LUN as another filesystem and back that up away from the live database.

Can you explain a bit more about your server setup?

Kind regards,
Robin

One other concern relates to file access. Do you lock out users during the process? Otherwise (and especially since you say this takes time), by the time the last file is processed the first could have changed, and your backup set of files will no longer be in sync with each other.

I forgot to mention that some databases allow you to freeze the data files so that you can back them up. The database keeps a separate log of all pending updates and refers to it during normal processing. When you remove the freeze (would that be a thaw?) the updates are written to the files.

They have to be set up to allow this functionality.
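To make the freeze concrete for the MySQL case in this thread: the closest thing I know of is FLUSH TABLES WITH READ LOCK, with UNLOCK TABLES as the thaw. A sketch below just writes the statements to a file for an operator to feed to the mysql client - it is not run against a server here, since the lock only holds while the issuing session stays open and the file copy has to happen from elsewhere in the meantime.

```shell
# Sketch only: MySQL's freeze/thaw pair. The locking session must stay open
# for the whole file-level copy, so the statements are written to a file
# here rather than executed.
cat > freeze.sql <<'SQL'
FLUSH TABLES WITH READ LOCK;
-- keep this session open; copy the data files from another shell now
UNLOCK TABLES;
SQL
cat freeze.sql
```

For InnoDB, letting the database tools do the whole job - e.g. mysqldump with --single-transaction - is usually the simpler route, as suggested above.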

Robin