There are two different problems here, so let's discuss them one by one:
I suppose the data reside on some sort of SAN device. Reading and writing 20 GB of data is a matter of a few minutes, maybe a quarter of an hour if things go really wrong, so we can afford to have the backup reside on the local machine first.
We need a script, which does the following:
- stop the application
- tar (and gzip?) the data to the (local) backup destination
- start up the application again
The script should issue some sort of alert (alerting a management system like Patrol/HP-Openview/etc. or mail to root or whatever) if anything goes wrong.
Then let's turn our attention to the backup server. We need a script there which does the following:
- contact the server to be backed up.
- pull the file the server has already created that day
- delete old backups on the system if we want to have only a certain number of generations online
Again, if anything goes wrong the system should complain somehow.
It is good to have the taking of the backup and the pulling over of the backup NOT tied together (that would have been possible), because this way you can take the backup when the client has time and then pull it over when the backup system has time.
Both these scripts will be put into crontab to run daily (or in whatever time pattern you want them to run).
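Once both scripts are in place, the crontab entries might look like this. The script paths and times are of course made up; the idea is only that the pull on the backup server runs a safe while after the client has finished:

```shell
# on the application server: take the backup at 02:00 every night
0 2 * * * /usr/local/bin/take_backup.sh >/dev/null 2>&1

# on the backup server: pull the file over at 04:00, when the client is done
0 4 * * * /usr/local/bin/pull_backup.sh >/dev/null 2>&1
```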
Let's start with the scripts. Since I don't know your exact environment, these scripts are of course only a sketch. You will have to fill in the details yourself:
#! /bin/ksh
# backup script. Takes a copy of some data, with tar and stops/restarts the
# application
function send_alert
{
# This will send all the alerts. It gets some message, like "file not found",
# and issues an alert with exactly this message. Probably you will have to
# adapt this function to suit your needs (for instance, mail the message to
# root). I just write it to <stderr> for demonstration purposes
typeset msg="$1"
print -u2 "$msg"
return 0
}
function take_backup
{
typeset destination="$1"
typeset source="$2"
tar -cvf ${destination} ${source}
# alternatively, with compression (note that "$?" below would then reflect
# the exit code of gzip, the last command in the pipeline, not of tar):
# tar -cvf - ${source} | gzip -9 > ${destination}
if [ "$?" -ne 0 ] ; then
send_alert "problem writing backup file"
return 1
fi
return 0
}
function stop_application
{
<...whatever is necessary to stop the app...>
# check if the app has really stopped. grep counts its own process in the ps
# output, so a count greater than 1 means the app is still running. If so we
# raise an alert and main() will exit:
if [ $(ps -fe | grep -c <application>) -gt 1 ] ; then
send_alert "Not able to stop application"
return 1
fi
return 0
}
function start_application
{
<...whatever is necessary to start the app...>
# check if the app has really started. If not, we raise an alert and main()
# will exit. Note that the test (grep counts its own process, so a count of
# less than 2 means no application process is running) is just an example.
# Replace it with whatever makes you sure the app is running again:
sleep 30 # give things time to settle
if [ $(ps -fe | grep -c <application>) -lt 2 ] ; then
send_alert "Not able to start application"
return 1
fi
return 0
}
# -------- main()
# The following is necessary because we will run from cron, which provides
# only a minimal environment. You might want to set other necessary
# environment variables here too
typeset PATH=<whatever you need>
typeset TERM=<whatever you need>
export TERM; export PATH
typeset approot="/path/to/your/data"
typeset budest="/path/to/backup/destination"
typeset today="$(date '+%Y%m%d')" # we will use this to name the backup files
stop_application
if [ "$?" -ne 0 ] ; then
exit 1
fi
take_backup ${budest}/backup_${today}.tar $approot
if [ "$?" -ne 0 ] ; then
exit 1
fi
start_application
if [ "$?" -ne 0 ] ; then
exit 1
fi
exit 0
Note that this is really a sketch: for instance, there is no provision to delete old backup files once they are no longer needed. Ideally the script on the backup server should do this after pulling over the file successfully. Still, it would probably be nice to keep the last backup generation on the originating server in case something has to be restored, so there is room for some scripting of your own.
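For the backup server side, the pull-and-prune logic could be sketched like the following. The host name, the directories, the use of scp over ssh keys and the 14-day retention are all assumptions you will have to adapt; the functions avoid ksh-only builtins, so they run under ksh as well as other modern shells:

```shell
# sketch of the pull script for the backup server (adapt names and paths)

function send_alert
{
# again, adapt this to your monitoring environment; we just write to stderr
typeset msg="$1"
echo "$msg" >&2
return 0
}

function pull_backup
{
# copy the given file from the client; assumes passwordless ssh keys are set up
typeset client="$1"
typeset remotefile="$2"
typeset localdir="$3"
if ! scp "${client}:${remotefile}" "${localdir}" ; then
send_alert "could not pull ${remotefile} from ${client}"
return 1
fi
return 0
}

function prune_backups
{
# delete backup generations older than a given number of days
typeset budir="$1"
typeset keepdays="$2"
find "${budir}" -name 'backup_*.tar' -mtime +"${keepdays}" -exec rm -f {} \;
return 0
}

# typical invocation (commented out here; uncomment and adapt for real use):
# today="$(date '+%Y%m%d')"
# pull_backup appserver "/path/to/backup/destination/backup_${today}.tar" /backup/pool || exit 1
# prune_backups /backup/pool 14
```

Keeping the two functions separate means you can still prune old generations even on a night when the pull itself failed.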
Let's turn to the matter of software installation: you will find a pinned thread with important URLs at the top of this forum. Among these URLs is IBM's "Linux toolbox for AIX", where you can download all the necessary tools, already packaged in rpm format.
You first have to install the rpm installer itself, which comes as a package in AIX's native bff format (AIX has its own packaging format).
To install packages in bff-format:
- copy the downloaded packages to a directory
- change to this directory and issue "inutoc .", which will create a file named ".toc"
- issue "installp -ac -d. -Y -g <package_name>" to install a package
To install packages in rpm-format:
- copy the downloaded packages to a directory
- change to this directory
- issue "rpm -i <packagefile>" to install the package
I hope this helps.
bakunin