Rsync Scripts for Offsite Backups

I am currently bringing up an offsite location; right now I am in the process of copying some data offsite (about 400 GB).
The problem is that running a single rsync for everything does not use all of the available bandwidth, and testing shows that throughput roughly doubles for each additional rsync instance I run, up to about 3 concurrent instances.

The script I am using runs only a single instance, but I would like to be able to scan the directories and start 2-3 rsync instances at a time. Is there a way to do this?

Current script:

#!/bin/bash
PIDFILE=/var/log/rsync/fortrust_sync.pid
LOGFILE=/var/log/rsync/fortrust_sync.log

# Exit if a previous run is still alive (a stale PID file is ignored).
# Redirect both streams of ps; only its exit status matters here.
[[ -r $PIDFILE ]] && ps -p "$(< "$PIDFILE")" > /dev/null 2>&1 && {
  echo "program already running" >> "$LOGFILE"
  exit 1
}
echo $$ > "$PIDFILE"

# Source Machine Name (This is local)
SOURCE="SOURCEMACHINE"

# Destination host machine name (This is a remote machine)
DEST="DESTINATIONMACHINE"


# User that rsync will connect as
# Are you sure that you want to run as root, though?
USER="User"

# Directory to copy from on the source machine.
BACKDIR="/storback/rman"

# Directory to copy to on the destination machine.
DESTDIR="/mnt/storage/storback/"

# Excludes file - contains wildcard patterns of files to exclude,
# e.g. *~, *.bak, etc.  One pattern per line.
# You must create this file before enabling the line below.
#EXCLUDES=/rsync/excludes

# Options.
# -n Don't do any copying, but display what rsync *would* copy. For testing.
# -a Archive. Mainly propagate file permissions, ownership, timestamps, etc.
# -u Update. Don't copy file if file on destination is newer.
# -v Verbose -vv More verbose. -vvv Even more verbose.
# See man rsync for other options.

# For testing.  Only displays what rsync *would* do and does no actual copying.
# OPTS="-n -vv -u -a --stats --progress"
# Does copy, but still gives a verbose display of what it is doing.
OPTS="-v -u -a --progress --rsh=ssh --stats"
# Copies and does no display at all.
#OPTS="--archive --update --rsh=ssh --quiet"
# Only pass an exclude list if one was configured above; an empty
# --exclude-from= would make rsync fail.
[[ -n $EXCLUDES ]] && OPTS="$OPTS --exclude-from=$EXCLUDES"

# May be needed if run by cron
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin

# Only run rsync if $DEST responds; without the exit, the script would
# press on and try to rsync to an unreachable host.
if ping -s 1 -c 1 "$DEST" > /dev/null 2>&1; then
    echo "$DEST ALIVE"
else
    echo "Cannot connect to $DEST."
    rm -f "$PIDFILE"
    exit 1
fi

# Only run rsync if $SOURCE responds.
if ping -s 1 -c 1 "$SOURCE" > /dev/null 2>&1; then
    echo "$SOURCE ALIVE"
    rsync $OPTS "$BACKDIR" "$USER@$DEST:$DESTDIR"
else
    echo "Cannot connect to $SOURCE."
fi
rm -f $PIDFILE

It sounds like rsync could use a bit of parallelization. The only thing I can imagine doing is getting the contents of BACKDIR, splitting it into equal parts, and running rsync on each part.

THREADS=3
dirsize=$(find "$BACKDIR" -mindepth 1 -maxdepth 1 -type d | wc -l)
splits=$(( (dirsize + THREADS - 1) / THREADS ))
find "$BACKDIR" -mindepth 1 -maxdepth 1 -type d | split -l "$splits"
# split writes chunk files named xaa, xab, etc. into the current directory;
# tell each rsync instance to use a different chunk file
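
To actually drive the transfers from those chunk files, something along these lines should work. This is only a sketch: it assumes the chunk files (xaa, xab, ...) sit in the current directory and reuses the variables from your script; note the destination gains an rman component, since we are now copying rman's subdirectories rather than rman itself.

# Run the chunks in parallel; within a chunk, sync directories sequentially.
for chunk in x??; do
    (
        while read -r dir; do
            rsync -au --rsh=ssh "$dir" "$USER@$DEST:${DESTDIR}rman/"
        done < "$chunk"
    ) &
done
wait    # block until every background rsync has exited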

However, this doesn't work well if the files in the subdirectories of BACKDIR are not equally distributed. Another mechanism would be to do a find of all directories and split that list in a way that rsync can use... but that's tricky, because subdirectories from one list may also appear in another list.
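
For what it's worth, GNU xargs can do the fan-out and the concurrency cap in one go, without any list files. A sketch, again reusing your script's variables (-print0 and -0/-P are GNU extensions):

# One rsync per immediate subdirectory of $BACKDIR, at most 3 running at once.
find "$BACKDIR" -mindepth 1 -maxdepth 1 -type d -print0 |
    xargs -0 -P 3 -I{} rsync -au --rsh=ssh {} "$USER@$DEST:${DESTDIR}rman/"

Like the split approach, this still serializes on any single oversized directory, so it only helps when no one subdirectory dominates.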

Well, the one thing that might help is that the directories under BACKDIR are static; only the dump files (.dmp) in them change. I was wondering if it might help to run rsync --list-only, dump that to a static file, split it, and run it in a for loop. But then I am pretty new to this.

Haven't tried that, but that sounds like a great idea.
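
One wrinkle: rsync --list-only prefixes every name with permission, size, and date columns that you would have to strip back out. Since the tree is static, a plain find of relative paths is easier, because its output can be fed straight back to rsync through --files-from. A rough sketch, assuming three streams; /tmp/rman_files and /tmp/rman_chunk. are hypothetical names:

# Build the file list once, split it into three pieces, and feed each
# piece to its own rsync via --files-from (paths must be relative to
# the source dir, hence the cd).
cd "$BACKDIR" || exit 1
find . -type f > /tmp/rman_files
lines=$(( ($(wc -l < /tmp/rman_files) + 2) / 3 ))   # ceiling of total/3
split -l "$lines" /tmp/rman_files /tmp/rman_chunk.
for list in /tmp/rman_chunk.*; do
    rsync -au --rsh=ssh --files-from="$list" . "$USER@$DEST:${DESTDIR}rman/" &
done
wait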