Problem running zip from cron - bad zipfile offset

I've created a pretty straightforward shell script to do backups via rsync. Every night it rsyncs a copy of a website (approx 60GB) to the local machine. Once a week, after the rsync step, it zips the local copy of the whole site into 2GB chunks and places them in another folder.

When I run 'unzip filename.zip' on the resulting chunks, I get a long string of errors like these:

file #17934:  bad zipfile offset (lseek):  1726201856
file #17935:  bad zipfile offset (lseek):  1726775296
file #17936:  bad zipfile offset (local header sig):  1702568762
file #17937:  bad zipfile offset (EOF):  1545311941

Approx 60% of the archived series of 2GB chunks unzips without any problem, but the rest shows the errors above. Those errors "seem" to happen on files that were already .zip files on the webserver, originally created with a mix of WinRAR and the Windows command-line 'zip'. (I don't see how the original method of creation would affect their being zipped into a larger archive, but I'm mentioning it to be thorough.)
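A quick way to list just the failing entries, assuming a chunk set named website_20150101.zip (the name here is only an example), is to filter unzip's test output:

unzip -t website_20150101.zip | grep -v 'OK$'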

ALSO: If I FTP the entire Linux-created series of 2GB chunks to a Windows machine and open them in WinRAR, they all extract without any problem.

Here's the crontab I'm using:

2 * * 2-7 /home/user/Scripts/backup.rsync.sh > /mnt/data/Backups/logs/$(date +\%Y\%m\%d_\%H\%M\%S)_website.rsync.cron.log 2>&1
0 2 * * 1 /home/user/Scripts/backup.full.sh > /mnt/data/Backups/logs/$(date +\%Y\%m\%d_\%H\%M\%S)_website.full.cron.log 2>&1

And here are the two scripts referenced in that crontab, with a lot of additional commands removed for clarity:

backup.rsync.sh

# Rsync website.com to local machine
/usr/bin/rsync -Pavzhe "ssh -p 22" --log-file=$dirlog/$date\_website.rsync.log --bwlimit=2500 --skip-compress=jpg/zip --delete user@website.com:'/home/user/' "/mnt/data/Backups/website.com"

backup.full.sh

# Rsync website.com to local machine
/usr/bin/rsync -Pavzhe "ssh -p 22" --log-file=$dirlog/$date\_website.rsync.log --bwlimit=2500 --skip-compress=jpg/zip --delete user@website.com:'/home/user/' "/mnt/data/Backups/website.com"
# Zip site to 2G chunks
/usr/bin/zip -r -s 2g -y -2 $dirzip/website_$date.zip $dircom/

Something in that final zip command seems to be the cause, but I can't figure it out. The fact that I can take the resulting files to Windows and extract them with WinRAR is even more confusing, since Linux 'unzip' throws thousands of errors... Can anyone suggest what's wrong with my shell script?

PS: The 'zip' version is 3.0-6 and 'unzip' is 6.0-8+deb7u2. The local machine is CrunchBang 11.

Just as $PATH is not inherited from your interactive environment when a cron job runs, the variables $dirlog, $date, $dirzip, and $dircom in your script are undefined when cron runs it.
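One quick way to confirm what cron actually provides, assuming you can add a throwaway crontab entry for a minute, is to capture the environment a job sees and compare it with your login shell's:

* * * * * /usr/bin/env > /tmp/cron-env.txt 2>&1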

Are the results what you would expect to see if the script effectively ran like this:

# Rsync website.com to local machine
/usr/bin/rsync -Pavzhe "ssh -p 22" --log-file=/_website.rsync.log --bwlimit=2500 --skip-compress=jpg/zip --delete user@website.com:'/home/user/' "/mnt/data/Backups/website.com"
# Zip site to 2G chunks
/usr/bin/zip -r -s 2g -y -2 /website_.zip /

Yes, the script runs perfectly. All the variables are defined at the top of the script. (I did mention that I was removing everything but the command itself to make the post shorter).

Everything runs: the initial mysqldump, the rsync, the deletion of backups older than two weeks, the zipping of everything into 2GB chunks, and the copying of those chunks to external USB.

The only thing I'm concerned with here is why the resulting 2GB zip chunks can't be unzipped without errors. I think it's specifically something in the zip command on the last line of the last snippet I posted, but I can't figure out what it is.

Here is the relevant part of the script with variables:

#!/bin/bash

date=$(/bin/date +%Y%m%d_%H%M%S)

dir=/mnt/data/Backups
dirlog=/mnt/data/Backups/logs
dircom=/mnt/data/Backups/website.com
dirdb=/mnt/data/Backups/website.db
dirzip=/mnt/data/Backups/website.zip
dirscript=/home/user/Scripts

source $HOME/.keychain/${HOSTNAME}-sh

# Rsync website.com to local machine
/usr/bin/rsync -Pavzhe "ssh -p 22" --log-file=$dirlog/$date\_website.rsync.log --bwlimit=2500 --skip-compress=jpg/zip --delete user@website.com:'/home/user/' "/mnt/data/Backups/website.com"
# Delete files older than X days
/usr/bin/find $dirdb/*sql.gz -mtime +14 -exec /bin/rm {} \;
/usr/bin/find $dirlog/*.log -mtime +30 -exec /bin/rm {} \;
/usr/bin/find $dirzip/*.z* -mtime +30 -exec /bin/rm {} \;
# Zip site to 2G chunks
/usr/bin/zip -r -s 2g -y -2 $dirzip/website_$date.zip $dircom/
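(As an aside, unrelated to the zip errors: those find lines rely on the shell expanding the globs first, so they error out when nothing matches. A more robust sketch, assuming GNU find with -delete, keeps the pattern inside find itself:)

/usr/bin/find "$dirdb" -type f -name '*sql.gz' -mtime +14 -delete
/usr/bin/find "$dirlog" -type f -name '*.log' -mtime +30 -delete
/usr/bin/find "$dirzip" -type f -name '*.z*' -mtime +30 -delete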

Here's what happens:
The website has approx 120,000 files, including tens of thousands of pre-existing zipfiles in folders.
My script goes to the website and rsyncs everything, the entire hosting account, and brings it to my local computer.
(Those zipfiles on the website, once rsynced to my local machine, can be manually unzipped without any problem whatsoever, so the problem is not with the zipfiles on the site, nor with their rsynced copies on my local machine)
The next step in my script takes the entire rsynced website on my local machine and zips it into 2GB chunks.
As soon as those 2GB zip chunks are created, they cannot be unzipped again without errors. And not errors on everything: a large part of the site unzips properly, but it seems the pre-existing zipfiles from the website are the files which cannot be extracted from the chunks. That's what I'm trying to figure out...

Well, it turns out it was pretty straightforward: you can create multipart archives with 'zip', but 'unzip' doesn't support extracting them.
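This is easy to reproduce on a small scale without 60GB of data (the paths and the 1MB split size here are just an example): create a split archive and try to extract it with 'unzip':

zip -r -s 1m -y /tmp/test.zip /some/folder/bigger/than/1mb
unzip /tmp/test.zip      # entries stored in the .z01/.z02 parts fail with "bad zipfile offset"
unzip -t /tmp/test.zip   # testing shows the same errors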

The solution is to:

cat chunk.z01 chunk.z02 ... chunk.zip > joined.zip
# fix the concatenation errors; if prompted, answer 'z' to tell zip to look for the main archive
zip -FF joined.zip --out joinedfix.zip
# test the result; should show no errors
unzip -t joinedfix.zip
unzip joinedfix.zip
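Alternatively, zip 3.0 can convert the split set back into a single-file archive directly with -s 0, which skips the cat step (the archive names here are just examples):

# run in the folder that contains all the .z01/.z02/.zip parts
zip -s 0 website_20150101.zip --out website_single.zip
unzip website_single.zip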