I could not use tar, as it has restrictions on file size, and it was giving this error:
Tar too large to archive. Use E function modifier
Now the issue is that I am getting the following error, because the files keep being modified while the archive is created:
/usr/sfw/bin/gtar: Exiting with failure status due to previous errors
I want to ignore these errors and let the tarball be created.
When I check the contents of the final file, the tarball skips all the files that hit these errors.
Since this is a one-time task, we are not bothered much even if it takes a little extra time. We can also avoid zip if that solves the issue, but I tried it and got the same result.
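For what it's worth, GNU tar does have options to tolerate files changing underneath it; a minimal sketch, assuming GNU tar 1.23 or later and placeholder paths:

# Assumes GNU tar >= 1.23; /path/to/files and the archive name are placeholders.
# --warning=no-file-changed suppresses "file changed as we read it" messages,
# --ignore-failed-read keeps going when a file cannot be read at all.
/usr/sfw/bin/gtar --warning=no-file-changed --ignore-failed-read \
    -czf /backup/app.tar.gz /path/to/files
rc=$?
# GNU tar still exits with status 1 when files changed during the read;
# treat 1 as a warning and anything greater as a real failure.
if [ "$rc" -gt 1 ]; then
    echo "gtar failed with status $rc" >&2
    exit "$rc"
fi

Note this only quiets the messages; as pointed out below in the thread, files that were skipped are still missing from the archive.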
A comment: the errors mean you will not get all the files over to AIX, so ignoring errors is not a viable strategy. There are other options.
Consider scp, which does not have file size limits; for example, run the copies in parallel:
#!/bin/bash
# Run 20 scp copy streams at a time, in parallel.
# A good approach for larger files.
cnt=0
find /path/to/files -type f |
while read -r fname ; do
    scp "$fname" username@server_ip:/path_to_remote_directory &
    cnt=$(( cnt + 1 ))
    if [ $(( cnt % 20 )) -eq 0 ] ; then
        wait    # pause until the current batch of 20 copies finishes
    fi
done
wait    # catch any copies still running after the loop ends
You could also try using pax -x pax to create the archive on Solaris and read the archive on AIX. The pax interchange format is an extended tar format that uses additional header records to get around the fields in the normal tar headers that limit file sizes.
Note that this post originally said the extended headers were available in the ustar format; that was not correct (although some implementations now accept pax format header extensions when reading ustar format archives).
The pax available on HP-UX supports the extended interchange format, including files > 2GB ("largefiles"), so there is no problem on the receiving end.
pax example usage: How to use pax - Wiki-UX.info
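A minimal sketch of the pax round trip, with placeholder paths; GNU tar writes and reads the same interchange format via --format=pax, which is handy as a cross-check if either end has gtar installed:

# On Solaris: write a pax interchange format archive.
# -w write, -f archive file, -x pax selects the extended format
# whose extension headers lift ustar's per-file size limit.
pax -wf /backup/files.pax -x pax /path/to/files

# On AIX: list, then read (extract) the archive.
pax -f /backup/files.pax
pax -rf /backup/files.pax

# Cross-check with GNU tar, which also speaks pax format:
tar --format=pax -cf /backup/files.pax /path/to/files
tar -tf /backup/files.pax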
Thanks all for your support. I created the tar using pax command and have shared the tarball with our system admin to uncompress it on the new server.
Contents and count of files look good.