Optimal block size in dd

I cannot tar an Oracle backup file larger than 8 GB, so I am using "dd" to copy the file to tape. What is the optimum block size for this process?
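For reference, the basic dd round trip looks like this. This is a sketch that uses a regular file in place of the tape device so it can be run anywhere; in practice you would point of= at your no-rewind tape device (the device name /dev/rmt/0mn and the file paths are assumptions, not from the original post):

```shell
# Stand-in for the tape device so the sketch runs anywhere; in practice
# use your no-rewind tape device, e.g. /dev/rmt/0mn (an assumption).
TAPE=/tmp/fake_tape.img
printf 'oracle backup data' > /tmp/backup.dmp

# Write to "tape" with a 64 KB block size (a common starting point).
dd if=/tmp/backup.dmp of="$TAPE" bs=64k

# Read back with the SAME block size and verify the round trip.
dd if="$TAPE" of=/tmp/restored.dmp bs=64k
cmp /tmp/backup.dmp /tmp/restored.dmp && echo "round trip OK"
```

Whatever block size you pick, use the same bs= when restoring, since the tape records are written at that size.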

Have you tried cpio?

The only rule of thumb I know is for disks - use a dd block size equal to 2 or 4 times the size of the physical blocks on the disk, whether reading or writing. This is the fundamental block size, not what df reports. On Linux, for example, you can use fdisk to see the block size. I don't remember the equivalent for HP-UX.

Since tape is far slower than disk I/O, I don't know if this helps.

The f_frsize value returned in struct statvfs:

u_long f_frsize; /* fundamental filesystem block size (if supported) */
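On Linux with GNU coreutils, that same statvfs value can be read from the shell without writing any C; this assumes GNU stat is available (the %S format of stat -f reports the fundamental block size):

```shell
# Fundamental filesystem block size (f_frsize) for the filesystem
# holding the root directory; pick the filesystem you care about.
stat -f -c %S /
```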

Even with "conv=sync", a "dd" to tape should be done on a structured file. My "cpio" and "tar" won't handle files above 2 GB. We don't know which O/S version you have.
The optimum block size to tape depends on the properties of the tape drive and its driver.
It may be better to use proper backup software, after checking that it can handle very large files of 8 GB or more.
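Since the best value depends on the drive, one way to find it is simply to time a few candidate block sizes. A sketch: it copies a small sample file to /dev/null so it runs anywhere; in practice you would use the real dump file, point of= at the tape device, and rewind between runs (all paths here are assumptions):

```shell
# Small sample file as a stand-in for the 8 GB Oracle dump.
dd if=/dev/zero of=/tmp/sample.dat bs=1k count=1024 2>/dev/null

# Time each candidate block size; larger blocks mean fewer read/write
# system calls and, on tape, fewer inter-record gaps.
for bs in 4k 16k 64k 256k; do
    echo "bs=$bs"
    time dd if=/tmp/sample.dat of=/dev/null bs=$bs
done
```

On a real tape, rewind with mt between runs so each test starts from the same position.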