ufsdump from Solaris to Ubuntu fails with bad file descriptor

Hi All

I have a dedicated backup server running Ubuntu 10.04, which has recently been rebuilt (same OS, just different hardware).

This is used to receive ufsdump output from a number of Solaris servers, using the following syntax:

ufsdump 1uf [remote server]:/path/to/backup/file /fs/to/be/backed/up
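For example, a concrete invocation looks like this (the hostname and both paths are just placeholders, not my actual setup):

```shell
# Level-1 incremental dump (1), update /etc/dumpdates (u),
# write output to the named file (f).
# "backuphost" and both paths are placeholders for your own setup.
ufsdump 1uf backuphost:/backups/sol01/export.dump.1 /export
```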

This worked fine until the rebuild of the backup server; now I'm getting the following error:

DUMP: write: bad file descriptor
DUMP: Write error 0 blocks into volume 1

Other commands work OK, such as tar, rcp, rsh, and so on. It's just ufsdump that's suddenly failing!

I've looked at things like the maximum number of open files (OK) and permissions on the backup server's destination directory (OK), to no avail...

I've even tried googling the error, but no-one seems to have come across this problem, or else they've not mentioned it online!

Can anyone help or suggest anything that might cause this? I'm assuming there's some issue between the ufsdump output and the Ubuntu server's filesystem, but I can't see what :frowning:

Thanks
Dave Parker

I may have found a solution to my problem...

As a test I tried to run "restore" on the ubuntu server, only to be told that it wasn't available!

"click"

I installed the dump package, which brings in "dump" and "restore", and then tried to run a remote dump as originally, and it appeared to work successfully!
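For anyone else hitting this, the fix on the Ubuntu side was just the following (package name as on Ubuntu 10.04; the dpkg check is only to see what actually got installed):

```shell
# On the Ubuntu backup server:
sudo apt-get install dump

# Confirm which binaries the package provided:
dpkg -L dump | grep -E '(restore|rmt)'
```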

So what does the dump package provide that a remote ufsdump needs? My best guess: ufsdump's remote mode doesn't write to the destination filesystem directly. Instead it rsh's to the backup server and drives the rmt (remote magtape) helper there, and on Ubuntu I believe rmt is shipped as part of the dump package. With no rmt binary at the far end, the very first write would fail, which would explain the "bad file descriptor" error.

Who knows, but it seems to work now :b: