UFS on Solaris 10, compatible with Solaris 8?

We have a server running Solaris 10 (SPARC).
This server is attached to a SAN (HP EVA).

We created 23 LUNs and filled them with data.

Then we unmounted them and tried to attach the LUNs to a Solaris 8 system.

This is where things get strange...

When we just mount the LUNs it works fine; the data all looks good.
After that we updated the vfstab and rebooted.

At boot the OS had issues with the superblock.

See the errors below.
When we fsck the file system it repairs the superblock and marks it as clean.

We still have the same problem at the next reboot.

Errors:

/dev/rdsk/c3t600508B400010AC30001200000200000d0s2: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
/dev/rdsk/c3t600508B400010AC30001200000380000d0s2: BAD SUPERBLOCK AT BLOCK 16: BAD VALUES IN SUPER BLOCK
/dev/rdsk/c3t600508B400010AC30001200000380000d0s2: USE AN ALTERNATE SUPERBLOCK TO SUPPLY NEEDED INFORMATION;
/dev/rdsk/c3t600508B400010AC30001200000380000d0s2: e.g. fsck [-F ufs] -o b=# [special ...]
/dev/rdsk/c3t600508B400010AC30001200000380000d0s2: where # is the alternate super block. SEE fsck_ufs(1M).

/dev/rdsk/c3t600508B400010AC30001200000380000d0s2: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
/dev/rdsk/c3t600508B400010AC30001200000440000d0s2: BAD SUPERBLOCK AT BLOCK 16: BAD VALUES IN SUPER BLOCK
/dev/rdsk/c3t600508B400010AC30001200000440000d0s2: USE AN ALTERNATE SUPERBLOCK TO SUPPLY NEEDED INFORMATION;
/dev/rdsk/c3t600508B400010AC30001200000440000d0s2: e.g. fsck [-F ufs] -o b=# [special ...]
/dev/rdsk/c3t600508B400010AC30001200000440000d0s2: where # is the alternate super block. SEE fsck_ufs(1M).
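For reference, the repair that the error message is asking for looks like the sketch below (the device name is taken from the errors above; 32 is the usual first alternate superblock on UFS, but run `newfs -N` on the raw device first to list the real alternates without touching the disk):

```shell
# List the alternate superblock locations without modifying anything
newfs -N /dev/rdsk/c3t600508B400010AC30001200000380000d0s2

# Then repair using one of the listed alternates (32 is typical)
fsck -F ufs -o b=32 /dev/rdsk/c3t600508B400010AC30001200000380000d0s2
```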

Any thoughts?

If you are going to have to share this filesystem between a Solaris 8 and a Solaris 10 host then you will probably need to use something like VxFS, but the cheaper solution would be to make both hosts Solaris 10.

We are going to have the file systems on the Solaris 8 system for ever more.

The only need for Solaris 10 was as a temporary staging area to get 500GB of data into the LUNs.

We are now thinking about setting up a Solaris 8 system as a staging area for the few days we need to load the data into the LUNs.

I wonder if the LUNs are not being unmounted cleanly on reboot; perhaps some processes were still using them. I also wonder why s2 is being referenced: are you mounting the whole disk?
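One way to check the "processes still using them" theory before unmounting is `fuser` (the mount point below is an example):

```shell
# Show PIDs (and owning users) with files open on the mounted filesystem
fuser -cu /mnt/lun01

# If anything is listed, stop those processes (fuser -ck kills them),
# then unmount cleanly
umount /mnt/lun01
```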

Yes, we have decided to just use the full disk, so we are just using s2.

You can't use s2 for a filesystem! s2 is the whole disk.

I have never used s2 as a file system before, and the idea did seem a bit odd to me...

I did voice concerns about this, but another Unix admin I work with said that they do this all the time and "it's all good".

It is a bad and risky practice; better to create another slice, say s0, and grant it all the available space. That said, on SPARC, newfs on slice 2 should be harmless assuming no other slices are used.
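A sketch of the safer layout described above (device names are examples; the partitioning step in `format` is interactive, so it is shown as a comment):

```shell
# See the geometry of the backup slice (s2 spans the whole disk)
prtvtoc /dev/rdsk/c3t0d0s2

# In format(1M): partition -> 0 -> give slice 0 the full usable size -> label
format c3t0d0

# Build the filesystem on s0 rather than on the backup slice
newfs /dev/rdsk/c3t0d0s0
```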

I have also used slice 2 in order to use the whole of a Solaris disk; it is fine as long as no other slice is in use!
What you don't do is use slice 2 for a raw volume, e.g. for a database. When using a raw volume you avoid cylinder 0, otherwise you will overwrite the disk label, not good!

In any case, Solaris interface compatibility is expected when the target version is higher, but it is not guaranteed the other way around. I'm not aware of specific UFS changes between Solaris 8 and 10 apart from logging becoming the default, but I would have created the filesystem on Solaris 8 instead of 10 just to be sure.
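If the default UFS logging is indeed the culprit, one thing worth trying (a hedged sketch, device and mount point are examples) is to roll and disable the log on the Solaris 10 side before handing the LUNs over, so Solaris 8 sees a plain clean superblock:

```shell
# On the Solaris 10 host, before detaching the LUN:
# remount without logging so the log is rolled off the filesystem
mount -F ufs -o nologging /dev/dsk/c3t0d0s0 /mnt/lun01
umount /mnt/lun01

# Verify the filesystem now checks clean
fsck -F ufs /dev/rdsk/c3t0d0s0
```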