Accessing files on a Sun Storage J4200 array

Hi.

I'm maintaining a small network of Solaris/Linux clients with an NFS/NIS server running Solaris 10. The server is a SPARC Enterprise M4000 sharing NFS filesystems that the clients mount. The server's hard drives are in a Sun Storage J4200 array of 12 disks. I don't have details on how the array is configured (how the disks are used, whether there's RAID, etc.).

Now the problem is that the server broke (XSCF unit failure) and I need to access the files on the J4200 storage array. Is there any way I can do this? What would the steps be? Or are the files lost, since I don't have details on the array configuration?

(...the files have been backed up to a simple Linux-based backup client...but since we had a major malfunction, the backup client has also failed)

Thanks for any help,
Jh.

If the XSCF board has gone down you should be able to replace that. You'll need to power everything down to do it though.

The data on the storage array should be fine provided you don't reformat or try to write to it in any way before you reconnect it to the Sun box or replacement Sun box.

The array's configuration will be stored on the J4200 disks themselves.

What version of Solaris was it running (I assume it was running Solaris)?

Unfortunately, I don't have a replacement XSCF right now. I had a spare a couple of months ago, but had to allocate it elsewhere, so that's not a quick option for accessing the files. I also don't have info on which version of Solaris 10 was running on the server.

What I'm trying now is to install the SAS controller (PCI-E) card in another Solaris machine, the idea being to access the disk array from there. That machine is a Netra T5220 running Solaris 10 (SunOS Release 5.10 Version Generic_147147-26 64-bit).

Command "format" gives output :

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>  M1rootdg
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c0t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>  O1rootdg
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c3t7d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@7,0
       3. c3t8d0 <drive type unknown>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@8,0
       4. c3t9d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@9,0
       5. c3t10d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@a,0
       6. c3t11d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@b,0
       7. c3t12d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@c,0
       8. c3t13d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@d,0
       9. c3t14d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@e,0
      10. c3t15d0 <drive type unknown>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@f,0
      11. c3t16d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@10,0
      12. c3t17d0 <drive type unknown>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@11,0
      13. c3t18d0 <SEAGATE-ST314655SSUN146G-0892-136.73GB>
          /pci@0/pci@0/pci@8/pci@0/pci@9/LSILogic,sas@0/sd@12,0

Those "sas" disks are the array disks.
Is it now a simple task of mounting one of them to check the content, like c3t16d0 for instance?

No harm in trying it. The issue might be that the device nodes, e.g. c3t16d0s1, will be missing on that Solaris configuration.

Can you see the VTOCs using 'format': select the disk, then 'p' and 'p' again?
Then simply quit out. Do NOT write anything.
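
Spelled out, the read-only sequence in format is just this (the disk number comes from your earlier listing, where 11 is c3t16d0):

# format
(select the array disk by its number, e.g. 11 for c3t16d0)
format> partition
partition> print
partition> quit
format> quit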

If missing device nodes stop them from mounting, we can create them.
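
A sketch of how, using standard Solaris 10 tools (devfsadm only rebuilds the /dev links, it doesn't touch the data on the array):

# devfsadm -c disk
# ls -l /dev/dsk/c3t16d0s*

If the slice nodes still don't appear after that, a reconfiguration reboot (reboot -- -r) does the same job the heavy-handed way.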


Your answers are highly appreciated!

The partition printout for c3t16d0 looks like this:

Current partition table (original):
Total disk sectors available: 286722911 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256      136.72GB          286722911
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  8   reserved    wm         286722912        8.00MB          286739295

partition>

The prtvtoc printout:

# prtvtoc /dev/dsk/c3t16d0
* /dev/dsk/c3t16d0 partition map
*
* Dimensions:
*     512 bytes/sector
* 286739329 sectors
* 286739262 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 286722656 286722911
       8     11    00  286722912     16384 286739295
#

And fstyp for partition 0 shows it's ZFS:

# fstyp /dev/dsk/c3t16d0s0
zfs

I'm hoping the data on the disks is accessible and all is not lost. I've never worked with ZFS mounts, though. When I installed Solaris 10 on the T5220 I'm using, I chose UFS.

Good so far.

Create a mount point, e.g.,

# cd /
# mkdir mymount
# zfs set mountpoint=legacy mymount

When you try to mount, e.g.,

# mount -F zfs /dev/dsk/c3t16d0s0 /mymount

you may get an error like "/dev/dsk/c3t16d0s0 does not exist", since this system hasn't seen these disks before.

That gives me the following error:

# cd /
# mkdir mymount
# zfs set mountpoint=legacy mymount
cannot open 'mymount': dataset does not exist
#

I'm assuming this is because I installed Solaris 10 with the UFS option? Would I need to have a ZFS filesystem created first? Would it make sense to do a clean Solaris 10 re-install and select ZFS as the filesystem?

Sorry, that was wrong.

Reference this:
https://docs.oracle.com/cd/E19253-01/819-5461/gaztn/index.html


Yes, I will have a look at the Oracle documentation to understand the ZFS filesystem. Thanks a lot for the answers so far!

No, you shouldn't have to reinstall Solaris to get it to understand ZFS filesystems. Solaris 10 understands both UFS and ZFS.

If the underlying disks are untouched and visible to the operating system, a simple

zpool import

should be all the magic required.

The system will rescan the devices and import the pool if possible.
It will also report any errors, or the numeric zpool identifiers you can use to force the import if needed.
Be sure to read that output.
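
A minimal sketch of that sequence, where the pool name "tank" and the numeric ID are only placeholders for whatever the listing actually reports:

# zpool import                  # just lists importable pools found on the attached disks; changes nothing
# zpool import tank             # import the pool by the name shown in the listing
# zpool import -f 1234567890    # or by its numeric ID; -f only if it complains the pool was last in use on another host
# zfs list                      # shows the datasets and their mount points once the pool is imported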

Hope that helps
Regards
Peasant.


I had one additional HW failure and could not see any of the array disks with the "format" command. I had to replace the SAS controller, after which I was able to import the pools and can now access the files.

Thank you!
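
(In case it helps anyone hitting the same situation later: zpool status, zpool list and zfs list are read-only and will show what actually came back after the import.)

# zpool status -x    # reports whether any imported pool has problems
# zpool list         # per-pool capacity and health summary
# zfs list           # datasets and where they are mounted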

Phew! Well done! Close run thing that.

I'm not trying to insult your intelligence, but it's time to urgently back up the whole thing.

You're right. This has been a "very low probability, very high impact" risk that just hasn't been mitigated. This was maybe too close a call.

I'm actually now setting up a repository for the files from the old server. We're moving the whole system to a standalone, client-based setup, with the critical work files kept under version control in a data center. In any case, I need the files from the old server...which I now seem to have 🙂