13-disk raidz2 pool lost

Hi guys, I'd appreciate any help with this; we have lost sensitive company data.

One box with 2 disks mirrored and a 3ware controller handling 13 disks in a raidz2 pool. Suddenly the box restarted and sat at "Reading ZFS config" for hours.

Unplugging disks one by one, we isolated the disk that was preventing the system from restarting, and we ran 'zpool clear -F' as suggested by the 'zpool status' command. After hours of processing we got a console error from the controller and the system hung, so we decided to replace that disk, which took the pool from DEGRADED to FAULTED. After one 'zpool clear' the pool was DEGRADED again, but with no access to the data, so we tried to roll back to the previous disks. (We never committed a 'zpool replace'.)
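
For reference, this is roughly what we ran (pool name taken from the 'zpool import' output below; 'zpool status' itself suggested the -F recovery flag):

    # Check pool health and see the suggested action
    zpool status -v zsan08rz2

    # Recovery-mode clear: attempts to discard the last few
    # transactions to return the pool to a usable state
    zpool clear -F zsan08rz2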

The box kept restarting, freezing, and failing to boot, so we decided to plug the original 13 disks into another box with the same hardware.

Now we are trying to import the pool there. After hours of processing and heavy disk activity, the box hangs and the import doesn't succeed. This is the output of the 'zpool import' command:

state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        zsan08rz2     DEGRADED
          raidz2-0    DEGRADED
            c10t2d0   FAULTED  corrupted data
            c10t2d0   ONLINE
            c10t5d0   ONLINE
            c10t9d0   ONLINE
            c10t0d0   ONLINE
            c10t1d0   ONLINE
            c10t4d0   ONLINE
            c10t8d0   ONLINE
            c10t12d0  ONLINE
            c10t11d0  ONLINE
            c10t3d0   ONLINE
            c10t7d0   ONLINE
            c10t6d0   ONLINE

Any ideas? Note that c10t2d0 appears twice, and note that during the last import attempt we got this error from the controller on the console:

zsan08 tw: WARNING: tw0: tw_aen_task AEN 0x000a Drive error detected unit=7 port=13

That drive seems to be different from c10t2d0.
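
In case it helps diagnose this, we were thinking of cross-checking drive serial numbers along these lines (assuming the 3ware tw_cli utility is available for this platform; the controller number /c0 is a guess):

    # Solaris: per-device error counters plus vendor/serial info
    iostat -En

    # 3ware CLI: list units and ports on the first controller
    tw_cli /c0 show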

Suggestions? Thanks!

Just quick-studying this product, but it seems it should recover on its own if you place a labeled empty volume where the bad volume was: https://blogs.oracle.com/partnertech/entry/a_hands_on_introduction_to1
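
If I'm reading that right, the idea would be something like this once the pool is imported (I haven't tried this myself; the one-argument form assumes the new disk sits in the same slot):

    # Tell ZFS to rebuild onto the fresh disk in place of the bad one
    zpool replace zsan08rz2 c10t2d0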

How up-to-date is your Solaris system? Do you have the "zdb" utility?

Just Google "zfs zdb".

For a good laugh, read the man page.
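
As a starting point, something like this should at least tell you what the on-disk labels say (the s0 slice path is a guess; adjust to your device naming):

    # Dump the four ZFS labels from one of the pool disks
    zdb -l /dev/dsk/c10t2d0s0

    # Examine the pool's on-disk state without importing it
    zdb -e zsan08rz2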

Has Oracle/Solaris support been able to help? You might be missing a patch, or have hit a bug they need to fix.
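
Also, if your release is new enough to support them, a dry-run recovery import (and a read-only import, where available) would be safer than anything destructive:

    # -f: pool was last accessed by another system
    # -F: recovery mode; -n: dry run (reports whether recovery is
    #     possible without actually rewinding any transactions)
    zpool import -fFn zsan08rz2

    # Read-only import, if your ZFS version supports it
    zpool import -f -o readonly=on zsan08rz2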