How to reattach a mirror?

OK, I upgraded to the latest version of Solaris 10. Perhaps 'upgrade' isn't the right term, because I reinstalled the root/boot drive with Solaris 10. Prior to this I had 4 physical drives: the first two held "/" and "/usr", the other two held /var/audit and /export/home. I initially booted from cdrom and reinstalled using a flash archive. I ended up with what I had before; when I did a more on /etc/release it showed the same version. So the reinstall broke all the mirrors. I can mount the old '/' and old '/usr' as single drives. What I want to do is recover /var/audit and /export/home. I inherited this system, but this is how the two were set up before. I believe the state databases are on slice 7. How do I recover them? The old vfstab entries were:

/dev/md/dsk/d30 /dev/md/rdsk/d30 /var/audit ufs 2 yes nosuid
/dev/md/dsk/d31 /dev/md/rdsk/d31 /export/home ufs 2 yes nosuid

Does this help: Chapter 10, "RAID-1 (Mirror) Volumes (Tasks)", in the Solaris Volume Manager Administration Guide?

There are several optional volume managers and file systems on Solaris that can mirror, so you may need a product-specific procedure if these are not simple mirrors. The /dev/md/dsk paths in your vfstab do suggest Solaris Volume Manager metadevices, though.
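If they are plain SVM mirrors, a quick sanity check (a sketch; both commands will complain if the state database replicas are gone) would be:

# metadb          (list the state database replicas)
# metastat -p     (dump the volume configuration in md.tab format)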

The problem I have (and I did look at what you offered yesterday) is that since reloading the OS there are no state databases, and I don't know how to recover them:

# metadb
metadb: there are no existing databases

---------- Post updated at 03:36 PM ---------- Previous update was at 03:34 PM ----------

I do know that the databases are on slice 7 of the disks; I just don't know how to recover them.

---------- Post updated at 04:45 PM ---------- Previous update was at 03:36 PM ----------

OK, I figured it out. First, remember the databases were on slice 7. I found a note that said you can look for them with:

metadb -a /dev/dsk/c1t2d0s7

Running metastat after that showed the old submirrors:
d40: Submirror of d30
    State: Needs maintenance
    Invoke: metasync d30
    Size: 49160256 blocks (23 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t2d0s0          0     No            Okay   Yes


d50: Submirror of d30
    State: Needs maintenance
    Invoke: metasync d30
    Size: 49160256 blocks (23 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t3d0s0          0     No            Okay   Yes

That told me the databases were there.
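Presumably the same step was needed for the second data disk as well (the replicas on c1t3d0s7 show up in the metadb -i output further down):

metadb -a /dev/dsk/c1t3d0s7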

Next I ran metasync on each mirror:

metasync d30
metasync d31

Some of the mirrors were gone because they had submirrors on what is now the root disk, but after running metasync I was able to mount the old mirrors.
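For anyone following along: with the vfstab entries from the first post, the mounts would look something like this (a sketch; it assumes the mount points still exist):

mount -F ufs -o nosuid /dev/md/dsk/d30 /var/audit
mount -F ufs -o nosuid /dev/md/dsk/d31 /export/home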

So, you got back to normal? Great! Fancy file systems can be a pain, which is what sells SANs and storage appliances. :smiley:

The last remaining item is to remove the databases that were associated with the root drive. It is no longer mirrored, but the state databases remain.

What is the output of

# metadb -i

?

# metadb -i
        flags           first blk       block count
    M  m  p  lu         16              unknown         /dev/dsk/c1t0d0s7
    M     p  lu         8208            unknown         /dev/dsk/c1t0d0s7
    M W   p  l          16              unknown         /dev/dsk/c1t1d0s7
    M W   p  l          8208            unknown         /dev/dsk/c1t1d0s7
     a m   c luo        16              8192            /dev/dsk/c1t2d0s7
     a     c luo        8208            8192            /dev/dsk/c1t2d0s7
     a     c luo        16              8192            /dev/dsk/c1t3d0s7
     a     c luo        8208            8192            /dev/dsk/c1t3d0s7
     a     c luo        16              8192            /dev/dsk/c1t1d0s2
 r - replica does not have device relocation information
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/lvm/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors
 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors

I understand that c1t0 was / and c1t1 was /usr.
You can delete the corrupted state databases from c1t0d0s7 and c1t1d0s7 with

# metadb -d c1t0d0s7
# metadb -d c1t1d0s7
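A metadb -i afterwards should then list only the active replicas (the 'a'-flagged ones on c1t2d0s7, c1t3d0s7, and c1t1d0s2):

# metadb -i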

Did you create an additional state database replica on c1t1d0s2?
Is c1t1d0 EFI or SMI labeled?
What is the output of

# prtvtoc /dev/rdsk/c1t1d0s2

?
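For what it's worth: on an SMI label, slice 2 is conventionally the backup slice covering the whole disk, so a replica on c1t1d0s2 overlaps whatever lives on c1t1d0s7. If that replica turns out to be accidental, it can be dropped the same way (a sketch; check the prtvtoc output first to confirm nothing else uses that slice):

# metadb -d c1t1d0s2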