insufficient metadevice database replicas ERROR

First I would like to thank this forum for assisting me in setting up my first Sun box.
Could not have done it if it had not been for you guys and Google :smiley:

I have mirrored my box and have successfully tested booting from both the rootdisk and the rootmirror.
I am now looking at configuring the system to automatically boot from the rootmirror when the rootdisk is unavailable/offline.
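
(For reference, the relevant OBP setting shows up in my eeprom dump below; from Solaris it can be set with something like this:)

# eeprom boot-device="rootdisk rootmirror"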

I initially received "md: d10 unavailable" messages, which I believe meant the root device
needed the "nologging" option in my /etc/vfstab file.
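
(If it helps anyone, the root entry in /etc/vfstab should then look roughly like this; d30 is my root metadevice, and nologging goes in the mount-options column:)

/dev/md/dsk/d30   /dev/md/rdsk/d30   /   ufs   1   no   nologging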

Unfortunately, I am now receiving "insufficient metadevice" messages:

Insufficient metadevice database replicas located.

Use metadb to delete databases which are broken.
Ignore any "Read-only file system" error messages.
Reboot the system when finished to reload the metadevice database.
After reboot, repair any broken database replicas which were deleted.

Type control-d to proceed with normal startup

Is there a way to configure the box to automatically boot off the alternate?
Or do I have to DELETE the primary's replicas from the metadb before being able to bring the system back ONLINE after a failed drive?
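
(If it is the latter, I am guessing the cleanup after a failed rootdisk would look something like this from the single-user shell, with slice names from my layout below:)

# metadb -d /dev/dsk/c1t0d0s7    <- drop the replicas on the failed disk
# reboot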

thanks,
manny

my eeprom contains the following:

test-args: data not available.
diag-passes=1
local-mac-address?=false
fcode-debug?=false
scsi-initiator-id=7
oem-logo: data not available.
oem-logo?=false
oem-banner: data not available.
oem-banner?=false
ansi-terminal?=true
screen-#columns=80
screen-#rows=34
ttyb-rts-dtr-off=false
ttyb-ignore-cd=true
ttya-rts-dtr-off=false
ttya-ignore-cd=true
ttyb-mode=9600,8,n,1,-
ttya-mode=9600,8,n,1,-
output-device=screen
input-device=keyboard
auto-boot-on-error?=true
error-reset-recovery=sync
load-base=16384
auto-boot?=true
boot-command=boot
diag-file: data not available.
diag-device=rootdisk rootmirror
boot-file: data not available.
boot-device=rootdisk rootmirror
use-nvramrc?=true
nvramrc=devalias rootdisk /pci@1f,700000/scsi@2/disk@0,0
devalias rootmirror /pci@1f,700000/scsi@2/disk@1,0
security-mode=none
security-password: data not available.
security-#badlogins=0
verbosity=normal
diag-trigger=error-reset power-on-reset
service-mode?=false
diag-script=normal
diag-level=max
diag-switch?=false
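
(For anyone reproducing the rootdisk/rootmirror aliases in nvramrc above, one way to create them is nvalias at the ok prompt, with the device paths taken from show-disks:)

ok nvalias rootdisk /pci@1f,700000/scsi@2/disk@0,0
ok nvalias rootmirror /pci@1f,700000/scsi@2/disk@1,0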

my metadb contains the following:
# metadb -i
        flags           first blk       block count
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
     a m  p  luo        16              8192            /dev/dsk/c1t1d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t1d0s7

When I created my DiskSuite database replicas I used the following command:
metadb -a -f -c2 /dev/dsk/c1t0d0s7 /dev/dsk/c1t1d0s7
(-a adds replicas, -f forces creation of the initial state database, and -c2 puts two copies on each listed slice, which is where the four replicas above come from.)

My drive slices are set up as follows:

# df -h | grep dsk
/dev/md/dsk/d30        14G   4.6G   9.6G   33%   /
/dev/md/dsk/d31       5.8G   539M   5.2G   10%   /var
/dev/md/dsk/d34       4.8G   4.9M   4.8G    1%   /tmp
/dev/md/dsk/d32        14G    15M    14G    1%   /apps
/dev/md/dsk/d35        15G    16M    15G    1%   /dbaexports
/dev/md/dsk/d33       6.3G   8.0M   6.2G    1%   /home
# swap -l
swapfile              dev  swaplo   blocks     free
/dev/md/dsk/d36     85,36      16 10247216 10247216

Note the last two lines.

See the DiskSuite User's Guide.

If you only have two drives, it won't matter how many state database replicas you have: lose one drive and you can never have "one more than half the total state database replicas". In your case that is four replicas, two per disk; losing either disk leaves two of four, which is exactly half, not a majority, so the system will not boot unattended.
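
(A third disk would solve it. Something like the line below, with c2t0d0s7 as a hypothetical third slice, would give six replicas, so losing any one disk still leaves four, one more than half:)

# metadb -a -c2 /dev/dsk/c2t0d0s7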

thank you sir...
two drives = no automated solution on drive failure
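
(For the archives: one exception I have seen mentioned is the md:mirrored_root_flag tunable, which reportedly lets a two-disk mirrored root boot with exactly half the replicas. Sun documents it as unsupported, so check the docs before relying on it:)

* /etc/system -- unsupported workaround for two-disk replica quorum
set md:mirrored_root_flag=1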