Solaris Cluster Device Problem

I built a two-node cluster (node1, node2) in VirtualBox. For these two nodes I added 5 shared disks, attached roughly as in the sketch after the list below (each node also has its own OS disk):

1 shared disk for vtoc
2 shared disks for the NFS resource group
2 shared disks for the WEB resource group
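
For reference, this is roughly how a disk can be made shareable and attached to both VMs with VBoxManage (the file name, size, ports and controller name here are only examples, not my exact values):

# create a fixed-size image; shared disks must not be dynamically allocated
VBoxManage createhd --filename /vbox/shared1.vdi --size 1024 --format VDI --variant Fixed

# mark the image shareable so both VMs can attach it
VBoxManage modifyhd /vbox/shared1.vdi --type shareable

# attach it to the same controller/port on both nodes
# ("SATA" stands for whatever the storage controller is called in the VM settings)
VBoxManage storageattach node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vbox/shared1.vdi
VBoxManage storageattach node2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vbox/shared1.vdi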

When I finished my work, both nodes were OK and the shared disks were working correctly. After a cluster shutdown and restart, the shared disk status is now:

root@node1:/> cldev status

=== Cluster DID Devices ===

Device Instance               Node              Status
---------------               ----              ------
/dev/did/rdsk/d1              node1             Ok
                              node2             Ok

/dev/did/rdsk/d3              node2             Ok

/dev/did/rdsk/d4              node1             Ok
                              node2             Ok

/dev/did/rdsk/d5              node1             Ok
                              node2             Ok

/dev/did/rdsk/d7              node1             Ok

root@node1:/> cldev show

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                                node1:/dev/rdsk/c1t0d0
  Full Device Path:                                node2:/dev/rdsk/c1t1d0
  Full Device Path:                                node1:/dev/rdsk/c1t1d0
  Full Device Path:                                node2:/dev/rdsk/c1t0d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                                node2:/dev/rdsk/c0t0d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                                node1:/dev/rdsk/c1t4d0
  Full Device Path:                                node2:/dev/rdsk/c1t4d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d5
  Full Device Path:                                node2:/dev/rdsk/c1t3d0
  Full Device Path:                                node1:/dev/rdsk/c1t3d0
  Full Device Path:                                node2:/dev/rdsk/c1t2d0
  Full Device Path:                                node1:/dev/rdsk/c1t2d0
  Replication:                                     none
  default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d7
  Full Device Path:                                node1:/dev/rdsk/c0t0d0
  Replication:                                     none
  default_fencing:                                 global

root@node1:/> cldev list -v
DID Device          Full Device Path
----------          ----------------
d1                  node2:/dev/rdsk/c1t0d0
d1                  node1:/dev/rdsk/c1t1d0
d1                  node2:/dev/rdsk/c1t1d0
d1                  node1:/dev/rdsk/c1t0d0
d3                  node2:/dev/rdsk/c0t0d0
d4                  node2:/dev/rdsk/c1t4d0
d4                  node1:/dev/rdsk/c1t4d0
d5                  node1:/dev/rdsk/c1t2d0
d5                  node2:/dev/rdsk/c1t2d0
d5                  node1:/dev/rdsk/c1t3d0
d5                  node2:/dev/rdsk/c1t3d0
d7                  node1:/dev/rdsk/c0t0d0

The cluster resource groups (NFS and WEB) are offline now. I can't figure out how the d1 and d5 shared disks ended up like this. They should be separate instances, like d1/d2 and d5/d6, but each pair is now combined into a single DID device. Can you help me solve this problem?
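
The only DID maintenance commands I have found so far are the ones below (from the cldevice and scdidadm man pages), but I am not sure whether they would untangle the combined instances:

scdidadm -L           # list all DID mappings
cldevice clear        # remove DID instances that no longer map to an attached device
cldevice refresh      # rescan the device trees on all nodes
cldevice populate     # rebuild the global-devices namespace and assign DID instances
cldevice status       # check the result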

Have you worked through the whitepaper?

https://blogs.oracle.com/TF/resource/WhitePaper/Whitepaper-PracticingSolarisClusterUsingVirtualBox-extern.pdf

I didn't use that one. I used "Setup a Oracle Solaris Cluster on Solar...alBox (part 1) _ Benjamin Allot's Blog" and the "Oracle Solaris Cluster Administration Activity Guide". But the document you suggested is better than those, so I will read it for a solution. If I can figure out whether I did something wrong, I'll write it down here. Thanks for your reply.

The most important part is to use the iSCSI protocol instead of shared VirtualBox devices.

Get a box to act as an iSCSI target, while your cluster nodes are the initiators.
This is your disk subsystem, from which you will make failover zpools or metasets.
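
As a rough sketch only (the address, pool and LUN names are examples, and the exact commands depend on your release), the Solaris 10 style target and the initiator side look roughly like this:

# --- on the target box (Solaris 10 style iscsitadm; Solaris 11 uses COMSTAR/itadm instead) ---
zfs create -V 2g tank/nfslun0                      # one zvol per shared LUN, in an existing pool "tank"
svcadm enable iscsitgt                             # Solaris 10 iSCSI target service
iscsitadm create target -b /dev/zvol/rdsk/tank/nfslun0 nfslun0

# --- on each cluster node (initiator) ---
# enable the iSCSI initiator service for your release first (the FMRI differs between releases)
iscsiadm add discovery-address 192.168.56.100      # IP of the target box (example address)
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                                  # create device nodes for the discovered LUNs
cldevice populate                                  # let the cluster assign DID instances to them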

Also, the document is outdated for current releases; you might want to keep that in mind.

Once you have the iSCSI setup done, you can just follow the regular documentation for your release, keeping in mind any iSCSI-specific notes where they exist.

Hope that helps
Best regards
Peasant.

Thank you for your reply, Peasant.

As for the document, it suits me well, because I am also using old versions of the cluster software and Solaris (Cluster 3.3 and Solaris 10) in my environment.

I used Solaris Volume Manager for the file systems. I'm guessing that this issue most probably came from that, but I want to be sure. DukeNuke2's document uses ZFS instead, so now I'll start over and try ZFS. I'm wondering whether I will hit the same problem with ZFS or not.
I used SVM because SVM supports the global file system; now I'll try ZFS.
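
My understanding of the ZFS approach is that a failover zpool gets handed to the resource group through SUNW.HAStoragePlus, roughly like this (the pool, group and resource names are only placeholders, and c1t2d0 is just one of my shared disks as an example):

# create the pool on one of the shared disks
zpool create nfspool c1t2d0

# register HAStoragePlus and let it manage the pool inside the resource group
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create nfs-rg
clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=nfspool nfspool-rs
clresourcegroup online -M nfs-rg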

After that, I'll also look into the iSCSI protocol for the VirtualBox environment.

I'll share my experience here.

Thank you again.