Hi, I'm new to this forum and hoping someone can give me some tips.
We have a new project that requires migrating the existing storage of a Sun clustered host (T8-2s) from a ZFS Storage Appliance to a different brand/hardware (IBM Storwize, I think).
How complex would the task be?
Can we just do a 1-to-1 replication of the LUNs? Assuming that downtime is acceptable.
Only the host's storage would be migrated out; the server itself will be retained.
TBH I'm very new to Sun Cluster, so my apologies if I've used generic terminology. I've been doing support for Oracle/Sun hardware and Solaris, but I haven't gotten into this level yet.
Just to add:
I've been searching for a proper guide on this for quite a while and checked some related KB articles on My Oracle Support, but to no avail. The only guides I found were for moving the existing server to a different machine, which involves a complete reconfiguration.
Welcome to the forum!
If you keep the current server and the ZFS file system, then it is probably only a matter of adding the new LUN devices to the ZFS pool, moving the data (zpool replace), and detaching the old devices.
I don't have much experience with this, but I found an article:
Zone the storage and present the FC LUNs.
You will probably need to enable multipathing (if it was never done before, a reboot might be required for the new storage/LUNs to have multipathing ON in the Solaris OS).
Read the docs specific to your Solaris/storage combination to see whether any specific multipath configuration is required.
Create cluster disks/devices (cldevice populate) from the new FC LUNs so they show up under /dev/did/...
Use those devices via the zpool commands (replace, or attach as a mirror and then detach).
Remove the old devices from the zpool.
Check the configuration and fail over the cluster resources to confirm cluster functionality.
Unmonitor the old cluster devices, remove them, clean up the device files, and unconfigure the old storage (devfsadm, cfgadm, cldevice).
It's a bit of work, covered by multiple Oracle docs - be sure to read the ones for your cluster / OS version.
Take it slow and, if possible, test it in a lab environment first.
Avoid replicating at the storage level, e.g. having the replicated and the original zpool on the same host, as this will break things.
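To make that list a bit more concrete, here is a rough command-level sketch (the pool name "apppool", resource group "app-rg", node name and the OLD/NEW device names are placeholders I made up - and whether your pool sits on cXtYdZ or /dev/did paths depends on how it was originally built, so adapt accordingly):

    stmsboot -e                                  # enable Solaris I/O multipathing (MPxIO) if not already on; may require a reboot
    cfgadm -al                                   # after zoning/LUN mapping, check that the new FC LUNs are visible
    devfsadm                                     # build the device links
    cldevice populate                            # create the DID devices on all nodes
    cldevice list -v                             # note the new DID numbers
    zpool status apppool                         # identify the old devices in the pool
    zpool replace apppool OLD NEW                # move the data; repeat per device and let each resilver finish
    clresourcegroup switch -n othernode app-rg   # fail over to another node to confirm functionality
    cldevice clear                               # after unmapping the old LUNs, clean up stale DID entries
    devfsadm -C                                  # and stale /dev links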
Present the new storage to the host (SAN zoning and multipathing)
Create the cluster disk and add the new LUN/storage (not sure if I got this correctly)
Detach the old LUN/storage from the host
Perform testing and conduct failover tests for functionality
But I've got a couple of questions:
For item 2, once the new LUN has been added and attached, we obviously need to monitor until the rebuild/resync has completed, and only then detach the old LUN. For the quorum part, are there any dependencies we need to be aware of? (Sorry to ask; I'm still grasping how Oracle clusters handle resource groups.) I understand the best practice nowadays is just to use application-based clustering like ASM.
For Solaris clusters, zpool/ZFS-based cluster resources are only available on local disks, not FC. So let's say the client is using SVM or Veritas to manage their volumes - does the procedure stay the same?
First you present the (new) LUN(s) and rescan the fabric with cfgadm; then you populate the cluster devices and proceed with the zpool operations once all nodes see the newly presented LUNs.
When you attach or replace in ZFS, you can watch zpool status and wait for the resilvering to complete.
After that you can detach the old LUN (i.e. break the mirror), or just use zpool replace and wait for the resilver.
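A minimal sketch of the two variants (pool and device names here are made up - check zpool status for the real ones):

    OLD=c2t5000000000000001d0
    NEW=c3t6005076000000000001d0
    zpool attach apppool $OLD $NEW    # mirror the old LUN onto the new one
    zpool status apppool              # wait until resilvering has completed
    zpool detach apppool $OLD         # then break the mirror by dropping the old LUN
    # or, as a single step:
    zpool replace apppool $OLD $NEW   # resilvers onto the new device and drops the old one when done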
As for quorum devices, check whether you have those configured as a device (a disk) or whether a quorum server is in use.
Then follow the docs for adding or replacing a quorum device.
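If a shared disk is used as the quorum device, it has to be moved to the new storage as well; roughly along these lines (the DID names d20/d5 are placeholders):

    clquorum list -v       # see what is currently configured as quorum
    clquorum add d20       # add a quorum device on the new storage
    clquorum remove d5     # remove the one on the old storage
    clquorum status        # verify the vote counts afterwards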
Regarding ASM: I would always suggest running Oracle databases on ASM if possible, for numerous benefits such as striping and the absence of ZFS copy-on-write semantics, which are not so beneficial for databases in the long run - ZFS needs a lot more love and tuning to even get close to ASM performance.
ASM is a piece of cake to migrate: you present the LUNs and add them to ASM - presuming EXTERNAL REDUNDANCY is used, i.e. ASM is not keeping mirrors but trusting the storage for RAID protection.
After it has added the new LUNs and redistributed the data onto them, you remove the old ones.
The only thing to keep in mind is that this generates a lot of disk traffic as it rebalances, but it can be done online (while the database is running).
That traffic volume can affect external storage-level replication to another site or similar technology, but it will not affect Oracle standbys, as those work at the transaction level, not the block (disk) level.
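As an illustration only (the disk group name, disk path and ASM disk name below are assumptions, not from your environment), the online add/drop with a rebalance looks roughly like this:

    sqlplus / as sysasm <<'EOF'
    ALTER DISKGROUP DATA
      ADD DISK '/dev/rdsk/c3t6005076000000000001d0s6'
      DROP DISK DATA_0000
      REBALANCE POWER 4;
    EOF
    # watch V$ASM_OPERATION and only unpresent the old LUN once the rebalance has finished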
For Veritas I can't be of much help; I've never used it.
If SVM is used: I worked with metasets back in my Solaris days.
Of course, things go differently with SVM, as you will be using the meta* commands.
This is a more complex topic than a forum post can cover and will require you to read up on SVM, as it is a beast of its own.
Don't forget the metadb replicas.
ZFS will be less work to migrate than SVM.
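To give you an idea of the SVM route, a very rough sketch, assuming a metaset "appset" with a mirror d10 and submirror d11 on the old storage (all names and DID numbers are placeholders):

    metaset -s appset -a /dev/did/rdsk/d25            # add the new DID device to the diskset
    metainit -s appset d12 1 1 /dev/did/rdsk/d25s0    # build a submirror on the new LUN
    metattach -s appset d10 d12                       # attach it to the mirror and let it sync
    metastat -s appset d10                            # wait for the resync to complete
    metadetach -s appset d10 d11                      # detach the old submirror
    metaclear -s appset d11                           # clear it
    metaset -s appset -d /dev/did/rdsk/d5             # then remove the old disk from the set
    metadb -i                                         # and check the state database replicas on each node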
Hint:
For all these topics you can run a Solaris x86 cluster in a lab or on a laptop under KVM / VirtualBox / VMware Workstation and experiment - this is the most important advice: test if you're not sure, document the steps, then replicate them in the production environment.
If you used whole devices for your zpools (e.g. c2t2d0 rather than a slice like c2t2d0s0), you can import those zpools on both x86 and SPARC systems, as ZFS is endian-aware.
So you could, for instance, clone the data LUNs on the storage side, present those clones to, say, VMware or KVM as raw devices, and import the zpool in a Solaris x86 VM, and vice versa.
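For that last trick, the pool move itself is just an export/import (the pool name is a placeholder; -f is needed because a clone looks like it was last in use on another host):

    zpool export apppool        # on the source system, if you are moving rather than cloning
    zpool import                # on the test VM: scan the presented raw devices for importable pools
    zpool import -f apppool     # import it; add -d <dir> if the devices live outside /dev/dsk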