We are upgrading from sun4u to T4 systems and one proposal is to use LDOMs and also zones within LDOMs.
Someone advised using zones only, without LDOMs, because the new machines have fewer chips: if a chip or a core fails, zones running directly on the hardware are not necessarily affected, whereas the LDOM bound to that hardware goes down.
What's the failure rate / probability of core failure on sun4v?
What are your experiences with the sun4v systems?
Do you avoid using LDOM for this or for any other reason?
Is a system with LDOMs inherently less reliable than one without, especially since all guest LDOMs depend on the primary domain?
If each LDOM is given at least two cores, does that mitigate the total loss of the LDOM if one core fails?
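As an illustration of the "at least two cores per LDOM" idea, whole-core allocation can be done with the `ldm` command on the control domain. This is a sketch only; the domain name `guest1` is a placeholder, and exact subcommands vary by Oracle VM Server for SPARC release:

```shell
# Allocate two whole cores to the guest domain, so losing one core
# still leaves the domain a full core (8 hardware threads on a T4).
ldm set-core 2 guest1

# Verify the core allocation for the domain.
ldm list -o core guest1
```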
I use this exact configuration: LDOMs with zones installed inside them. It really comes down to money and complexity. Money, in the sense that you can use your LDOM configs to limit your licensing exposure through physical resource limits in the LDOM config (you can't do this with zones alone). Complexity, in that it's essentially a double VM, a VM running in a VM, with all the overhead that brings; add mounted storage from a SAN and you can see how it gets complicated very fast.
If you use a resource pool with a processor set bound to specified zones, Oracle recognizes it as a hard partition for software licensing.
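A resource pool with a fixed-size processor set, bound to a zone, might be set up along these lines. The pool, pset, and zone names are examples, and the CPU counts are illustrative:

```shell
# Enable the resource pools facility.
pooladm -e

# Create a processor set pinned at exactly 4 CPUs, and a pool using it.
poolcfg -c 'create pset db-pset (uint pset.min = 4; uint pset.max = 4)'
poolcfg -c 'create pool db-pool'
poolcfg -c 'associate pool db-pool (pset db-pset)'

# Commit the configuration so it takes effect.
pooladm -c

# Bind the zone to the pool.
zonecfg -z dbzone 'set pool=db-pool'
```

With this in place the zone can only ever run on the 4 CPUs in `db-pset`, which is what makes the licensing argument defensible.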
As for LDOMs and high availability, you can use the migration option and/or import the LDOM configuration on another server manually (this requires the root disk to be on FC or iSCSI storage visible to multiple nodes).
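Both approaches can be sketched with `ldm` on the control domain. Domain and host names here are placeholders, and the guest's disks must live on shared FC/iSCSI storage reachable from both hosts:

```shell
# Live-migrate the guest domain to another control domain.
ldm migrate-domain guest1 root@target-host

# Manual alternative: export the domain's constraints on the source...
ldm list-constraints -x guest1 > guest1.xml

# ...copy guest1.xml to the target host, then re-create the domain there:
ldm add-domain -i guest1.xml
ldm bind-domain guest1
ldm start-domain guest1
```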
Another option is to have, for instance, two LDOMs on local disks, with a zone inside each on FC/iSCSI disk.
If one host goes down, you can attach and boot the zone(s) on another LDOM (node).
Since the zone's zpool on FC storage will warn you when it was last active on another node (LDOM), you will not be able to accidentally attach the zone on both LDOMs at the same time.
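The manual failover on the surviving LDOM might look like this sketch (pool and zone names are examples):

```shell
# Import the zone's zpool from shared storage; zpool import refuses
# (without -f) if the pool still looks active on another host.
zpool import appzone-pool

# Attach the zone's configuration on this node; -u updates the zone's
# packages to match the new host if patch levels differ.
zoneadm -z appzone attach -u

# Boot the zone on its new home.
zoneadm -z appzone boot
```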
And of course you can always buy Solaris Cluster, which will do the above for you with HA agents (but that's rather expensive).