VM server (LDOMS) HOWTOs/Examples [Request]

Hi

I have a T4-1, equipped with Solaris 10 & Oracle VM Server for SPARC 2.1 (a default build of 6 guest domains). There are 2 single-port Fibre Channel cards.

(1) How do I make local storage accessible to a guest domain?
I have 6 x 300GB disks and was thinking of creating a ZFS RAIDZ pool from them first, or is that not the correct logical first step?

(2) How do I reconfigure a guest domain for, let's say, 8GB of memory & 300GB of local storage or 300GB of Fibre Channel storage?

(3) Also, an example of configuring the vswitch: with 4 NICs & up to 6 VMs, I'm thinking of bonding two NICs with the remaining NICs as failovers, with production, management & backup traffic all across the same NICs - there is a separate backup network.

(4) How do I monitor these guest domains?

Many thanks in advance for your examples

ms. stevie

Hi
I've read the administration guide, thanks, but am after real-world examples
ms stevie

Take a look at that blog entry then: How to show off live migration with a SPARC system? - c0t0d0s0.org

@bartus11
Thanks, that's great background info

@everyone
More on my questions,
Assume my host is called #t4-1 and there are 6 domains already created by default.

Thanks in advance ms. stevie

I will post a script I used to create a bunch of LDOMs on a T3-1 tomorrow.

The way I do it: I create a master system and take a snapshot. I then use zfs send / recv to duplicate the snapshot to another LDOM. A few quick ldm commands and you can have servers up in a few minutes.


First you need to create a snapshot of your master machine. I loaded the Solaris 10 8/11 (buggy release, but it's the latest) ISO into a folder on the local machine in /DVD/Solaris10/iso/

First create the ZFS datasets that will hold your installation of LDOM1:
zfs create logipool/ldoms
zfs create logipool/ldoms/ldom1
zfs set mountpoint=/ldom1 logipool/ldoms/ldom1

Create the image file that will hold our OS install:
mkfile -n 100g /ldom1/disk0.img
(100GB may seem big, but it leaves you plenty of room to Live Upgrade in the future for patches. We started out doing 25GB, then 50GB, and now the standard is 100GB.)

Note: I also suggest each machine have a physical LUN attached. This way in the future you can make use of the ldm migrate feature.
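
A minimal sketch of what that looks like, assuming a LUN is already visible to the control domain (the device path here is hypothetical):

ldm add-vdsdev /dev/dsk/c3t5000CCA01234D567d0s2 ldom1lun0@primary-vds0
ldm add-vdisk ldom1lun0 ldom1lun0@primary-vds0 ldom1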

Here is the script I use:

ldm add-domain ldom1
ldm set-vcpu 10 ldom1
ldm set-mau 0 ldom1
ldm set-mem 2g ldom1
ldm add-vnet vnet1 primary-vsw0 ldom1
ldm add-vdsdev /ldom1/disk0.img vol200@primary-vds0
ldm add-vdisk vdisk200 vol200@primary-vds0 ldom1

Connect the Solaris 10 ISO to the guest:

ldm add-vdsdev /DVD/Solaris10/iso/sol10u10.iso s10u10iso@primary-vds0
ldm add-vdisk vdisk_iso s10u10iso@primary-vds0 ldom1

Misc final steps:
ldm set-variable auto-boot\?=true ldom1
ldm bind ldom1
ldm start ldom1

At this point you should be able to run an ldm list command and see the console port you can connect to.
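
For example, the guest's console port shows up in the CONS column, and you connect to it from the control domain (the port number here is illustrative):

ldm list
telnet localhost 5000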

Once you telnet to the ldom you will be at the {0} ok prompt. Type boot vdisk_iso and start the install process. You have a lot of options at this point: you can use a flar you have, use a jumpstart server, whatever you fancy. Note the T3/T4 are sun4v arch, so a sun4u image will not work; I suggest creating a new image. Once you have patched and completed all your customizations, you can create a zfs snapshot and clone the machine. Run sys-unconfig to clear any naming/IP config from this master image before your snapshot.
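
Pulling those last steps together (the first line is typed at the guest's OpenBoot prompt; sys-unconfig runs inside the installed guest and halts it when done):

{0} ok boot vdisk_iso
( install, patch, customize )
sys-unconfig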

zfs snapshot logipool/ldoms/ldom1@master

zfs send logipool/ldoms/ldom1@master | zfs recv logipool/ldoms/ldom2

Take the commands from above and create a new machine.

zfs set mountpoint=/ldom2 logipool/ldoms/ldom2

ldm add-domain ldom2
ldm set-vcpu 10 ldom2
ldm set-mau 0 ldom2
ldm set-mem 2g ldom2
ldm add-vnet vnet1 primary-vsw0 ldom2
ldm add-vdsdev /ldom2/disk0.img vol201@primary-vds0
ldm add-vdisk vdisk201 vol201@primary-vds0 ldom2
ldm set-variable auto-boot\?=true ldom2
ldm bind ldom2
ldm start ldom2

I script the whole thing and can have a new machine done in less than 15 minutes.
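
A rough sketch of what that wrapper could look like, assuming the dataset layout and naming from above (the script name and the volume-number argument are my own convention, not part of the original):

#!/bin/sh
# newldom.sh <name> <volnum> - clone the master image and build a new guest
NAME=$1
VOL=$2
zfs send logipool/ldoms/ldom1@master | zfs recv logipool/ldoms/$NAME
zfs set mountpoint=/$NAME logipool/ldoms/$NAME
ldm add-domain $NAME
ldm set-vcpu 10 $NAME
ldm set-mau 0 $NAME
ldm set-mem 2g $NAME
ldm add-vnet vnet1 primary-vsw0 $NAME
ldm add-vdsdev /$NAME/disk0.img vol$VOL@primary-vds0
ldm add-vdisk vdisk$VOL vol$VOL@primary-vds0 $NAME
ldm set-variable auto-boot\?=true $NAME
ldm bind $NAME
ldm start $NAME

Run it as, e.g.: ./newldom.sh ldom3 202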


Storage:

Create your pool of disks; the layout depends on what type of devices you have.

zpool create ldomstor emcpower1c emcpower2c emcpower3c etc.
zfs create ldomstor/ldom1
zfs create -V 300g ldomstor/ldom1/ldom1-stor0
zfs create -V 300g ldomstor/ldom1/ldom1-stor1
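
For the 6 x 300GB internal disks from the original question, a RAIDZ pool is the same idea (the c#t#d# device names are illustrative; check format for yours):

zpool create ldomstor raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0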

ldm add-vdsdev /dev/zvol/dsk/ldomstor/ldom1/ldom1-stor0 ldom1stor0@primary-vds0
ldm add-vdsdev /dev/zvol/dsk/ldomstor/ldom1/ldom1-stor1 ldom1stor1@primary-vds0

ldm add-vdisk ldomstor0 ldom1stor0@primary-vds0 ldom1
ldm add-vdisk ldomstor1 ldom1stor1@primary-vds0 ldom1

Restart ldom1.

Once inside the domain you can see the new devices as disks using the format command. We create a second pool inside the domain for whatever the intended purpose may be.
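
For example, inside the guest (the guest-side device names are illustrative; format will show the real ones):

format
zpool create datapool c0d1 c0d2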

@jlouki

Thanks for that. I did it in a very similar way & even had a go at creating an s9 brandz (branded zone) inside.

jumpstart was fine on 4u

many many thanks

ms. stevie