LDom OS on a SAN-based ZFS volume

Is it possible to use a zvol built on a SAN LUN to install the LDom OS? I'm using the following VDS from my service domain:

VDS
    NAME             LDOM             VOLUME     DEVICE
    primary-vds0     primary          iso        sol-10-u6-ga1-sparc-dvd.iso
                                      cdrom      /data03/sol-10-u6-ga1-sparc-dvd.iso
                                      zvol       /dev/zvol/rdsk/newpool/VSAN/vol2

and when I boot my ldom1 I get the following error:

{0} ok  boot /virtual-devices@100/channel-devices@200/disk@0 -s
Boot device: /virtual-devices@100/channel-devices@200/disk@0  File and args: -s
Bad magic number in disk label
ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label package

ERROR: boot-read fail

Hi. I don't think it's related to the volume coming from ZFS.

For example:

host1# zpool create -f tank1 c2t42d1
host1# zfs create -V 100m tank1/myvol
host1# zfs list
        NAME                   USED  AVAIL  REFER  MOUNTPOINT
        tank1                  100M  43.0G  24.5K  /tank1
        tank1/myvol           22.5K  43.1G  22.5K  -

Configure a service exporting tank1/myvol as a virtual disk:
host1# /opt/SUNWldm/bin/ldm add-vdiskserverdevice /dev/zvol/rdsk/tank1/myvol zvol@primary-vds0

Add the exported disk to a domain (domain2 in this example):

host1# /opt/SUNWldm/bin/ldm add-vdisk vzdisk zvol@primary-vds0 domain2
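
Once the vdisk is added, the remaining steps would presumably be the usual ones for a new guest. A minimal sketch, reusing the vzdisk/domain2 names from the example above (the vdisk name normally shows up as a devalias in the guest OBP):

host1# /opt/SUNWldm/bin/ldm set-variable boot-device=vzdisk domain2
host1# /opt/SUNWldm/bin/ldm bind-domain domain2
host1# /opt/SUNWldm/bin/ldm start-domain domain2

Then boot the guest's install media from its console (e.g. an exported ISO as a second vdisk, as in your setup) and install Solaris onto the new virtual disk.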

Let me see how you created and assigned the ZFS volume to your domain.

Good luck

Yep, I think it's not related to ZFS, as I tried adding a disk slice from an internal disk too, but it gives me the following error:

{0} ok boot cdrom - install
Boot device: /virtual-devices@100/channel-devices@200/disk@2  File and args: - install
SunOS Release 5.10 Version Generic_137137-09 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Configuring devices.
NOTICE: [0] disk access failed.

Do you know what the problem could be? I'm new to LDoms, so I would appreciate it if anyone can help me with this.

fugitive, as I asked above,
please give many more details about your system and how you configured your LDoms.
Otherwise I can't say anything. Judging by the behaviour of the startup process, something is wrong with your configuration, most likely with the assignment of devices to the LDoms.

good luck!

What version of LDM are you running as well?

Details as you asked, Samar:

ldm list-bindings ldom1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldom1            active     -n----  5000    16    8G       0.0%  21m

MAC
    00:14:4f:fb:45:e5

HOSTID
    0x84fb45e5

VCPU
    VID    PID    UTIL STRAND
    0      32     0.3%   100%
    1      33     0.0%   100%
    2      34     0.0%   100%
    3      35     0.0%   100%
    4      36     0.0%   100%
    5      37     0.0%   100%
    6      38     0.0%   100%
    7      39     0.0%   100%
    8      40     1.4%   100%
    9      41     0.0%   100%
    10     42     0.0%   100%
    11     43     0.0%   100%
    12     44     0.0%   100%
    13     45     0.0%   100%
    14     46     0.0%   100%
    15     47     0.0%   100%

MEMORY
    RA               PA               SIZE
    0x8000000        0x408000000      8G

VARIABLES
    autoboot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0

NETWORK
    NAME             SERVICE                     DEVICE     MAC               MODE   PVID VID
    vnet0            primary-vsw0@primary        network@0  00:14:4f:fb:2e:78        1
        PEER                        MAC               MODE   PVID VID
        primary-vsw0@primary        00:14:4f:fb:da:f8        1

DISK
    NAME             VOLUME                      TOUT DEVICE  SERVER         MPGROUP
    iso              iso@primary-vds0                 disk@1  primary
    cdrom            cdrom@primary-vds0               disk@2  primary
    vdisk0           vol0@primary-vds0                disk@0  primary

VCONS
    NAME             SERVICE                     PORT
    ldom1            primary-vcc0@primary        5000

root@essapl020-u006 #

# ldm list-services primary
VCC
    NAME             LDOM             PORT-RANGE
    primary-vcc0     primary          5000-5100

VSW
    NAME             LDOM             MAC               NET-DEV   DEVICE     DEFAULT-VLAN-ID PVID VID                  MODE
    primary-vsw0     primary          00:14:4f:fb:da:f8 e1000g1   switch@0   1               1

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          iso                                            sol-10-u6-ga1-sparc-dvd.iso
                                      cdrom                                          /data03/sol-10-u6-ga1-sparc-dvd.iso
                                      vol0                                           /dev/dsk/c1t1d0s0


If you need more details, let me know. I already tried changing the ZFS vol to an internal disk slice, but I'm still getting:

NOTICE: [0] disk access failed.

Hi fugitive,

I see; things look OK with your guest domain, but not with your control domain.
In LDoms, the control domain can keep using the operating system it is already running (your global OS), but guest domains cannot. That is why you must assign a free disk that is available for a fresh OS installation in each guest domain. In your situation you have assigned /dev/dsk/c1t1d0s0.
By assigning a disk to the primary (control) domain's disk service, you are handing it out to your logical domains, so they need free, available disks for a new OS installation.
Is that disk free and not active??? I suspect it is your system disk :))

Let me see your format command output as well as df -h.

good luck.

root@essapl020-u006 # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0          15G    13G   1.4G    91%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    40G   1.6M    40G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1
                        15G    13G   1.4G    91%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        15G    13G   1.4G    91%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    40G    32K    40G     1%    /tmp
swap                    40G   104K    40G     1%    /var/run
/dev/md/dsk/d2          39G    34G   4.7G    88%    /zones
emcpool3/FMW6/FMW       98G   5.9G    53G    10%    /FMW
newpool/SAR            437G   7.1G   419G     2%    /SAR
emcpool3/swdump         98G    17G    53G    25%    /data03

root@essapl020-u006 # echo | format
Searching for disks...

done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c3t5006016841E0A08Dd0 <DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890>
          /pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w5006016841e0a08d,0
       3. c3t5006016041E0A08Dd0 <DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890>
          /pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w5006016041e0a08d,0
       4. c3t5006016041E0A08Dd1 <DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16>
          /pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w5006016041e0a08d,1
       5. c3t5006016841E0A08Dd1 <DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16>

You have SVM configured. Give me the output of metastat d0.
It's looking more and more like the disk you have given to the LDom is in use :))))

d2 -m d12 1
d12 1 1 c1t0d0s3
d1 -m d11 1
d11 1 1 c1t0d0s1
d0 -m d10 1
d10 1 1 c1t0d0s0
d3 -m d13 1
d13 1 1 c1t0d0s4

And FYI... the disk I'm using is c1t1d0s0 :wink:
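
For what it's worth, a quick way to double-check that nothing on the control domain is still holding c1t1d0 could be something like this (plain Solaris 10 commands, nothing LDoms-specific):

root@essapl020-u006 # swap -l                       # no swap slice on c1t1d0
root@essapl020-u006 # grep c1t1d0 /etc/vfstab       # not mounted at boot
root@essapl020-u006 # metastat -p | grep c1t1d0     # not part of any SVM metadevice
root@essapl020-u006 # zpool status | grep c1t1d0    # not part of any ZFS pool

If none of those return a hit, the slice should genuinely be free.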

OK then, fugitive,
you are saying that the mentioned device is absolutely free and ready for use.
Try relabelling it, that exact disk. Check that it is available at the system level:
try creating a filesystem on it, mounting it, and so on.

One more thing: look at the OBP and see which devices are available there. Is the path of that disk visible at the ok prompt? (Rough sketch of these checks below.)

good luck
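
A rough sketch of those checks, using the device names from your earlier format output (the relabel and newfs steps are destructive, so only run them if the slice really is unused):

root@essapl020-u006 # format c1t1d0            # select the disk, then run 'label' from the format> menu
root@essapl020-u006 # newfs /dev/rdsk/c1t1d0s0
root@essapl020-u006 # mount /dev/dsk/c1t1d0s0 /mnt
root@essapl020-u006 # df -h /mnt
root@essapl020-u006 # umount /mnt

And from the guest console, at the ok prompt, confirm that the virtual disk path (/virtual-devices@100/channel-devices@200/disk@0) actually shows up:

{0} ok show-disks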

You need to look at your LDM version as well. Older versions do not support guest domains on slices.

# ldm -V

Logical Domain Manager (v 1.1)
Hypervisor control protocol v 1.3
Using Hypervisor MD v 0.1

System PROM:
    Hypervisor      v. 1.7.2.       @(#)Hypervisor 1.7.2.a 2009/05/05 19:32
    OpenBoot        v. 4.30.2       @(#)OBP 4.30.2 2009/04/21 09:28

fugitive,
show me the output of the show-disks command under OBP.

Samar, the issue was resolved after I assigned the complete disk (the same disk) to the LDom. I'm starting another thread for a related issue with the same LDom; hope you can help me there too. Thanks for your efforts and time. :o

I will refer you to Logical Domains for CoolThreads Servers - zfs volumes broken post os patching - spec_getpage called for character...

There you will find that when exporting ZFS volumes, you need to have them "sliced", e.g.
# ldm set-vdsdev options=slice vol2@primary-vds0

Details:
By default, ZFS volumes are now exported as full disks instead of as single-slice disks.
This causes the problem you describe if you were exporting a raw ZFS volume.
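
If that is what happened here, the fix would presumably be to re-export the zvol from the first post as a single-slice disk, along these lines (volume and domain names taken from earlier in the thread; the guest may need to be stopped first since the disk is in use):

# ldm stop-domain ldom1
# ldm set-vdsdev options=slice zvol@primary-vds0
# ldm list-services primary        # the OPTIONS column should now show slice
# ldm start-domain ldom1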