Network of virtual machine not working

Hello,

I have a problem.
I have a SPARC T3-1 server with Solaris 11 as the base system, and the server itself is working well. I then created a virtual machine, also running Solaris 11, but now the network in my virtual machine is not working. It says the network I created is unknown, and nothing works, not even ping on my network. Should I declare something on my Solaris base system, or do something else in my virtual machine to make it work?

Can someone help me please?

How did you create the virtual machine? We need exact steps.

Is it an LDom, a zone, or VirtualBox?

I'm really disappointed; I think I've spent a week trying to make this work... Please, someone help! If you need more details, just ask and I'll send them...

---------- Post updated at 03:06 PM ---------- Previous update was at 03:02 PM ----------

So, I created an rpool on my server with ZFS, and a separate volume where I installed my virtual machine... I'll show you exactly how it looks.

root@srvdth06:/# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                             235G  39,1G  75,5K  /rpool
rpool/ROOT                       8,69G  39,1G    31K  legacy
rpool/ROOT/solaris               8,69G  39,1G  8,38G  /
rpool/ROOT/solaris/var            315M  39,1G   311M  /var
rpool/VARSHARE                   2,57M  39,1G  2,47M  /var/share
rpool/VARSHARE/pkg                 63K  39,1G    32K  /var/share/pkg
rpool/VARSHARE/pkg/repositories    31K  39,1G    31K  /var/share/pkg/repositories
rpool/VARSHARE/zones               31K  39,1G    31K  /system/zones
rpool/dump                       7,99G  39,3G  7,75G  -
rpool/export                       63K  39,1G    32K  /export
rpool/export/home                  31K  39,1G    31K  /export/home
rpool/swap                       2,06G  39,1G  2,00G  -
rpool/vdisk                       216G  39,1G    31K  /rpool/vdisk
rpool/vdisk/srvdth06-ldom1.hdd0  9,54G  39,1G  9,43G  -
rpool/vdisk/srvdth06-ldom1.hdd1   206G   245G    16K  -
root@srvdth06:/#

My virtual machine is installed on *.hdd0

You still didn't tell us which virtualization technology you used to create your virtual machine (Solaris supports several of them).

We also need to know the exact steps you performed when creating/configuring the virtual machine.

Device validation can be disabled by setting the device_validation SMF property of the Logical Domains Manager service (ldmd) to 0.

# svccfg -s ldmd setprop ldmd/device_validation=0
# svcadm refresh ldmd
# svcadm restart ldmd

ldm set-mau 0 primary
ldm set-vcpu 28 primary
ldm set-mem 4G primary
ldm add-vds primary-vds primary
ldm add-vdsdev primary-vds primary@primary-vds
ldm add-vswitch net-dev=igb0 vsw0-pub primary
ldm add-vconscon port-range=5000-5100 primary-vcc primary
ldm add-vnet `hostname`-igb0 vsw0-pub primary

echo "`hostname` netmask + broadcast + group ipmp_pub up" >> \
/etc/hostname.igb0

echo "addif `hostname`-ipmp_pub1 -failover deprecated up" >> \
/etc/hostname.net0

reboot

ldm add-domain `hostname`-ldom1
ldm set-vcpu 100 `hostname`-ldom1
ldm set-memory 11776m `hostname`-ldom1
ldm add-vnet `hostname`-igb0 vsw0-pub `hostname`-ldom1

# Creating the disk pool.
zfs list # list existing datasets
zfs create rpool/vdisk
zfs create -V 20G rpool/vdisk/`hostname`-ldom1.hdd0 # create a 20G volume for the guest boot disk
zfs create -V 200G rpool/vdisk/`hostname`-ldom1.hdd1 # create a 200G volume for the guest data disk


ldm add-vdsdev rpool/vdisk/`hostname`-ldom1.hdd0 sol11.hdd0@primary-vds
ldm add-vdisk hdd0 sol11.hdd0@primary-vds `hostname`-ldom1
ldm set-variable auto-boot\?=false `hostname`-ldom1

# Getting a Solaris 11 installation image.
# I downloaded it from Oracle (sol-11_2-text-sparc.iso).

ldm add-vdsdev /opt/solaris11.iso iso@primary-vds
ldm add-vdisk iso iso@primary-vds `hostname`-ldom1
ldm bind-domain `hostname`-ldom1

# Enable the LDoms services

svcadm enable ldmd
svcadm enable ldoms/vntsd

And finally:

ldm start-domain `hostname`-ldom1

telnet localhost 5000

press "Enter"

{0} ok  devalias

Output:

iso        ****
vnet0    ****
vnet1    ***
hdd0    **** (ZFS rpool/vdisk)
disk    **** 

{0} ok boot iso - install

# install normally, following the procedure from the installation image

After all that, I removed the vdisk and the device that I had created.

ldm stop `hostname`-ldom1
ldm remove-vdisk iso `hostname`-ldom1
ldm remove-vdsdev iso@primary-vds
ldm set-variable auto-boot\?=true `hostname`-ldom1
ldm start `hostname`-ldom1

Reference:
https://blogs.oracle.com/vreality/entry/ldom_with_zfs

---------- Post updated at 04:20 PM ---------- Previous update was at 03:16 PM ----------

Do you need some more details?
:confused:

---------- Post updated 01-07-15 at 02:30 PM ---------- Previous update was 01-06-15 at 05:20 PM ----------

Any ideas? Please? :frowning:

You shouldn't use an 8-year-old blog about Solaris 10 as a reference when you are using Solaris 11.2.

One obvious issue is that you seem to use /etc/hostname.igb0 and /etc/hostname.net0, but these files are no longer the way to set persistent network configuration with the Solaris release you are installing.

Have a look at Managing Network Configuration - Transitioning From Oracle Solaris 10 to Oracle Solaris 11.2.
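
For example, a minimal static setup with ipadm could look like this (the interface name and addresses here are just placeholders for illustration, adjust them to your environment):

ipadm create-ip net0
ipadm create-addr -T static -a 192.168.4.120/24 net0/v4
route -p add default 192.168.4.1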

Hi,

That was the network configuration of my base machine, and it is working well... the problem is with the network of my virtual machine, which is Solaris 11 as well...

There I was just trying to create an IP address to make it work, but I don't know why it doesn't...

I have set my network profile to the default and used ipadm to create an address there; it should work, but it is as if my virtual machine does not recognize the network port. And I don't know how to solve this issue.

Do you have any idea, or a guess, as to why my virtual machine's network doesn't work?

Please post the commands you ran and their output. My crystal ball is out of order ...

Sorry... :stuck_out_tongue:

Anyway... I have tried a lot of things to configure my network and nothing works...

root@nagios2:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
ipmp0: flags=8011000802<BROADCAST,MULTICAST,IPv4,FAILED,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask 0
        groupname ipmp0
net0: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 3
        inet 192.168.4.120 netmask ffffff00 broadcast 192.168.4.255
        groupname ipmp0
        ether 0:14:4f:f9:b9:83
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
ipmp0: flags=28012000800<MULTICAST,IPv6,FAILED,IPMP> mtu 1500 index 2
        inet6 ::/0
        groupname ipmp0
net0: flags=20032000841<UP,RUNNING,MULTICAST,IPv6,FAILED,STANDBY> mtu 1500 index 3
        inet6 ::/0
        groupname ipmp0
        ether 0:14:4f:f9:b9:83
root@nagios2:~# ping 192.168.4.55
ping: sendto No route to host

Sometimes the host gives me the message "Network is unreachable".

Pleaseee, help me :frowning:

The NIC you need to use to ping that address is in status ...FAILED,STANDBY. From what I can see in your ifconfig output, that won't work. I can't find much useful information for troubleshooting network problems in your previous posts. Be more specific about your configuration and what you are trying to do, so someone might be able to help.

On Solaris 11.2, you should be using ipadm to configure the network, not ifconfig.
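
To get further, it would help to see the output of these commands from inside the guest domain (just a suggested checklist, adjust names to your setup):

dladm show-phys # which network devices the guest sees
dladm show-link # link state of each datalink
ipadm show-if # interface state
ipadm show-addr # configured addresses
ipmpstat -g # state of the IPMP group
netstat -rn # routing table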

As I glanced through your posts, I noticed a lot of errors in the initial configuration of the primary domain (the hypervisor), since you were using a Solaris 10 guide on a Solaris 11 operating system.

At this point, given the above, the quickest way would be to reinstall the entire machine and configure it properly following the documentation for your Solaris release, in this case Solaris 11.2.
After the initial installation of the operating system, the primary domain will need to have its network configured.

The network configuration will differ depending on the technology used and on how the network ports on the switch are configured (VLAN tagging, trunking, etc.).

The simplest configuration of the primary domain would be:

Use the ipadm command and selected interfaces (e.g. net0, net1... dladm show-phys will show you the connected and available interfaces to configure).

ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0
ipadm set-ifprop -p standby=on -m ip net1 # optional: active/passive IPMP, net0 is active and net1 takes over if net0 fails
ipadm create-addr -T static -a <youraddress>/bitmask ipmp0/v4
route -p add default <your default router>
# you might want to configure name resolution, NTP and additional parameters here...
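
For name resolution, one possibility (the server address and domain below are placeholders) is to set the DNS client properties through SMF:

svccfg -s dns/client setprop config/nameserver = net_address: "(192.168.4.1)"
svccfg -s dns/client setprop config/domain = astring: example.com
svcadm refresh dns/client
svcadm enable dns/client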

Now that we are done with the initial network configuration of the primary (control) domain...

ldm start-reconf primary # initiate a delayed reconfiguration, which becomes active after reboot
ldm set-vcpu 8 primary # give 8 vCPUs to the hypervisor (control domain)
ldm set-mem 4G primary # give 4 GB of memory to the hypervisor
ldm add-spconfig <yourconfigname> # save the configuration on the service processor

Reboot the host using the init 6 command as root.

Network on the hypervisor should work now with persistent route added and static address configured.
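
A quick sanity check after the reboot might look like this (read-only commands, nothing configuration-specific):

ldm list # control domain resources (vCPUs, memory)
ipadm show-addr # the static address should be listed on ipmp0
netstat -rn # the persistent default route should be present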

Now you need to configure networking for the ldoms by creating virtual switch(es) to be used with them.
You can use net0 and net1 for virtual switches, as well as any other available physical interface (not IPMP groups, but VLAN-tagged interfaces, for instance, are fine).

IPMP groups are created inside ldoms.

Aggregated interfaces are created on the hypervisor (control/service) domain, and the newly created interface (e.g. aggr0) is the net-dev used for virtual switch creation.
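
As a rough sketch of how that part can fit together (the switch, domain and address names below are only illustrative placeholders, not taken from your setup):

# on the control domain: a virtual switch backed by net0, plus a vnet for the guest
ldm add-vsw net-dev=net0 primary-vsw0 primary
ldm add-vnet vnet0 primary-vsw0 ldom1

# inside the guest: the vnet appears as a regular datalink (check with dladm show-phys)
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.4.121/24 net0/v4
route -p add default 192.168.4.1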

All the configuration commands I noted can and should be expanded for your specific needs.

You will need to read the documentation with understanding, not just paste commands from it onto your servers.

Hope this clears things a bit.

Regards
Peasant.