Very poor read speed on NAS share in a VM

Hi,
We have two servers in this scenario: vmsoldot01 is an Oracle VM guest running Linux, and tldtppod15 is a physical Linux server. The same NAS share is mounted on both servers with similar permissions and access, but the read speed on the virtual server is far worse than on the physical one.
While trying to diagnose this, I see that the RX-DRP value is high. Can I assume that this is (also) playing a role? What else can I check? Any pointers?

[root@vmsoldot01 ~]# dd if=/dev/zero of=/tmp/bdev/testing/vmsoldot01 bs=1M count=1024 oflag=direct 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 9.73523 s, 110 MB/s 
[root@vmsoldot01 ~]# dd if=/tmp/bdev/testing/vmsoldot01 of=/dev/null bs=1M count=1024 iflag=direct 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 1246.77 s, 861 kB/s 
[root@vmsoldot01 ~]# dd if=/tmp/bdev/testing/vmsoldot01 of=/dev/null iflag=direct 
2097152+0 records in 
2097152+0 records out 
1073741824 bytes (1.1 GB) copied, 471.854 s, 2.3 MB/s 
[root@vmsoldot01 ~]# 
------------------------------------ 
[root@tldtppod15 tmp]# dd if=/dev/zero of=/tmp/bdev/testing/tldtppod15 bs=1M count=1024 oflag=direct 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 100.014 s, 10.7 MB/s 
[root@tldtppod15 tmp]# dd if=/tmp/bdev/testing/tldtppod15 of=/dev/null bs=1M count=1024 iflag=direct 
1024+0 records in 
1024+0 records out 
1073741824 bytes (1.1 GB) copied, 92.4556 s, 11.6 MB/s 
[root@tldtppod15 tmp]# dd if=/tmp/bdev/testing/tldtppod15 of=/dev/null iflag=direct 
2097152+0 records in 
2097152+0 records out 
1073741824 bytes (1.1 GB) copied, 1112 s, 966 kB/s
--------------------------------------
[root@vmsoldot01 ~]# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 22895564 0 101660 0 19052738 0 18 0 BMRU
eth1 1500 0 822638 0 221 0 13997 0 17 0 BMRU
lo 65536 0 4281740 0 0 0 4281740 0 0 0 LRU
[root@vmsoldot01 ~]#
------------------------------------
[root@tldtppod15 ~]# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0 1500 0 30489184 0 5095 0 21940113 0 0 0 BMmRU
104b03261a 1500 0 2587924 0 673723 0 0 0 0 0 BMRU
108e4ab487 1500 0 804990 0 7994 0 0 0 0 0 BMRU
100225caed 1500 0 4589169 0 128230 0 0 0 0 0 BMRU
105042eef6 1500 0 10 0 0 0 6 0 0 0 BMRU
1087241aa6 1500 0 115495 0 2590 0 0 0 0 0 BMRU
1061370512 1500 0 2827742 0 21722 0 0 0 0 0 BMRU
bond0.839 1500 0 2837706 0 0 0 0 0 0 0 BMRU
bond0.840 1500 0 4592116 0 0 0 0 0 0 0 BMRU
bond0.842 1500 0 23051901 0 0 0 19030177 0 0 0 BMRU
bond1 1500 0 12194321 0 93820 0 12223822 0 0 0 BMmRU
bond1.350 1500 0 1035850 0 0 0 1068301 0 0 0 BMRU
bond1.941 1500 0 11062114 0 20537 0 9803285 0 0 0 BMRU
bond1.981 1500 0 10 0 0 0 6 0 0 0 BMRU
bond2 1500 0 949087 0 5092 0 309561 0 0 0 BMmRU
bond2.151 1500 0 116429 0 0 0 0 0 0 0 BMRU
bond2.152 1500 0 825020 0 0 0 13997 0 0 0 BMRU
eth4 1500 0 12105592 0 1 0 12223822 0 0 0 BMsRU
eth5 1500 0 949087 0 0 0 309561 0 0 0 BMsRU
eth6 1500 0 88729 0 88730 0 0 0 0 0 BMsRU
eth7 1500 0 30489184 0 0 0 21940113 0 0 0 BMsRU
lo 65536 0 174 0 0 0 174 0 0 0 LRU
vif1.0 1500 0 19030177 0 0 0 22864644 0 0 0 BMRU
vif1.1 1500 0 13997 0 0 0 820349 0 0 0 BMRU
[root@tldtppod15 ~]#

If your mount options (tcp/udp, rsize/wsize, nfs/cifs, vers, and the other options) and NIC settings are the same on both servers, then we should check the RX/TX buffer queue sizes (i.e. the number of packets the driver can hold), because there appear to be drops in the RX buffer.

For operations that generate high traffic, we can increase the driver buffers. Since you are facing slow read operations, you have likely pointed at the correct value.
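To rule out mount-option differences first, you can extract the options field for the share from /proc/mounts on each host and diff the results. A minimal sketch, assuming the share is mounted at /tmp/bdev (the path from your dd tests); the sample /proc/mounts line and its option values are made up for demonstration:

```shell
#!/bin/sh
# Pull the options field (4th column) for a given mount point out of
# /proc/mounts-style input and print one option per line, sorted,
# so the output can be diffed between two hosts.
mount_opts() {
    mountpoint=$1
    awk -v mp="$mountpoint" '$2 == mp { print $4 }' | tr ',' '\n' | sort
}

# Hypothetical live usage (run on both servers, then diff the files):
#   mount_opts /tmp/bdev < /proc/mounts > /tmp/opts.$(hostname)
#   diff /tmp/opts.vmsoldot01 /tmp/opts.tldtppod15

# Demonstration against a sample /proc/mounts line (values are made up):
echo "nas:/export /tmp/bdev nfs rw,vers=3,rsize=65536,wsize=65536,tcp 0 0" \
    | mount_opts /tmp/bdev
```

If the sorted option lists differ at all (especially rsize/wsize or vers), fix that before tuning anything else.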

Check your (VM) NIC driver and search for any known bugs, updates, or patches, and apply them if available.

# ethtool -i eth0

Check the current ring buffer settings:

# ethtool -g eth0

and try to increase the RX value toward the defined maximum (e.g. 256/512/1024):

# ethtool -G eth0 rx 512
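If you would rather go straight to the hardware maximum instead of guessing a value, the preset RX maximum can be parsed out of the `ethtool -g` output. A sketch: the parsing is demonstrated against a sample of ethtool's usual output format, since the actual numbers depend on your driver, and eth0 is an assumption:

```shell
#!/bin/sh
# Extract the "RX:" value from the "Pre-set maximums" section of
# `ethtool -g` output. ethtool prints the maximums section first,
# then the current hardware settings.
max_rx() {
    awk '/Pre-set maximums/ {inmax=1}
         /Current hardware settings/ {inmax=0}
         inmax && $1 == "RX:" {print $2; exit}'
}

# Hypothetical live usage (eth0 is an assumption):
#   ethtool -G eth0 rx "$(ethtool -g eth0 | max_rx)"

# Demonstration against sample ethtool -g output:
max_rx <<'EOF'
Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256
EOF
```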

Then run the dd tests again and check the drop counts.
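To see whether the drops are actually happening during the slow reads (rather than being old accumulated counters), you can sample the interface drop counter before and after a dd run and compute the rate. A minimal sketch; eth0 and the sample counter values are assumptions, and the helper just does the arithmetic on two samples:

```shell
#!/bin/sh
# Compute drops per second from two samples of an interface drop counter.
# Arguments: <drops_before> <drops_after> <interval_seconds>
drop_rate() {
    before=$1; after=$2; interval=$3
    echo $(( (after - before) / interval ))
}

# Hypothetical live usage against the kernel's per-interface counter
# (eth0 is an assumption; run the dd read test in another shell meanwhile):
#   d1=$(cat /sys/class/net/eth0/statistics/rx_dropped)
#   sleep 30
#   d2=$(cat /sys/class/net/eth0/statistics/rx_dropped)
#   echo "$(drop_rate "$d1" "$d2" 30) drops/sec during the read"

drop_rate 101660 103460 30   # sample numbers; prints 60
```

A rate near zero while the read is running would mean the RX-DRP figure is historical and the bottleneck is elsewhere.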

If there is no change after the dd tests, then you can try switching the virtual NIC to another available type in the Oracle VM network settings.

Good luck
regards
ygemici

Hi,

Something to check before delving into this too deeply is that the networking is set up properly. If you are running through any intelligent network infrastructure, ensure that all the TX/RX settings are the same end to end.

Points worth looking at: is auto-negotiate set anywhere? Are the port parameters the same all the way through, i.e. 100Mb full duplex, etc.?
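A quick way to compare those link parameters across hosts is to pull just the relevant lines out of plain `ethtool` output. A sketch, demonstrated against a sample of ethtool's usual format (eth0 and the values shown are assumptions):

```shell
#!/bin/sh
# Print just the Speed/Duplex/Auto-negotiation lines from `ethtool <iface>`
# output, so link parameters can be compared host to host at a glance.
link_params() {
    grep -E '^[[:space:]]*(Speed|Duplex|Auto-negotiation):'
}

# Hypothetical live usage:
#   ethtool eth0 | link_params

# Demonstration against sample ethtool output:
link_params <<'EOF'
Settings for eth0:
        Supported ports: [ TP ]
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Link detected: yes
EOF
```

Run the same on every hop you control; a half-duplex or speed mismatch anywhere on the path would produce exactly this kind of asymmetric slowdown.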

Regards

Dave